The death of self-driving cars is greatly exaggerated (understandingai.org)
200 points by tim_sw on June 3, 2023 | hide | past | favorite | 590 comments


Tesla’s insistence on using vision alone is pretty dumb. Elon and Andrej Karpathy argued that since humans can drive using just vision, that’s how we should do it in self-driving cars, but I think that’s a flawed argument. The proper question to ask would be: if given additional senses, wouldn’t humans use them for safer driving?


Also, humans do sometimes use additional senses when driving. For example, I've had to make a left turn at a T intersection onto the cross street, where I had a stop sign and the cross traffic did not.

This was in California's Central Valley, where it can get very foggy, making it very hard to see traffic until it's almost in the intersection.

It was a quiet rural area though and by opening the windows on both sides and turning off the radio I could hear traffic quite a bit before I could see it. I'd sit at the stop listening until I'd heard a car or two go by to be sure that it was quiet enough that I could hear them. Once I'd calibrated my senses to that day's current conditions I was able to make the turn.


Listening for cars in fog is great until a Prius comes along, and coastal areas of California are their natural habitat. But, yeah, if I had extra senses lying around, I'd use them for all sorts of things.


Vehicle noise is dominated by road noise beyond about 25 mph -- that's why the mandatory sound on new EVs or Priuses typically cuts out around 20-25mph.

Presumably this intersection is at high enough speed on the cross traffic that it's the road noise tzs is listening for, not engine noise.


That's why in the EU they have introduced requirements for electric cars to make artificial sounds. Sometimes one of them drives past and it sounds like a spaceship from a sci fi movie lol.


This is one of the most retrograde pieces of legislative nonsense imaginable.

Just when significantly cutting noise pollution from motor vehicles in urban areas is finally within our grasp we toss it away by forcing them to make stupid and annoying UFO noises on the grounds of nebulous safety concerns.

There were any number of ways of solving this that would have been less annoying and better for people's health[0].

For one thing, most people are able to use their eyes and will learn soon enough that EVs don't make much noise at low speeds and will keep an eye out for them. How do I know this? I live in Cambridge, UK, which is brimming with cyclists. They don't make much noise either, but you learn to look out for them (and very quickly too).

And for those who are partially sighted or blind some sort of warning device + appropriate signal could no doubt have been engineered and legislated.

But, no, we've gone for stupid noises instead.

[0] We now, of course, know that noise pollution does in fact cause health issues and, I'd argue, these outweigh the safety argument.


I don't think you would make this argument if you had seen up close the damage a car can do to a pedestrian. The problem isn't noise. It's that there are two ton vehicles traveling close to unprotected people. Anything that can mitigate that is a good thing. The argument about noise pollution causing health issues is ridiculous in comparison.

I've heard that some EV drivers turn off the sound because it annoys them. They are potentially setting themselves up for a lifetime of remorse.


It's not ridiculous. You need to consider the numbers involved. If one extra person breaks his leg because he didn't look and see the quiet car before crossing the street, that's a fair trade off for reducing the noise pollution for a million people.


Noise pollution for me is dominated by barking dogs, engine braking, and motorbikes. Making very quiet cars less quiet would be comparatively trivial.


Cars are 'less noisy' vs all the things that you mentioned simply because they are omnipresent. They wear you down, grind away at your tranquility, but because they're always there, they do so without you even noticing. A peaceful evening walk is made wearisome by the constant trickle of cars intruding into your hearing. Particularly so in the USA, because the cars there (just like everything else) are vast. Car use in cities is a blot on humanity. I wish they would all just go away and be saved for the weekend road trip, or moving house, but not for groceries or commuting.


I have hyperacusis (hypersensitivity to noise) and insomnia, and I wish all noisy things would go away. I’m one of those people whose health is severely impacted by noise, and I spend a lot of money trying to get away from it. I know that most people don’t have these conditions, so they don’t understand how inconsiderate they’re being. It is unrealistic of me to expect others to be more accommodating. If anything, people are getting much noisier.

I would be exceedingly happy to live in a place where electric cars are the biggest generators of noise pollution. I’ve lived in Europe in walkable cities with little car traffic, and it’s worse: drunk revellers singing at the top of their lungs at all hours, parties going until late at night, and barking dogs in the early morning. Makes me pine for an HOA-controlled gated suburbia with strict noise controls.

Most traffic noise, and especially electric car noise, can be covered up by a noise generator. One place I lived installed an artwork that lit up to different levels depending on the noise pollution; instead of encouraging people to be quieter, it did the opposite, as they competed over who could make the most noise.


You are utterly overblowing one part (noise pollution from EVs, which is really just a slightly louder hum, for whatever personal reasons you have) and ignoring the additional, massive and immediate benefit of actually saving lives. I definitely appreciate the added noise, and so do my small kids; they don't have to learn this from having a schoolmate killed by an ultra-quiet car. Same for, e.g., the elderly. We don't live on this planet alone, did you notice?

Noise pollution from, e.g., ambulances, sports cars, basically any motorbike, old cars, trucks and so on is much, much bigger. Where is your outrage for those?

And no, noise pollution from the cars discussed here is not causing massive health issues that outweigh people getting maimed and killed by them. That's just your personal preference (like not owning a car because you are young without a family etc.) that you would like to push on the whole society for whatever personal reasons.


Making life much more annoying for everyone to prevent a few kids from getting run over is not automatically a great tradeoff. As a society, we should be thinking harder about these sorts of conundrums. No one wants to get run over and yeah it's a bad outcome, but how far would you go? How many times do you have to get woken up by an Amber alert before you turn it off?

Maybe his personal reason is that he's absolutely capable of getting out of the way of a car, but he doesn't like noise. That's a reasonable preference. If it were guaranteed that without the noise a kid ends up under a car, fine, but I doubt that's true.

Sure, we don't live on this planet alone, but that seems like it's more justification for not intentionally making our shared space miserable, not less.


From what I've heard, EVs are about as noisy as regular cars at speeds relevant to the noise pollution issue.

Check out: https://youtu.be/CTV-wwszGw8?t=815


>And for those who are partially sighted or blind some sort of warning device + appropriate signal could no doubt have been engineered and legislated.

What do you have in mind? I can’t think of anything as effective as sound—the blind person needs no special hardware to perceive it, and it can be easily spatially localized.


The EVs/hybrids I've encountered in the US seem to have artificial sound as well when no ICE is running, including the Prius.


Yes, since 2010: https://en.m.wikipedia.org/wiki/Electric_vehicle_warning_sou...

They turn off over ~18MPH, though. After that, tire noise is much louder. As loud as an ICE car.

So, realistically, you’d hear a Prius if it’s going normal road speeds.

Side note: I wish the sound was more pleasant. When multiple cars are moving in a lot I call it the “choir of the damned”. They’re also louder than ICE cars and that’s kinda lame, we moved backwards WRT noise.


I live in the EU, and EVs are much quieter than ICE cars at low speeds in my day-to-day experience. The volume levels need to be raised significantly from my perspective; they're dangerously low here. There are quite a lot of kids on the roads here, and the difference in audibility between ICE and EV cars is concerning.

Are you saying the typical EVs you hear are _louder_ than ICE cars where you live?

As to the pleasantness of the noise - yeah, that seems to be a manufacturer's choice, or perhaps even driver choice? And let's hope just like annoying ring-tones of years past that the current selection dies out soon...


My RAV4 Prime is much louder than an ICE at low speeds, especially in reverse. It’s so loud it’s sparked a bunch of videos and discussion on how to lower the volume: https://m.youtube.com/watch?v=q1UqicqdzFE

Some have figured out how to disable it completely (just pulling the noisemaker causes a fault code to trigger) but quieting it down to roughly the volume of an ICE seems more reasonable to me.


That noise sounds bad, and unreasonable. I suppose I've just been lucky not to live near anybody that has a car making that kind of noise.

Irrespective of the volume it is remarkable why any car designer thought that this was the right _kind_ of noise for the car to emit. Somebody chose that soundtrack, and you kind of have to wonder why...


> Are you saying the typical EVs you hear are _louder_ than ICE cars where you live?

ICE cars are pretty quiet where I live. Either that or there is so much ambient noise that I've lost some of my hearing but either way I don't hear much of an engine noise with recent cars.


Oh they're quiet here too - it's just that (most) EVs tend to be almost inaudible. They'll make a very quiet whirring noise sometimes, but the volume is so low that you'll easily miss em.

I don't know the people driving these cars well; perhaps they modded them (seems unlikely, given the neighborhood), or perhaps they're old enough to have escaped new requirements; regardless - they're too quiet here in my sample size of 1 neighborhood in NL.


Both the regulation and folks' implementation of it is too conservative.


Volvo plug-in hybrids don't make any noise when driving in EV-only mode. They are super stealthy, you have to be super careful when driving around a car park because no one is aware you're around them.


Yeah, I particularly enjoy being able to sneak up on people in a giant estate/station wagon.


I mean, I'm not saying this is a good thing. Just an observation that not all hybrids emit noise when driving with the ICE off.


Regulation in the U.S. requires it: “After several additional delays, the National Highway Traffic Safety Administration issued its final ruling in February 2018. It requires hybrids and electric vehicles travelling at less than 18.6 mph (30 km/h) to emit warning sounds that pedestrians must be able to hear over background noises. The regulation requires full compliance in September 2020, but 50% of "quiet" vehicles must have the warning sounds by September 2019.” https://en.m.wikipedia.org/wiki/Electric_vehicle_warning_sou...


Same in the USA, and it is horribly implemented.

Before this, a Prius or any modern car would be dead silent at a stop because of stop-start technology. And even those small 1.6L engines don't make that much noise when idling.

Now you hear all these cars making their high-pitched UFO sound. And it is VERY irritating.


I once placed a snot rocket on a Prius windshield from my bicycle. It snuck up on me. So I did the responsible thing and bailed into a neighborhood so I wouldn't have to look them in the eye.

Quiet vehicles should emit a peace cry, and I should look before I rocket.


As a cyclist who once got that stuff in his face: yes, please look first. Expecting others to make a noise just in case they might get a bioweapon dumped on them is not nice.


Absolutely, I was in the wrong. But if you're entering somebody's bubble, an "on your left" is also courteous.


Wait - are snot rockets a common thing for cyclists?!


…and they can get that much distance on their rockets?


Not usually. This Prius was practically in my armpit.


I think the idea of EV making additional noise is flawed. The important thing is that people should be looking when using roads and not just relying on hearing. Deaf people are allowed to drive and they can certainly cycle safely as well as long as they just look around a bit more to keep aware of what's around them.

Having fake noise just encourages pedestrians to keep looking at their phones and not use their eyes when crossing roads or cycle lanes and they can injure themselves or others by doing that.

Also, there's far too much noise in busy areas as it is, so it seems unhelpful to deliberately add extra noise to the environment.


The main danger in my experience is that it is harder to hear if the EV is operational when standing still.


If an EV is standing still, then I'd rate the danger level to be zero.


But to be fair, that’s just because you couldn’t easily see in that direction. Presumably a Tesla’s cameras can see in all directions.

A better example would probably be hearing emergency sirens before there was any line of sight to the emergency vehicle.


Tesla cameras can't see through fog any better than eyes.


I think there's a reason why a new sensor suite is rumoured to be imminent. Much better cameras + radar? Key word being "Project Highland".

It won't be a retrofit for older cars, which tells me current owners won't get to experience the next generation of FSD that will be possible.

I never bought mine (Ryzen '22 LR3 with earlier gen radar, now disabled, plus USS - still in use fortunately) for the FSD anyway so I don't mind. I won't blame those who might do though! (This is presently all speculation/rumours until officially confirmed).


Must be out of the loop. HW4, which is higher resolution cameras and Phoenix HD Radar, has been in S/X for months and started showing up in (at least Fremont) Model Y's with a build date around May 25th. Highland has nothing to do with this, and Model 3 sales are still doing fine, so they don't need to drop anything yet to boost model 3 demand.


Dropping prices doesn’t even have to be about demand. Lot of rivals like Lucid and Rivian struggling mightily in this market. You can pinch them out. You also may just want to share production efficiency gains with the customer for the same reason.


Oh thanks, appreciate the updated info.


Model 3 currently has a $3000 discount to boost demand.


A 3k price drop shows they only need a little bit of a demand boost, especially since the discount is only on inventory cars. Highland will be a massive demand bump for them, since it being new likely pushes some Y purchasers to the 3.


It's only 3k because of the newly available 7500 credit.


Is this true though? Earlier today mine picked up on emergency lights that were several hundred feet further away in traffic than I would have seen them. It seems able to enhance the images the cameras capture.


Could also be aware of the position of fixed road equipment via mapping software. It's more plausible than the cameras having some kind of super vision.


It was a police car, not something fixed. I would imagine that the Tesla has zoom built into its camera system, whereas human vision does not.


Realistically the training data contained some amount of emergency lights through fog, so it can identify faint emergency lights through fog as real emergency lights and will appropriately display the warning on-screen.


Also, Waze and Google Maps have had user-reported speed traps for some time. So one Tesla driving by informs the network about the police car, and the others don't need to observe it to know it's there.


Of course they can. Fog doesn't block mid-IR.


MWIR sensors are even more expensive than LiDAR, are military controlled and difficult to export, and Tesla does not have them.


Lidar can't see through fog either.


That’s precisely the reason why buses and maybe some other commercial vehicles must stop at railroad crossings before passing.


In my view, the higher-level issues with the FSD Beta program are:

- A failure by Tesla to view the system that they are developing as what it really is - a physical safety-critical system and not "an AI". Those two are distinct systems because, with a physical safety-critical system, the totality of the system's safety components cannot be fully expressed in software - neither initially nor continuously.

- To build on that point, Tesla is not allowing the Operational Design Domain (ODD), via a robust, well-maintained validation process, to determine the vehicle hardware as the ODD demands it to be. Instead, Tesla is trying to "front run" it (ignore the demands of the ODD) by largely focusing on hardware costs. The tension from failing to recognize that is why Tesla, in part, has a long history of being forced to (somewhat clandestinely) change the relevant sensor and compute hardware on their vehicles while promising to "solve FSD" (whatever that means) by the end of every year since around 2015 or so.


> and not "an AI"

But what is AI? If it's just "artificial intelligence", it effectively includes all programming with if/then logic gates based on program input.


> it effectively includes all programming with if/then logic gates based on program input.

And? You think that is the totality of a physical safety critical system?


I'm pointing out that calling anything "AI" is both pointless and meaningless. It's a buzzword for board members and shareholders to throw around, since they refer to it as the latest LLM technology, while the phrase just means any complex business logic generated by a program.


It’s generally accepted to mean the use of neural networks which Tesla is obviously using. Good luck even identifying a stop sign with “complex business logic” or “if/else”


Most important road signs have rather distinct shapes, standardized sizes and are angled towards oncoming traffic. Having an object with known shape aligned almost perfectly towards the camera is basically the best case for many primitive object detection algorithms.
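To make the "primitive object detection" point concrete, here's a toy sketch (hypothetical, not any production algorithm): once you've approximated a candidate contour down to its vertices, a stop sign facing the camera projects to a near-regular octagon, so just counting vertices and comparing side lengths already filters candidates aggressively.

```python
import math

def is_regular_octagon(points, side_tol=0.15):
    """Toy shape check: do 8 vertices form a roughly regular octagon?

    Illustrative sketch only - a crude stand-in for the primitive
    detectors mentioned above, not a real sign detector.
    """
    if len(points) != 8:
        return False
    # Compare each side length against the mean side length.
    sides = []
    for i in range(8):
        (x1, y1), (x2, y2) = points[i], points[(i + 1) % 8]
        sides.append(math.hypot(x2 - x1, y2 - y1))
    mean = sum(sides) / 8
    return all(abs(s - mean) / mean < side_tol for s in sides)

# A regular octagon on the unit circle passes; a square does not.
octagon = [(math.cos(math.tau * k / 8), math.sin(math.tau * k / 8))
           for k in range(8)]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_regular_octagon(octagon))  # True
print(is_regular_octagon(square))   # False
```

Of course, this is exactly the kind of check that falls apart for the bent, occluded, and off-angle signs discussed below.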


True, but it’s equally important that a self-driving car be able to recognize a stop sign that is bent from a previous accident and facing an arbitrary angle (as well as one that is angled towards the car’s lane but applies to a different road).


And stop signs that have been altered in some way. For example, rural stop signs that are peppered with holes from pot shots must still be recognized. Snowy stop signs with the bottom half obscured by accumulated drift. Signs with a non-red sticker reading “WAR” placed below the word “STOP”.

And that’s not even getting into cases where you conditionally act like there’s a stop sign. The city of Houghton,MI has major streets along the side of a hill, and minor streets going up and down the hill. Every winter, sand is put down for traction, and every spring it is cleaned away. If there’s a late-season snowstorm after the spring cleaning, cars going downhill on the minor streets physically cannot stop, so everybody on the major streets looks uphill before crossing.

Short of location-dependent fine-tuned models, I’m not sure how machine learning could replicate the logic of “if snowy in late spring, grant right-of-way to cars headed downhill”.
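The Houghton rule is trivial to write as hand-coded logic - which is exactly what makes it awkward for a learned model. A hypothetical sketch (all names and the month threshold are invented for illustration):

```python
from datetime import date

def grant_downhill_right_of_way(today, spring_cleaning_done, snowing):
    """Hypothetical hand-coded rule for the Houghton, MI case above:
    after the spring sand cleaning, a late-season snowstorm means cars
    coming downhill may be physically unable to stop, so cross traffic
    should yield. Thresholds are made up for illustration."""
    late_spring = today.month in (4, 5)
    return late_spring and spring_cleaning_done and snowing

print(grant_downhill_right_of_way(date(2023, 4, 20), True, True))   # True
print(grant_downhill_right_of_way(date(2023, 1, 10), False, True))  # False
```

The hard part isn't encoding the rule; it's that an end-to-end learned driver would need this one intersection-specific behavior to somehow emerge from training data.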


They're "artificial neural networks" and it would seem to me it's recognizing stop signs by comparing them to images of stop signs. So I tend to lean toward "AI" is the latest "buzzword". I think in truth it's more akin to a search engine reacting to inputs, but from sensor data, than anything close to real "intelligence" of any kind.

I can see how it appears to be intelligent, but it lacks reasoning, creativity, and critical thinking.


If I had a fully functional self-driving car, I wouldn't be lamenting that it wasn't creative enough


“Creative” doesn’t necessarily mean “generating new behavior”, but can also mean “generating new hypotheses”. Suppose you see a group of young kids playing in a yard. One tosses a ball up into the air, and the rest run towards it. The first to reach the ball throws it back into the air, and the rest run toward it again.

It requires creativity to recognize the rules of the game as “try to be the first to reach the ball”, to recognize that the thrower may not have time to carefully aim, and that the others might chase the ball regardless of its location. Only if all three of those creative leaps are made, then logical deduction can take over to conclude “if the ball goes in front of me, stop before a kid does the same”.


Also, humans and other animals that rely on vision have eyelids and tear ducts and are able to blink and get stuff out of an eye.

Poor, poor Tesla cameras freak out as soon as the sunshine is too bright or there's snow or rain or ice or mud in the way. You'd think if they're going to rely on vision, every camera mounted on the car would have a way to "squint" in blinding-light conditions, or "wipe" the lens or something when smudges, rain, snow, ice, mud, or bug-splat blocks the view. But then, Tesla is insanely cheap, and all that would require parts, and that would impact margins, and that would impact stock price, and so, this is why we can't have nice things ....


Yeah, human eyes cover a huge dynamic range compared to traditional cameras, which have all sorts of issues with either too little or too much light (blooming, lens flares). Are a Tesla's cameras of the same quality as the human eye? Can they see in the dark just as well?


Not only can Tesla's cameras not do that as well, they may also just shut down if it's too cold, even if they're not covered in snow.

Current self driving systems sometimes fail in perfect weather conditions on correctly marked, empty roads. There's a long road ahead if it's supposed to actually work in the real world.


Human vision sometimes fails in perfect weather conditions on correctly marked, empty roads.

It doesn't need to be perfect, it just needs to be better than humans.


That's a failure of attention, not vision.

All the sources I'm able to find say there are no cameras in existence that are as good as human vision. Human vision is quite good and adaptive to real world conditions of all sorts.


Would you use the same excuse for calculators or medical devices?

The reason we invented machines in the first place is because they're significantly better and more reliable than humans.


Would be nice if someone was prototyping things like that. I'd imagine you could sell actual self-driving cars for $200k or maybe a bit more. (Cost of a decent luxury car + 1-2 years of a dedicated full-time chauffeur.)


It’s called Mercedes Benz. S class.

They go the boring path. Work together with regulators. Prove to them that whatever they are doing is actually safe to use. Don’t oversell to their customers.

They go the way of building and retaining trust - with customers and regulation bodies.


And also build a product which is far more expensive than what Tesla does, when you compare features of their E-variants. It's all tradeoffs.


Doing something cheaper than competitors is a good thing when you’re achieving the same goal. Doing something cheaper than competitors when there’s a trade-off to the buyer fills out options in the market. Doing something cheaper than competitors when there’s an externality (e.g. a self-driving car that fails to recognize pedestrians) is morally condemnable.



But human eyes often look away from the road, close during a sneeze, etc., and have a very narrow viewing angle compared to a car surrounded 360 degrees by cameras... so there are plusses and minuses.

Human vision isn't that perfect for driving when it's looking at a mobile phone.


Yeah it's shocking to me how many people overlook this. Even if we pretended that the Tesla sensor suite was capable of FSD, it's not FSD if you have to disengage when the lens gets mud on it. Sensor cleaning is an integral part of actually being able to have driverless operation. When I worked at Argo we spent a lot of time making sure that we were designing failsafe methods for detecting and dealing with obstructions (https://www.axios.com/2021/12/15/self-driving-cars-clean-dir...).


Also I think the biggest discovery is that the “brain” part of the human “eyes plus brain” part is extremely hard, and “sensor which can see depth” probably makes the brain part easier.

That said, Tesla was never in a position to use LiDAR because it has generally been extremely expensive. Solid state lidars are supposedly now hitting low volume testing for 2025 production years. Tesla is a mass manufacturer not a self driving start up, so there was never really an option for them to offer LiDAR without an extremely expensive self driving package.

One thing however that was obviously wrong was Elon’s promises, which were extremely misleading and helped build his fortune thanks to the misunderstanding. (Assuming this inflated stock values)

With solid state LiDAR supposedly becoming available for $500 in the next two years (a promise we have heard since 2016, but one that finally seems to be coming true), we may never find out whether Tesla could have done it with pure vision - they could go with solid state LiDAR for forward-facing driving in the next few years.

That said, over promising is going well for them. Perhaps they will just keep doing that.


If Tesla caves and ships LiDAR, I don’t see how they can get out of refunding all previously sold FSD packages. They can continue working on camera-only FSD but it will be immediately apparent that there’s a massive gulf in safety and performance between that and the LiDAR-equipped option.


To some extent Tesla is already going to need to resolve this hardware discrepancy issue. The new HW4 revision vehicles will have 4D imaging Radar and forward mounted side facing cameras (https://twitter.com/greentheonly/status/1625905220432671015?...) to address the blind spot that they've had since the original hardware (https://youtu.be/DlC2tpRocK8). HW3 vehicles cannot be retrofitted with HW4 (https://twitter.com/greentheonly/status/1625905186387505155?...) so this will likely be a large sore spot in the coming months as HW4 vehicles start hitting the roads and people start noticing the discrepancies.

Tesla of course claims that the HW3 cars will still get FSD at some point, but unless they somehow figure out how to bend light, that blind spot will continue to be an issue on older cars.


Tesla really needs to be made to pay billions for their FSD lies.


Turns out that the HW4 model Y doesn't have radar at all: https://www.notateslaapp.com/news/1456/tesla-s-model-y-with-...


That's not what the article you linked says at all. Some HW4 vehicles that are shipping now are shipping without Radar (likely due to supply chain or cost issues). But HW4 was designed with a Radar in mind, it's present in the code, registered with the FCC and there are physical connectors for it on the HW4 compute module. You can see photos of the interior of the Radar in the thread from Green that I linked earlier or in this article (https://www.teslarati.com/tesla-hardware-4-hd-radar-first-lo...)

The fact that some early HW4 units will ship with the updated camera suite but not the Radar only further adds to my point that some users are going to be left with inferior sensing systems despite having paid the same $10k for FSD as everyone else. The whole "radar will be used to train and improve the vision" argument is just nonsense made up by Elon & Tesla fans. A properly functioning radar/camera sensor fusion system will be superior in every way to a camera-only solution. And there will be zero Teslas that actually achieve "full self driving" (i.e. you going to sleep in the back seat and waking up at your destination) until Tesla adds things like a cleaning system to their existing cameras, for example. The hardware is simply inadequate.


I'm not putting into question the fact that having a radar is significantly superior to vision only, that's obvious.

I'm saying that, as we see, HW4 and radar are two distinct hardware configurations: saying that there will be a large sore spot because HW4 cars have radar is objectively false as not all of them do.


I’m super confused… you’re agreeing that having a radar is superior but you’re disagreeing that it’s going to be a sore spot? How is that possible. Regardless of how you name the configurations (HW4, HW4.5 whatever, the naming is irrelevant) the point is that there will be multiple configurations of sensing suites, some of which will be objectively better than others. That’s the sore spot.

The fact that HW4 and radar are separate configurations by name is not important. HW3 is also included in the mix (you can buy FSD on it) and it has totally different camera placement. So radar aside there’s still a difference in the sensor suite.

People who paid $10k and were promised “FSD” and future hardware upgrades have every right to be pissed off about this.


Well, it’s cheaper to retrofit a $500 LiDAR than it is to refund a $10,000 option. I’m curious how they handled upgrades to the more powerful computer they released. And we can expect they will release another more powerful computer again. The same issues would apply, but they have no choice but to deal with it if they ever want to ship a fully functional system.


Refits might cost less than a full $10,000 refund, but they're definitely going to be expensive enough I could imagine Tesla trying to do everything in their power to avoid.

Installing the LiDAR unit in the front bumper would require replacing the entire existing front bumpers since there's no handy fake grille or whatever you can pop out and replace. You'll also have to paint the bumpers to match and blend onto surrounding panels since these cars have been in the sun long enough that a fresh-from-the-factory bumper will have an obvious color mismatch. If you don't, you'll have pissed off car owners to deal with and, in all likelihood, another class-action suit.

A roof-mount avoids that, but has its own issues. You'll need techs to drill a freaking hole in the roof, mount the LiDAR unit, hope that they manage to seal the penetration well enough that water won't leak into the car, and then run cables through the interior. A whole hell of a lot of cars will leak, even if the work is performed to spec, so you'll have to extend warranty coverage to deal with it. Plus, you've now got an ugly box on the roof that may or may not match the paint properly, whereas presumably the new cars will at least integrate the unit's lines into the body panels so it's not quite as obvious. The same thing goes for sticking it on the hood. That's to say nothing of any hardware replacements and the fiddly bits that'll be necessary for refits.


> One thing however that was obviously wrong was Elon’s promises, which were extremely misleading and helped build his fortune thanks to the misunderstanding.

I'm wondering why there isn't a law firm that wants to make a fortune by starting a class action suit.


There's a new one like every month, did you google "class action tesla self driving"?


Why is LiDAR such a big deal? Our Honda Odyssey has had LiDAR for Lane Assist, Brake Assist, etc for a while.

The sensor costs $125 on eBay.


Your Odyssey has a radar component, not a LiDAR sensor. (Big gap in resolution.)


Okay, for some reason when I Googled it, the part was listed as a lidar sensor.

https://www.ebay.com/itm/175646053526?mkcid=16&mkevt=1&mkrid...

Honda says it’s millimeter wave.

https://techinfo.honda.com/rjanisis/pubs/web/docs/AJA15434.P...

Our Odyssey uses a combination of multiple radar sensors and a camera to provide excellent sensing. From what I have read, millimeter wave is best of both worlds between LiDAR and Radar.

https://www.engineering.com/ElectronicsDesign/ElectronicsDes...

LiDAR doesn’t seem practical.


The last link reads mostly like a press release from the manufacturer. I suspect there are reasons why it is inferior to LiDAR not listed on that page. LiDAR traditionally can be kind of impractical but solid state LiDAR will help a lot with that.


What does it cost at an official dealership? $1,250? Parts from scrapped (best case) or stolen (worst case) cars are not an indicator.


Prices at an official dealership are not really an indicator either.


If you actually try FSD beta you'll very quickly realize that the vast majority (over 95% probably) of disengagements are because the planner is dumb, not because of vision. In other words, on the screen it sees everything correctly, it just decides to do something dumb.

So currently the vision stack is not greatly holding them back.


Exactly, FSD Beta is currently driving me around 90% of the time, and none of the problems are sensor/vision related, but decision related.

Updates are happening around once a month and the decision making is getting noticeably better.


I think this misses the point.

The full system for humans is “vision + brain” and for self-driving its “sensors + planner”.

The Waymo/Cruise philosophy is that since we don’t know how to make the planner human-brain-level, we should shift as much of the load as possible to the sensors, where we have the ability to use things that humans don’t have, like lidar and radar.

To me, Tesla FSD going vision-only is a bet on the progress of AI planning models. If the planner reaches a human-equivalent level, then human-equivalent sensors are fine. Time will tell if this is a good bet, but so far it’s not.


This is 100% true. Lidar improves accuracy by millimeters up close, inches at 10-50 feet away, and feet beyond that. That accuracy is more than sufficient. Recognition and classification of objects is not improved at all (the part that matters). And, like the parent post said, Tesla classifies everything very, very well; the real issue is that the planner acts completely crazy all the time and is scary.


Presumably object classification is easier if you have a higher resolution image of the object.


There are two decisions when driving: go around an obstacle or stop. Even as a human I do not need super high resolution to identify the objects around me, as long as I can tell whether they are in my path or not.

While our eyes can do that pretty reliably, we are organic and get tired - how many hours can one drive until this becomes almost impossible? I had a situation where I started hallucinating, believed something was in the street, and did a full stop - nothing happened, but it was quite intense. Imagine the other way around - not stopping and hitting something.

A normal radar + some low-level ASIC programming would do that without getting tired. My Audi from 2014 is quite good at that and I actually rely on this feature all the time.


Humans have two eyes that move in sync and can measure distance using focus: they can both point at an object and autofocus to figure out how far that object is. I don't see Teslas using moving binocular cameras with tenth-of-a-second refocusing to judge distances, so I don't see how it is the same thing. Of course we can play GTA with just vision, but I'd argue the average person crashes in GTA more than they do in a real car.


> Of course we can play GTA with just vision, but I'd argue the average person crashes in GTA more than they do in a real car.

I’m fairly certain that people would drive more safely in GTA if their life was literally at stake.


I once barely survived a ride with a taxi driver in Tijuana who claimed to have learned to drive playing GTA. Based on his real-world driving it seemed plausible, although we did not have to visit a Pay'n'Spray at any point. I never got to see him play GTA, so I can't verify or refute your hypothesis.


Aren't the eyes basically always focused at infinity while driving?


That's due to the small viewing window, controller, and physics of GTA. In proper simulators with steering wheels and big screens they do fine.


> Elon and Andrej Karpathy argued that since humans can drive using just vision, that’s how we should do it in self driving cars

I thought their argument was a little more like “since roads are designed for human vision, we should take a vision-based approach, too”.

Not saying it’s the right idea, just that’s how I thought they had put it.


I never bought that argument, considering roads live in 3-dimensional space and our eyes and brain are constantly working to recover depth from 2D images. Seems like an extra hop that would be better cut out.


I agree with your central point but take issue with this characterisation of human vision. For people who have two functioning eyes, the perception of depth is baked in. Our subjective experience of a 2d image is an illusion. In fact, much of our vision isn’t quite what we think; for example, what we think we’re seeing in our peripheral vision may actually get filled in based on inference and prediction.

https://neurosciencenews.com/peripheral-vision-brain-illusio...


> For people who have two functioning eyes, the perception of depth is baked in.

Actually, my understanding is that the depth perception induced by binocular vision is relevant only within a relatively short range (like, single-digit number of feet away), which makes it relatively useless for long-distance depth perception needed for driving.


A bit longer than this-- 10 meters or so.

So it's not useless for e.g. pulling into a parking spot or steering around a close vehicle.

(I'm crosseyed and don't benefit from binocular depth cues. For the most part I do alright, though rarely I'm comically off when someone throws me a ball or I'm picking up something close to me).
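A back-of-the-envelope calculation supports that ~10 m figure. For a baseline B (interpupillary distance) and a smallest resolvable disparity δθ, the smallest depth difference you can distinguish at distance Z is roughly δθ·Z²/B. A sketch, using an assumed ~65 mm baseline and ~1 arcminute of practical stereoacuity (both illustrative values, not measurements):

```python
import math

BASELINE_M = 0.065  # assumed interpupillary distance (~65 mm)

def depth_resolution(z_m, stereoacuity_arcmin=1.0):
    """Smallest depth difference (m) distinguishable via binocular disparity
    at distance z_m, using delta_z ~= delta_theta * z^2 / baseline."""
    delta_theta = stereoacuity_arcmin * math.pi / (180 * 60)  # arcmin -> radians
    return delta_theta * z_m ** 2 / BASELINE_M

for z in (2, 10, 50):
    print(f"at {z:>2} m you can resolve ~{depth_resolution(z):.2f} m of depth")
```

Under those assumptions, at ~10 m the resolvable depth difference is already about half a metre, and at 50 m it is over 10 m, which is why stereopsis stops being a useful cue well before highway distances.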


I have one eye. Can confirm it's the same for me. Also always comically off with baseball and tennis. Pouring tea is also tricky for me unless I am holding both the pot and the cup.


Do you swivel your head like an owl to get parallax? I find if I do that in inclement weather I feel better.


Nah. Maybe sometimes I lean forward and back a little to judge something, but people with binocular vision do that too.

Most of our depth perception isn't from stereopsis or other binocular cues.


Swiveling doesn't really give parallax; you need lateral linear motion. Think of a soccer goalie getting into position.


Rotating your head will give some lateral motion (your eye moves in space since the centre of rotation is your neck). You can see this easily by turning your head while focusing on an object in the foreground, your perspective on it will shift.
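How much lateral motion a head turn buys you is simple trigonometry: the eye sits ahead of the neck's rotation axis, so rotating by θ shifts it sideways by roughly offset·sin(θ). A quick sketch (the ~10 cm eye-to-axis offset is an assumed, illustrative value):

```python
import math

EYE_OFFSET_M = 0.10  # assumed distance from the neck's rotation axis to the eyes

def lateral_shift(rotation_deg):
    """Sideways eye displacement (m) produced by rotating the head."""
    return EYE_OFFSET_M * math.sin(math.radians(rotation_deg))

print(f"30 deg of head turn shifts the eye ~{lateral_shift(30) * 100:.0f} cm sideways")
```

That's several centimetres of extra baseline, comparable to the interpupillary distance itself, so it does add a usable motion-parallax cue for nearby objects.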


Side to side motion is actually what I meant and what I do sometimes in poor weather driving or when I have branches up close and blocking my vision while trying to see beyond them.


Stereo cameras would allow a NN to do a similar thing.


It's difficult to accept Elon and Andrej's reasoning. I suspect that if one asked a team of engineers to research designing a self-driving car, the team wouldn't come back with the argument to use vision because "humans can do it with eyes." I'd expect a list of options with the pros and cons of each approach, along with an estimated timeline and cost.


More like: since lidars cost $100,000 (at the time), we can't sell that, but we can say our advanced, any-day-now vaporware means we don't need lidar.

Lidar has since dropped in price by a lot, an order of magnitude or more.


That doesn't nullify their argument, though. If Lidar were free, it doesn't mean you need to have Lidar to achieve the same level of performance as humans, and having Lidar doesn't mean the data is so clean that the decision-making aspect of self-driving becomes solvable in a weekend.


There are still two salient points there:

- why would the goal be "the same level of performance as humans"?

For context, some towns are actively removing cars from whole areas not just for pollution impact but also for pedestrian safety. Moral issues aside, the status quo is just not enough, it needs to be way better.

- achieving the same level as humans being possible in theory doesn't mean we'll get there in practice.

Having enough hardware to realize something doesn't help if the software is not up to the task. And assuming they "just" solve the software issue could be like assuming 18th century people would "just" discover relativity.

Software becoming as good as human in video processing just feels like a "general AI is around the corner" kind of expectation.


> Having enough hardware to realize something doesn't help if the software is not up to the task. And assuming they "just" solve the software issue could be like assuming 18th century people would "just" discover relativity.

This is what I mean with the last part of my argument. Lidar is supposedly an extremely thorough 3d depth map hopefully capturing at hundreds or thousands of FPS. But even if you have this data, the actual bulk of the problem with current self-driving isn't solved, that being the "business logic" for how to navigate the world smoothly and efficiently and to 'communicate' with other road users.


One tech is currently driving passengers around commercially without drivers and the other isn't.


I'm pretty sure their argument is "this will be cheaper, and therefore more profitable."


There is another argument: this will be cheaper to produce, and therefore cheaper at retail for the same margins, and therefore will sell many more instances, and therefore will save many more lives.

It's possible that the richest person on Earth is more concerned with doing good slash achieving his goals vs obtaining more currency/profit, which it would seem would have little to no marginal utility to him.


Maybe he could have spent a little more time making sure the darn things actually work then.


I am pretty sure that Cruise, Waymo, and Tesla are each and all doing everything they can to "make sure the darn things actually work". It's literally an existential crisis for them if they do not.

They are all, in the terminology, presently "default dead" until they figure it out.


Waymo figured it out though—I've taken several driverless rides with them.


Elon Musk was actually passed by Bernard Arnault (LVMH) this year.


Not anymore!


That’s like saying “if cooking is just following directions in a recipe, we should just follow directions in a recipe.”

The result is subpar food because most recipes have a 1% problem called “seasoning”.

The “seasoning” of driving — the completely unpredictable 1% of situations you find yourself in behind the wheel, where you just have to draw on intuition and gut instinct — is the reason we need nothing short of AGI for _completely_ self-driving vehicles.

I do think, though, trucking is ripe for AI disruption.


Humans also use inertial sensation, vibration, force feedback through the steering wheel, and their “cameras” are constantly changing their focus and position in space to construct a rich context of the environment.

Now, all of these things except the last one have some representation through an electronic or electromechanical sensor, but gluing them all together into what it takes to deal effectively with the intersection of vehicle dynamics and environmental dynamics is very hard.


Humans also don’t have 8 eyes facing every direction at all times. They also get drunk/tired/impatient/angry etc. The reality is the entire argument is silly: the two are very different, and the Musk/Karpathy argument is misrepresented here. Saying humans only use vision was a response to “it’s not possible with only vision”, not a statement that human vision is good enough and there’s no need to do better.

The 8-camera surround is leaps better than human vision. Where the cars lack is processing the signal; the human brain does that better. But if you have better inputs (we already do) and you believe you can one day match the processing part, you’ll one day get a much better result — one that’s suited to the vision-based roads we have now and scales to literally anywhere, not geo-constrained like Waymo.


You really underestimate the quality of our eyes. The dynamic range is astounding and I'm pretty sure the cameras used by Tesla don't come close.


> They also get drunk/tired/impatient/angry

Indeed, but humans also have an incentive to drive well, embodied by local traffic police and local laws, and even before passing their driving test they're made aware of the penalties for not driving well (which, let's remind ourselves, range from "mild ticking off"/"pay $$$" through "forfeit driving licence for a time" all the way to "forfeit liberty for a time")

Where are these incentives for self-driving algorithms?

If your algo breaks the law to a sufficient level, is someone(something?) prevented from driving for a time? Is that really going to be just that one vehicle, or should it be all vehicles with that same algo? If something really bad happens, who is charged; in the worst case, who might end up going to jail?

We all know CEOs tend to believe "this time it's different", that they're special, and that the annoying rulebook is to be viewed as guidance at best. VW/Martin Winterkorn, anyone?


> Where are these incentives for self-driving algorithms?

Surely the equivalent is the reward during training?

> If your algo breaks the law to a sufficient level, is someone(something?) prevented from driving for a time? Is that really going to be just that one vehicle, or should it be all vehicles with that same algo? If something really bad happens, who is charged; in the worst case, who might end up going to jail?

Personal opinion:

Algorithm should learn from fleet and should be shared by fleet; therefore all accidents should be treated like aircraft crashes and investigated extremely thoroughly with a goal of eliminating root cause.

If that cause was CEO demanding corners be cut to boost shareholder value then jail them; if it's that the algorithm had, say, never seen a flying shark drone[0] before, and misclassified it as a something it needed to take evasive manoeuvres to avoid and that led to a crash, then perhaps not (except anything I suggest probably should be in their list of things to check for, so even then perhaps it would still be a CEO-at-fault example…)

[0] https://www.amazon.com/RiToEasysports-Control-Inflated-Infla...


> Surely the equivalent is the reward during training?

Surely the counter-example is a self-driving vehicle driving straight into a stationary fire truck?[0]

If a human driver did this more than once (and lived to tell the tale!) - yet had no explanation other than "Of course I saw it, but I wasn't sure what it was and didn't realise I needed to avoid hitting it <shrug>" - wouldn't they lose their driving licence fairly quickly?

[0] https://www.google.com/search?q=tesla+stationary+fire+truck


That's not a useful counter example.

You asked for the incentives for AI; the equivalent isn't the same as for humans.

The nature of the AI doesn't include a concept of prison or licensing, so it can't be threatened with it, for the same reason I can't threaten a human driver with Af'nek-leigh D'Och entRah'negh.

I can however 'punish' (air-quotes necessary because it might not feel like anything) an AI by altering the weights and biases of its network — once done, it then thinks differently.

Don't anthropomorphise it, that's a category error.

Also, the field of "how does it even?" is tiny, which is itself a reason to not grant them control of vehicles, but that's a separate issue.


> You asked for the incentives for AI; the equivalent isn't the same as for humans. The nature of the AI doesn't include a concept of prison or licensing, so it can't be threatened with it [..]

There certainly should be incentives for the humans creating an AI, though.

> Don't anthropomorphise it, that's a category error.

Volkswagen [human!] engineers created the illegal defeat devices in Dieselgate, under the supervision of their [human!] managers. The device is illegal, we punish the humans in charge when laws are broken, not the devices themselves. It should be the same with AI.

If this means software engineering becomes a field where you need mandatory liability insurance to work on AI, is that a bad thing?

In the glorious words of Stelios Haji-Ioannou, "If you think safety is expensive, try [having] an accident"


A camera that is actually better than the human eye is pretty difficult to find; they cost around ~$2,000 each, and even then you'll have worse peak resolution in the day and worse motion characteristics at night. Human eyes are pretty good!


Their logic is on the same level as arguing that birds fly with just wings, so no idea why we’re playing with those silly engines.


Which isn’t a dumb argument at all. It’s what got us flying in the first place, thanks to a brave German engineer called Lilienthal.


And? Camera only systems got us first lane centering technologies.


I am sympathetic to this view (I would really love to see just how safe it’s possible to get), but I think the Musk/Karpathy-style argument for vision-only self-driving is quite strong, and it only seems flawed because it has been incorrectly simplified as “humans do driving with ~only vision -> computers should do driving with only vision”.

The proper argument is “humans do driving with ~only vision -> roads are therefore universally designed and built to be driven by vision -> computers should do driving with only vision”. It is essentially a standards-based argument: since vision is the universal standard for driving, computers must be able to drive using just vision.

So vision is always going to be the core of self-driving. Why not augment with LIDAR anyway?

Well, consider the four cases: where vision and LIDAR are both right, you didn’t need LIDAR; where vision is right and LIDAR is wrong, you didn’t need LIDAR and it potentially made you worse off; where vision is wrong and LIDAR is right, you need to spend more on improving your vision; and where both are wrong, you need to spend more on improving both, but improving vision is the higher priority. These are all the possible outcomes and none of them makes a compelling case for investing in LIDAR.


> I think the Musk/Karpathy-style argument for vision-only self-driving is quite strong

> humans do driving with ~only vision -> roads are therefore universally designed and built to be driven by vision -> computers should do driving with only vision

What is 'should' - is it a moral imperative? A social obligation? Who made this argument, a Catholic priest?

Where is the consideration of this argument from an engineering perspective - an analysis of advantages and disadvantages, a consideration of cost-benefit? Where is the assessment that, for example, 50% of human crashes are due to poor visibility or spatial awareness, and a comparison of how well a computer handles them?

If I posted this vacuous, unsupported argument here, I would be laughed at, and rightly so.

But if Elon announces something, there is always 10% of the population willing to defend it, no matter how dumb it is.


I don't get it. Do you have any experience designing navigation systems? In stereo vision systems? In computer vision? Or is this just a "Musk bad, therefore his idea is bad" counter-reaction to what he said?

Karpathy is one of the world's top self driving engineers. This isn't a vacuous argument. People are driving with just vision every single day. The part we're missing is the ChatGPT moment on the computational side.


I have produced 3D maps with lidars, drone mounted near-infrared cameras and with thermal infrared cameras.

You can tell apart grass and green carpet with a simple formula. You can count trees without machine learning. You can detect which plants are wilting, and tell land that is wet from land that is dry. All of that is easy with the right sensors - because they have more data than an RGB camera can produce.

I know people who work with multispectral imagery; they can tell you that pixel N45 contains a specific substance - concrete, steel or wood - just from the spectra alone. They don't need to know what the pixels around it are showing, or classify objects.
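As a concrete illustration of that "simple formula" point: the classic NDVI index separates live vegetation from green-dyed materials, because chlorophyll reflects strongly in near-infrared, which RGB cameras never see. The reflectance values below are made up for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Illustrative reflectances: grass and green carpet look alike in RGB,
# but healthy vegetation reflects ~50% of NIR while dyed fabric does not.
grass = ndvi(nir=0.50, red=0.08)
carpet = ndvi(nir=0.12, red=0.08)
print(f"grass NDVI ~{grass:.2f}, green carpet NDVI ~{carpet:.2f}")
```

A fixed threshold on one extra band does what an RGB classifier would need a trained model for.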


Agreed, I have a similar background with both LiDAR and vision for 3d reconstruction and mapping systems, plus I've designed some fairly impactful commercial multispectral software which is now widely used in the agricultural space. And vision can give you perfectly sufficient data to build world models and to localise yourself rapidly and robustly. What I believe is missing on the Tesla side is primarily on the navigation and 'social interaction' component of driving.

It's not like Waymo dropped a LiDAR onto the roofs of their vehicles and started driving unsupervised in traffic the next day. Nor Cruise, nor Uber. The sensing is just a small part of the whole system.


"It is difficult to get a man to understand something, when his salary depends on his not understanding it" seems apt here regarding both Karpathy and Elon. You can call me skeptical, but when there are millions and billions of dollars on the line for the two respectively, I don't know if I believe in Karpathy's expertise (which is in AI, not self-driving per se) and personal integrity sufficiently to believe he is doing what he considers to be the right thing vs. putting profit ahead of human lives.


Do you have any expertise on self driving, remote sensing, computer vision, navigation systems or anything else on this topic? Do you have a Tesla with the FSD package and participate in the beta program?

From what I've heard firsthand Autopilot still steadily improves, irrespective of what people say about their favourite sensing modalities...


radar is not lidar and is present on lots of vehicles that do L2/L3 driving except newer Tesla. optical sensors do not inherently tell you distance as a function of their sensing, whereas radar does.

a vision only approach _may_ be possible at some time, but only with a strong computational model of the human brain and thought process.

also, most people drive poorly— i wouldn’t say vision is the be-all-end-all of autonomous driving. it’s also clear that waymo and cruise have taken a full sensor based approach and are successful, whereas tesla is not.


I originally had “radar/LIDAR” everywhere you see “LIDAR” in that comment but it got really unwieldy halfway through. I think what I said generalizes from the specific example of LIDAR to other forms of sensing pretty well anyway, so you can just sub in radar if you want. The general principle is “vision” (in the sense of cameras feeding 2D image data into something that is probably a neural network) vs “everything else”. I would have said cameras vs sensors but some of the sensors use the visible light spectrum and so their sensors are called cameras. I like your use of “optical”, that might be the cleanest way to point at what I meant.

I broadly agree with your second point, about vision-only presenting big computational challenges. I think you do get some easy wins that bring down the challenge a bit - e.g. you don’t need to model human brains, you just need to model whatever the brain is doing when it’s driving; also the fact that we can teach people to drive without understanding what their brain is doing is a reassurance that we can teach a neural network to drive without understanding what it is doing either, so it frees us from (some) of the modeling of thought processes as well. But it is still a big computational challenge. I heard that Tesla has a server farm with thousands of Nvidia A100s, if true, that could make a dent in the problem for sure.

And yeah, I also wouldn’t say vision is the be-all and end-all when it comes to driving. (It’s a pity that we can’t easily integrate LiDAR, radar, and other sensors into the human brain so we could use them like we do sight and sound in order to drive better.)

My point is more that roads come in all shapes and types and sizes, but one consistent thing about them is that they’re all designed so that humans can use vision to drive on them. Like, you don’t know if future roads/signs/cars will be built in ways that are hard to read with LiDAR, but you can be pretty confident they won’t be built to be hard to see. Road builders, car makers - everyone else involved in the driving industry is designing for vision. It’s implicit, and it’s aimed at human vision, but it’s one of the few universal constraints on driving.

That’s what I mean when I say it’s a standards-based argument, that vision is sort of a “universal interface” for roads. Another “universal interface” for roads might be wheels (with traction), or more specifically tires. You don’t need to have rubber tires, or even wheels at all, to drive on roads - but if you do have tires, you can pretty confident that you can drive on pretty much any road you come across.


This is a compelling argument at the surface level (that roads are designed for humans with vision) that quickly breaks down when you examine how Tesla constructs their self-driving system.

Quick disclaimer that this doesn't reflect the views of my employer, nor does any of what I'm saying about self-driving software apply specifically to our system. Rather I am making broad generalizations about robotics systems in general, and about Tesla's system in particular based on their own Autonomy Day presentations.

When you drive on the road as a human, you rely a lot more on intuition and feel than exact measurements. This is exactly the opposite of how a self-driving car works. Modern robotics systems work by detecting every relevant actor in the scene (vehicles, cyclists, pedestrians etc.), measuring their exact size and velocity, predicting their future trajectories, and then making a centimeter level plan of where to move. And they do all of this 10s of times per second. It's this precision that we rely on when we make claims about how AVs are safer drivers than humans. To improve performance in a system like this, you need better more accurate measurements, better predictions and better plans. Every centimeter of accuracy is important.

By contrast, when you drive as a human it really is as simple as "images in, steering angle out". You just eyeball (pun intended) the rest. At no point in time can you look at the car in the lane next to you and tell its exact dimensions or velocity.

Now perhaps with millions of Nvidia A100s we could try to get to a system that's just "images in, steering angle out" but so far that has proven to be a pipe dream. The best research in the area doesn't even begin to approach the performance that we're able to get with our more classical robotics stack described above, and even Tesla isn't trying to end-to-end learn it all.

That isn't to say it's impossible (obviously, humans do it) but I think one could make a strong argument that "images in, steering angle out" is like epsilon close to just solving the problem of AGI, and perhaps even a million A100s wouldn't cut it ;)
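The detect-measure-predict-plan loop described above can be caricatured in a few lines. This is a deliberately toy sketch (constant-velocity prediction, a fixed clearance, invented names throughout), nothing like a production stack:

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """A detected actor with an estimated position and velocity (metres, m/s)."""
    x: float
    y: float
    vx: float
    vy: float

def predict(track, horizon_s=3.0, dt=0.1):
    """Constant-velocity rollout of where the actor will be over the horizon."""
    n = int(horizon_s / dt)
    return [(track.x + track.vx * dt * i, track.y + track.vy * dt * i)
            for i in range(1, n + 1)]

def plan(ego_waypoints, tracks, clearance_m=1.0):
    """Follow the nominal path until a waypoint comes too close to any
    predicted actor position; a real planner would replan, not just stop."""
    predictions = [predict(t) for t in tracks]
    safe = []
    for wx, wy in ego_waypoints:
        if any(math.hypot(wx - px, wy - py) < clearance_m
               for traj in predictions for px, py in traj):
            break
        safe.append((wx, wy))
    return safe

# A stopped actor 5 m ahead truncates a straight 30 m path.
path = [(0.0, float(i)) for i in range(1, 31)]
print(len(plan(path, [Track(x=0.0, y=5.0, vx=0.0, vy=0.0)])))
```

Note how every stage consumes explicit measurements; that is the sense in which sensor accuracy feeds directly into plan quality here, in a way it doesn't for a human eyeballing the scene.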


That's not really true. Humans, at critical moments, do make implicit and even explicit plans of movement and follow them. We don't use literal velocity measurements for other objects, true, but in making those plans we do sometimes anticipate their locations at various points in the future, which is really what matters.

The best human drivers do this not at the centimeter but at the millimeter level. Look at downhill (motor)bike racing, Formula 1, WRC, etc. These drivers can execute millimeter-accurate maneuvers that are planned well in advance at over 100 km/h.


Yeah that's kind of what I was trying to say. You're right in that we predict the actions of others, but we don't do it in the same way. Even when we execute millimeter level maneuvers, we aren't explicitly measuring anything... Like if you were to ask a driver for instructions on how to repeat that maneuver they wouldn't be able to tell you, they just have a "feel" for it.

Basically humans are really really good at guesstimating with great accuracy (but poor reproducibility) and since we don't use basic measurements in the first place, having better measurement accuracy wouldn't really help us be better drivers on average (it does help for certain scenarios like parking though, where knowing the # of inches remaining to an obstacle can be very useful).

But for everyday driving at speed, we wouldn't even be able to process measurements in real time even if someone was providing them to us. AVs are different and that's basically the gist of what I was trying to say. Because they actually do use, rely on, and process measurements in real time, improving their measurement accuracy (ie. switching from camera based approximate depth, to cm level accurate depth from a LiDAR) can have a meaningful impact on the final performance of the system.


Current generation deep learning systems rely on supervised learning, which requires human labeling.

This is one argument for vision alone. It’s easier for humans to teach the deep neural network what to do if they both see and label the same thing.

It’s harder to build labeling systems that work on representations that humans don’t understand like point clouds or noisy depth maps.

That isn’t to say that other sensors including radar, gps, LiDAR, other spectrum, etc don’t help.

But you have to develop more complex labeling methods or move away from supervised learning.
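One way to see the labelling-cost asymmetry: a 2D camera box is four numbers a labeller can draw in seconds, while a 3D cuboid in a point cloud needs position, extent, and heading placed in a space humans don't naturally perceive. A toy sketch (field layouts invented for illustration):

```python
from dataclasses import dataclass, fields

@dataclass
class Box2D:
    """What a labeller draws on a camera image: one rectangle."""
    x: float
    y: float
    w: float
    h: float

@dataclass
class Cuboid3D:
    """What a labeller must place in a lidar point cloud."""
    x: float
    y: float
    z: float
    length: float
    width: float
    height: float
    yaw: float  # heading has to be judged from sparse points

print(len(fields(Box2D)), "vs", len(fields(Cuboid3D)), "degrees of freedom per label")
```

The tooling gap compounds this: 2D boxes can be drawn in any browser, while 3D annotation needs a navigable point-cloud viewer.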


This doesn't sound like much of a barrier to me. If you're a human training the LiDAR system, couldn't you just consult the image or video to help label whatever the LiDAR is seeing?


Here are some examples: https://github.com/songanz/3D-LiDAR-annotator/blob/master/an...

https://dataloop.ai/platform/lidar/

A good supervised learning process requires teaching humans to label consistently.

Imagine trying to write down precise instructions to train hundreds or thousands of humans to label many different types of objects using a tool like the above. Now hire, train, and manage those humans.

Compare that to having the humans draw rectangles around 2d color pictures of cars.

Also note that such tools need to be built and improved.


Is it possible to transfer learning from vision to LIDAR? Maybe if it's possible to map visual images to LIDAR images and vice-versa (by running a car with both cameras and LIDAR and learning their associations)


Probably everybody does it for themselves. Tesla tries to do vision-only FSD, while Waymo, I guess, does what you suggest. However, seeing with lidar is not like vision-only. Maybe at some point they'll share their code as OSS - probably not, or very late.


Isn't the typical training data used in self-driving basically things like object labeling/segmentation and motion prediction? I'm not sure why that would be significantly different for visual vs depth-map data.


> Elon and Andrej Karpathy argued that since humans can drive using just vision

Is this maliciously specious or am I missing something? I drive using vision plus decades of life experience and all the tacit knowledge, judgement, and reasoning ability that comes from that. We have not reproduced any of that with math, and getting/stalling 90% of the way there with mimicry is not good enough.


I don't think it's super surprising that someone who sells a product called "Full Self Driving" that isn't fully self driving would also happen to lack rigor in their scientific claims.


It's obvious that they're talking about the input ("sensors") humans use and not presenting an exhaustive list of the things required to drive.


Even just comparing sensors, the mediocre cameras Tesla uses are absolutely pathetic compared to the ability of a human eye. And I'm guessing it'll be some years yet before we have some kind of parity there. Not to mention the computing power to process the data.


In some senses they are far superior to the human eye, peripheral vision for example.


It has more to do with training a neural network. The environmental cues for driving on a road are optimized for human vision. This becomes important when you think about it from a machine learning perspective. Fewer inputs are better for many reasons. Non-visual inputs will sometimes be in disagreement with the visual inputs during training, which leads to a worse model.

If everyone could take some deep breaths and press pause on their emotional response to Elon Musk (and not assume everyone who happens to agree with him has a Musk tattoo), they would find plenty of rational arguments from an engineering perspective.


I'm still annoyed they got rid of the ultrasonic sensors on the latest model. I test drove a 2022 (or maybe a 2021) and the park assist was pretty good. And then I get the 2023 delivered and there's no park assist for the first 6 months because they removed the ultrasonics but the vision-only software wasn't ready yet. A vision-based park assist finally came in via an OTA update but it's nowhere near as precise as the ultrasonic version was. Like, the estimates of how much distance is in front of me seem to jump around a lot more than I'm actually moving, and it sometimes reports it's degraded when trying to pull out of a tight spot.


Tesla drivers are the product, they train the machine and put up with garbage so that Elon can invest in his future profits. Self selected clones.


Humans not only use vision, they use sensor fusion. Combining what you see, hear, touch, etc. Your body can perceive acceleration, for example.

On top of that, you have theory of mind. For example, you have 4 cars next to you, all of them with opaque windows that do not let you see the driver:

A) a loud sports car with a bunch of modifications, decals and racing related stuff

B) a grandma car with cat related stickers

C) an unmaintained car with collision damage, and loud music coming from it

D) a family station wagon with a baby on board sticker and other family related stuff

Your mind will process what it sees and quickly assign each of those cars a different personality. A and C will likely be perceived as riskier cars; B and D as safer ones. You will avoid A and C and remain unconcerned about B and D.

The problem with the self-driving cars right now is that they only perceive the road as bodies that move.


Great point. We use far more than just vision.

I just imagined myself driving without sound. That seems crazy to me. I need to hear cars, kids playing, etc.

And you're right that we subconsciously assign risk values to each car. A heavily modified BMW with decals? Could be an irresponsible young male adult trying to show off on the road. I should probably be prepared to brake or let him go first.


And assumptions like that are probably a great source of accidents; our minds need to take shortcuts like that and aren't always right. Grandma's car got sold last week, and now daddy is alone in the car, late for work, and racing to get in front of you.

I'd rather have a computer keep track of everybody just the same but with millisecond reaction to all changes. Something that I can't do lacking eyes all around and processing power.


Those shortcuts are the reason you exist. Without them, your ancestors would have been eaten.


Reminds me when I drove a cab in my early 20s. I had a psychological bead on other drivers, I knew what they were going to do before they knew it. (And to some extent I still do, I just try not to be a dick about it.)

AI will probably eventually be very good at picking up these behavioral tendencies. Or at least better than the people who aren't driving all day.


Humans use a lot of reasoning though. I once saw a guy approaching an intersection where he had the red light and from far away I could see he was jamming out to the music and in his own world. I didn't go on green (getting honked at by the guy behind me) and watched while the guy blew straight through the red light and slammed on the brakes when he was almost fully through it.


Will you also tell us a time where you made a careless mistake that a self-driving car would not make?


It's fairly obvious that cars cannot think at the level of humans and are at a disadvantage sometimes. We also don't need fully autonomous driving to prevent careless mistakes.


I’m not saying I’m better than a self driving car… I’m saying that using vision alone for cost cutting is short sighted and might kill the industry.


The argument is more complicated than Elon makes it out to be. "Humans can drive with vision alone => computers can drive with vision alone" implies computers can do anything the human brain can do. It's not a given, and it's certainly not true for the compute power in a Tesla. It's completely possible that all of the following are true:

* Humans can drive with vision alone

* A sufficiently advanced compute system can drive with cameras alone

* Tesla's FSD computer cannot match human performance without additional sensors


I've watched FSD videos quite a lot, and I've never seen an issue related to sensing / visual detection. It always draws the road and other cars with decent accuracy. All issues happen in the planner, i.e. the car makes a wrong decision, not because it can't see something, but because it doesn't know what to do in that situation.

The hard problems in autonomous driving are not related to sensing, but deciding what to do in weird or complex situations.

See this video for an example of a typical problem in recent FSD (and you can see from the screen that it's not related to sensing / detection): https://youtu.be/eY3z1kgX5hY?t=74


It has trouble detecting speed bumps and potholes. I'm not sure if this is because the vision sensors fail, or because they have not been programmed to detect/display these features properly. Whatever the reason, the planner then accelerates right into them.


Sensor fusion is actually a hard problem. Yes, more different kinds of sensors can lead to poorer results. Imagine having two different views of the world at unsynced points in time, and making decisions on that. It isn't weird that there might be a focus on LIDAR or vision, but not both at the same time, at least for real-time decision making.
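One concrete version of the unsynced-sensors problem: a radar track and a camera frame almost never share a timestamp, so before fusing them you have to estimate where the radar target was at the camera's capture time. A crude linear-interpolation sketch, with toy numbers (this is not any production approach):

```python
def interpolate_range(t, samples):
    """Estimate a target's range at time t from timestamped radar samples.
    samples: sorted list of (time_s, range_m) tuples."""
    for (t0, r0), (t1, r1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return r0 + w * (r1 - r0)
    raise ValueError("t outside the sampled interval")

# Radar reports at 10 Hz; the camera frame to fuse with arrived at
# t = 0.13 s, between two radar samples:
radar = [(0.10, 50.0), (0.20, 48.0)]  # (seconds, range in meters)
print(round(interpolate_range(0.13, radar), 2))  # 49.4
```

Even this toy version hints at the failure modes: clocks that drift, targets that maneuver between samples, and no valid answer outside the sampled window.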


I don't have to imagine this. My brain is doing sensor fusion every moment of every day. A lot of the time that involves conflicting data, and your brain has to decide on the most reasonable interpretation. When it's not at its best you get things like optical illusions, nausea, etc.

It's a hard problem to solve, but that doesn't mean it's a bad idea. I think most people would agree that human beings are better off with the overlapping set of sensors that are available to us compared to the alternative.


Google did it, so that argument is moot.


That is not how arguments work at all.

I work for Google and I like what Waymo has accomplished (disclaimer: what I work on is nowhere near Waymo). Tesla is making a different bet with different engineering resources. And why would Google give up their secret sauce to Tesla?


Cannot agree more. That argument was not just flawed but outright false. We use at least hearing to help us drive. I'm not sure I'm alone in this: after closing all the windows, I feel a huge difference in the car, as if I've been disconnected from the outside.

The thing is, Elon has lots of fans.


This argument is really dumb. I understand deaf people can drive, too, but the additional auditory input is very helpful. When an emergency vehicle is far behind me, I hear the sirens and know to look in the mirror and move right to let it pass. I can hear the bells of a railroad crossing even when I cannot see its lights blinking due to the road's curvature, and start reducing my speed so I approach it smoothly. At some stupid junctions, like the one described below, I hear a car approaching well before I can see it.

That said, there is no point making a self-driving rig if it's only going to be as good as the best humans are. For adoption, it must be provably better in any situation imaginable: moving obstacles, weather, dust, hard-to-see drunk humans in the night, emergency vehicles, no lanes drawn on the road, reading all road signs correctly (my Yaris, say, gives mistaken readings where maximum mass is confused with speed, doesn't know what a “built-up” area means with regard to speed limits, and sometimes reads a sign that belongs to an adjacent road). For it all to work, you need more input than vision. And redundancy. Lots of redundancy.


> since humans can drive using just vision, that’s how we should do it in self driving cars

Not the strongest of arguments.

“since humans move around on foot, that’s how we should design machines that help us move stuff around”

“Since humans do long-distance communication by shouting, hand or smoke signals, that’s how we should do it in transatlantic communication”

“since birds can fly using wings, that’s how we should do it in machines that fly”


their argument is specious; i suspect the actual reason is “lidar/other sensor system components are expensive or hard to acquire”.


This is a mischaracterization of what he is saying. Humans are unable to drive without vision. Even Chris Urmson agrees that LiDAR is a crutch that measures distance as opposed to computing it (albeit it only works in perfect weather). Musk is saying that if you are going to use a sensor to measure distance, don’t use photons in the visible light spectrum. Instead use photons that can penetrate objects (microwaves) to capture information that you wouldn’t have. The challenge is building a high-precision RADAR, which Tesla is attempting to do. Some HW4 vehicles have it but it’s unclear whether it is required or not. Ultimately, building a L4 AV requires solving very difficult problems in computer vision, which is exactly what they are trying to do


Musk did not publicly endorse high resolution radar until recently, and using high resolution radar is not a unique feature of Teslas. Waymo currently has 5 high resolution radars on their cars.


Musk endorsed high resolution radar as Tesla was removing the existing radar. https://electrek.co/2022/06/08/tesla-files-use-new-radar-con...

Given what is public now, I'd speculate that perhaps they were working on replacing radar all this time. Then they used vision as an excuse to dump the existing radar early when there was a parts shortage.

I see Musk acting a lot like Jobs in certain ways. They were both very cagey about the future direction of their products. Although Musk didn't exactly backtrack on radar, he wasn't transparent about Tesla's plans either. You don't telegraph your moves to your competitors. Similarly, I'd be very surprised if Tesla wasn't keeping an eye out for lidar crossing a specific cost/function threshold. They're not saying that aloud, though.


This is anachronistic. Musk was making the vision argument to claim that Teslas sold at the time (which lacked any kind of LIDAR or radar) had all of the hardware they needed to do FSD, which was sold as "just a few months away via software update". I believe that they still make this claim officially, since it's important to deny that they did false advertising on this front.


How is this anachronistic? Here is a clip from Dec 2020 with Musk making the claims about high resolution radar (skip to the last minute):

https://youtu.be/BFdWsJs6z4c

Tesla has been working on building a high res radar since 2018 and have yet to officially announce anything. Tesla dropped radar in new vehicles in mid-2021.


Musk has been claiming since 2018 that Teslas sold at the time have all the hardware needed to be fully independent driverless taxis. He made this claim very very clearly in conferences and marketing materials. And Teslas sold at the time only had cameras.

That he later may have changed his tune is probably true, as reality does eventually catch up even with someone like him.


It is self-evident that you only need vision to drive. The question is whether you can make a computer smart enough to leverage vision to do it.


It is self-evident that you only need legs to move around and that gears are just extra weight. Why do cars have gears?


Why is it self-evident? If your argument is that people can drive with only vision, let me stop you right there and point to the fact that people are terrible drivers, and the bill is millions of injured and dead every year.

The main argument after the collision? "I didn't see them!"


No.

You listen to the road condition.

You sense the road bumping.

You feel the acceleration.

Your eyes have better dynamic range.


We aren't arguing about easy stuff, we are arguing about radar / lidar being necessary.


I think the 'proper' approach is to use whatever is economically viable to get to the desired result as quickly as possible.

Ultimately the market will only ask two things: Does it work? And, how much does it cost?

No-one cares that "Ah but ours only uses vision".


It's smart if you realize that they never had a choice. No matter what Musk says, they could do vision-only or nothing at all. Google started their program in 2009 and had a pile of cash. Tesla started their program in 2015 ish and until 2019 they were in a precarious financial position. So they never had the money or time to take Google head on. With vision, they could at least use their position to their advantage.

And it's a good bet. There's tons more to self-driving than perceiving the world (lidar does not tell a driver what to do in a novel and ambiguous situation; it's not a perception problem), and Tesla's vision is quite good based on all the FSD videos on YouTube.


Forgive me if this seems like a knee-jerk response, but literally zero human drivers use "vision alone" to drive. Humans drive with a spatio-temporal model of the world; visual, auditory, and haptic feedback; logical/symbolic rules about driving norms; emulative models of other drivers/agents in the driving environment; ethical judgments about what it is ok/not ok to collide with... and so on and so forth.

Reducing that to "vision" doesn't even make superficial sense.


I always wondered, and don't remember seeing it specified one way or another, do self-driving AIs use any sort of temporal modelling for the environment?

A model which can use predictive behaviour for the objects that the visual part detects seems orders of magnitude better than one that just does visual detection from scratch. Seems like a huge wasted opportunity to have to model the world starting from zero for every frame they receive from the cameras.

Objects that pop in and out of the field of vision due to occlusion or other reasons seem to trip up Teslas (in at least some of the reported incidents), but even so, it's hard to believe that they didn't implement such an obvious improvement.


That's not the full argument, actually. The argument is:

You’ll need to solve vision anyway. Because you need to know what the object is. Is it a trash can or is it a dog? Will it move or stay?

If you had LiDAR, you’d still need sensor fusion with a camera to answer those questions, introducing more problems.

That’s why Elon says that any company that relies on LiDAR is doomed. LiDAR isn’t the gold standard - it’s the low hanging fruit.


It remains to be seen: Musk is making a bet that Tesla could win big or lose here. It is an interesting bet to be sure, but Waymo's bet is as well (sensor fusion, and LIDAR will become affordable over time). I like that we have some diversity in the bets being made at least.


> Elon and Andrej Karpathy argued that since humans can drive using just vision, that’s how we should do it in self driving cars, but I think that’s a flawed argument.

Of course. The real argument is "LIDAR is expensive in comparison, and after scamming people for almost a decade, we have to be careful what kind of money we ask for". LIDAR was never considered overkill by Elon.


That was just their reply to “is it even possible?”. Their argument is that vision actually works better than sensors. The signal processing data it receives is higher quality. You just have to know how to process it like the human brain does. It’s a harder solution, but if solved a better one


> The signal processing data it receives is higher quality... It’s a harder solution, but if solved a better one

What is the basis for this claim? Have you seen the kind of data other sensors provide?

Radar gives you the exact distance and velocity of a vehicle that's hundreds of meters away, through fog, in the dark.

Cameras can't even give you distance for objects that are too far away


Humans also use sound as in tires squealing, something going "thump, thump, thump", someone screaming, horns, sirens, "crash, tinkle, tinkle, tinkle", etc.

Humans also use their sense of motion from the car sliding, rocking, tilting, jerking, etc.

Then humans integrate all such input, combine that with what they know about how cars work, traffic, people, the road, diagnose what has happened, apply years of prudent judgment, and then decide what to do.

E.g., maybe they have an ice chest with 20 pounds of ice which has melted and now, due to the motion of the car, has tilted, spilled, and is about to get the dog soaked with ice water. Good luck with the self-driving with a good response to such a scenario.


Humans can drive using just vision, but the autopilot doesn't have the processing capabilities of a human, not by a long margin. So the smartest thing to do would be to compensate for the weaker processing capabilities with better senses.


I'm not privy to the inside story on Tesla's tech decisions versus the PR I occasionally run across. Does this mean they are committed to only using traditional vision, comparable to humans, or is that just their focus at the current time? I can appreciate that there's only so much tech that can be reliably developed for mass production at a given time, but Elon also tends to make some odd choices/statements out of principle.


Obviously they know that. They are just betting that they can do it with vision only, to save on expenses, and also work with the way their cars work today.


And they're betting with other people's money but keep the profits.


I agree. Vision-only approaches are cheaper but far more difficult in the long run.

But that wouldn't be a problem if Tesla didn't advertise their cars as containing all the necessary hardware for self driving. If Tesla admitted that cameras were insufficient for self-driving, they would open themselves up to legal liability.


If you view Tesla for what it is - a car company - then the insistence doesn't seem dumb.

They got all the hype and PR for the AI angle without the money pit that comes with a real self driving project.

Even if they eventually get cannibalized by actual self driving cars then they still made billions and billions in the process.


I'm personally surprised that this strategy plateaued as soon as it has. It had an extremely sharp early success curve.

Opinion: a lot of people in this space thought “no one wants to get in a car with a radar tower on top, can’t we make it just look like a car?”

And decisions proceeded from there.


My question to Musk would be,"Is there one, cheap, device that could be deployed widely on roads that could improve self driving cars enough to push the technology into viability by a substantial margin?".


> Elon and Andrej Karpathy argued that since humans can drive using just vision

Which is insipid because humans do not use just vision when driving. We use hearing, touch, and proprioception extensively when driving.


Not to perform the main driving task though. Deaf people tend to be able to get a driver's license no problem, and proprioception is mostly irrelevant unless you need to know where your feet are to ensure they're on the correct pedals.


> Tesla’s insistence on using vision alone is pretty dumb

If they can finally make it work well one day, that comment will be the one looking dumb in the end.

Just saying that one should be careful about making this kind of assumption...

> if given additional senses, wouldn’t humans use them for safer driving?

It's all a question of costs in the end. It would be safer to fly with 10 pilots in one airplane, but the economics make it work for 2-3 at most.


I believe self-driving can be achieved through standards, e.g. if roads become digital. Street signs and cars could communicate the current state of traffic in real time with other cars. It should be left to the state to detect certain events, e.g. traffic jams, or whether a bicycle or pedestrian is about to cross the road. This way, liability could be fairly split between lawmakers and carmakers.
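As an illustration of the idea, a roadside beacon message might look like this. The schema is entirely hypothetical (the field names and event types are made up for illustration); it's just meant to show the kind of state a "digital road" could broadcast:

```python
import json

# Hypothetical payload a roadside beacon might broadcast to nearby cars.
beacon_msg = {
    "beacon_id": "crossing-17",
    "timestamp_utc": "2023-06-03T12:00:00Z",
    "event": "pedestrian_waiting",   # e.g. detected by a curbside sensor
    "lane_closures": [],
    "advised_speed_kph": 30,
}

payload = json.dumps(beacon_msg)
print(json.loads(payload)["event"])  # pedestrian_waiting
```

The hard part isn't the message format, of course; it's who certifies the sensors, who is liable when a beacon is wrong, and getting every jurisdiction to deploy it.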


That sounds great if we want to rebuild our entire road infrastructure. So using this for self driving will never happen since it would require every country to implement a very costly standard.

Sounds great in practice but isn’t realistic in the slightest


Not sure that a rebuild is necessary; it could be as simple as beacons on the back of emergency vehicles, 'qr'-coded signs, and reflective road markers. Tesla already detects traffic cones...

https://electrek.co/2019/11/07/tesla-autopilot-handle-constr...


>> Sounds great in practice but isn’t realistic in the slightest

And that's also why a "self driving" car isn't realistic.

I think the proper term for what we have now should, at most, be something more akin to "Assisted Driving Features", because right now it's far too blurry.


> would require every country to implement a very costly standard.

That's what roads already are. They are costly, and they are mostly standard throughout the world.


If that's the argumentation they go by, the car should be able to drive on tomatoes.


Agreed: it’s like they’re insisting on playing on hard mode (or is that cheap mode?).


This implies that they can create a piece of software as intelligent as the human brain. Absolutely absurd reasoning. If they are being serious with such a statement, then they are complete fools. More than likely they are just making excuses for being cheap. Still fools, but maybe not complete fools.


It doesn't imply that at all. That's like saying OpenAI is creating a human-level intelligence with ChatGPT. Emulating a single function humans perform really well is not the same as aiming for human-level intelligence.


This has nothing to do with ChatGPT. They claim they only need vision because a human only needs vision. That statement is false in and of itself, because humans have other senses; but beyond that, they can't know how much of the human brain's complexity is required to make vision-only image processing work. If they can't at least replicate that level of intelligence, then they have no business making such a claim.


When driving, don't we use all our intelligence? I know I'm sometimes on automatic when I'm tired, but I don't see that as a good thing.


"Humans can move without any wheels at all!" exclaimed the Tesla engineer as he announced Tesla's new Legs only, Wheels are for Losers policy.


Also, humans are pretty horrible at driving. They constantly commit traffic violations and kill each other. Driving kills about 1 in 103 people in their lifetime.

There is no other day to day activity where this level of risk is considered acceptable.


What are you talking about? Humans are shockingly good drivers. It is an average of ~80,000,000 miles, or ~5,000 years of regular driving, between fatalities, and that includes the motorcyclists, drunks, and people who do not wear their seatbelts, who account for ~70% of all deaths if I recall correctly. If you are an average driver who does not drive drunk and who wears your seatbelt, and you had started driving when agriculture was invented, you would not be expected to have gotten into a fatal accident yet.

Anybody who says humans are bad drivers is almost certainly underestimating the difficulty of replacing humans by a factor of 1000x.
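The back-of-the-envelope arithmetic here can be checked directly, assuming the commonly cited US rate of roughly 1.37 deaths per 100 million vehicle miles and ~14,000 miles of "regular driving" per year (both assumed round figures):

```python
deaths_per_100m_miles = 1.37   # assumed US fatality rate per 100M vehicle miles
miles_per_year = 14_000        # assumed annual mileage for a regular driver

miles_per_fatality = 100e6 / deaths_per_100m_miles
years_per_fatality = miles_per_fatality / miles_per_year

# On the order of 73 million miles and ~5,200 years per fatality.
print(f"{miles_per_fatality:,.0f} miles, {years_per_fatality:,.0f} years")
```

That lands in the same ballpark as the ~80M miles / ~5,000 years quoted above, so the order of magnitude holds.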


We can compare against another mode of human-controlled transportation. There are 1.37 deaths per 100 million passenger-miles driving in the US [1]. In comparison, there are ~0.2 deaths per 10 billion passenger-miles flying. Converting into the same units, there are 137 deaths per 10 billion passenger-miles driving. So you are 685X more likely to die while driving/riding in a car than flying. That's almost three orders of magnitude worse! Humans are pretty terrible drivers in comparison to how good we are at flying.

[1]: https://www.iihs.org/topics/fatality-statistics/detail/state... [2]: https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...
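Converting both rates to a common denominator, as the comment does, is one line of arithmetic (using the quoted figures of 1.37 deaths per 100 million passenger-miles driving and 0.2 per 10 billion flying):

```python
car_deaths_per_mile = 1.37 / 100e6  # driving: deaths per passenger-mile
air_deaths_per_mile = 0.2 / 10e9    # commercial flying: deaths per passenger-mile

# Ratio of the two per-mile risks:
print(round(car_deaths_per_mile / air_deaths_per_mile))  # 685
```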


Pilots have mandatory sleep cycles, drug tests, significantly greater initial training, backup pilots, and dedicated airspace. If you could wipe out all of the tired, drunk/high, and teenagers from the road, I bet the driving stats would look significantly better.


We don't ask pilots to do basically anything. They are given dedicated lanes and have constant radar monitoring them anywhere near an airport where they may be expected to somehow come in contact with another plane.

Compared to sharing roads going opposite directions at high speeds inches away from each other, it is no contest that humans driving is the much more impressive number.


You should compare with GA to get closer to apples-to-apples. Comparing a highly regulated industry with everybody from 16-year-olds to 90-year-olds, over an extreme range of experience and health, isn't going to give you a useful result.

And even GA pilots are probably in much better physical shape than the general public that drives cars.


Flying (the type done commercially) is a much easier task than driving, except maybe for takeoff and landing, where the majority of miles are not spent. The pilot can basically sleep most of the way.


This is a great example of why relative risk matters so little when the absolute risk is so low.


I don’t know how we could say humans are “good” or “bad” drivers. What are we comparing against? We’re the only thing that drives cars (other than a few self driving cars).


Compared against any other activity or mode of transportation.


If someone had said “trains are safer than cars” I'd agree with that comparison, but the question is whether humans are good at driving cars. I don't know if we're particularly better at driving cars than trains; trains are just better designs.

I mean, you wouldn’t say humans are good at tic-tac-toe, and bad at chess, right? Chess is just a much harder game.


> It is an average of ~80,000,000 miles, or ~5,000 years of regular driving between fatalities

That's actually not a very big number. 5,000 years of regular driving is about the lifetime driving of 100-150 people, which means one of them will have a fatal accident within their lifetime.
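The comment's framing is easy to check: divide the years between fatalities by the driving years in one lifetime (the ~40-year figure is an assumption):

```python
years_between_fatalities = 5_000
driving_years_per_lifetime = 40   # assumed: roughly ages 20 to 60+

drivers_per_fatality = years_between_fatalities / driving_years_per_lifetime
print(drivers_per_fatality)  # 125.0 -- about one fatal crash per 125 lifetimes
```

Which is consistent with the "1 in 103" lifetime figure cited upthread.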


Why are you counting only fatalities? They are low because modern cars have lots of safety features. In 2021 there were 5,400,000 medically consulted injuries related to motor vehicles.


What does "started driving when agriculture was invented" even mean?


If you Google "when was agriculture invented" it says, "approximately 10,000 years ago".


> Driving kills about 1 in 103 people in their lifetime.

More people have been killed by cars than I have.

https://en.m.wikipedia.org/wiki/Comparative_illusion


Let me be more precise. Human driving killed 42,939 people in the US in 2021.


Apparently it’s 45,404. That’s still a little bit less than people killed by a gun the same year (48,830), and killed by opioids (nearly twice as many: 80,411). Let’s not forget an estimated 300k/year deaths related to obesity. Source: https://www.cdc.gov/nchs/fastats/injury.htm.

Whataboutism, but it puts those numbers in perspective.


The obesity deaths are comorbidity, which is… not to say that obesity killed them, but that it may have been a factor. E.g. obese 55 year olds that have a heart attack are in there, even though it is entirely possible for non-obese individuals to have heart attacks at young ages.

Guns… most of those are suicides, and that stat is uniquely American in terms of developed countries.

Opioids… also almost entirely self-inflicted (not willingly, but getting flattened by a bus is different than getting accidentally hooked)

Car stats also don’t count the incredible number of people who have lifelong or major injuries due to cars, which I would imagine is much higher. I feel pretty comfortable saying that a majority of people I know have been injured by cars, something that isn’t true for opioids or guns.


>Guns… most of those are suicides, and that stat is uniquely American in terms of developed countries.

That seemed unbelievable to me, so I had to do some checking. The CDC[0] put suicides at 48k in 2021 and attributed 55% to guns, leaving us with 26.4k gun suicides. While still a large number, that means a significant portion of GP's 48k gun deaths were not self-inflicted.

[0] https://www.cdc.gov/suicide/suicide-data-statistics.html


Human vision is simply converting rays of visible light sensed by our light sensors (aka eyes) into electric signals which are subsequently converted into an image by our brain which is then processed.

Given that, perceiving the environment with radar, lidar, visible light, infrared, and so on is equivalent to human vision.

As far as I'm aware, Tesla uses more than just visible light sensors. Am I wrong in my understanding?


Humans aren't capable of emitting an electromagnetic pulse, sensing the reflection, and using the time of flight to calculate distance between themselves and an object. So, no, lidar and radar aren't equivalent to human vision even if you extend the idea of human vision over a wider range of the EM spectrum.
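The time-of-flight measurement being described is simple: the sensor emits a pulse, times the echo, and halves the round trip. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target from a pulse's round-trip time of flight.
    Halved because the pulse travels out to the target and back."""
    return C * round_trip_s / 2.0

# An echo arriving 500 nanoseconds after emission:
print(round(tof_distance_m(500e-9), 1))  # 74.9 meters
```

Human vision has no analogue of this; our depth perception comes from stereo disparity and learned cues instead of direct measurement.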


> Given that, perceiving the environment with radar, lidar, visible light, infrared, and so on is equivalent to human vision.

When paired with a general intelligence evolved over a couple million years, and visual sensors that have much more dynamic range than commercially available sensors, yes.

The problem is without those 2 things Teslas crash into fire trucks.

And different bands of EM have different properties so eyes aren’t equivalent to radar. Otherwise we would be able to see around corners.

And if humans had more sensory data we would definitely be integrating it into our driving. Otherwise ADAS tech wouldn’t be so commonplace in 2023.


Even with those things, humans crash into fire trucks too...


Which is why we shouldn’t be replacing human vision with inferior versions done in software.


Park sensors, for example, use radar but they are not equivalent to human vision because humans can't see the back of the car, and they can't measure the distance as accurately.


I thought parking sensors used ultrasonic. Wouldn’t that make them sonar?


I believe there are also EM ones, but in either case, my point still stands.


I meant that they are equivalent in a very fundamental sense.

You have input, taken by sensors, converted into usable form by a processor. There is no fundamental difference between visible light and radio waves, per se.



Thanks, I don't care to follow car news so I wasn't aware of that.

Though, it seems Tesla reincorporated radar from their post '21 models on, so the premise of this sub-thread is at best outdated and at worst a half-truth. Oh well.


The mix of unconstrained input (the real world, not a lab, not a simulation, not a factory) and safety-critical output makes this kind of problem particularly hard to tackle.

I don’t understand why anyone could think that it would be easy or fast.

Guys like Elon or Geohot have been simply delusional.

I think that being naïvely over-optimistic is a good trait for innovation engineering, but we should also manage expectations…

Nothing is done until it’s done, proof is in the pudding.


> I don’t understand why anyone could think that it would be easy or fast.

Because progress went so quickly from "cars are entirely human operated" to "cars can nearly drive themselves", it's assumed that progress will continue at that pace.

We've made this mistake time and time again before self-driving cars, and evidently continue to make that mistake now.


> so it's assumed that progress will continue at that pace. We've made this mistake time and time again before self driving cars and evidently continue to make that mistake now

We are making the same mistake now with ChatGPT. We think that because it progressed so much in the last 3 years, future progress will go at the same speed. But the last 1% is exponentially harder than the previous 99%.


Yeah, that's what I was getting at, though in reality the expectations are even more unrealistic. A lot of the (less technical) people I've spoken to haven't been saying that so much progress has been made in a matter of years, but in a matter of months. Of course, if you weren't following the early development of these models, it appears that ChatGPT came out of nowhere.


> But the last 1% is exponentially harder than the previous 99%.

Also, sometimes (often?) you can quickly make a lot of progress using the WRONG methods, methods that turn out to be completely useless for the remaining 50% or more of the work.

Then that field hits a dead end and is mothballed until a decade or so later, when someone dares to restart it from a fresh perspective.


I agree.

If we have a model that can do (end to end) the job of a lawyer, scientist, or software engineer, then I think we have reached AGI.


AI has had a few step change events recently - AlphaGo, AlphaZero, now ChatGPT and StableDiffusion

we may very well get another step change soon

or we may never get another one!


No. There is no incentive for car AI to improve because everyone knows it won't be allowed for basically forever. Chat AI is here and will progress, so there's massive incentive for it to improve.

Chat AI will improve at at least 10x the rate as car AI.


It's obvious that it's hard now, but at the time it was much harder to coherently argue that THIS was the mode shift of the progress chart.

An S-curve just looks exponential from the bottom. It's not until you hit the flattening part of it you see where the limit is.


As a rate per mile driven, I would imagine autodriving cars have fewer accidents than human-driven cars.

It's still flawed at the moment, but surely it will continue to improve. Not to mention that the average human is already a terrible driver, and replacing them with autodriving surely nets more benefits than the flaws of autodriving, if not at this moment, then very soon in the future.


The problem is that whenever a human driven car reverses over a toddler or wraps around a telephone pole or veers across lanes into oncoming traffic or blows through a red light and T-bones someone, nobody cares because it happens all the time, but when a self-driving car does, murderbot outrage makes frontpage news all over the world.

So even though we rationally should immediately go all in on automation as soon as self-driving cars are on average safer than human ones, realistically this is politically completely unpalatable until they reach a much, much higher, possibly unachievable, standard.

It's not "fair" that this is the case, but unfortunately humans are really bad at accurately assessing the risks of low probability incidents (eg. dying from terrorism), especially if they get a lot of media airtime.


> but when a self-driving car does, murderbot outrage makes frontpage news all over the world. It's not "fair" that this is the case,

It is exactly fair. If a drunk guy in Australia is driving one car and hits a child, that has no relevance to my life in the EU. He will be arrested and dealt with.

But Tesla Autopilot drives millions of cars. If it hits a child in Australia, it does not get arrested. It's also gonna drive over my children in the EU.


> Its also gonna drive over my children in EU.

a false equivalency.

The same type of bozo that reverses into a child also exists in europe.


Humans have separate brains. The Autopilot runs the same code.

If the Autopilot has an error in its code that makes it drive off cliffs, that affects many people.


This is the advantage of self driving. If there is an error it gets fixed for all cars in the fleet. Humans have bugs - we get bored, we get distracted, we get angry, we thrill seek, etc - and we’ll never fix our bugs. Nor should we. Any technology that requires humans to be perfect all the time to be safe is an inhuman technology. I think people will look back at fully human drive cars, and the death rate we tolerate from it, with disbelief.


The liability problem is important here - when someone reverses over a toddler, someone gets sued. Or they lose their license. Or $CONSEQUENCE.

What's the consequence in an autonomous vehicle? Does only _that_ vehicle lose its license? Does every vehicle with that particular revision of the ML model? The whole fleet?

The next problem is, you're assuming that the autonomous vehicle is going to be _better_ than the human; it may have more information, may be faster in its response time, but maybe in a chaotic environment, that doesn't translate into drastically improved safety. Unless of course, _everything_ on the road is self-driving (meaning, zero pedestrians, zero other vehicles, no animals,...) which is going to run into chicken-egg...


> you're assuming that the autonomous vehicle is going to be _better_ than the human

I think this is already proven true empirically with the existing mileage and rates of accidents. https://thedriven.io/2023/04/27/accident-rate-for-tesla-80-l... (or from here https://www.carscoops.com/2023/03/tesla-says-its-autonomous-..., which quotes various sources of data, unless Tesla is lying about the FSD data they reported).

Humans just don't believe in autodriving, and there's still the problem of liability assignment - both aren't a technology problem imho, but a human impediment to adoption.


The second of your links says "According to the folks over at Jalopnik though, the bar for “safer than a human” is probably a lot higher than one might guess. In fact, after crunching numbers from the NHTSA, it seems as though drivers avoid crashes 99.999819 percent of the time. If Tesla’s data is to be believed, it’s already surpassed that."

Are you even making a statistically significant comparison at this point?

And there are these also: https://www.theverge.com/2023/5/25/23737972/tesla-whistleblo...

Quite apart from the fact that this isn't actual self-driving that's being measured (as sibling points out).
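To make the significance question concrete, here's a rough sketch treating crash counts as Poisson. All counts and mileages below are made up for illustration; they are not Tesla's or the NHTSA's actual figures:

```python
# Back-of-the-envelope check of whether two crash rates are statistically
# distinguishable, treating crash counts as Poisson. All counts and mileages
# below are illustrative only.
import math

def rate_ci(crashes: int, miles: float) -> tuple:
    """Crashes per million miles, with a rough 95% interval (normal approx.)."""
    per_million = crashes / miles * 1e6
    half_width = 1.96 * math.sqrt(crashes) / miles * 1e6
    return per_million - half_width, per_million + half_width

# Fleet A: 3 crashes in 5M miles; fleet B: 9 crashes in 10M miles.
a = rate_ci(3, 5e6)    # roughly (-0.08, 1.28) crashes per million miles
b = rate_ci(9, 10e6)   # roughly (0.31, 1.49)
# The intervals overlap heavily, so counts this small cannot distinguish
# a 0.6/M rate from a 0.9/M rate; a meaningful comparison needs far more miles.
print(a, b)
```

The point being: when crashes are this rare, the raw counts behind a "safer than human" claim have to be enormous before the comparison means anything.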


That’s not data about autonomous vehicles. Tesla’s FSD is a feature where the car drives itself during some parts of a trip. The driver is always supposed to keep paying attention and sometimes take over control.

To show that autonomous cars are better drivers than humans you need data from autonomous cars in the same situation as human drivers. And that data doesn’t exist (yet), because truly autonomous cars don’t drive in all places and situations where humans do.


I do not imagine that. Sometime in the future they will be safer, but as of now they are in the "unknown safety level" category.

Pretty much all those optimistic estimates were done by highly untrustworthy companies and in highly artificial conditions.


Geohot might be delusional, but his company has a value proposition that's viable.


What baffles me is that it is relatively easy to make most transportation needs very predictable under a very controlled setting. And we have been doing it for a while, we have even been operating self driving vehicles on these corridors that transport millions of people every day.

Fully automatic train systems are proof that it can be done, but a lack of imagination among policy makers makes them think the only way forward is with privately owned consumer-market cars.


Geohot never made any delusional or outrageous claims. His company set out to build and ship an advanced driver assist system and they did.



Yeah, what's the issue?


> *I don’t understand why anyone could think that it would be easy or fast*

Like electric cars, or landing rockets, or even getting to the moon.

I don't think anyone honestly thought self driving cars would be easy or fast, but they're getting better every single month, and sooner or later they're going to be better than human drivers.


Are they getting better every month? A company like Waymo may have good data on this. Tesla updates seem to be doing more of a random walk.


It's just crazy because you can get to driving great 99.99999% of the time very very quickly. The unceasing attention & ability to keep track of known unknowns is a huge superpower versus humans.

But all the other cases, that wild element of reality, are so endlessly, wildly puzzling.


The amount of engineering, testing, rigor, and so on for a rover on Mars is stunning. And they are working in much more constrained environments, without human actors.

Society won’t accept robots that kill people. You can bring out statistics until you are blue in the face, but a major reason car accidents are tolerated is that a person can be convicted if they break the law. A faceless AI that hides behind insurance won’t cut it.

Enthusiasts of self-driving cars can be hyper-logical about statistics, safety analyses, etc. while simultaneously ignoring all the other areas of political life that are based on emotion rather than rational logic. If we can’t convince huge fractions of society about basic truths - pick whatever truths you are passionate about that half the country disbelieves - why do you think society will accept robot cars that cause accidents (even 1)? It’s not going to happen.


"Despite claims to the contrary, self-driving cars currently have a _higher_ rate of accidents than human-driven cars, but the injuries are less severe. On average, there are 9.1 self-driving car accidents per million miles driven, while the same rate is 4.1 crashes per million miles for regular vehicles." https://www.natlawreview.com/article/dangers-driverless-cars....

I didn't know the statistics for self-driving cars were that good (just ~2x worse than humans). Last time I checked, they were an order of magnitude worse. Perhaps the statistics are for ideal conditions.


Exactly this, or something very close to this, has already happened in history. There was a time when it was legal for pedestrians to just walk on the road. Now it is illegal, and nobody blames the "killer bolt" or its operator if that person is hurt. Society adapted around technology, not the other way around.

The other example is of course railroads and trains. People literally made laws and diminished people's freedoms to allow giant, polluting, noisy, hazardous carriages to do their thing, and punished anyone who impeded them harshly.

No reason to think it can't happen a third time. We just adapt roads, laws and customs to tell people, e.g., "if a purple vehicle on a purple road kills you, it is 100% your fault because this law says so". And other nations will have to accept it too, because nobody wants to be outcompeted.


Tesla gets a bad rap because of their notorious missed deadlines. They continue to chip away at the problem though, and have a large number of people paying them to use their beta software.

The fact that they’ve figured out how to make income from unfinished self driving software makes them, in my opinion, likely to succeed eventually. For everyone else self driving is a money pit. Tesla can continue working the problem indefinitely until it’s solved.


In my book, Tesla gets a bad rap for providing an unvalidated, should-be safety-critical system to run-of-the-mill consumers without an accompanying Safety Management System.

The fact that they profit handsomely off this structurally dangerous wrongdoing is just the cherry on top.

And, without robustly maintaining a systems safety lifecycle (which, by necessity, must incorporate a Safety Management System)... no technical progress is quantifiable by anyone, including Tesla.

Tesla effectively throws a system over-the-wall and throws it all on the human driver and on the public.


People keep using their Necessary Capital Words to say what we have is balderdash. Another post guffaws that there isn't an Operational Design Domain.

I agree that where we are is balderdash: dishonest, unclear about itself in the extreme, and lacking. But I detest this My Paradigm Is Required phrasing. Say what you think, please! Browbeating the topic with particular/specific engineering dogmatisms is unhelpful and unclear: it leans on authority while also not making assertable claims anyone else could contest. This kind of hollow criticism degrades the discussion.


Nothing you said is wrong, I just really, truly don’t care. FSD makes fewer mistakes on routes I drive than it did a year ago, so whatever they’re doing seems to be working. Complaining that people like me shouldn’t be allowed to purchase safety critical software just deepens my resolve to keep using it.


> Nothing you said is wrong, I just really, truly don’t care. ... Complaining that people like me shouldn’t be allowed to purchase safety critical software just deepens my resolve to keep using it.

You do understand that people are concerned about setting these cars free on public roads, right, where they can kill unwilling participants? I don't think the concern here is about your freedom to choke on billionaire boots.


Regulators (NHTSA in the US specifically, but other countries as well) continue to allow it.

Elon is a bit of a monster (personal opinion), but regulators have the final say. When they force FSD to be pulled, then there is weight behind the argument, but this hasn’t happened and that sends signal.

You already share the road with inattentive drivers and drunks, so the risk acceptance/appetite benchmark has been set. FSD is arguably better than both cohorts, considering number of deaths caused.

As always, nuance. More people will die in the next ~10 minutes from traffic deaths than have ever been attributed to Tesla’s autopilot or FSD (33 total, as of this comment).

https://www.tesladeaths.com/

(Again, not a billionaire simp, just a rationalist; booo on Elon, but props to Tesla engineers in the aggregate; personally, I hope he gets blown out the door and JB Straubel takes over as CEO)


> You already share the road with inattentive drivers and drunks...

The second is a crime and I believe the first is a misdemeanor. Getting caught in either scenario repeatedly will cause you to lose your license.

So we may share the road with dangerous drivers, but we don't accept it. So it isn't grounds to accept more danger.

Really this line of argument is always wrong. The presence of danger should make you less comfortable accepting additional danger, not more. It's not like they cancel out, they sum. (One might say that mature and accessible self driving technology would take these drivers off the road, but the situation today is immature technology and high end vehicles.)

As a self-described rationalist, I think you should take another look at that. To me, it reads like you're saying that because it doesn't feel like we're taking on additional marginal risk compared to the risks we've already accepted, we don't need to worry about how we're actually doing so. That caught me a bit off guard coming from a rationalist.


Also, is it essentially the same "driver" in all those Teslas? If one driver was responsible for 33 deaths over a couple of years, they'd have lost their license long ago!


> FSD is arguably better than both cohorts, considering number of deaths caused.

I'm skeptical of any digital technology use case where the analogies/comparisons tend to be:

a) to non-digital things, when there are perfectly decent digital comparisons to be made (e.g., autopilot in the airline industry)

b) to conspicuous non-digital things, here drunk or inattentive humans

c) more or less a $small-human-scale-X improvement on the performance/efficiency/safety of the non-digital thing, often implying a future $unspecified-X improvement that, say, Moore's law would suggest (not saying you're doing the latter here, btw)

Those last two especially. I'm just imagining someone hawking a newfangled realtime audio system, claiming that it outperforms hand-punching a player-piano score. It's a silly example for sure; but on a regular basis on HN I read how Bitcoin is no worse than fiat in terms of global energy use, how crypto scam rates are roughly equivalent to wildcat banks in the old west (and hey, we eventually improved on those, so...), and how many more humans cause car deaths than these intractable systems which are "arguably" better than drunks.


> Regulators continue to allow it, and their opinion > internet randos.

This is true.

> When they force FSD to be pulled, then there is weight behind the argument.

Well, I suppose that it is pretty hard to dispute this, but it should be recognized that the NHTSA (the theoretical regulator of vehicle and highway safety in the US) is extremely weak and virtually non-existent.

The NHTSA lacks anything close to the skill sets necessary to independently, proactively and robustly scrutinize even rudimentary mechanical issues (which has been confirmed by several USDOT OIG reports over the years).

With opaque, complex automated systems and software... the NHTSA stands no chance.

The NHTSA lacks the internal skill sets to understand any of the comments that I have made elsewhere on this post.

Again, you are not wrong per se, but again, it should be recognized that the NHTSA is concerned primarily with establishing plausible deniability to protect the agency and with headlines rather than protecting the public with solid regulatory processes and oversight.

(Coincidentally enough, yet another USDOT OIG report was buried in a Friday afternoon release: https://www.autoblog.com/2023/06/02/nhtsa-fails-to-meet-inte.... I kid you not, every four years or so the USDOT OIG releases another critical report on the NHTSA that focused on issues not rectified in the previous report. It is like Groundhog Day.)

> You already share the road with inattentive drivers and drunks, so the risk acceptance benchmark has been set.

This is true.

Because the US public does not demand change and because roadway deaths are high, but distributed across time and space... the NHTSA remains weak and overall US transportation policy remains dreadfully poor.

> FSD is arguably better than both cohorts, considering number of deaths caused.

Unquantifiable.

There is no way to accurately and independently quantify the downstream safety impact of FSD Beta.

Sure, perhaps the NHTSA believes that (because they must given their structural issues), but we should recognize why such assumptions are flawed.

> More people will die in the next few minutes from traffic deaths than have ever been attributed to Tesla’s autopilot or FSD (33 total, as of this comment).

There is the possibility for "indirect" incidents caused by FSD Beta where the FSD Beta-active vehicle is never physically impacted.

We cannot assume that those do not exist.

And we also cannot assume that the media is able to pick up on every Tesla vehicle-related incident - even as well-followed as Tesla, the company, is.

In fact, other than the automaker's word, in many cases, safety investigators like those from the NTSB cannot independently and forensically establish specific root causes.


Yet they allow human beings on public roads killing unwilling participants. There seems to be a two-tier system at play: unaided humans killing thousands upon thousands, to the extent that it's normal, vs. humans using a tool incorrectly and killing … a handful?

My experience with FSD is that it's a terrible autonomous system, and anyone who uses it as such is a fool, and a fool with a car is dangerous no matter what. However, the joint probability of an accident given my driving awareness and skill and the car's combined is lower than with mine alone. I've had it suddenly brake when a car I hadn't noticed was drifting into my lane; had it not, I would have been in an accident. Likewise, it made mistakes and I took control.

I personally don’t care if it's ever able to take me from point A to point B without my attention or assistance. I value its ability to navigate with my assistance, especially on long trips, reducing my overall fatigue and taking me through confusing sections of urban interstate, where I always make a wrong turn, without errors. The fact that it's 360°-aware and I'm not, and indefatigable and I'm not, is valuable.

In the last year it's become remarkably more capable. I don't know if they can continue this rate of improvement, but if they can, it's about as good as I would expect from today's technology in a consumer car. That's a decent bar for me. I think it's also something valuable on the roads, as a driver-assistance tool. The folks who turn it on and get in the backseat would do something just as boneheaded without FSD. Rather, I notice an enormous number of Teslas on the road not being driven by total idiots, and presumably quite a few using FSD without issue. And, as I assert above, I believe the joint probability of the aware driver with FSD having an accident is lower than either alone.
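That joint-probability argument can be sketched under an independence assumption; both probabilities below are made up for illustration:

```python
# If a crash requires BOTH the supervising human and the assist system to
# miss the same hazard, and the misses were independent (a strong, optimistic
# assumption), the combined miss probability is the product of the two.
# Both probabilities here are made-up numbers for illustration only.

p_human_miss = 1e-4   # chance the human alone misses a given hazard
p_system_miss = 1e-3  # chance the assist system alone misses it

p_both_miss = p_human_miss * p_system_miss  # ~1e-7, far below either alone
print(p_both_miss)
```

In practice the two failure modes are correlated (the system's hard cases are often the human's hard cases too, and supervised driving dulls attention), so the real joint risk sits somewhere between the product and the better of the two alone.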

I don’t care what hyperbole a bipolar nut job spouts, but I do appreciate him setting an unreasonable goal and failing halfway there while the rest of the world seems content with stagnating. Tesla created the EV movement in the mainstream, SpaceX created the space revival we are experiencing.

Fwiw, I think the choking on billionaire boots comment is not a particularly high value contribution.


    I do appreciate him setting an unreasonable goal and failing halfway there while the rest of the world seems content with stagnating.
This is underrated. I write this as someone who is half appalled and half amazed by Musk. How much Tesla has already achieved in their self-driving efforts is incredible. Leaving aside Musk's vision, it also means they must have an absolutely stellar engineering team working on this problem. Creating and maintaining that team is a huge feat by itself.


> right, where they can kill unwilling participants?

I feel the same way about teenagers, grandmas, and grandpas, and yet here we are.


Which is why many countries have an 18-year age limit for driving; plus, in my country there are people pushing for mandatory regular driving tests for old people.


If FSD is statistically safer than the 18 year old (or new driver of any age), is it ethical to knowingly cause more death by forcing the new driver to drive instead of allowing them to use FSD?


You seem to forget that grandpas have rights, for example to live their life, and being able to get from A to B is part of that.

FSD does not have rights.


Once grandpa's vision, reaction time, etc. declines to the point he becomes unsafe he should be losing his license anyways. That's the law today, though it is not enforced as strictly as it should be. In my model grandpa maintains the ability to travel by using FSD and everyone is safer, in your model he loses his license and is stuck at home or more people die.


Those concerns are entirely unwarranted, bordering on hysteria. FSD has an adequate real-world safety record.


FSD has no safety record, as Tesla does not release their datasets for analysis by third parties, research institutes, or government regulators. In fact, they have deliberately misclassified FSD as not being an autonomous vehicle system, despite repeatedly indicating that it is intended to be, and currently is, an autonomous vehicle, to avoid mandatory California DMV reporting requirements [1]. The only "safety record" available is published by Tesla themselves, with no access to the underlying data, and they are the entity with the greatest financial conflict of interest. You might as well ask Ford how safe the Pinto was, or VW how clean their diesel was. There is literally no reason to believe those safety reports; in fact, you should probably anti-believe them, as this is the standard pattern for manufacturers of unsafe/inadequate systems.

[1] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


If Tesla had an adequate safety record, they would back their claims by offering liability insurance when self driving is enabled.

The fact that Tesla does not put their money where their mouth is says all you need to know.

Wake me up when someone is willing to take on liability. Until then automated driving does not exist.


FSD isn’t to the point where it can safely operate without a driver monitoring it. You’re arguing against allowing it on the roads with a safety driver on the basis that FSD isn’t ready to be operated without a safety driver. That makes zero sense.

FSD with a safety driver does not pose a threat to public safety. That’s not coming from me, it’s coming from the NHTSA.


What does FSD stand for?

I would have thought that something called Full Self Driving would be capable of fully driving by itself.

Sarcasm aside, that’s how most people use it.

The NHTSA hasn’t regulated it, but that doesn’t mean it’s an endorsement. It just means that they haven’t found anything within their legislative mandate that they can regulate here. US agencies are largely reactive, and I’m not sure I trust them with my life to get this right.


Most people use FSD as a driver assist because the software will permanently ban you from the beta program if you don’t keep your hands on the wheel.

No one who has used the software for more than a minute has any misconceptions about FSD.


> No one who has used the software for more than a minute has any misconceptions about FSD.

But people who have heard the name might. Honestly, it's a ridiculous name. Just "Self-driving" might be arguable. What are they going to call it when it's really fully self driving? "Literally Fully Self Driving Totally Serious"


I have no problem allowing it on the road. I do have a problem calling something that does not drive itself “self driving”. Much less FULL self driving.

Although, I guess my bigger issue is actually with the government that takes no action about false advertising.


"Full" can mean a lot of things. You chose to interpret it as L5, but to me it clearly compares itself to AP and other lesser ADAS systems, which have a much smaller ODD. FSD has a nearly full ODD (it doesn't do parking and reversing yet). I understand you might get the impression that Musk means L5, and clearly that is the long-term goal, but that doesn't mean FSD has to be L5. Anyway, arguing about a name is pointless. There are literally thousands of videos online where someone who's about to purchase a product for $15,000 can see exactly what they will get. The "fear" that grown adults will purchase something for $15,000 purely based on a vague name, without reading any disclaimers or reading/watching a single review, is disingenuous and purely false concern.


"Adequate" is an interesting choice of a word when describing something's safety record.


Adequate to many is “Kills fewer people than the average human driver.”

And yet, the average driver kills 0 people. It’s in the far margins where deaths occur - a fraction of a fraction of a percent.

This is in comparison to the arguably singular entity “Tesla Autopilot” which has already killed several.


Tesla autopilot replaces a lot more than one human driver, so this is a mathematically nonsensical argument.


So does a bus driver. Or a pilot. Or an engineer.

And we hold them responsible for the humans in their care.


We don't. 42K people are killed in the US every year in automobile accidents. Yet humans are allowed to keep driving and killing people despite their massive flaws.


Individual humans have their license revoked, and sometimes even face manslaughter charges, if they cause too much death on the road. FSD is like one mind in control of an entire fleet. Its mistakes are amplified relative to a lone human's.


The problem is, you don't hear the counterargument from the folks for whom it didn't work, because they're dead.


Pretty sure you do - any indication that even regular auto-steer+ACC is engaged during a fatal incident is cause to sound the alarm bells and put out a headline for "Tesla Autopilot killed this person!". The first fatal accident involving FSD Beta is going to be a multi-week long charade of media attention and pressure on regulators to exclaim "despite this being 5x safer, you must gate this until it's literally infallible".



> > The first fatal accident involving FSD Beta

> first?

According to your own citation, there are no deaths where FSD Beta has been alleged to be involved in any way.


So a Tesla driving off a cliff is OK because it wasn't using "FSD Beta" or whatever that idiot is marketing as the latest buzz?

Shouldn't the public already be outraged by the existing deaths? What's the difference... There will not be any change with an FSD death, as there wasn't with the previous deaths. Most people just don't care.


Is it okay, you ask. Yes! Yes, it is okay to be factually accurate. It's not okay to be factually inaccurate in order to reject evidence which contradicts your position.

> So a Tesla driving off a cliff

This is a lie. There is no evidence that any Tesla vehicle has ever driven itself off a cliff.

Perhaps you are thinking about media reports from a few months ago where a human drove their vehicle off a cliff? If so, it is concerning that you misremembered and/or misrepresented this news event in this way. This news event was of a human driving a vehicle off a cliff. That vehicle happened to be a Tesla — noteworthy only because experts were saying that the chassis did an astoundingly good job of protecting its occupants.

No use of driving aids was ever alleged.

> Shouldn't the public be already outraging from the existing deaths?

Yes. A million people are killed by vehicles driven by humans every year. A million people dead. Every year. This is an outrage.

As for deaths where use of Tesla Autopilot is indicated or alleged, I find them no more outrageous than deaths involving any manufacturer's brand name for lane centering/adaptive cruise control. These features do not lower the driver's responsibility to keep the vehicle under control. And they do not diminish the accountability of the driver in the event of a collision.


Isn't that an empty criterion? Doesn't the system disengage before an accident by design?


Tesla says: “we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed” https://www.tesla.com/VehicleSafetyReport

But even without that, there are no allegations of such FSD Beta-related fatal or incapacitating incidents, though one could ask whether it would count if the Beta put the human in harm’s way 10 or 15 seconds earlier. The black box in the car includes both the dash-cam footage and whether auto-steer was activated, so any fatal crash would still leave evidence of the Beta being active.


Every day thousands upon thousands of new teenagers start driving. One day, maybe today, the self driving technology will be safer than those teenagers and it will be a moral crime to put them in control of the vehicle. The same applies for the elderly.


No, Tesla gets bad rap because they constantly lie. And they lie, because it allows them to get billions in basically free capital.


Exactly. Missing a deadline would be understandable, but Tesla has been consistently dishonest about their software.

Unfortunately, neither consumers nor the market have punished them for it. So it's inevitable that they will continue, and that other companies will follow.


Just because a scam is profitable for the scammer doesn't mean the thing being sold works.


It kind of works, that’s the point. I pay for an FSD subscription because I get enjoyment from my car driving me around. Am I being scammed?


Probably, since you do not know the operational characteristics of the safety-critical device you are operating. It is an incomplete safety-critical device with known safety defects, and its maker declines to define any operational domain where it can operate safely. It does not define any way it can be used safely by a general consumer, or the limits of its safe operation, unlike, say, a chainsaw or a gun, which come with a safety manual defining the ways a user can operate them safely. This is acceptable when the device is tested by a trained operator who is in direct contact with the designers and is made aware of its specific operational characteristics, but not when it is operated by an untrained consumer. As a safety-critical product, it is literally, explicitly unfit for any use.

I guess you might not be getting scammed if you know it is unfit for any purpose, but at the very least you are a danger to society for using an explicitly unsafe product that cannot be operated safely.


It is only unsafe if the human operator lets it continue in unsafe situations. In a situation where it is taking a right turn at a red light, the only safety issue here would be the possibility of cosmic rays flipping enough bits in such a way that causes the software to instantly apply full torque into another car, say a 1964 convertible Mustang for maximum chance of fatally injuring the other humans.

By your own argument all adaptive cruise control is unsafe. Maybe you think the manual and warnings given to the consumer qualify as "defining any operational domain where it can operate safely", but they don't give an exhaustive list of "here's where you can safely operate this", they just say not to use it when it's unsafe to do so - it's always "drive safely and take control when necessary" because the scenarios are literally infinite.


It isn’t as though there isn’t an internal and governmental testing process in place to ensure a reasonable level of safety.

I suppose what you’re saying is true, in the same sense that using any modern computer technology which has software in its stack, which hasn’t been formally verified, is fundamentally unsafe.


> in the same sense that using any modern computer technology which has software in its stack, which hasn’t been formally verified, is fundamentally unsafe.

If an IDE on your laptop malfunctions, you end up with a crash or some malformed data.

If self-driving fails, you end up barreling down the road in a 2 tonne deathbox, likely killing you or someone else.

Not equivalent in the slightest.


Yet that is what humans are allowed to do every day and kill people.


Software powers more than just your IDE; even cars that don’t drive themselves depend on computer code. So it is in fact exactly equivalent.


Do you think average website and an aircraft autopilot are written in the same programming language and to the same level of quality?


No, I think code like what is used for Tesla’s Autopilot is written to a much higher quality and is highly tested. That is in fact my entire point.


When people have already been scammed, they will argue till they're blue in the face that it's not a scam, even if it's clear it is to everyone else...

So if I said yes, what would your response be? "No", of course.

Not saying everyone here thinks it's a scam, just that your argument that it's not doesn't count for much given the bias of having already paid...


I pay some money once a month for my car to drive me around. My car then drives me around.

My argument is that since I’m paying month by month I clearly know what I’m buying and have properly evaluated the value proposition. The classic argument that FSD is a scam because consumers don’t understand what they’re buying falls apart when FSD is a no contract subscription service.


Does the car fully self drive, or does it require human supervision by a licensed driver in a particular seat?

Sounds like FSD doesn’t actually FSD.

In other words, you know what you’re buying is not what’s advertised so it isn’t a scam?


You can imagine my shock every time the steering wheel nag goes off. You’d think I’d learn by now.


The argument is that you know what you're buying, full stop. It doesn't matter what's being advertised, unless you intend to sue for false advertising, in which case your best bet is to lie the whole time and say that you based your purchase entirely off of the advertising.


Exactly.

I have learned that there is no point trying to tell a Tesla fanboy what the problem is. They will not listen. In the nytimes article, a guy bought another Tesla right after the previous one was totalled because of an FSD error.

The thing I have been telling myself is that fortunately I am not risking my own life for that.


Is this the article you’re talking about? https://www.nytimes.com/2023/01/17/magazine/tesla-autopilot-...

The “totaled car” was a 2015 Model S, which would have been running MobilEye software, not FSD. Stirring up hysteria is a classic tactic that news sites use to generate revenue. Always look at the title of the page. When you see something like “New York Times Magazine” there’s a good chance what you’re reading is not actually a reputable NYT article.


    When you see something like “New York Times Magazine” there’s a good chance what you’re reading is not actually a reputable NYT article.
This is the first time I have seen such an accusation on HN about the quality of NYTM vs NYT. To me, NYTM is just longer form -- a bit more storytelling.


The Tesla Stanley video perfectly sums up my experience discussing FSD with big fans.

https://youtu.be/4r-kUtLShJo


> I get enjoyment from my car driving me around. Am I being scammed?

I get enjoyment from my heroin too, so scam is the wrong word.

But a crime is committed in both cases.


I don't think you're being scammed, but it's not fair that you're beta testing software that could kill me and I have no say in it and don't get paid anything. I shouldn't have to die because idiots trust Tesla marketing lies about safety.


Not necessarily, but in this case


Tesla chose to kill people by shipping a low quality system.

They did this because they were (and remain) massively behind the market leaders, and the only way to catch up was to start bulk data collection early with a system that wasn't ready.


Strong allegations, what is the death rate per million miles of FSD vs baseline? Is that a publicly known figure?


As FSD is a safety-critical product, the burden of proof is on Tesla to prove in a verifiable way to third parties that it is safe. Absent that, it is assumed to be unsafe. So, no, it is the weakest allegation possible and the default assumption.

As to the baseline rate, the baseline rate for police-reported accidents in the US is 1 per ~500k miles, and for fatalities 1 per ~80M miles [1]. That is approximately 1 per 30 years of driving and 1 per 5,000 years of driving, respectively, using the reported average of ~15,000 miles per year per driver.
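Those per-year conversions check out, for what it's worth. A quick sketch, using the ~15,000 miles/year figure the comment cites:

```python
# Back-of-the-envelope check of the baseline rates quoted above.
# All inputs are the comment's figures, not independently verified here.
MILES_PER_YEAR = 15_000            # reported average per US driver
MILES_PER_REPORTED_ACCIDENT = 500_000
MILES_PER_FATALITY = 80_000_000

years_per_accident = MILES_PER_REPORTED_ACCIDENT / MILES_PER_YEAR
years_per_fatality = MILES_PER_FATALITY / MILES_PER_YEAR

# ~33 driver-years between police-reported accidents,
# ~5,333 driver-years between fatalities.
print(round(years_per_accident), round(years_per_fatality))
```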

As to the availability of FSD data, you can see my other comment:

https://news.ycombinator.com/item?id=36181877

[1] https://cdan.nhtsa.gov/tsftables/National%20Statistics.pdf


I still think that saying "Tesla chose to kill people by shipping a low quality system." is a strong allegation. I don't think it's reasonable to speak with that kind of certainty about an unknown. If your other comment is correct, that's a shame that they haven't been required to make that data public.



By your standards you can never deploy any FSD system anywhere, because such a system cannot be proven safe without deploying it.


You could do extensive closed-course testing with realistic conditions and then pay to have human drivers watching & ready to take over at any time. The problem is that this costs money and takes time, and Tesla was hoping not to do either.


Closed-course testing to compare with humans would require about 800,000,000 miles of testing (10x the average distance traveled between fatalities). And that would be just for one version; every update would require the same, and there are roughly 100 internal version changes per year depending on how you count additional ML annealing.

That's about 80B test miles per year at about a capital cost of $0.50 per mile.

For a company that, say, spends ~10% of revenue on R&D, it would require the company to be about a trillion dollars, which only Apple, etc. have reached.
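Working those numbers through (a sketch; every input is the commenter's own rough assumption, not a verified figure):

```python
# Rough cost model implied by the comment above.
# All inputs are the commenter's assumptions, not verified figures.
MILES_PER_VERSION = 800_000_000   # ~10x average miles between fatalities
VERSIONS_PER_YEAR = 100           # internal version changes per year
COST_PER_MILE = 0.50              # assumed capital cost per test mile, USD
RD_SHARE_OF_REVENUE = 0.10        # assumed R&D spend as a share of revenue

test_miles = MILES_PER_VERSION * VERSIONS_PER_YEAR    # 80 billion miles/year
annual_cost = test_miles * COST_PER_MILE              # $40B/year in testing
revenue_needed = annual_cost / RD_SHARE_OF_REVENUE    # $400B/year of revenue

print(f"{test_miles / 1e9:.0f}B miles/yr, ${annual_cost / 1e9:.0f}B/yr, "
      f"needs ~${revenue_needed / 1e9:.0f}B/yr revenue")
```

So the testing bill alone would demand hundreds of billions in annual revenue, roughly the tier of companies with trillion-dollar valuations, which is the comment's point.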


Every other system has rigorous testing with trained safety drivers, planning interventions and working in short enough shifts to remain attentive.

Tesla does some of this, but then releases fast to the public... So far it has had some concrete consequences when they pushed this testing responsibility onto unaware people.


By that standard, every single existing transportation option is unsafe. We are back to hunting and gathering while living in trees, but let me tell you, even tree climbing can be dangerous.


They're talking about safety certification... And yeah, other transport components are safety certified. Autonomous software certification is a new enough area that the process is still being developed, but the EU has already instituted rules that specifically call out Tesla's actions as bad examples.


That wasn't the standard presented in parent comment:

> As FSD is a safety-critical product, the burden of proof is on Tesla to prove in a verifiable way to third partys that it is safe. Absent that, it is assumed to be unsafe. So, no, it is the weakest allegation possible and the default assumption.

And what you are talking about in this comment is quite different. Please don't move the goal post (or actually, sure, safety certification makes much more sense than verifiable safety).


I'd call that quote a perfectly adequate summary of certification. Can you rephrase what point you're making to the GP more clearly? It's not obvious to me how certification wouldn't be an answer here?


It is in the quote. Certified safe (this car isn't completely safe since no car is, but it meets our metrics so we certify it) is different from third-party verifiable safe (we have a proof of safety!).


This is such an inane semantic quibble. I was saying that Tesla must be able to prove (i.e. in the legal sense of meeting a standard of evidence), to a third party (i.e. an unaffiliated entity that does not have a conflict of interest, probably a regulatory agency or certification body), in a way that is verifiable by that third party (i.e. they do not require information from Tesla, or they only use information from Tesla that can reasonably be believed to be accurate despite the conflict of interest), that it is safe (i.e. it meets a standard of safety generally agreed upon as acceptable for the problem domain). So yes, I was talking about safety certification.


Then just say that, and don't use language that is much more strict in our own field (assuming HN-average readers). As you say, you could have just rewritten your comment as:

"It is up to Tesla to get safety certification (that doesn't exist yet)."

That is obviously nowhere near what you wrote:

> As FSD is a safety-critical product, the burden of proof is on Tesla to prove in a verifiable way to third partys that it is safe. Absent that, it is assumed to be unsafe.

How does that in any way relate to safety certification?


I did not do so because I did not think anybody would misinterpret my statement as you did. I instead expected them to interpret it as the first person who replied to you did, who found it obvious what I was talking about. It is in fact shocking to me that literally anybody would assume I was declaring that Tesla must prove it absolutely, perfectly safe in a mathematically provable manner, as that would mean the reader thought I was such a blithering idiot that I cannot even propose a minimally coherent argument.

I used extra words because I was expressing the core principle of certification, which is to demonstrate to a third party that you achieve some set of agreed-upon requirements. I did this because there is no current specific safety certification for autonomous vehicles, so I must appeal to the general principle underlying all safety certifications, which is that you must demonstrate "safety", whatever that happens to be, and that a product is not assumed "safe" by default.


Not to take away from the rest of it, but there is a very new process for L2-3 certification in Europe. As far as I know, only Mercedes has completed it and released a product (with TUV Rheinland), but TUV Sud is also offering homologation services for European type certification. The L4+ process for Europe is still in working group, with standards expected to be released next year (and 2025 for the UK).

Manufacturers are all pursuing self-certification in the US because regulators have been unwilling to answer petitions either way or commit to firm guidelines.


Again, certification (a matter of judgement) is a much lower standard than verification (a matter of truth). There is a lot of literature on the difference between the two words, and I assumed you chose to use the word verification intentionally. If that wasn’t the case, then it is better to choose certification when you mean that rather than verification in the future.


I think you're getting hung up on semantics. It's perfectly fine to call certified products "safe" in a casual sense, and the process of certification is proving that status to a third party.

Again, I wouldn't find these objectionable statements in casual conversation, even as someone who participates in the mess that is certification.


Note that any baseline should be on the same type of road e.g. FSD is probably already pretty safe on most highways whereas it's probably a coffin with wheels on a British country lane.


This is like saying Ford chose to kill people by shipping a car. Guess what, cars kill people, society has accepted that.


The Model Y was the best selling car in the world in Q1 - they're not massively behind the market leaders.

https://driving.ca/auto-news/industry/the-tesla-model-y-just...


I keep seeing that stat and wonder: 1. What would it look like if governments weren’t literally paying people thousands to buy these cars? 2. What would it look like if you counted all of the models on the same platform for other manufacturers?

If you compare the Model Y to the Corolla, it outsells it. If you compare the Model Y to every car built on a Toyota B chassis (the Corolla chassis), the combined total blows the Model Y out of the water. The B chassis alone underpins 10 different models, 9 of which aren’t Corollas. In other words, is a “JPN Taxi” model that looks like a Corolla and has the same parts as a Corolla not a Corolla?


"Over three million Pintos were produced over its ten-year production run, outproducing the combined totals of its domestic rivals, the Chevrolet Vega and the AMC Gremlin."


Domestic rivals? Model Y outsold every other vehicle globally. And it will end as the best-selling vehicle globally in 2023... only 3 years after its launch.


I give Tesla a lot of credit for shipping features that I can buy.

I use lane following, summoning (which saves me time), and the windshield wipers every day.

Does anyone remember will it wipe? https://youtu.be/0SSYFMtdJ5k

I’m also furious about the hype, and how much of my time it has personally wasted.

I sometimes have to turn off the auto wipe, when there is a lot of glare…


At some point one can’t just claim a missed deadline. They have in fact relied on misleading marketing massively inflating their capabilities in this regard.

Which is sad because they’re still the best value for money you can get in the EV space. The only reason I haven’t bought one is because I hate touchscreens.


I'm impressed by this comparison of Tesla FSD vs Waymo.

It makes me wonder whether Tesla is level 2 or level 4 autonomous. I'm very impressed with Tesla's FSD.

edit: Tesla FSD reached the destination first by taking the highway, while Waymo took the local route and needed far longer to reach the same destination.

https://www.youtube.com/watch?v=Hv9HtWUf27s

Marques Brownlee FSD demo: https://www.youtube.com/watch?v=9nF0K2nJ7N8


That "comparison YouTube video" is absurd and dangerous, because, at minimum...

A Level 4-capable vehicle (a Waymo vehicle) is an incomparably different system than a Level 2-capable vehicle (a vehicle equipped with FSD Beta).

The Waymo vehicle has a design intent such that there is no human driver fallback requirement within their vehicle's Operational Design Domain (ODD).

The Tesla vehicle has a de facto design intent such that the human driver is the fallback at all times - which makes the control relationship between the human driver and the automated system exactly the same as if the Tesla vehicle was equipped with no automation at all.

The risk profiles and failure mode analyses are Night and Day different and, therefore, the validation traits between these two vehicles are Night and Day different.

But, more than that, there are no guarantees that:

- The human driver of the FSD Beta-active vehicle shown in that video did not manipulate any of the vehicle controls out-of-view that clandestinely assisted the vehicle without deactivating the automated system (possible and inherent Human Factors safety issues with that aside); and

- The creators of this comparison video did not select the most visually-performant run out of several attempts.

Naturally, since we are dealing with safety-critical systems here, assumptions of "positive safety" are not compatible with any internal or external analysis.

Lastly, I have yet to see a video involving FSD Beta where indirect and "unseen" systems safety issues were satisfied. Appearances can be deceiving and deadly with safety-critical systems.


>The human driver of the FSD Beta-active vehicle shown in that video did not manipulate any of the vehicle controls out-of-view that clandestinely assisted the vehicle without deactivating the automated system (possible and inherent Human Factors safety issues with that aside); and

That's why I included Marques Brownlee's demo.


Respectfully, no FSD Beta video can add anything of safety value in evaluating these systems - and the only thing that these videos do these days is add a sense of complacency in most or all FSD Beta users.

Videos and personal experiences can only reveal safety issues, never positive progress.

Marques (and every other FSD Beta user) is not read into a would-be systems safety lifecycle for these safety-critical systems that Tesla should be maintaining.

Marques (and every other FSD Beta user) is entirely blind to that.

It is a complete Black Box to them.

Therefore, the assessments made are always subjective and are almost entirely based on emotions and appearances (and other hand-wavy, ill-defined aspects such as "interventions" or "disengagements") rather than a complete accounting of all relevant systems safety components.

Systems safety is about exhaustively asking questions and then exhaustively seeking quantifiable answers to those questions against established failure modes and in the context of the system and every other system that interacts with it (including the human driver in the case of a FSD Beta-equipped vehicle as a Level 2-capable vehicle).

That is the whole point of a company maintaining a robust systems safety lifecycle - to convert subjective opinions of system characteristics into quantifiable understanding.

Tesla is not maintaining that.

Throughout the video, there are several places where Marques states "he thinks" or "he believes" or "that looks good", and such comments are also prevalent in the YouTube comments attached to the video.

These are safety-critical systems where an unhandled failure can readily result in an injury or death.

Responsible systems developers need something far more quantifiable than blind opinions of run-of-the-mill consumers.

That FSD Beta-active vehicles do not appear to "run into things as often" on the roadway is not a complete evaluation of the system.

There are also very real indirect and "unseen" safety components that are inherently part of the public roadways that must also be accounted for.

For what it is worth, I touched on some examples of this recently in a Mastodon thread: https://elk.zone/mastodon.social/@adamjcook/1101629508444173...


> It is a complete Black Box to them.

If you think it's a black box to the Tesla drivers, how is it not a black box to the Waymo customers in the back seat of these cars? Or how are you evaluating Tesla vs Waymo if not by how humans subjectively feel each system is performing?

If you mean to the teams, you cannot assume that Waymo's systems are any less of a black box than Tesla's. And even then, they're not much of a black box at all beyond the actual object detection: both Waymo and Tesla still keep most decision-making in regular logic-based code, not machine learning algorithms; and where they do use ML, such as for "do I need to get over now to make the next turn", the output is fed back into the "business logic" that decides what to do, and thus logged and audited when it's sent back to HQ.


Waymo customers in the back seats of cars are not testers or operators; they are cargo. These are fundamentally different roles with fundamentally different requirements with respect to the safety lifecycle.


So are we just as much in the dark about how much progress Waymo could be making? Given that "Videos and personal experiences can only reveal safety issues, never positive progress" is the argument, and yet Waymo doesn't exactly give us access to their BigQuery to perform our own qualitative analysis.


Yes. As a member of the general populace you have no idea how much progress Waymo is making and are unqualified to "test" their systems. However, you are not being asked to "test" their systems and you are not involved in the operation of the systems, so the point is moot. This is in contrast to Tesla, where you are both of those things, which is the problem.

Also, I just realized that the systems safety engineer you responded to has also posted a reply, so you should look to their statement for a more in-depth analysis, as they are an expert on the subject.


I think it’s unfair to say that you’re not testing Waymo’s systems when they allow people to get in them to take trips. And while it’ll try to pull over on the side of the road if it has a problem, it can also stop in the middle of the road if it doesn’t think it can pull over, which can be a safety problem in even 35/45 mph zones.


> If you think it's a black box to the Tesla drivers, how is it not a black box to the Waymo customers in the back seat of these cars?

The general public (as a vehicle occupant) only interacts with a Waymo vehicle as a passenger with no vehicle control responsibilities.

That is in stark contrast from the integral human-machine relationship that exists in a Tesla vehicle.

> If you mean to the teams, you cannot assume that Waymo's systems are any less of a black box than Tesla's systems.

True.

Waymo's internal processes are a Black Box to me (and anyone external to Waymo) because we are not read into their systems safety lifecycle, whatever it may be.

Hopefully and presumably, Waymo is maintaining a Safety Management System (SMS) with their test operators and other internal teams, as they have claimed in the past.

Of course, since there is little-to-no regulatory oversight of this in the US (at the moment, perhaps)... Waymo's "word" is really the only thing the public has to go by.

That is not acceptable, in my view, in constructing a novel transportation system that ultimately relies on public trust to be economically viable... but that is the regulatory reality right now.

In the case of Tesla, it is definitive that they are not maintaining a SMS, in large part, because Tesla's (untrained) customers utilizing the system cannot be sufficiently read into a lifecycle. There is simply no way to do that without maintaining a highly-controlled, continuous relationship with the test operator.

For example, the "release notes" (sprinkled with some Tweets from Musk) that Tesla issues with some of the FSD Beta updates are simply too puny relative to the complexity of not only the vehicle system, but the larger complexity of the roadway.

> And even then, they're not much of a black box at all, besides the actual object detection, as both Waymo and Tesla still have most decision-making in regular logic-based code, not machine learning algorithms; and when they do, such as with "do I need to get over now to make the next turn", it's still fed back into the "business logic" that decides what to do and thus logged and audited when it's sent back to HQ.

As I stated elsewhere, these are physical safety-critical systems where the totality of the systems safety components cannot be expressed in software alone.

Remote vehicle telemetry is valuable of course, but as a tool to serve the validation process... not the validation process itself.

Vehicle telemetry cannot be a complete accounting of all of the interacting systems safety components involved here.

For that, like all other safety-critical systems, one needs exhaustive, controlled and physical validation.


That’s a lot of words, but at the end of the day the NHTSA allows FSD beta on the roads. Someday in the distant future Tesla will likely use the data they’ve collected to make statistical inferences to regulators about the safety of the system as a whole. Design intent doesn’t matter now, and won’t matter in the future when the system is retroactively validated for level 5.


Yeah, I'm amazed how humongous piles of paper analyses are considered safer than evidence that stuff works in the real world. I get that statistics are hard. It took 30 years to bust p-value hacking and much longer to get Bayesian inference widely accepted. Some of us would love to _understand_ how neural-net-based automation makes its decisions. How do you deal with the observable fact that these robots do, in the real world, make the right decisions more often than humans? But we can't explain why, because we didn't construct the robots' controls, at least not in the traditional way.


Speculation. Lynching was also allowed at one point, and then it wasn't


Did you really just compare Tesla’s driver assist to lynching?


Doesn't matter - something that used to be common, used to kill people, and eventually society decided it shouldn't be allowed. Asbestos might be better comparison.

Point is, you are assuming a certain legal outcome, and there is no reason to believe it won't go the opposite way


The problem is that the complexity is hidden in the edge cases. And those edge cases are non obvious and deadly.

To put it succinctly, the difference in safety between a car manufactured in 1953 vs 2023 is not fully obvious to a driver who has not been in an accident.

Google had a self driving car over 10 years ago, that is at the level of FSD, but their approach is to go straight to L5 for safety.

https://youtu.be/TsaES--OTzM


Unless their approach is completely unworkable, which by all appearances it is. It doesn't matter how much you bet if it's on the wrong horse.

Tesla's own systems from five years ago, when they used a proper sensor suite, worked better than what they ship today.


That and Tesla is insanely profitable and not running out of resources any time soon. The opposite, actually. I don't really care about any arbitrary timelines here, because they don't have any real deadline other than "Elon Time", which is probably a bit of a blessing in disguise. By putting on pressure to deliver quickly, he actually gets results. And some of those results in other areas have led to his companies being very profitable and unlocking multiple multi-billion-dollar markets. As long as he believes self driving can be done and is well funded, he'll continue to pursue this thing. Rapid iterations and quickly adapting to challenges and setbacks is a good strategy here.

I think there are two misconceptions in this space:

1) People mostly only talk about US companies and self driving cars in the US. China actually has a lot of self driving cars as well. Mostly following strategies similar to Waymo and with some borrowing from Tesla. And China is nowhere near as burdened by trigger happy lawyers slowing everything down. It's an ideal test ground for self driving from a legal point of view. And they are well funded. And they are running circles around most of the US in terms of car manufacturing. In general, the rest of the world will need to be covered by self driving eventually. Forget Phoenix; that's easy and rather boring. Can self driving cars manage in Italy, Spain, or conquer the German autobahn? I'm not pessimistic. But it won't be next year.

2) Painting this as a black and white game. All or nothing. Waymo is leading the way here by focusing on where it can work and gradually expanding their abilities. They are not even trying to make it work everywhere; just where they need it to work to make money. It's expanding its area of operation, and as it does, the area it operates in adapts to self-driving cars rather than the other way around. And it undeniably moves people from A to B at this point, seemingly without major incidents. The one thing that makes self-driving cars hard is having human drivers around. Easy solution: get them out of the equation. This technology is basically going to be safety-statistics and cost driven. As it gets better (i.e. safer and cheaper), it gets easier to optimize roads for self driving and the problem as a whole gets easier to deal with. Meanwhile there are big market opportunities in personal transport, freight, containers, etc. where self driving makes a lot of sense. Those are already happening.

My prediction is that Tesla will do well where companies like Waymo do well in as well and that they will meet in the middle sooner rather than later. And being positioned as a safety feature, all Tesla needs to keep on doing is failing to cause a lot of fatalities and gradually keep on improving. The rest is just raw numbers. As soon as there are millions of cars driving semi autonomously most of the time failing to cause a lot of trouble, confidence increases and it gets harder to argue there is a problem. Eventually, people will let go of their steering wheels. Insurance companies and cost will incentivize them. Based on the numbers.


So with the disclaimer that these views don't reflect those of my employer, as someone who works in the industry I think this article is basically spot on. The only point I would add is that the top line cost for all these vehicles is quite high right now, so scaling up the service alone isn't really a solution to the profitability problem. I won't get into specifics, but I think this blog post from Cruise summarizes the point pretty well (https://getcruise.com/news/blog/2023/av-compute-deploying-to...). The term "edge supercomputer" really is the best way to describe AV hardware deployment. And that doesn't even cover the sensor suite which is quite costly as well.

So if I was a betting man, I'd say that you can expect Cruise, Waymo & others to scale a little bit now, just to show investors that they can, but to save the bulk of the scaling (to hit that targeted figure of 1B/yr of revenue) until after they've found a way to get the costs down. That's going to come in the form of more bespoke vehicles that are better vertically integrated with custom hardware and sensing solutions (like the Cruise Origin).


So many armchair opinions here about Tesla FSD. It drove me 175 miles today from a dense and complicated city to a distant destination without me intervening once. Reading the comments here you’d think it’s completely smoke and mirrors.


Yep, it's either science fiction or science fact. There seem to be a lot of opinionated people who insist they can counter reality by yelling harder. But their points of view don't necessarily align with the facts.

My view is very simple. Lots of people insist that this stuff is dangerous and will kill people. Lots of Tesla FSD-capable cars are on the road and are racking up billions of miles. Where are the traffic deaths? Where are the countless crashes? The supposedly dangerous situations escalating all the time? It's all failing to happen the way people insist it ought to. Maybe that's because they are wrong. If anything, it seems Tesla runs circles around the likes of Volvo in terms of car safety by now. They certainly seem to insist so, and they claim to have the numbers to prove it. And I'm not hearing a lot of statistics that counter that.

Meanwhile, there are of course lots of traffic deaths. The vast majority of which involve human drivers making fatal mistakes and getting themselves into trouble. Even adjusted for relative miles driven by humans and AI, the numbers aren't good for humans. They are terrible actually. It's not that hard to do better than that. AI vs. drunk, tired, reckless, moronic, etc. human drivers is basically no contest. And there are a lot of those. Roads are dangerous because of that.


Not to mention that if you are using FSD, your attention is being monitored. I can’t pick up my phone or look away from the road for more than a few seconds without the car warning me. Meanwhile the rest of the driving public almost always has their phone in hand.


I did over 700 miles with it over the Memorial Day weekend. It worked really well, especially on the highway (no issues); surface streets were probably 90%. Each release gets noticeably better. I actually notice now, when someone drives manually, how it's not as smooth as FSD in many situations.


That's fantastic! But hopefully you were paying attention, ready to take control at any point in time. I've personally been inside a "self-driving" Tesla that would have crashed without human intervention.

So while we can give Tesla props for shipping a very useful feature, it's not reliable enough to be considered "fully self driving". There could easily be fatal consequences for consumers who don't babysit it diligently enough.


I don’t disagree with you. FSD isn’t yet what the name claims, though they’ve made shocking progress in just the last 3 months. If your experience with FSD was longer ago, know that it’s almost a completely different system. I have full confidence it will get there, and it isn’t going to take long from where it is today.

Also, the eye tracking update that was rolled out a few months ago is strict - you are either paying attention to the road and never looking at a phone, or FSD is going to shut itself off and suspend your use of it. It easily holds you to a higher standard than the average driver holds themself for being engaged with the road (which is a low bar.)


What's your life expectancy if you let Tesla FSD do all the driving for you, without you supervising it?


There’s eye tracking in the cabin now. It’s simply not possible to use it without supervising it.


Of course, Full Self Driving is not fully self driving because it relies on a human to monitor it.

Without a driver monitoring system, what would have been your life expectancy if you let it drive without you monitoring it?

Full Self Driving can only be fully self driving when it approaches the reliability and safety of human drivers.


If it’s the name we are focusing on, I agree with you - the system isn’t “full self driving”, the name is misleading, and Tesla didn’t do themselves any favors by naming it that.

But if it’s how reliable and useful this feature is in practical applications, that’s a completely different story, and for 99% of people who don’t actually care about semantics, that’s going to be the important part.


I've noticed that on HN, if you give Tesla/Elon any credit, you get downvoted.


Absolutely true. A lot of emotions on this forum.


This tells me the FUD/smearing campaigns by hedge funds that are (or have been) short Tesla have been extremely effective at brainwashing everybody including the relatively intelligent who roam HN.

The average (hedge fund owned) media outlet, especially the financial ones, basically reads "please remember to hate Elon".


Just responding to the headline: Cruise just started rolling out service to the whole city of San Francisco. Before, it was limited to areas and times that were hard to take advantage of, but now it's opening up to the whole city (still late night only), so it's started competing with Uber and Lyft for late-night rides. We'll just have to wait and see what effect it has on the transportation system as a whole.


And it’s so much better than a Lyft or Uber. They are clean. They don’t try to talk to you. And they take the GPS-recommended routes.


I mean.. they're clean right now.

Think about when they're GA. You're going to have all sorts of people, not just the invited few as it is right now (fellow techies and such, who mostly are not going to trash and destroy property).

The public transit is gross most of the time with trash, vandalism, and grimy seats. Do you really think people are going to treat cars well where they're all alone inside (when have cameras stopped some people from being vile?)?

IMO these AI car services should be priced at a premium above Uber/Lyft. I want individual door to door service any time, with a guarantee it will be clean and comfortable. Otherwise, I'm always opting for my own car.


I don't know if you meant it this way but you sound so classist with this garbage. Afraid of the non-techies coming in and being 'vile' and trashing things? You only want to share space with wealthy, upper class people, as if they're the only ones who know how to respect public space?

Are Uber and Lyft not also GA? Are their cars also trashed? Seems strange that you picked on public transit as if the "unwashed masses" can't also order an Uber.

The public transit in SF is definitely not vile or trashed. Sometimes there's a bit of litter on the ground. Not grimy seats. I've never seen it the way you describe it and I ride several times a week.


As a former Uber/Lyft driver, I had to pull over and at least shake out whichever floor mats people's shoes were on after nearly every single ride. Some kind of visible debris usually ends up on the floor immediately when a person enters the vehicle. Often there was worse stuff to clean up. It was a rare exception that someone didn't leave any trace at all.

I've seen some of the cars of drivers who clearly don't clean as often, and they can get pretty gross looking (at least in daylight) pretty quickly.

I think these self driving taxis are going to need really good internal cameras, a vast network of human cleaners large enough that one is always reasonably nearby, higher pricing and a strong fine/fee system to avoid looking nasty for the majority of riders. With maybe a little less cleaning needed at night.


Given the propensity for night time riders to be tipsy, if not outright drunk, I'd think they'd need more cleaning, not less at night. Or is it due to the sheer number of people? I've yet to try driving for Uber tho so I could have the wrong impression.


I think the point is more about what people do when others aren't watching. Uber and Lyft have drivers. Public transit has other passengers, staff, and is generally constructed in a way to be more resistant to malfeasance (and amenable to cleaning) than your average car.

Self-driving cars will not have that, so you need to counter it somehow. Cameras and fines are probably sufficient, but OP's point that charging people more upfront will reduce this is probably valid, as the type of person who is willing to pay more is also going to be easier to collect fines from.


> I don't know if you meant it this way but you sound so classist with this garbage. Afraid of the non-techies coming in and being 'vile' and trashing things? You only want to share space with wealthy, upper class people, as if they're the only ones who know how to respect public space?

I grew up in a 3rd world country, poorer than the poorest person you can find here. Most of my life I was poor, until I started making easy money in tech once I got some experience, so everything I know from my US experience has been either inner city public transit or walking. I do know that the more money I got, the cleaner and more comfortable my environment became. I don't blame you for making a ridiculous statement like that; high compensation tends to do that to people, especially people who have never faced actual adversity or difficulties, so they have to pretend things are OK and rosy out of guilt.

So let's be clear. If you live west of the 101, you're in a good area. Things are definitely fine in Sunset, Richmond, Portola, etc. I spend a lot of time in Mission, Soma, Bayview for work and social activities. It is not clean and nice in these areas. Graffiti, trash, people passed out in the street, etc. Common sights there.

If you're in tech in SF and clearing 300-400k plus, you're not going to deal with this. If you're in these not-so-nice areas you're going to be in a nice apartment with modern security systems paying 3-4k a month or more. You probably have a nice view of the SF skyline or water, and spend time in parks wearing at worst a store brand like Patagonia while snapping pictures of your $4000 dog. You probably use Uber and Lyft, which let's be honest, are mostly used by upper middle class persons and up.

> Are Uber and Lyft not also GA? Are their cars also trashed? Seems strange that you picked on public transit as if the "unwashed masses" can't also order an Uber.

Uber and Lyft have a person driving the car. There are obviously instances of people ruining these cars, but in Cruise and Waymo there isn't a driver. It cost me 40 bucks to go from Soma to Sunset one time. These services are for people with disposable income. There are people who can barely afford public transit. They aren't using Uber or Lyft. Be real dude lol. Have you ever taken public transit outside of Pacific Heights or out of necessity?

Anyway, the sad reality is that people destroy property for no reason, and most of the occurrences I've personally experienced have been in areas where people are not present and/or the entry fee is cheap/free. The desks in my American schooling and university would always be vandalized and ruined. Why etch shit onto the desks? It makes it hard to write. People rip the keys out of the public library computers. I take the bus sometimes. Why are the seats ripped intentionally and shit scribbled into the windows and the hand rails?


This is 100% obvious and inevitable.

I can't imagine what these cars will look like late on a Saturday night once this is normal.

Then imagine what they look like after a few years of this. They will be as clean as the local pay by the hour motel.


> Sam Abuelsamid, the industry analyst, told me that he “did some math” on Cruise’s plan to generate a billion dollars in revenue in 2025. He concluded that the goal was “actually probably achievable.” He estimates that the goal corresponds to a fleet of around 6,000 driverless vehicles

Let's plug in some reasonable guesses and see. 6000 vehicles x 10 trips/day average x $25/trip average x 250 days/year: $375M.

To an analyst that may be practically the same as a billion, but not to an accountant. And that's gross revenue. Take off all the car costs as well as the overhead.

(Remember: these vehicles can't cope with the busiest, most profitable locations for taxis.)
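The back-of-envelope numbers above can be run directly as a sanity check (every input is the commenter's guess, not a real Cruise figure):

```python
# Back-of-envelope gross revenue from the guessed inputs above.
# None of these numbers come from Cruise; they are assumptions for illustration.

def annual_revenue(fleet_size, trips_per_day, fare_usd, operating_days):
    """Gross revenue = fleet size x trips/day x average fare x operating days/year."""
    return fleet_size * trips_per_day * fare_usd * operating_days

# The guesses as stated: 6000 vehicles, 10 trips/day, $25/trip, 250 days/year
base = annual_revenue(6000, 10, 25, 250)
print(f"${base / 1e6:.0f}M")  # $375M

# Much higher utilization (20 trips/day, ~330 operating days) roughly reaches the target
high = annual_revenue(6000, 20, 25, 330)
print(f"${high / 1e6:.0f}M")  # $990M
```

The gap between the two runs shows how sensitive the $1B target is to the utilization assumptions.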


I am puzzled by your assumptions. 10 rides per day? How many rides do you think a taxi/uber driver does in one SHIFT? 250 days - why? I would expect more like 95% uptime and 95% availability, for 329 days.


Why would a self-driving car only drive for 250 days? You have to consider maintenance/repairs, but that won't be 115 days. The 10 trip a day guess is probably also low, but seems okay as an average to me.


With 330 days a year it still brings you to half a billion, which is half of what's considered "achievable".

This means they would need 20 trips per day on average, or higher pricing on average for each trip.


Why wouldn't they be able to get 20 trips a day? Seems like 10 is low balling even worse than the 250 was for a car that can be available 24 hours minus however long maintenance and cleaning takes.


But 660 days brings it to a cool billion.


Because the demand is only there for 250 days.


Based on...your best guess?


You don't have to estimate; use the average income of a taxi driver, as that's what an AI taxi will earn.


Or even more. An AI taxi won't speed or nearly cause accidents due to braking way too late. Greetings from my recent Poland trip :D (But yeah, time will tell whether they will cause unnecessary accidents in other ways)


I can't be the first person to think that trying to do the "every possible situation, individually" thing is stupid.

Why no focus on e.g. a top down federal "system" that would take over the driving e.g. just on the highways/interstates? Seems like that would be orders of magnitude EASIER.


I always felt there would be 'carpool'/express lanes that would just be for automatically driven cars. They would communicate with each other so they could pack together really tightly. You'd enter your destination, drive your car to the point where you enter the lanes, and then the car would take over. It would warn you before your exit, and then you'd take over.


I would like to see such a system installed at a high volume intersection. When the light turns green the system instructs all the cars lined up to accelerate simultaneously. The motivation isn’t necessarily practicality, just pure entertainment value.


That’s called a streetcar normally.


Honestly I wish people would just do that. It doesn't have to be in perfect lockstep but it significantly increases the amount of backed-up traffic you can get through the light in one cycle if everyone does that.


One reason super high traffic cities will probably be the first ones to mandate self driving cars will be to optimize limited road infrastructure. Surely Beijing and Shanghai would go for it, they already have what Americans would consider drastic rules that limit when you can drive your own car (like even odd days based on your license plate number).


Now that I think about it, cars’ existing adaptive cruise technology could probably be extended to accomplish something fairly close.


Another benefit could be to have extremely fast lanes instead of more packed-together lanes. With good enough coordination, or a block zone system like trains[/roller coaster trains], you could achieve >120mph cruising. With both of these possibilities though, the requirements are generally that the car completely takes over driving with no way for the human to assume manual control to break the system or otherwise create an unsafe situation for other road users.


We are getting closer and closer to inventing Trains™


That's really not fair here. These would be trains you could bolt onto highways, and "transform" them back into cars as needed.


In a way, even easier? Why not just get on the interstate, bolt cars together. Then you're also saving on gas.


Why do it on public roads? There are oil fields in North Dakota where a lot of driving needs to be done and corporations could completely control that environment.

Why not automate cargo ships? There's far less to hit in the ocean and far fewer inputs required.

Why still have human pilots in airplanes when they're largely automated?


> Why not automate cargo ships? There's far less to hit in the ocean and far fewer inputs required.

Maneuvering a ship is but a very small part of all that's needed to maintain a ship operational – engine maintenance, paperwork, dealing with port agents and customs, operating the radio, etc.. Much of that can be automated, I'm sure, but the return on investment is quite poor. Crew salaries are a small part of all operations costs, so you're not saving much by removing them from the equation. (Add to that the fact that large cargo ships can exceed $1B worth of cargo, so I'm not surprised that the economics don't make much sense.)


It's all about the cost of a driver relative to the activity and risk; with taxis it's a much higher proportion than it is for aeroplanes.


If we wanted to make roads as safe as possible, why not equip every vehicle with a breathalyzer that blocks the vehicle's ignition if the driver isn't sober?

Or use GPS to limit the speed of vehicles to the speed limit that's been set? For that matter why allow the sale of vehicles capable of exceeding 85 miles per hour (the highest speed limit anywhere in the country)?


Did you intend to reply to me? The reason we are trialing these systems on public roads is because that is where they must operate in order to make sense. It's not about safety alone.

To respond to your tangent: Breathalyzers are not all that reliable, nor is GPS localisation... And a national speed limit would mostly reduce revenue (not to mention the new problems regarding who is liable for maintaining the limit correctly). There is a great history of implementing practical safety requirements, maybe not the first globally, but when it becomes practical to adopt something we generally do.


The US is functionally ungovernable due to its political structure, so the federal government can’t really just “do things” that are that ambitious, even if they make sense. If something like that happened at all it would probably be in China.


You should read more? We've already done railroad, long distance telephone and arguably the internet.


None of which are from this century, and our railroad system is… not an example of good governance in its creation or in its current management.


That’s the idea GM had way back in 1958: https://youtu.be/cPOmuvFostY. Basically ATC for a high speed self driving lane.


>It’s appropriate to criticize Waymo and Cruise for incidents like this, but we should also maintain a sense of perspective. While these incidents were undoubtedly frustrating for everyone involved, they don’t seem to have posed a serious danger. Incidents like this help the companies learn and improve their technology.

Well officer, I'm sorry I ran through yellow tape and killed a paramedic responding to the crash scene, but have some perspective. I've now learned not to do that, incidents like this help me improve my driving ability.


No one was killed.


This is hyperbole, yes, I'm not parodying the actual incident but the attitude.

Anyhow, crossing yellow tape isn't okay for that reason in particular; it's to keep first responders safe. The incidents listed should be seen simply as failures the developers need to rectify, not as a "learning experience" the public needs perspective on. The learning experiences should have happened in their controlled testing, not on the road.

I will say that generally in life, when you fail, even spectacularly; sure, you can tell yourself it was a learning experience to keep your mind sane. You do have to be careful saying such things to others judging you as if to ward them off because it comes across as cheapening the fault, or even faulting them for judging you (the lacking "perspective" bit). The best you can do is apologize, admit fault, and then assure them you'll do better in the future.


From the article

On March 18, 2018, a prototype Uber self-driving car slammed into Elaine Herzberg as she walked her bike across a road in Tempe, Arizona. First and foremost, Herzberg’s death was a tragedy for her family.


Is that Waymo or Cruise?


That was Uber.


> A human driver would disengage FSD long before it got into situations like that, which means Tesla would be unlikely to have the training data necessary to train its cars to handle it properly.

Wouldn't Tesla just have Autopilot simulate what it would do, then compare that with what a human actually did, and learn from that?


That is literally what they do with a massive battery of regression tests that run new versions through simulations.


As someone who lives in a largely Victorian town with narrow, double parked streets, I look forward to the carnage of self driving cars. Seeing how they've been coping on big, roomy grid systems has left me amused that anyone has any hope at all for these things.

In the end, we'll regret the investment in these things. A few reliable, human-assisted tramways in and between towns is all people actually need. It isn't high tech, but it is high value: the ROI for society is huge. The money spaffed on self driving cars is just a waste.


I only see it as a problem if cities (and citizens...) think and act as if cars would be perfect for everything once they are self-driving.

As an example, I can't imagine Munich without public transport. It's great. But 3-6 times per year I do need a car. If my friends or I didn't need to get a license, that would have been nice as well (it costs 2000-3000€ here), it also allows everyone to get driven.


> I only see it as a problem if cities (and citizens...) think and act as if cars would be perfect for everything once they are self-driving.

No doubt that's how they'll be marketed though. I just hope segregated cycle lanes become ubiquitous first so driverless cars don't kill me and my kids.


Self-driving cars are already here - they are trams, light rail, metro, HSR and more.


Still waiting on the HSR to come out to my rural property, which is a 1.5 hour drive from the nearest city. And then, when I get there, for that city to be full of metro, trams, and light rail (it's not).


That is a problem


The amusing thing is that there are both articles saying self-driving cars are far away, and articles complaining there are too many self-driving cars in their neighborhood.


I don’t find it amusing or ironic. That both of those things are true at the same time is telling. There are too many self-driving cars in certain places like SF. This is noticeable, and is a problem, because self-driving cars are incompetent drivers. This, it certainly seems, is a major problem that will be with us for some time. What we were promised is far away.


NYC has traffic jams composed entirely of yellow taxis. The self-driving cars at least report to a control center that can spread them out.


I think it's still a reality, but just needs to be implemented differently. Something similar to how railway works, that is railway has infrastructure for trains and nothing else, but at the same time there are crossings for interactions with roads and train stations for interactions with people.

When we have a place for self driving cars that you can drive onto and have your car switch to automatic, it frees you from a lot of the problems you'd otherwise need to solve. We can control the traffic more like a network by having nodes at various intervals that conduct the traffic allowed on the route. This removes the need for the car to take on the responsibility of being aware of its surroundings.

The same could be implemented more cheaply for passenger (aerial) drones.


All my tech podcasts (and now print media apparently) have been pushing a "self-driving cars are almost here" narrative for the last couple weeks. But to be blunt here, isn't this total bullshit? Let's say that's 5 years away; well that's at best in the realm of "who knows". Has anyone here made an accurate 5 year roadmap? Yeah....

I don't know what exactly is going on here, maybe Waymo and Cruise media relations teams are really earning their bonuses this year, but it's pretty gross. Kevin Roose had the guy from Cruise on and they were talking about the effect of self driving cars on gig drivers, and he said he felt like Cruise needed to take an active part in helping those drivers retrain or find other work. All I wanted was a single follow up: "what are you currently working on in that space?" Because the answer would be "lol we're not actually working on it, and if anyone does it it will be the government" (which is also the AI industry's response).

But also these cars are driving in the best of circumstances: nice weather, slow or predictable traffic, reasonable road layouts, etc. Wake me up when a car can drive me through snow on a highway narrowed to a single lane on New Year's Eve after I've been drinking heavily.

My guess here is that industry is hoping we'll start adapting to self-driving cars, not the other way around. I'm not saying that's bad, but it's an entirely different story than "The tech is close to working!" It would be more like "The tech never worked right, we basically made trams, and that was fine, but woof very expensive"


    Has anyone here made an
    accurate 5 year roadmap?
In 1961, Kennedy proclaimed that the USA would put a man on the moon within the decade. And in 1969 they did.


In fairness, that's many more years using the resources of probably the most lopsided government in human history racing to embarrass a rival. Do we think Google could put a man on the moon using 60s tech? They can't even make a messaging app (burrrrrrrn :)


Essentially, the poor loser that had lost the race to space started wasting money just so they could later brainwash their citizens into believing they won, rather than admit they had lost horribly in the first place.


I think there's a question of what bar you're trying to reach. The mid-teens driverless car hype seemed to promise ubiquitous driverless cabs everywhere.

I think what people will expect is that you can call them and they'll come any time a human-driven Uber would come. By that I mean on a rainy night, or in the winter in a northern city. I really would like to see them operating year-round in someplace like Boston or, better, Pittsburgh (SF-like hills but with snow).


I'd like to see them operating year-round in Delhi and Lagos.


You dream for years, and one fine day it becomes a reality. For AI, that moment has happened. It will soon for self-driving cars too.


Self-driving just suffers from good old software project time underestimation and overpromising executives.

It's quite possible to outdo a human at keeping rolling boxes from hitting each other and the unpredictable human particles buzzing around them. (Mainly because humans suck at paying attention and don't have millisecond response times.)

It's also quite possible to put people on the moon, but budgetary appetites are always the limiting factor.

So self-driving can still die if the difficulty exceeds the money and patience of the people footing the billions. Shareholders are free to say "why are we still paying for this when we were promised success 5 years ago?" and demand the plug be pulled. I hope they don't, but I also hoped NASA would get a higher budget.


Humans have 0.2s response time. It’s astoundingly good. I’m in robotics and our pipeline time is nowhere remotely near 0.2s.


I used to develop for CNC systems, and those responded to sensor trips within microseconds. The computers involved were realtime systems. If everything wasn't that fast, the machines would have demolished themselves. A human pulling a lever would have been 500,000x too slow.


Robots beat humans at things like determining if a circuit has been completed (sensor trips), and following a predefined set of directions.

Humans are far better at complex things like reacting appropriately to an unanticipated input (someone yelling stop, or honking a horn)


You're comparing "read a binary flip" to "process millions of pixels into a high level understanding of the world and make a complicated decision"


Well, not for anything complex. We can't even compensate for unexpected steering in 0.2s, while robots can do that in microseconds.


If you're measuring a human in front a screen with their finger on a button, and the human is young, healthy, and fully alert, then you get 0.2s. Some electronic safety systems[0] can indeed physically respond in µs. Some robotics might have lag due to mechanical slack or software/processing limitations, but that can always be improved.

Whereas humans are never going to do better than 0.2s and will usually do much worse, especially if they have to turn their head, focus their eyes, and manipulate physical controls.

[0]for example: https://youtu.be/ynEdke5dzIU?t=10


(Sub-elite) sprinters average a response time of 119 ms.

https://www.basvanhooren.com/is-it-possible-to-react-faster-...


You total up all the R&D budgets burned trying to crack this problem and I have to wonder just how profitable would it really be to put (at first) taxi and (now) rideshare drivers out of a job? Did any of this actually make sense?


Maybe, in the long run...? Take a look at working-age population projections in NA and Europe [1].

I have for a long time had a gut feeling that practical broadscale self-driving is a $ trillion- or 2 trillion-dollar R&D project. It's still under a trillion so far, I think.

1. https://population.un.org/wpp/Graphs/Probabilistic/POP/20-69...


My gut tells me even that figure is low but taking that at face value either industry players think they can recover the GDP of Saudi Arabia on a timescale that doesn't lead to a tidal wave of investor lawsuits OR a whole bunch of people are full of shit.


It’s got a lot of long term benefits. For one, a self driving car can be designed radically differently — like easily accommodate more passengers.


You mean like mass transit?


Yep! Point to point, decentralized mass transit using the same infrastructure as everyone else.


Burning the GDP of the 2nd largest oil producing nation on the planet to recreate the bus from first principles while replacing the driver with an engineering and liability nightmare doesn't sound like good decision-making. Why do investors hate the idea of low wage workers this much?


What's the actual expected positive outcome when we get automatic driving cars?

Marginal efficiency gains? Possibly (though impossible to prove) fewer deaths caused by cars? What else?

Is it worth all this hype, money and brain/manpower? Should we chase other things?


It’ll enable people who cannot drive, such as children and the elderly, to get around by car.

Accordingly this is going to significantly increase the amount of cars on the road which will make traffic miserable.


We should make cars with extra seats that follow common routes. We could even give them their own lanes and infrastructure.

We could make them extra long, and add hyper efficient metal wheels on metal roadways.


If children cannot be out of sight of parents without someone calling CPS then these cars offer no benefit to them.

For a subset of the elderly or differently abled it will offer a less social alternative to taxis.


The average American spends about an hour a day driving. You are awake for about 16 hours a day. So truly self-driving cars, where you can fall asleep or pull out your laptop and work on something else, give you 1/16th of your life back. If you live to the age of 80, it's like an extra five years.

That's worth a lot. Some kind of hybrid state where you need to always have your eyes on the road but maybe can take your hands off the wheel? Yeah I agree, I don't really see the point in that case. Or at least it's not something I get excited over.
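The arithmetic behind that "extra five years" estimate is simple enough to sketch (the 1 hour/day and 16 waking hours are the rough averages above, not precise data):

```python
# Rough "time reclaimed" arithmetic using the rough averages stated above.
driving_hours_per_day = 1.0   # average daily driving time (assumed)
waking_hours_per_day = 16.0   # waking hours per day (assumed)
lifespan_years = 80

fraction_of_waking_life = driving_hours_per_day / waking_hours_per_day  # 1/16
years_reclaimed = fraction_of_waking_life * lifespan_years              # ~5 years

print(f"{fraction_of_waking_life:.4f} of waking life, "
      f"~{years_reclaimed:.0f} years over {lifespan_years} years")
```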


To be fair, what will happen is more likely to be 5 extra years on Facebook, Twitter and TikTok.


Why would fewer deaths be impossible to quantify? Insurers already lower your premiums if your car has modern safety features, like automatic emergency braking. I'd posit that if we replaced human drivers with Tesla's FSD (never mind Waymo tech) today, we'd save lives on the net. We just accept that getting T-boned by a texting driver is "an accident" and those "just happen", whereas getting hit by a self-driving car is "omgwtfbbq those self-driving cars can't be trusted", so standards for self driving cars are unnecessarily higher. (Just imagine the median driver; by definition 50% of drivers are worse.)


I disagree, and fully believe that no manufacturer would want their cars to be free to run on all roads today, FSD included.

I'd love to bring it somewhere where it'll fail and sue them for damages; I'm sure I'm not alone.

Edit: exhibit A, Tesla won their case this year because FSD was enabled on city streets, where it is not capable, and they tell you so themselves.


How is that possible if FSD has so many safety controls? Are their controls so easily bypassed that they're effectively non-existent?


There’s a large portion of the population in the US, mostly lower income, that commutes two hours a day. Solving self driving means giving those people that time back. That’s time that can be used to relax, take a nap, communicate with friends, etc. Self driving will be a boon for mental health.


Alternatively, we can reshape cities with better public transit and liveable, walkable areas, and allow people to not need a car. Europe and Asia look at America and shake their heads. Which is money better spent?


Sure, with 80 years and a trillion dollars that could work. Why bother working on self driving software when you can restructure all of society.


Considering how modern US society was built for cars, perhaps it's only fair we dial that back a bit. The end result is likely to be more humane and ecologically sustainable.


Absolutely, we should be building public transportation for the next generation. But pretending that it’s a solution for people today and using it as an excuse not to build self driving cars is short sighted and stupid.


Ok then do it. I don't think you can. People are too into cars here. It's fun to dream about the perfect city layout but we need realistic ideas.


Road deaths are the biggest killer of people aged 9 to 29, taking about 1.35M lives annually. That's roughly on par with USA Coronavirus deaths for the entire pandemic.

If autonomous driving can considerably reduce that road death number, I'd say it's a worthwhile endeavour.


Where are you getting that death number? I'm seeing numbers around 43k, 30 times lower. Is your figure including health damage from air pollution or something? Or is it an aggregate over many years?

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


Dude it’s obviously the global number.


I think you're right, though I don't think it's so obvious when it's being specifically compared with US COVID deaths.


Going to work in a driverless taxi, being able to work while driving. I could do that now with a regular taxi, but the driver sitting in front is a bit expensive. Only needing the car should make it a lot more affordable.


The bus is only $3. Plus it takes longer so you get more work time!


I can't really work properly while standing at the bus stop waiting for the bus, or when I get on and the seating is too cramped for a 14" laptop.


I imagine a world in which I can take a road trip while asleep in my car (or reading or working). That would be pretty wonderful.


You mean like a train?


A majority of the use cases for self-driving cars are either solvable, or already solved, by some combination of better urban planning (i.e. zoning and probably large-scale regulatory reform), public transit (specifically useful public transit), and public investment. Unfortunately, in the United States, we are unable to do even one of these things sufficiently well, hence the need for self-driving cars (or, in some cases, some other technological solution; e.g. hyperloop).


I don’t know, can a train take you from any arbitrary point to any other arbitrary point?


Can a car?

Why do people always seem to forget that cars also require huge infrastructure changes, even more so than some other options.


Yes, from the train station inside my garage to the train station inside the hotel. /s


No it's not like that at all. It leaves whenever you want.


Fewer deaths. This is a huge win.


More phone time


The irony being I get nauseous if I look at my phone in a moving car.

In a potential future with self-driving cars I do see massive benefits for women wanting safe rides home. Many still get creeped out by drivers often enough to have a hard time with it.


Look out your window and imagine what we could do with all the street space that's dedicated to parking.

Biggest benefit is that most people will stop owning cars.


Traffic is going to be the same or higher. Instead of parking the car near you, the car has to travel to/from a parking hub.

I really dislike cars (I know they are sometimes necessary) and I hate parking spots, but this "hey, we can save parking spots" argument doesn't really work imo.


Only if they aren’t taxis. Then it’s just a matter of moving on to the next fare. Of course, using a taxi or Uber to avoid parking fees is already a well understood thing.


Yeah this is it. And we get to reuse the existing roads hopefully without having to build more. We can make better use of suburban sprawl.


You are free to chase whatever you want. Same goes for those who have chosen FSD.


Self-driving cars, particularly if they communicate, can safely keep shorter following distances. All-else-equal, this means that many more cars can drive on the same amount of road.
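As a rough illustration of that capacity argument, a simple constant-headway flow model (all figures here are illustrative assumptions, not from the thread): per-lane throughput is speed divided by the space each vehicle occupies, which is car length plus the gap implied by the following headway.

```python
def lane_capacity(speed_mps: float, headway_s: float, car_length_m: float = 4.5) -> float:
    """Vehicles per hour for one lane, under a constant-headway flow model."""
    spacing_m = car_length_m + speed_mps * headway_s  # road space each car occupies
    return speed_mps / spacing_m * 3600

human = lane_capacity(30.0, 2.0)  # the usual 2-second human following rule
auto = lane_capacity(30.0, 0.5)   # hypothetical coordinated platooning headway
print(f"{human:.0f} vs {auto:.0f} vehicles/hour")  # → 1674 vs 5538 vehicles/hour
```

Under these assumed numbers, cutting the headway from 2 s to 0.5 s more than triples per-lane throughput, which is the "more cars on the same road" effect the comment describes.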


> One reason is that there are some problems that a company will only encounter once it begins testing fully driverless operations. Waymo and Cruise’s problems with fire hoses and caution tape is a good example. A human driver would disengage FSD long before it got into situations like that, which means Tesla would be unlikely to have the training data necessary to train its cars to handle it properly.

The reasoning here seems flawed to me. I assume Tesla use periods when the human is driving for training data, so this is no constraint at all.


> they’re already testing it in Phoenix and San Francisco

It's not dead, it's just extremely niche. Try testing in the sticks on country roads in England. Most people from the US who try that for the first time are terrified, and that's not even the worst of it. Roads worldwide are complex and diverse, and require immense adaptability and negotiation. Negotiating large, well-defined city roads is only progress towards that niche, not arbitrary self-driving ability.


The % of total English driving that occurs on country roads is minuscule. That's the niche use case. The vast majority of driving occurs in sub/urban areas. The rate of driving even within the city of London/Manchester/Cambridge is dropping like a rock.

If you solve 80% of that use case, you still have most of the thing, and that seems close.


The problem is that these systems can't handle a ton of “edge” cases which are actually fairly common — they work in the best case scenarios when conditions are normal, weather is decent, other drivers are behaving, etc. That could, as you point out, still replace a LOT of human driving time but there are two challenges: the first is that you don't know when road or weather conditions will change unexpectedly and require human intervention, and the second is that there's a fair cost to having these systems. That extra cost is easy to justify when the pitch is “you get a robot chauffeur and can watch a movie or sleep” but not when it's “you need to be ready to step in at a moment's notice because we just encountered inconsistent markings, road work, or a spill”.

I am expecting something like the automatic convoying some of the trucking companies have been experimenting with to be more successful since they avoid the tedium aspects (they're already paying a driver to handle everything), don't drive in terrible conditions or unusual roadways, and since they get heavy utilization of the vehicles there isn't the problem of paying even more for something which already sits idle 95% of the time.


I will believe self driving long haul trucking when I see it.

I imagine it is a bit of a pipe dream. When it goes wrong it is going to go horribly wrong, most likely at very high speeds and the liability will all be on the trucking company.


The same is true in the US. They are trying in new, car-centric cities where the weather is almost always clear and sunny.

There’s a reason that they aren’t doing this where it snows, or in cities that aren’t giant car-first grids.


What makes country roads in England so terrifying to US drivers (other than the obvious left vs right issue which would apply to cities too)?


Country roads in England are generally less than two lanes wide and have a ton of blind curves. Oncoming passes must be coordinated carefully.


Yeah, even with the larger lanes you have to get used to judging minimal clearance, often without a centre line, which may be intentionally left out to encourage common-sense negotiation. That's before we even get into city parking, where you have to be good at squeezing into improbably sized spaces on narrow Victorian streets.

There are also some newly developed areas in the middle of some cities where the roads are pristine, well marked, with generous and consistent lane widths, those look closer to something like Phoenix (except not a giant grid), but that's uncommon, most areas are a patchwork due to the way everything has evolved slowly over time.


I miss when products were only announced when you could actually order them. The new process, where people announce they "will totally definitely might have something in [now()+18months]" and everyone blindly believes it because they want it to be true, is just frustrating. Still, at least my General AI will be able to commute to work by fusion-powered self-driving cars on a blockchain tomorrow morning...


I get that executives overpromise, and Elon especially, but he's admitted the difficulty and that FSD was harder than anyone suspected. Why, all of a sudden, does he seem extremely confident in his videos when he's been silent-ish the past few years?

I've seen him mention in numerous recent videos that Tesla's AI team is the #1 real-world AI company and no one is even close, and that FSD will be here very soon.


> FSD was harder than anyone suspected

Literally the only thing most people know about FSD is that it is hard. This is very on brand for Elon:

1. Claim something is super simple and easy

2. Get billion dollar contracts for vaporware

3. Fail to actually achieve anything remotely useful

4. Go silent

see: The Boring Company, Hyperloop, FSD, etc...

> Tesla's AI team is the #1 real-world AI company

Big doubt on that one. I'm sorry, but it's almost certainly false.


> This is very on brand for Elon:

I don’t think SpaceX is a failure in general, and Tesla seems to sell a lot of EVs. I feel like 10 years ago there were many comments on HN saying both were impossible achievements.

I'm not really a fan of Musk; I bought a BMW i4 instead of a Tesla (although not out of dislike for Musk). But the Musk haters seem to have a worse track record on their predictions than the Musk fanboys.


>I don’t think SpaceX is a failure in general

I do take issue with this, because I don't really see how a company that essentially leeches off Uncle Sam's bottomless pockets can tell its workers they need to be more "hardcore" or face bankruptcy if Starship doesn't work by EOY 2023.


Leeches off Uncle Sam’s bottomless pockets by providing it with cheaper launches than any other contractor could? Seriously not a success? Ok then…

I get the idea that Musk is like Bezos or early Gates; it’s a management style that I wouldn’t like to work under, but it’s a style that can get results for some things. Anyway, engineers can easily vote with their feet.


>Leeches off Uncle Sam’s bottomless pockets by providing it with cheaper launches than any other contractor could? Seriously not a success? Ok then…

The fact that any contractor comes close to bankruptcy proves that they are completely unable to keep providing these launches at the prices they agreed to. And that's not even factoring in this wild `Mars 2029` delusion.


So they are leeching off of Uncle Sam’s bottomless pockets by selling it launches at a loss (hence burning VC or investor money)? Really confused what the point is here.


You said above that Elon can’t be trusted, but now you are saying we should trust him that SpaceX “comes close to bankruptcy”? So trust him when it suits your argument?


Elon promising things and not delivering = he has failed on his promises many times in the past; there is no reason to trust him further.

Elon looking at his company's balance sheet or 1-year forecast = hopefully there can't be too many things wrong in there; there is more reason to trust/believe statements regarding this.

Not 4D chess.


Do you know that NASA would be left without a means to send astronauts to the ISS without SpaceX? SpaceX offers a launch cost lower than any other vendor's, and more importantly F9 is the most reliable vehicle, bar none. Are you saying there is only one party benefiting from that relationship?


It's difficult to admit you've been tricked. I took a look at your post history and it's obvious I'm not going to be swaying you any time soon lol.


>I've seen him mention in numerous recent videos that Tesla's AI team is the #1 real-world AI company and no one is even close,

Marketing babble

>and that FSD will be here very soon.

The same story he's told for the last 5 years.


The issue is that these self-driving cars are limited to a few big cities, where they're least necessary, since good public transit is far more efficient, many things are within walking distance, and speed limits are low anyway.

IMHO the biggest potential for self-driving cars is in suburbs and exurbs where mass transit is inherently less efficient.


About problems to solve (maybe not American ones): how well do these cars avoid potholes, especially the new ones that develop after heavy rain on worn-out roads?

I mean, I'll be happy if massive adoption of driverless cars materializes the money to regularly maintain the pavement of all roads, but I don't believe it will.


Current vehicles don't do much to avoid potholes. It's not an especially important issue for scaling and you can work around them with simple map blacklists. I've only seen a couple potholes that led to recovery events, even in the shitty roads of SF. One of those was really more of a sinkhole that physically broke the car.


I don't know about waymo et al. But Tesla FSD beta is awful with potholes.


Isn’t this the same story Ars Technica had, a couple of days ago?

https://arstechnica.com/cars/2023/06/the-death-of-self-drivi...


I thought self-driving was always an aspiration. Full self-driving requires solving the holy grail of robotics plus the ultimate safety problem. There should by now be a few technologies that can be deployed as side benefits instead of waiting for the whole package to be achieved.


AI improvements will eventually make their way to self driving. It has too much economic value to fade away. The question is whether self driving cars will be the cause or beneficiary of future innovation. There was a time when it was a cause. It looks like that's changing.


Did I miss the part where they operate in a city that gets a lot of snow? Dallas and Houston: maybe they get snow once in a great while. Phoenix and San Francisco: never.

> But it’s hard to judge the risk of something that’s never happened.

That isn't even a Black Swan; it's a known unknown.


Self-driving cars will work best when ALL cars are self-driving, and connected with each other, over a standard protocol shared by all manufacturers, to develop swarm networking to avoid collisions and cooperatively decide optimal routes and speeds on the fly.


Perhaps off-topic, but are there any estimates of how much power it takes to run one of these "supercomputer on wheels"? Is the compute energy a significant portion of the total energy for a given trip?


No, not compared to locomotion.
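For a sense of scale, a back-of-the-envelope comparison (all figures here are assumed ballpark values for illustration, not numbers from the thread): even a power-hungry compute stack is a modest fraction of highway driving energy.

```python
# Assumed figures: ~1 kW for the autonomy compute stack, and an EV
# consuming ~250 Wh/mile at 60 mph for locomotion.
compute_kw = 1.0
locomotion_kw = 250 * 60 / 1000   # Wh/mile * mph / 1000 = 15 kW
share = compute_kw / (compute_kw + locomotion_kw)
print(f"compute is ~{share:.0%} of total trip energy")  # → compute is ~6% of total trip energy
```

At city speeds locomotion power drops, so the compute share would be larger there, but still a minority of the total under these assumptions.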


What I don't understand is why companies aren't pushing ahead with intermediate solutions on constrained paths that are more like public transport: imagine a small van or mini-bus carrying 4-10ish passengers that travels on well-mapped, public main roads (and is allowed to use the bus lane).

The main paths in big cities already go pretty close to big venues, and we could set up these paths in such a way that the pickup and dropoff point is at most a few minutes walk away from people's true destinations.

It would serve as a replacement for public transport instead of a way of having your car drive itself (which isn't that big of a help considering the biggest issue with city driving is parking).


You have pretty much described what Waymo is doing. They use specific, pre-mapped zones only in certain conditions.


There are a few companies around the world working on that sort of solution to the public transit problem (called autonomous Personal Rapid Transit), and one or two systems similar to what you’re describing have been built.

Open-road autonomy has to deal with all sorts of unexpected situations, still has to contend with traffic (until all cars are communicating with all other cars… very very far in the future), and is still quite unprofitable for the driverless robotaxi use case. PRT, and what it sounds like you’re suggesting, gets around these types of problems: by operating in dedicated (fenced off) lanes, the perception and planning problems become trivial (especially if you put sensors on the roadway). With one company building and operating the system, coordination between vehicles becomes possible (with no pedestrians, and stations off the mainline, this means little traffic and still high throughput). Small, lightweight vehicles with reduced sensing/compute requirements are cheaper and require less road space (meaning cheaper infrastructure too).

The main downsides I can think of are that it would require walking to your final destination from the bus stop/station (but for public transit prices instead of Uber prices I’m sure that’d be an appealing option to many), and that it requires new (or at least dedicated) infrastructure, which is a harder sell to companies and cities.


How is this better than a regular bus?


Theoretically you could have more in rotation at a time. Rather than having a whole bus that’s 25% full, you can have a few vans shuttling along the route. This should allow greater access, since people don’t have to wait as long between buses. For example, the buses in my city are few. If I want to go to a grocery store that is only 8 miles away, the trip would take about 15 minutes by bike, because there is no bus route to it. However, minivans on a known route could get me most of the way.


Self driving hype and investment got ahead of the tech.


Seems to be the unfortunate progression of almost every potentially transformative technology.


The interest in self-driving cars (by anyone except salivating bean-counters and CEOs) has been greatly exaggerated.

Most people seem to prefer driving to being a passenger, particularly the passenger of some broken, half-baked tech (the kind of semi-broken tech the consumer computing industry seems to consider OK to bring to market).


People desperately need to keep this grift going. They have enjoyed the billions invested up to this point, and do not want that money to stop.


Argo AI's "grift" stopped. There goes your whole narrative.


So private fleets aside, when will it be out-of-the-box on my Toyota?

Are we at diminishing marginal returns and not getting closer?

Do we need a 'leap' in AI? Or is this just a matter of grinding it out?


If only the death of self-driving car owners was greatly exaggerated


It is greatly exaggerated. I may be wrong, but as far as I’m aware, the death toll associated with Tesla’s self-driving beta is still zero.

There are a handful of highly publicised deaths associated with Tesla drivers that misused adaptive cruise control and lane centring features. Such features are available on most new cars sold today. It is unclear whether the death rate from cruise control misuse is actually higher with Tesla vehicles, or if the apparent difference is from differing levels of media attention.


The life of self-driving cars also seems to have been greatly exaggerated.


It does seem awfully strange to talk about "the death of self-driving cars" when they haven't actually arrived yet!


True, but that's exactly the view that has been popular on HN: that self-driving is a pipe dream.


Self-driving manufacturers will simply kill as many people as it takes until they "get it right".

There's no downside, no-one is going to be put in prison, they'll just pay the fine which will be less than a month of profit.



