Yet it is quite odd that Tesla also reports that untrained customers using old versions of FSD on outdated hardware average 1,500,000 miles per minor collision [1], a literal 3,000% difference, when there are no penalties for incorrect reporting.
Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and your foot on the pedals, ready to jump in.
Accident rates under traditional cruise control are also far below average.
Why?
Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!
Tesla has always been able to publish the data required to really understand performance, normalized by vehicle age and driving conditions. But they have not, for reasons that have always been obvious but are absolutely undeniable now.
At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40mph.
The way I describe it is: yeah, it’s self-driving and doesn’t quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learner's permit.
If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.
Also, the language when turning on FSD in the vehicle is just insulting—the whole spiel about how it would ship as a finished product if it were an iPhone app, but shucks, the lawyers are just so silly and conservative that we have to call it a beta.
Yikes! I’d be a nervous wreck after just a couple of days.
I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.
You mean it has obvious bugs.
The only problem is, it doesn't work.
That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.
Later, when they started expanding the service area to include highways, they moved them to the driver's seat on those trips so that they could completely take over if something unsafe happened.
I wonder if these newly-reported crashes happened with the employee positioned in e-brake or in co-pilot mode.
This is awkward for any technology where we've made the task boring but not safe, so humans must still supervise, yet we've made their job harder. Waymo understood that this is not a place worth getting to.
It would be interesting to try training a non-human animal for this. It would probably not work for learning things like rules of the road, but it might work for collision avoidance.
I know of at least two relevant experiments that suggest it might be possible.
1. During WWII, when the US was willing to consider nearly anything that might win the war (short of the totally insane occult or crackpot theories that the Nazis wasted money on), they sponsored a project by B.F. Skinner to investigate using pigeons to guide bombs.
Skinner was able to train pigeons to look at an image projected on a screen showing multiple boats, a mix of US and Japanese ones, and to move their heads in a harness that would steer a falling bomb toward a Japanese boat. It was never actually deployed, but in simulator tests the pigeons did a great job.
2. I can't give a cite for this one, because I read it in a textbook over 40 years ago. A researcher trained pigeons to watch parts coming off an assembly line and to peck a switch if a part had any visible defects.
There were a couple of really clever things about this. To train an animal to do this, you initially have to reward them frequently when they are right. Once they have learned the desired behavior, you can start rewarding them less often and they will maintain it. You do have to keep occasionally rewarding correct behavior, though, or the behavior will eventually go away.
The way they handled this ongoing occasional reward was to use groups of 3 pigeons. The part-rejection system was modified to go with a majority vote. Whenever the vote was not unanimous, the 2 pigeons in the majority got a reward. This happened frequently enough to keep the behavior from going extinct in the birds, but infrequently enough to avoid fat pigeons.
Once they had 3 pigeons trained (with a human deciding on the rewards during the initial phase, when frequent rewards are needed) and working great on the line, they could use those 3 to train more. They did that by adding the trainee as a 4th member of the group. The trainee's vote was not counted, but if the other 3 were unanimous and the trainee agreed, the trainee was rewarded. This produced the frequent rewards needed to establish the behavior, as sketched below.
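Here's a toy reconstruction of that scheme as I understand the description (the function and names are my own illustration; the real rig was hardware, not software):

```
# Toy model of the three-pigeon inspection scheme described above.
def inspect_part(trio_votes: list[bool], trainee_vote: bool | None = None):
    """Each vote is True if that bird pecked 'defective'."""
    reject = sum(trio_votes) >= 2          # majority vote decides the part
    unanimous = len(set(trio_votes)) == 1
    # Non-unanimous vote: reward the two birds in the majority.
    rewarded = [] if unanimous else [i for i, v in enumerate(trio_votes) if v == reject]
    # A trainee (vote not counted) is rewarded only when it agrees with a
    # unanimous trio, giving the frequent rewards needed to establish the behavior.
    trainee_rewarded = (trainee_vote is not None
                        and unanimous and trainee_vote == trio_votes[0])
    return reject, rewarded, trainee_rewarded
```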
The groups of 3 pigeons could do this all day with an error rate orders of magnitude lower than that of the human part inspector. The human was good at the start of a shift, but rapidly got worse as the shift went on.
Ultimately the company that had let the researchers try this decided not to use it in production. They felt that no matter how much better the pigeons did, and no matter how publicly they documented that fact, competitors' ads about the company using birds to inspect its parts would cost too many sales.
>Jack (died 1890) was a Chacma baboon who was an assistant to a disabled railway signalman, James Wide, in South Africa.
>Jack was the pet and assistant of double leg amputee signalman James Wide, who worked for the Cape Town–Port Elizabeth Railway service. James "Jumper" Wide had been known for jumping between railcars until an accident where he fell and lost both of his legs below the knee. To assist in performing his duties, Wide purchased Jack in 1881, and trained him to push his wheelchair and to operate the railways signals under supervision.
>An official investigation was initiated after someone reported that a baboon was observed changing railway signals at Uitenhage near Port Elizabeth.
>After initial skepticism, the railway decided to officially employ Jack once his job competency was verified. He was paid twenty cents a day, and half a bottle of beer each week. It is widely reported that in his nine years of employment with the railway company, Jack never made a single mistake.
13781-13647 Street, Other fixed object, No injuries, Proceeding Straight, 17mph, contact area: bottom
13781-13648 Street, Bus, No injuries, Stopped, 0mph, contact area: left, front
13781-13646 Parking lot, Other fixed object, No injuries, Backing, 2mph, contact area: bottom
13781-13645 Parking lot, Pole / Tree, No injuries, Backing, 1mph, contact area: rear right
13781-13644 Street, Heavy truck, No injuries, Proceeding Straight (Heavy truck: parked), 4mph, contact area: left
Seems like there's zero benefit to this, then. Being required to pay attention, but having nothing (i.e., driving) to keep me engaged, seems like the worst of both worlds. Your attention would constantly be drifting.
Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.
OSHA also has regulations to mitigate risk ... lockout/tagout.
Both mitigate external risks. Good regulation mitigates known risk factors ... unknowns take time to learn about.
The Apollo program learned this with Apollo 1, when the hatch was bolted shut and the pure-oxygen environment burned the crew alive inside. Safety first then became the basis of decision making.
To be clear, I'm not in support of dumping chemicals into the world, just calling out that experimenting on the public with large robotic cars is perfectly in line with American business practice.
Politics is really a mind killer. Just think for a second. Who can be fooled by this "turning off FSD milliseconds before impact"?
They advertise and market a safety claim of 986,000 non-highway miles per minor collision. They are claiming, risking the lives of their customers and the public, that their objectively inferior product with objectively worse deployment controls is 1,700% better than their most advanced product under careful controls and scrutiny when there are no penalties for incorrect reporting.
https://www.rubensteinandrynecki.com/brooklyn/taxi-accident-...
Generally about 1 accident per 217k miles. Which still means that Tesla is having accidents at a 4x rate. However, there may be underreporting and that could be the source of the difference. Also, the safety drivers may have prevented a lot of accidents too.
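Back-of-envelope on that 4x, using the ~14 crashes over roughly 800k miles cited elsewhere in this thread (my arithmetic; treat both inputs as rough):

```
# Rough check of the "4x" claim; inputs are approximate.
taxi_miles_per_accident = 217_000
tesla_miles_per_accident = 800_000 / 14   # ~57,000 miles per crash
print(taxi_miles_per_accident / tesla_miles_per_accident)  # ~3.8, i.e. roughly 4x
```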
I think Tesla's egg is cooked. They need a full suite of sensors ASAP. Get rid of Elon and you'll see an announcement in weeks.
If you have a large fleet, say getting in 5-10 accidents a year, you can't buy a policy that's going to consistently pay out more than the premium, at least not one that the insurance company will be willing to renew. So economically it makes sense to set that money aside and pay out directly, perhaps covering disastrous losses with some kind of policy.
Insurers would charge 4 times as much for insurance I think. Which matches what I've seen when quoting insurance for Teslas before.
As an aside, the situation at Tesla sure is getting stranger. I don't know if it was yesterday or earlier in the week, but Elon saying that at least one Cybercab will be sold to a "consumer" before the end of '26 for under $30k makes no sense (yeah yeah promises promises). But wasn't the idea that Tesla would control the fleet? Why would they sell a person a Cybercab to operate as a taxi? That would mean that there's profit to be had by that buyer and so why the heck wouldn't Tesla just keep that profit for itself and run the entire operation? Some kind of balance sheet gimmick? Offloading the insurance risk to someone else?
Maybe someone reading this long-ass reply will clue me in. And I get it the majority of the folks these days think it's all vaporware, but doesn't the vaporware at least have to make some sense?
https://www.npr.org/2025/02/24/nx-s1-5305269/tesla-state-dep...
> That would mean that there's profit to be had by that buyer and so why the heck wouldn't Tesla just keep that profit for itself and run the entire operation?
I suspect this is because they have less confidence in the ability of the cab to pay for itself and would rather offload that financial risk on the buyer.
So this number is plausible.
Gigantic lithium batteries on wheels guided by WIP software do not
Tesla needs their FSD system to be driving hundreds of thousands of miles without incident. Not the 5,000 miles Michael FSD-is-awesome-I-use-it-daily Smith posts incessantly on X about.
There is this mismatch where overly represented people who champion FSD say it's great and has no issues, and the reality is none of them are remotely close to putting in enough miles to cross the "it's safe to deploy" threshold.
A fleet of robotaxis will do more FSD miles in an afternoon than your average Tesla fanatic will do in a decade. I can promise you that Elon was sweating hard during each of the few unsupervised rides they have offered.
Almost there. Humans kill one person every 100 million miles driven. To reach mass adoption, self-driving cars need to kill one every, say, billion miles. Which means dozens or hundreds of billions of miles driven to reach statistical significance.
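As a hedged back-of-envelope, assuming crash arrivals are Poisson (my assumption): by the "rule of three", after N fatality-free miles the 95% upper bound on the true rate is about 3/N.

```
# Rule of three: zero events over N miles -> 95% upper bound on rate ~ 3/N.
target_rate = 1 / 1_000_000_000       # one death per billion miles
print(f"{3 / target_rate:.1e} fatality-free miles needed")  # 3.0e+09 at minimum
# Any observed deaths push the requirement up sharply, hence
# "dozens or hundreds of billions" of miles in practice.
```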
They need to be around parity. So a death every 100mm miles or so. The number of folks who want radically more safety are about balanced by those who want a product in market quicker.
I don't think so.
The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people. The negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality.
I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen. Maybe not a full order of magnitude safer, but I think it will need to be clearly safer than human drivers and not just at parity.
We're speaking in hypotheticals about stuff that has already happened.
> I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen
I used to as well. And no doubt, some populations will take this view.
They won't have a stake in how self-driving cars are built and regulated. There is too much competition between U.S. states and China. Waymo was born in Arizona and is now growing up in California and Florida. Tesla is being shaped by Texas. The moment Tesla or BYD get their shit together, we'll probably see federal preëmption.
(Contrast this with AI, where local concerns around e.g. power and water demand attention. Highways, on the other hand, are federally owned. And D.C. exerting local pressure with one hand while holding highway funds in the other is long precedented.)
I like to quip that error-rate is not the same as error-shape. A lower rate isn't actually better if it means problems that "escape" our usual guardrails and backup plans and remedies.
You're right that some of it may just be a perception-issue, but IMO any "alien" pattern of failures indicates that there's a meta-problem we need to fix, either in the weird system or in the matrix of other systems around it. Predictability is a feature in and of itself.
A self-driving car that merely achieves parity would be worse than 98% of the population.
Gotta do twice the accident-free mileage to achieve parity with the sober 98%.
1 in a billion might be a conservative target. I can appreciate that statistically, reaching parity should be a net improvement over the status quo, but that only works if we somehow force 100% adoption. In the meantime, my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's.
To be clear, I'm not arguing for what it should be. I'm arguing for what it is.
I tend to drive the speed limit. I think more people should. I also recognise there is no public support for ticketing folks going 5 over.
> my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's
All of these services are supply constrained. That's why I've revised my hypothesis. There are enough folks who will take that car before you get comfortable who will make it lucrative to fill streets with them.
(And to be clear, I'll ride in a Waymo or a Cybercab. I won't book a ride with a friend or my pets in the latter.)
It seems reasonable that deaths and major injuries come highly disproportionately from excessively high speed, slow reaction times at such speeds, and going much too fast for conditions even at lower absolute speeds. What if even the not-very-good self-driving cars are much better at avoiding the base conditions that lead to fatal accidents, even if they aren't so good at avoiding lower-speed fender-benders?
If that were true, what would that mean to our adoption of them? Maybe even the less-great ones are better overall. Especially if the cars are owned by the company, so the costs of any such minor fender-benders are all on them.
If that's the case, maybe Tesla's camera-only system is fairly good actually, especially if it saves enough money to make them more widespread. Or maybe Waymo will get the costs of their more advanced sensors down faster and they'll end up more economical overall first. They certainly seem to be doing better at getting bigger faster in any case.
Important correction “kill one or less, per billion miles”. Before someone reluctantly engineers an intentional sacrifice to meet their quota.
Pedantic correction: "kill one or fewer, per billion miles"
You can prove Tesla's system is a joke with a multitude of metrics.
People have an expectation that self-driving cars will be magical in ability. Look at the flak Waymo has received despite its most egregious violations being fender-bender equivalents.
I think they do. That's the whole point of brand value.
Even my non-tech friends seem to know that with self-driving, Waymo is safe and Tesla is not.
Once Elon put himself at the epicenter of American political life, Tesla stopped being treated as a brand, and more a placeholder for Elon himself.
Waymo has excellent branding and first to market advantage in defining how self-driving is perceived by users. But, the alternative being Elon's Tesla further widens the perception gap.
I'm probably not the average consumer in this situation but I was in Austin recently and took both Waymo and Robotaxi. I significantly preferred the Waymo experience. It felt far more integrated and... complete? It also felt very safe (it avoided getting into an accident in a circumstance where I certainly would have crashed).
I hope Tesla gets their act together so that the autonomous taxi market can engage in real price discovery instead of "same price as an Uber but you don't have to tip." Surely it's lower than that especially as more and more of these vehicles get onto the road.
Unrelated to driving ability but related to the brand discussion: that graffiti font Tesla uses for Cybertruck and Robotaxi is SO ugly and cringey. That alone gives me a slight aversion.
Robotaxis market is much broader than the submersibles one, so the effect of consumers' irrationality would be much bigger there. I'd expect an average customer of the submarines market to do quite a bit more research on what they're getting into.
I don't know what a clear/direct way of explaining the difference would be.
Totally rational.
A small number of humans bring a bad name to the entire field of regular driving.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
What's actually "distinct?" The secret sauce of their code? It always amazed me that corporate giants were willing to compete over cab rides. It sort of makes me feel, tongue in cheek, that they have fully run out of ideas.
> they will assume all robotic driving is crash prone
The difference in failure modes between regular driving and autonomous driving is stark. Many consumers feel the overall compromise is unviable even if the error rates between providers are different.
Watching a Waymo drive into oncoming traffic, pull over, and hear a tech support voice talk to you over the nav system is quite the experience. You can have zero crashes, but if your users end up in this scenario, they're not going to appreciate the difference.
They're not investors. They're just people who have somewhere to go. They don't _care_ about "the field". Nor should they.
> dangerous and irresponsible.
These are, in fact, pilot programs. Why this lede always gets buried is beyond me. Instead of accepting the data and incorporating it into the world view here, people just want to wave their hands and dissemble over how difficult this problem _actually_ is.
Hacker News has always assumed this problem is easy. It is not.
That’s the problem right there.
It’s EXTREMELY hard.
Waymo has very carefully increased its abilities, tip-toeing forward little by little until after all this time they’ve achieved the abilities they have with great safety numbers.
Tesla appears to continuously make big jumps it seems totally unprepared for, yelling “YOLO,” and then expects to be treated the same when it doesn’t work out, saying “but it’s hard.”
I have zero respect for how they’ve approached this since day 1 of autopilot and think what they’re doing is flat out dangerous.
So yeah. Some of us call them out. A lot. And they seem to keep providing evidence we may be right.
Genuine question though: has Waymo gotten better at their reporting? A couple years back they seemingly inflated their safety numbers by sanitizing the classifications with subjective “a human would have crashed too so we don’t count it as an accident”. That is measuring something quite different than how safety numbers are colloquially interpreted.
It seems like there is a need for more standardized testing and reporting, but I may be out of the loop.
Driving around in good weather and never on freeways is not much of an achievement. Having vehicles that continually interfere in active medical and police cordons isn't particularly safe, even though there haven't been terrible consequences from it, yet.
If all you're doing is observing a single number, you're drastically underprepared for what happens when they expand this program beyond these paltry self-imposed limits.
> Some of us call them out.
You should be working to get their certificate pulled at the government level. If this program is so dangerous then why wouldn't you do that?
> And they seem to keep providing evidence we may be right.
It's tragic you can't apply the same logic in isolation to Waymo.
The difference is that accidents on a freeway are far more likely to be fatal than accidents on a city street.
Waymo didn't avoid freeways because they were hard, they avoided them because they were dangerous.
Maybe. We don’t know for sure.
You seem to frame that a bit like Waymo is cheating or padding their numbers.
But I see that as them taking appropriate care and avoiding stupid risks.
Anyway as someone else pointed out they recently started doing freeways in Austin so we’ll know soon.
Not sure how you read that. I'm saying Waymo was prioritizing safety.
Same argument, different sentiment.
LIDAR gives Waymo a fundamental advantage.
Tesla FSD is crap. But I also think we wouldn't see quite so much praise of Waymo unless Tesla also had aspirations in this domain. Genuinely, what is so great about a robo taxi even if it works well? Do people really hate immigrants this much?
What’s so great about a robotaxi even if it works well? It’s neat. As a technology person I like that it exists. I don’t know past that. I’ve never used one; they’re not deployed where I live.
I don't live in a covered area, but when I am in range I will gladly pay 10-20% more for a Waymo ride than an Uber/Lyft/etc.
In some spaces we still have rule of law - when xAI started doing the deepfake nude thing we kind of knew no one in the US would do anything but jurisdictions like the EU would. And they are now. It's happening slowly but it is happening. Here though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
the issue is that these tools are widely accessible, and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years
due to the current regulatory environment (trump admin), there is no political will to tackle new laws.
> I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
unlike deepfakes, there are extensive road safety laws and civil liability precedent. texas may be pushing tesla forward (maybe partially for ideological reasons), but it will be an extremely hard sell to get any of the major US cities to get on board with this.
so, no, i don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon.
In the specific case of grok posting deepfake nudes on X. Doesn't X both create and post the deepfake?
My understanding was, Bob replies in Alice's thread, "@grok make a nude photo of Alice" then grok replies in the thread with the fake photo.
Where grok is at risk is in not responding after they are notified of the issue. It’s trivial for grok to ban some keywords here and they aren’t; that’s a legal issue.
Sure, in this context the person who mails the item is the one instigating the harassment but it's the postal network that's facilitating it and actually performing the "last mile" of harassment.
Notification plays a role here, however: there are a bunch of things the post office does if someone tries to use them to do this regularly and you ask the post office to do something. The issue therefore is if people complain and then X does absolutely nothing while having a plethora of reasonable options to stop this harassment.
https://faq.usps.com/s/article/What-Options-Do-I-Have-Regard...
You may file PS Form 1500 at a local Post Office to prevent receipt of unwanted obscene materials in the mail or to stop receipt of "obscene" materials in the mail. The Post Office offers two programs to help you protect yourself (and your eligible minor children).
The postal network transports a letter, and only the person reading the letter can see the contents.
These situations are in no way comparable.
the same is true if the webapp has a blank "type what you want I'll make it for you" field and the user types "CP" and the webapp makes it.
Legal things can be immoral, and immoral things can be legal. We have a duty to live morally; the law is only words in books.
Truly baffled by this genre of comment. "I don't think you will see <thing that is already verifiably happening> any time soon" is a pattern I'm seeing way more lately.
Is this just denying reality to shape perception or is there something else going on? Are the current driverless operations after your knowledge cutoff?
for the rest of us aligned to a single reality, robotaxis are currently only operating as robotaxis (unsupervised) in texas (and even that's dubious, considering the chase car sleight of hand).
of course, if you want to continue to take a weasely and uncharitable interpretation of my post because i wasn't completely "on brand", you are free to. in which case, i will let you have the last word, because i have no interest in engaging in such by-omission dishonesty.
“robotaxi” is a generic term for (when the term was coined, hypothetical) self-driving taxicabs, that predates Tesla existing. “Tesla Robotaxi” is the brand-name of a (slightly more than merely hypothetical, today) Tesla service (for which a trademark was denied by the US PTO because of genericness). Tesla Robotaxi, where it operates, provides robotaxis, but most robotaxis operating today are not provided by Tesla Robotaxi.
hm yes i can see where the confusion lies
I'm not one to nitpick grammar but if you want to convey something is a proper noun you capitalize it.
no, i wasn't. i am telling you i wasn't and i have already told you i wasn't. how many more times do you need to be told?
> unless you believe the "extensive road safety laws and civil liability precedent" only apply to Tesla branded Robotaxis
i was talking about tesla robotaxis, sure.
please quote me where i said that it only applies to them. otherwise, you're making shit up in your head and accusing me of it :)
Due to your poor writing, what you intended to write and what you actually wrote were different. Though you still don't appear to understand what you wrote, I am now satisfied you made the error unintentionally. Hope this gives you peace.
in addition, i made it explicitly clear to you MULTIPLE times what i was talking about, and you still struggled with resolving the ambiguity.
please count how many times i repeated it to you, and use that to inform your own knowledge of your limitations to absorb new information
anyways it's clear that this conversation is at an end, so i will let you have the last word, if you wish.
[citation needed]
Historically hosts have always absolutely been responsible for the materials they host, see DMCA law, CSAM case law...
if you think i said otherwise, please quote me, thank you.
> Historically hosts have always absolutely been responsible for the materials they host,
[citation needed] :) go read up on section 230.
for example with dmca, liability arises if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice
that is quite some distance from "always absolutely". in fact, it's the whole point of 230
Note that I'm not asking for perfection. However, if someone does manage to create child porn (or any of a number of currently unspecified things - the list is likely to grow over the next few years), you need to show that you had a lot of protections in place and that they did something difficult to bypass them.
That ain't true [1].
Teslas are really cheaply made, inadequate cars by modern standards. The interiors are terrible and are barebones even compared to mainstream cars like a Toyota Corolla. And they lack parking sensors depending on the version you bought. I believe current models don’t come with a surround view camera either, which is almost standard on all cars at this point, and very useful in practice. I guess I am not surprised the Robotaxis are also barebones.
Getting this to a place where it is consistently better than humans is not equivalent to fixing bugs in phone software and the like.
When you are dealing with a dynamic uncontained environment it is much more difficult.
Any engineering student can understand why LIDAR+Radar+RGB is better than just a single camera; and any person moderately aware of tech can realize that digital cameras are nowhere near as good as the human eye.
But yeah, he's a genius or something.
Beyond even the cameras themselves, humans can move their head around, use sun visors, put on sunglasses, etc to deal with driving into the sun, but AVs don't have these capabilities yet.
You can solve this by having multiple cameras for each vantage point, with different sensors and lenses that are optimized for different light levels. Tesla isn't doing this mind you, but with the use of multiple cameras, it should be easy enough to exceed the dynamic range of the human eye so long as you are auto-selecting whichever camera is getting you the correct exposure at any given point.
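A minimal sketch of that auto-selection idea (my illustration only, not any vendor's actual pipeline): given 8-bit frames from cameras bracketed at different exposures, keep the frame whose histogram is least clipped at either end.

```
import numpy as np

def pick_best_exposed(frames: list[np.ndarray]) -> np.ndarray:
    """Pick the 8-bit frame with the fewest blown-out or crushed pixels."""
    def clipped_fraction(f: np.ndarray) -> float:
        # Fraction of pixels that are near-black or near-white.
        return float(np.mean((f <= 2) | (f >= 253)))
    return min(frames, key=clipped_fraction)
```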
Photon counting is a real thing [1] but that's not what Tesla claims to be doing.
I cannot tell if what they are doing is something actually effective that they should have called something other than "photon counting" or just the usual Musk exaggerations. Anyone here familiar with the relevant fields who can say which it is?
Here's what they claim, as summarized by whatever it is Google uses for their "AI Overview".
> Tesla photon counting is an advanced, raw-data approach to camera imaging for Autopilot and Full Self-Driving (FSD), where sensors detect and count individual light particles (photons) rather than processing aggregate image intensity. By removing traditional image processing filters and directly passing raw pixel data to neural networks, Tesla improves dynamic range, enabling better vision in low light and high-contrast scenarios.
It says these are the key aspects:
> Direct Data Processing: Instead of relying on image signal processors (ISPs) to create a human-friendly picture, Tesla feeds raw sensor data directly into the neural network, allowing the system to detect subtle light variations and near-IR (infrared) light.
> Improved Dynamic Range: This approach allows the system to see in the dark exceptionally well by not losing information to standard image compression or exposure adjustments.
> Increased Sensitivity: By operating at the single-photon level, the system achieves a higher signal-to-noise ratio, effectively "seeing in the dark".
> Elimination of Exposure Limitations: The technique helps mitigate issues like sun glare, allowing for better visibility in extreme lighting conditions.
> Neural Network Training: The raw, unfiltered data is used to train Tesla's neural networks, allowing for more robust, high-fidelity perception in complex, real-world driving environments.
The IMX490 has a dynamic range of 140dB when spitting out actual images. The neural net could easily be trained on multiexposure to account for both extremely low and extremely high light. They are not trying to create SDR images.
Please, let's stop with the dynamic range bullshit. Point your phone at the sun next time you're blinded in your car. Or use night mode. Both see better than you.
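For scale (my arithmetic, using the usual image-sensor convention that dynamic range in dB is 20 times the log10 of the max/min signal ratio), 140 dB works out to about ten million to one:

```
def db_to_contrast(db: float) -> float:
    # Image-sensor convention: dB = 20 * log10(max_signal / min_signal).
    return 10 ** (db / 20)

print(f"{db_to_contrast(140):.0e}")  # 1e+07, i.e. a 10,000,000:1 ratio
```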
> What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.
> Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.
> ...
https://news.ycombinator.com/item?id=14600924
Karpathy left in 2022. Turns out that the commenter, Animats, is John Nagle!
For me it looks like they will reach parity at about the same time, so camera only is not totally stupid. What's stupid is forcing robotaxi on the road before the technology is ready.
Nah, Waymo is much safer than Tesla today, while Tesla has way-mo* data to train on and much more compute capacity in their hands. They're in a dead end.
Camera-only was a massive mistake. They'll never admit to that because there are now millions of cars out there that would be perceived as defective if they did. This is the decision that will sink Tesla to the ground, you'll see. But hail Karpathy, yeah.
* Sorry, I couldn't resist.
Or did he "resign" because Elon insists on camera-only and Karpathy said "I can't do it"?
It's far from clear that the current HW4 + sensor suite will ever be sufficient for L4.
Waymo still takes many wrong turns and can easily get stuck in situations where a human would not.
Technology is just not there yet, and Elon is impatient.
Waymo could be working on camera-only. I don’t know. But it’s not controlling the car. And until such time as they can prove with their data that it is just as safe, that seems like a very smart decision.
Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem.
No reason to assume that. A toddler that is increasing in walk speed every month will never be able to outrun a cheetah.
The only way that ion thruster might save the toddler is if it was used to blast the cheetah in the face. It would take a pretty long time to actually cause enough damage to force the cheetah to stop, but it might be annoying enough and/or unusual enough to get it to decide to leave.
agreed. this also provides an explanation for the otherwise surprising fact that prey animals in the savannah have never been observed to naturally evolve ion thrusters.
I'm curious how crashes are reported for humans, because it sounds like 3 of the 5 examples listed happened at like 1-4 mph, and the fourth probably wasn't Tesla's fault (it was stationary at the time). The most damning one was a collision with a fixed object at a whopping 17 mph.
Tesla sucks, but this feels like clickbait.
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or *whether these were unavoidable situations caused by other road users*. Tesla wants us to trust its safety record while making it impossible to verify.
My suspicion is that these kinds of minor crashes are simply harder to catch for safety drivers, or maybe the safety drivers did intervene here and slow down the car before the crashes. I don't know if that would show in this data.
While I was living in NYC I saw collisions of that nature all the time. People put a "bumper buddy" on their car because the street parallel parking is so tight and folks "bump" the car behind them while trying to get out.
My guess is that at least 3 of those "collisions" are things that would never be reported with a human driver.
So the average driver is also likely a bad driver by your standard. Your standard seems reasonable.
The data is inconclusive on whether Tesla robotaxi is worse than the average driver.
Unlike humans, Waymo does report 1-4 mph collisions. The data is very conclusive that Robotaxi is significantly worse than Waymo.
For those complaining about Tesla's redactions - fair and good. That said, Tesla formed its media strategy at a time when gas car companies and shorts bought ENTIRE MEDIA ORGs just to trash them to back their short. Their hopefulness about a good showing on the media side died with Clarkson and co faking dead batteries in a roadster test -- so, yes, they're paranoid, but also, they spent years with everyone out to get them.
Are you being sarcastic due to Elon buying Twitter to own/control the conversation? He would be a poster child for the bad actions you are describing.
[1] https://www.businessinsider.com/musks-claim-teslas-appreciat...
If used, good on you. You're not making things much worse. I've seen people cheap out and buy performance diesels as they'd depreciated so much. Picking up a cheapo Tesla is at least better than that sorry outcome. Thanks.
“13781-13644 Street, Heavy truck, No injuries, Proceeding Straight (Heavy truck: parked), 4mph, contact area: left”
Also, as a disclaimer, I need to know if you were long the stock at the time. Too much distortion caused by both shorts and longs. I wasn't on either side, but I learned after many hard years that so much on /r/teslamotors and /r/RealTesla was just pure nonsense.
> The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.
So in reality there was one crash with a fixed object; the rest are questionable, and not crashes as you portray them. Statistics like these would not even make it into human crash reports, since they get filed as non-driving incidents, parking-lot bumps, etc.
Your context sucks, and it's as good as a lie.
>Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla’s supervised […]
So far, you can clearly tell: 1. Tesla works decently in a limited environment, with no crazy patterns. 2. It's a limited environment, which means nothing. Scale is still not there. They need to prove themselves.
Electrek says they aren't made public, if I understand correctly (?). Do you know where the public can access them - do you have any links?
Though maybe the safety drivers are good enough for the major stuff, and the software is just bad enough at low-speed, short-distance collisions that the drivers don't notice as easily that the car is doing something wrong before it happens.
A few are low-speed reversing into things, the vast majority of which, when done by humans, are never reported and are not in the dataset comparing how many crashes Teslas have had vs humans.
I would say they’re facts, but they’re being used dishonestly.
Since the narratives are redacted, who's to say the Tesla didn't change lanes to be in front of the bus, slam on the brakes, then get rear ended?
Or pull partially out of a driveway, stopping and blocking a lane with a bus traveling 35mph in said lane and got hit by it?
> A few are low-speed reversing into things, the vast majority of which, when done by humans, are never reported and are not in the dataset comparing how many crashes Teslas have had vs humans.
I'm sure this happens to humans all the time, but not a single one of those humans would be considered a good (or even decent) driver.
So is the bar here being a good or decent driver, or being x times worse than the average human?
I see a lot of bar moving.
> I see a lot of bar moving.
"Less than decent" means "worse than the average human driver".
I've never hit a stationary object, or any object for that matter, in 20 years of driving.
I understand that might not be the same for you. My bar is that it must be better than my own good driving.
Even Waymo have tons of reported crashes in the same document.
Self-driving cars need to be better than the average human - which means fewer injuries and deaths. Given 100 people will be killed on the road in the US today, it’s actually not a crazy high bar to clear.
My own bar being a self driving car better than me is made up and impossible to test for?
Stop trying to force shitty self driving implementations down other's throats. If they were good and useful, people would voluntarily use them.
> Self driving cars need to be better than the average human
And Teslas are obviously not, to everyone except the terminally brainwashed. Two more weeks until it works though, right?
Your bar is irrelevant, this isn’t about you, personally. This is about everyone.
As someone who is neither an Elon fan nor a hater, it irks me how deranged HN is about anything Musk-related.
It's always like that. The poor billionaire, soon trillionaire, getting bullied by the blogger. Not.
Do you even realize how dumb that sounds?
https://electrek.co/2026/02/17/tesla-rolls-first-steering-wh...
Meanwhile, from the article, if you read it:
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything.
The sheer scale and financial consequences (once it implodes) of Tesla scam are unprecedented.
While the word 'crash' seems a bit strong here, keep in mind, as the article mentions, that Tesla is voluntarily redacting all the details. The company was recently found to have deliberately lied about the availability of the data in a death case, and was ordered to pay >$200 million.
>The new crashes include [...] a crash with a bus while the Tesla was stationary
Doesn't this imply that the bus driver hit the stationary Tesla, which would make the human bus driver at fault and the party responsible for causing the accident? Why should a human driver hitting a Tesla be counted against Tesla's safety record?
It's possible that the Tesla could've been stopped in a place where it shouldn't have, like in the middle of an intersection (like all the Waymos did during the SF power outage), but there aren't details being shared about each of these incidents by Electrek.
>The new crashes include [...] a collision with a heavy truck at 4 mph
The chart shows only that the Tesla was driving straight at 4mph when this happened, not whether the Tesla hit the truck or the truck hit the Tesla.
Again, it's entirely possible that the Tesla hit the truck, but why aren't these details being shared? This seems like important data to consider when evaluating the safety of autonomous systems - whether the autonomous system or human error was to blame for the accident.
I appreciate that Electrek at least gives a mention of this dynamic:
>Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not responsible for some of these crashes, which is true, even though that’s much harder to determine with Tesla redacting the crash narrative on all crashes, but the problem is that even Tesla’s own benchmark shows humans have fewer crashes.
Aren't these crash details / "crash narrative" a matter of public record and investigations? By e.g. either NHTSA, or by local law enforcement? If not, shouldn't it be? Why should we, as a society, rely on the automaker as the sole source of information about what caused accidents with experimental new driverless vehicles? That seems like a poor public policy choice.
> Aren't these crash details / "crash narrative" a matter of public record and investigations?
Per the OP, Tesla doesn't publish the details; all other autonomous driving manufacturers do.
There's no real discussion to be had on any of this. Just people coming in to confirm their biases.
As for me, I'm happy to make and take bets on Tesla beating Waymo. I've heard all these arguments a million times. Bet some money
I'm not a camera-only doomer, and expect that in ten years Waymo will also not use LIDAR, or that the units will be incredibly cheap and well integrated.
But I think the pro-Tesla camp is exaggerating how quickly the march of 9s will happen for them, and underestimating how quickly Waymo will expand in the next few years.
There's a reason Long Now uses this format, and I'm happy to use their platform and pay the fee: https://longbets.org/rules/
Email is reitzensteinm@gmail.com if you're interested.
Heard this for a decade now, but I’m sure this year will be different!
For instance, would you like to bet 1,000 dollars that Tesla has more unsupervised self-driving robotaxis than Waymo at the start of 2027?
So let's use a metric that unequivocally shows who is 'winning'. I'm confident Waymo will have more paid rides per week than Tesla at the start of 2027 (I'll give you 2028 if you want). No other metric indicates scale better than passenger trips. If you have more robotaxis or you are in more cities, it will show up in the trip count.
I'll give $1000 to a charity of your choosing if Tesla beats Waymo in this metric. Fully unsupervised trips only, does not include trips with a safety driver or a monitor in a passenger seat, none of the usual games they like to play.
I would also love to see every car brand have full autonomous driving. It seems like you think you must be in one camp or another, and that one has to "beat" the other - but that's not true. Both can be successful - wouldn't that be a great world?
[1] https://www.fastcompany.com/91491273/waymo-vehicle-hit-a-chi....
if Tesla drops the ego, they could obtain Waymo's software and track record on future Tesla hardware
Electrek is just summarizing/commenting.
My comment was aimed at the implication that the data might be untrustworthy because they were the ones reporting it.
So I pointed out it wasn’t their data.
As for “spin,” Elon has been telling us for a long time that FSD is safer than humans and will save lives. We appear to have objective data that counters that narrative.
That seems worth reporting on to me.
It's basically a few light bumps at a snail's pace, probably caused by other cars. The article's headline reads as if it mowed down a group of school children.
We are still a long, long, long way off from someone feeling comfortable jumping in an FSD cab on a rainy night in New York.
https://www.cnbc.com/2026/01/22/musk-tesla-robotaxis-us-expa...
Tesla CEO Elon Musk said at the World Economic Forum in Davos that the company’s robotaxis will be “widespread” in the U.S. by the end of 2026.
Then they compare that numerator to Tesla’s own “minor collision” benchmark — which is not police-reported fender benders; it’s a telemetry-triggered “collision event” keyed to airbag deployment or delta-V ≥ 8 km/h. Different definitions. Completely bogus ratio.
Any comparison to police-reported crashes is hilariously stupid for obvious reasons.
On top of that, the denominator is hand-waved ("~800k paid miles extrapolated"), which is extra sketchy because SGO crashes can happen during non-paid repositioning/parking while "paid miles" excludes those segments. And we’re talking 14 events in one geofenced, early rollout in Austin so your confidence interval is doing backflips. If you want a real claim vs humans, do matched Austin exposure, same reportable-crash criteria, severity stratification, and show uncertainty bands.
But what you get instead is clickbait so stop falling for this shit please HN.
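To put numbers on the confidence-interval point (a sketch that takes the 14 events and ~800k miles above at face value; both are uncertain), an exact 95% Poisson interval is already very wide:

```
from scipy import stats

events, miles = 14, 800_000
# Exact (Garwood) 95% confidence interval for a Poisson count.
lo = stats.chi2.ppf(0.025, 2 * events) / 2
hi = stats.chi2.ppf(0.975, 2 * (events + 1)) / 2
print(f"one crash every {miles / hi:,.0f} to {miles / lo:,.0f} miles")
# Roughly 34k to 105k miles per crash: a ~3x spread from just 14 events.
```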
"4x worse than humans" is misleading; I bet it's better than humans, by a good margin.
No idea how these things are being allowed on the road. Oh wait, yes I do. $$$$
Given the way Musk has lied and lied about Tesla's autonomous driving capabilities, that can't be much of a surprise to anyone.
I know that it is irrational to expect any kind of balance or any kind of objective analysis, but things are so polarized that I often feel the world is going insane.
In before, 'but it is a regulation nightmare...'
Get over your bs.