There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.
The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.
Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
It's pretty clear from his X feed:
The guy has serious Musk Derangement Syndrome.
> the fleet has traveled approximately 500,000 miles
Let's say they average 10 mph and operate 10 hours a day. That's 100 miles per car per day, so 500,000 miles works out to 5,000 car-days of travel, or, put another way, about 30 cars over 6 months.
That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.
One crash in this context will completely blow out their statistics, so it's kind of dumb to even talk about the statistics today. The real takeaway is that the robotaxis don't really exist: they're in an experimental phase, and we're not going to get real statistics until they're doing 1,000x that mileage. That won't happen until they've built something that actually works, and that may never happen.
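The back-of-envelope fleet estimate above can be sketched out; the average speed and hours of operation are the commenter's assumptions, not reported figures:

```python
# Rough fleet-size estimate from the thread's assumptions:
# ~500,000 total fleet miles, averaging 10 mph, 10 hours of operation per day.
total_miles = 500_000
avg_speed_mph = 10        # assumed
hours_per_day = 10        # assumed

miles_per_car_day = avg_speed_mph * hours_per_day   # 100 miles per car per day
car_days = total_miles / miles_per_car_day          # total car-days of travel
cars = car_days / (6 * 30)                          # spread over ~6 months

print(car_days)     # 5000.0
print(round(cars))  # 28 -- i.e. "about 30 cars"
```

Any of the three inputs could be off by 2x in either direction, but even so the implied fleet is tiny.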
More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.
One crash in 500,000 miles would merely put them on par with a human driver.
One crash every 50,000 miles would be more like having my sister behind the wheel.
I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!
If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: in her state, the license would have been suspended and revoked after the first two or three accidents, and the driver thrown in JAIL as a “scofflaw” for continuing to drive.
> One crash every 50,000 miles would be more like having my sister behind the wheel.
I'm not sure if that leads to the conclusion that you want it to.
Comparing stats from this few miles to the just over 1 trillion miles driven collectively in the US over a similar period is a bad idea. Any noise in Tesla's data will swing the ratio a lot; you can already see it in the monthly numbers varying between 1 and 4.
This is a bad comparison with not enough data. It's like how my household's average number of teeth per person is ~25% higher than the world average! (It includes one baby.)
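The monthly swing can be made concrete with a toy calculation; the per-month mileage here is a made-up round number (~500k miles over roughly five months), while the 1-to-4 monthly crash counts are from the thread:

```python
# With counts this small, month-to-month noise dominates the implied crash rate.
miles_per_month = 100_000      # hypothetical: ~500k total miles over ~5 months
low_month, high_month = 1, 4   # observed monthly crash counts per the thread

rate_low = low_month / miles_per_month    # crashes per mile in a quiet month
rate_high = high_month / miles_per_month  # crashes per mile in a bad month

print(rate_high / rate_low)  # 4.0 -- the implied rate swings 4x on noise alone
```

With only a handful of events per month, a single extra crash moves the estimated rate by a large factor, which is exactly why the ratio against the national fleet is unstable.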
I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.
These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.
Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.
The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.
Driver fatigue is real, no matter how much coffee you drink.
Lane-keep is a game changer if the UX is well done. I'm far more rested when I arrive at my destination in my Model 3 than when I drive the regular ICE car with its bad lane-assist UX.
EDIT: people who look at their phones will still look at their phones with lane-keep active; it only makes things a little safer for them and everyone else, really.
Still damning that the data is so bad even then. Good data wouldn't tell us anything; bad data likely means the AI is bad, unless they were spectacularly unlucky. And since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.
I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.
In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.
In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.
Sorry, that does not compute.
It tells you exactly whether the AI is any good: despite the safety drivers on board, 9 crashes still happened, which implies that even more would have happened without them. Over 500,000 miles, that's pretty bad.
Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention.
But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.
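Taking the thread's own figures at face value (9 crashes in ~500,000 fleet miles, and the earlier claim that one crash per 500,000 miles is roughly human parity), the implied gap works out as:

```python
# Crash-rate comparison using only numbers from this thread (not official stats).
crashes = 9
fleet_miles = 500_000
human_miles_per_crash = 500_000  # thread's claim for rough human-driver parity

# How many times worse than the claimed human baseline?
ratio = crashes * human_miles_per_crash / fleet_miles
print(ratio)  # 9.0
```

The point stands either way: with 9 events, the estimate is noisy, but it sits well above the baseline the thread itself proposes.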
"Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?
By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".
I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.
For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but those crashes are of lower severity overall, leading to a net improvement in safety. Judging by TFA alone, I can't tell whether something similar is going on here. I can imagine a robotaxi having a different distribution of crash frequency and severity than a human driver.
If a human had eyes on every angle of their car and they still did that, it would represent a lapse in focus or control; humans don't have the same advantages here.
That said: I would be more concerned about what an error like that represents when my sensor-covered autonomous car makes it. It would make me presume there was a detection failure, which is a big problem.
> roundabouts in cities (in Europe anyway) tend to increase the number of crashes
Not in France, according to the data. It depends on the speed limit, but they decrease accidents by 34% overall, and by almost 20% where the speed limit is 30 or 50 km/h.
The tech needs to be at least 100x more error-free than humans; it cannot be merely on par with the human error rate.