With humans, when they do this, the most we can do is punish that individual. To increase population-wide compliance we can run a safety awareness campaign, ramp up enforcement, or ramp up the fines. But all of these cost a lot of money, take a while to have an effect, need to be repeated or kept up, and only help statistically.
With a robot driver we can develop a fix and roll it out on all of them. Problem solved. They were doing the wrong thing, now they are doing the right thing. If we add a regression test we can even make sure that the problem won't be reintroduced in the future. Try to do that with human drivers.
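To make the regression-test idea concrete, here's a deliberately toy sketch. BusObservation and may_proceed are invented names for illustration; a real stack would regression-test the whole perception/planning pipeline against logged or simulated drives, not a pure function:

    from dataclasses import dataclass

    @dataclass
    class BusObservation:
        is_school_bus: bool
        stop_arm_deployed: bool
        red_lights_flashing: bool

    def may_proceed(obs: BusObservation) -> bool:
        # Never pass a school bus with its stop arm out or red lights flashing.
        return not (obs.is_school_bus
                    and (obs.stop_arm_deployed or obs.red_lights_flashing))

    def test_never_passes_flashing_school_bus():
        # Once this passes in CI, the fix can't be silently reintroduced.
        obs = BusObservation(is_school_bus=True,
                             stop_arm_deployed=True,
                             red_lights_flashing=True)
        assert may_proceed(obs) is False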
You have to know what you're fixing first. You're going to write a lot of code in blood this way.
It's not that people are particularly bad at driving; it's that the road is exceptionally dynamic, with many different users and use cases all trying to operate in a synchronized fashion, with a dash of strong regulation sprinkled in.
In this case the expected behaviour is clearly spelled out in the law.
> You're going to write a lot of code in blood this way.
Do note that in this case nobody died or got hurt. People observed that the autonomous vehicles did not follow the rules, the company got notified of this fact and they are working on a fix. No blood was spilled to achieve this result.
Also note that we spill much blood on our roads already. And we do that without much of any hope of learning from individual accidents. When George runs over John there is no way to turn that into a lesson for all drivers. There is no way to understand what went wrong in George's head, and then there is no way to adjust all drivers' heads so that particular problem won't happen again.
Compared to that, autonomous vehicles have barely harmed anyone. Also they will probably save most of those lives once they become good.
The "least harm" approach is to scale autonomous vehicles as quickly as possible even if they do have accidents sometimes.
It seems like we're pretty close to that point, but the numbers need to be treated with care for various reasons. (Robotaxis aren't dealing with the same proportions of conditions - city vs suburban vs freeway - and we should probably exclude collisions caused by human bad-actors which should have fallen within the remit of law enforcement - drink/drugs, grossly excessive speed and so on).
Autonomous fleets have a major potential flaw too, in the form of a malicious hacker gaining control over multiple vehicles at once and wreaking havoc.
Imagine if every model XY suddenly got a malicious OTA update and started actively chasing pedestrians.
I seriously doubt that the "mass takeover and murder" scenario would ever actually happen, and further doubt that it would cause anywhere near 10k deaths if it did occur.
OK, so you are optimistic. My own specialization is encryption/security, so I am not. State actors can do such things too, and we've already had a small wave of conventional physical-world sabotage in Europe that everyone suspects Russia of.
"further doubt that it would cause anywhere near 10k deaths"
This is something I can agree with, but you have to take into account that human societies don't work on a purely arithmetic/statistical basis. Mass casualty events have their own political and cultural weight, doubly so if they were intentional.
The sinking of the Titanic shocked the whole world, and it is still a frequent subject for artists 100 years later, even though 1,500 deaths aren't objectively that many. I don't doubt that far more than 1,500 people drowned in individual accidents worldwide in April 1912 alone, but the general public didn't care about those deaths.
And a terrorist attack with merely 3000 dead put the US on a war footing for more than a decade and made it spend a trillion dollars on military campaigns, even though drunk American drivers manage the same carnage in five months or so.
It's like jets falling out of the sky because the guy who bolts the wings on is only half doing his job: we can all see it and know about it, and yet nobody wants to speak up.
I'd recommend buying the book, but here's an early draft of that particular story:
This is exactly how the aviation industry works, and it's one of the safest ways to travel in the world. Autonomous driving enables 'identify problem -> widely deployed and followed solutions' in a way human drivers just can't. Things won't be perfect at first but there's an upper limit on safety with human drivers that autonomous driving is capable of reaching past.
It's tragic, but people die on roads every day, all that changes is accountability gets muddier and there's a chance things might improve every time something goes wrong.
If you really want to reduce accident rates you need to improve road design and encourage more use of public transport and cycling. This requires no new vehicles, no new software, no driver training, and doesn't need autonomous vehicles at all.
It isn't easy to fix autonomous driving, and not because the problem isn't identified. Sometimes two conflicting scenarios can happen on the road such that no matter how good the autonomous system is, it won't be enough.
Though I agree that having a different kind of human instead will not make it any safer.
Flying is actually a lot more complicated than just driving. When you're driving you can "just come to a stop". When you're flying... you can't. And a hell of a lot can go wrong.
In any case, we do have autonomous flying. They're called drones. There are even prototypes that ferry humans around.
Would note that this is the same issue that made autonomous freeway driving so difficult.
When we solve one, we'll solve the other. And it increasingly looks like they'll both be solved in the next half decade.
Similar things also apply to driving, especially with obstacles and emergencies: floods, the recent sinkhole in Bangkok, etc.
With a car, deferred or shoddy maintenance is highly probable and low impact. With an aircraft, if a mechanic torques a bolt wrong, 400 people are dead.
If you try something equivalent with building regs or tax authorities, they will come for you. Presumably because the coal-rolling dumbasses are drawn from the same social milieu as cops.
Waymo has been doing a lot of driving, without any blood. They seem to be using a combination of (a) learning a lot from close calls like this one, where no one was hurt even though it still behaved incorrectly, and (b) being cautious, so that even when it does something it shouldn't, the risk is very low because it's moving slowly.
https://support.google.com/waymo/answer/9059119?authuser=1
This is actually the one technology I am excited about. Especially with the Zoox/minibus/carpool model, I can see these things replacing personal cars entirely, which is going to be a godsend for cost, safety and traffic.
This is less and less true every year. Yes, it doesn't drive in the snow yet, no, I don't drive in the snow either, I'm ok with that.
Airbus A320s wouldn’t be very safe if we let Joe Schmo off the street fly them however he likes, but we don’t. An A320 piloted within a regulated commercial aviation regime is very safe.
What matters is the safety of the entire system including the non-technological parts.
https://waymo.com/blog/2024/01/from-surface-streets-to-freew...
For instance, the 2003 California Driver's Handbook[1] first introduced the concept of "bike lanes" to driver education, but contains the advice "You may park in the bike lane unless signs say “NO PARKING.”" which is now illegal. Anyone who took their test in the early 2000s is likely unaware that changed.
It also lacks any instruction whatsoever on common modern roadway features like roundabouts or shark teeth yield lines, but we still consider drivers who only ever studied this book over 20 years ago to be qualified on modern roads.
1. https://dn720706.ca.archive.org/0/items/B-001-001-944/B-001-...
Ironically this means the people with the cleanest driving record are least likely to know the current ruleset.
Well, enough at-fault crashes will get your license suspended, and among the requirements for restoring the license may be traffic school, DUI school, or some other program depending on the reason for suspension, so this is not strictly correct. You can't use optional voluntary traffic school to clear points from a collision from your record BEFORE getting a suspension, the way you can with minor moving violations without a collision, but that doesn't mean collisions won't force you into traffic school.
That's silly. People become aware of new laws all the time without having to attend a training course or read an updated handbook.
I took the CA driver's written test for the first time in 2004 when I moved here from another state. I don't recall whether or not there was anything in the handbook about bike lanes, but I certainly found out independently when it became illegal to park in one.
Maybe? In this particular case, it sounds like no one was injured, and even though the Waymos didn't follow the law around stopping for school buses, it exercised care when passing them. Not great, certainly! But I'd wager a hell of a lot better than a human driver intentionally performing the same violation. And presumably the problem will be fixed with the next update to the cars' software. So... fixed, and no blood.
Even if a Waymo drove in urban Seattle for 20 years, where school buses aren't common, it would still know what to do if presented with the exception tomorrow (assuming it was trained/programmed correctly); it wouldn't forget.
Some roads are going to be safer simply because drivers don't feel safe driving fast. Others are safer simply because there are fewer opportunities to get into a collision.
Wide streets in cities encourage faster driving, which doesn't really save a lot of time while making the streets more dangerous, for example.
I believe that road design is the only way to control how fast people drive. Speed limits are useless for controlling speed, people drive the speed the road is designed for.
Companies (and people) have an obligation to do the right thing.
It's pretty wild to jump straight to "they don't care about safety" here. Building a perfect system without real world testing is impossible, for exactly the same reason it's impossible to write bug-free code on the first try. That's not a suggestion to be lax, just that we need to be realistic about what's achievable if we agree that some form of this technology could be beneficial.
Why?
Do we need harsher fines? Give auto regulators as many teeth as the FAA used to have during accident investigations?
Genuinely curious to see how addressing reasonable concerns in these areas can be done.
"Corporations are bad"
"Why?"
"Because, you know, they act all corporate-y."
https://www.tiktok.com/@plutotvuk/video/7311643257383963937 (sorry, Google's first result was TikTok)
Even though I agree, there was a time and a place (I'd say 2008-2010) when this forum was mostly populated by "I want to get rich!" people, maybe that is still the case and they've only learned to hide it better, I wouldn't know.
The invariably sociopathic leadership is a symptom, not a cause: they're the least encumbered by ethics, and the best fit for corporate leadership.
The cure is a strong regulatory state.
Yes, this is often the case. In this instance, though, endangering children is just about the worst PR possible. That's strong leverage.
I find that extremely optimistic. It's almost as if you've never developed software.
I am curious about Waymo's testing. Even "adding a regression test" can't be simple. There is no well defined set of conditions and outputs.
> Try to do that with human drivers.
At least where I live, the number of cars and car-based trips keeps increasing, but the number of traffic deaths keeps falling.
I do develop software. In fact I do develop self driving car software.
Yes, it is not easy. Just talking about this particular case: are the cars not remaining stationary because the legally prescribed behaviour is not coded down? Or are they going around school buses because the "is_school_bus" classifier or the "is_stop_arm_deployed" classifier has false-negative issues? If we fix/implement those classifiers, will we see issues caused by false positives? Will we cause issues where the vehicles suddenly stop when they think they see a stop arm that isn't actually there? Will we cause issues if a bus deploys a stop arm as we are overtaking it? What about if it deploys the stop arm while we are 10 meters behind it? 20? 30? 40? 100?
And that's just one feature. How does this feature interact with other features? Will we block emergency vehicles sometimes? What should we do if a police officer is signalling us to proceed but the school bus's stop arm is stopping us? If we add this one more classifier, will the GPU run out of VRAM? Will we cause thread thrashing? Surely not, unless we implement it wrong. In which case, definitely. Did we implement it right? Do we have enough labeled data about school bus stop arms? Is our sensor resolution good enough to see them from far enough away? Even in darkness? What about fog? Or blinding light? Does every state/country use the same rules about school buses?
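A sketch of how that distance sweep could become an automated check. decide_action here is a made-up stand-in for the planner; real regression suites replay simulated or logged scenes rather than call a pure function:

    import pytest

    def decide_action(distance_to_bus_m: float,
                      stop_arm_confidence: float,
                      threshold: float = 0.5) -> str:
        # Toy planner: stop whenever the stop-arm classifier fires,
        # regardless of following distance.
        return "stop" if stop_arm_confidence >= threshold else "proceed"

    @pytest.mark.parametrize("distance_m", [10, 20, 30, 40, 100])
    def test_stops_at_any_distance_behind_bus(distance_m):
        # A confident detection must produce a stop at every distance.
        assert decide_action(distance_m, stop_arm_confidence=0.95) == "stop"

    def test_no_phantom_stops():
        # The false-positive side: a weak detection must not trigger
        # sudden braking.
        assert decide_action(30.0, stop_arm_confidence=0.1) == "proceed"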
> I am curious about Waymo's testing
They do publish a lot. This one is nice overview but not too technical: https://downloads.ctfassets.net/sv23gofxcuiz/4gZ7ZUxd4SRj1D1...
Or if you want more juicy details read their papers: https://waymo.com/safety/research/
I think the net improvements will come from the quantitative aspect of lots and lots of video. We don't have good facts about these friction points on the road; we rely on anecdotal information, police data (which sucks) and time/motion-style studies.
You're fighting an objectively safer future on the basis of a hypothetical?
Also, we already have capped liability with driving: uninsured and underinsured drivers.
When a software defect kills a bunch of people, the robot operator's owners will be subject to a much lower level of liability. Airlines have international treaties that do this.
An objectively safer future is common carriers operating mass transit. Robotaxis will create a monster that prices out private ownership in the long term. Whether they are objectively safer remains to be seen, and it will require a nationwide government regulatory body that won't exist for many years.
Which is in practice lower than what a large operator would pay, particularly if they also write the software.
> will require a nationwide government regulatory body
It doesn’t require any such thing. That would be nice. But states are more than capable of regulating their roads.
States are incredibly bad at regulating commercial entities. The Federal DOT contracts with a few universities (or at least they did) to use evidence-based sampling and enforcement, fulfilled by state authorities, for trucks and buses in their scope. Only states like California, New York, or Texas would have the resources to do it, and it would be really difficult to do anything effective with 53 or more flavors.
And if there is one car that everyone drives, it's equally easy for a single bug to harm people on a scale that's inconceivable to me.
Like the various "unfinished/broken bridge" deaths that have happened with Google Maps involved (not saying it's to blame, but certainly not innocent either): https://www.bbc.com/news/world-us-canada-66873982 https://www.bbc.com/news/articles/cly23yknjy9o
For a company, it's a financial calculation.
https://en.wikipedia.org/wiki/Grimshaw_v._Ford_Motor_Co.
(Add the period to the end of the link, HN won't do it)
If the school bus has a dashcam, much better info may be available. This video starts too late.
In any case it seems like a tiny issue. Illegal or not, it didn't do anything dangerous.
Either completely removing cars from streets near schools, or blocking cars when children are arriving at or leaving school.
https://fr.wikipedia.org/wiki/Rues_aux_%C3%A9coles_%C3%A0_Pa...
The Waymo is going to be on high alert at all times, regardless of any flashing lights or stop signs.
Yes, if you see a school bus with its flashers on, you may not pass it. Period.
In fact, in my state, passing a school bus with its red flashers on is the only thing we cannot do in an emergency vehicle (in my case, ambulance and fire engine), even in "emergency mode" (lights and sirens both active).
I've only ever had this happen twice though, and in both cases the bus drivers stopped the process and turned their lights off for us.
The school bus' stop sign was extended and had red lights flashing. With the proximity to the intersection, it's most appropriately treated as an all-way stop.
Regardless of whether the bus's stop sign applies to cross streets, at some point in the turn the car is parallel with the bus, and the sign would apply at that point.
Also, you're blind to anyone who may be approaching the bus from the opposite side of the intersection.
Waymos exceed human drivers on both metrics, thus it is reasonable to say that Waymos have reduced crashes compared to the equivalent average human driver covering the same distance.
Mistakes like this are very rare, and when they do happen, they can be audited, analyzed with thousands of metrics and exact replays, patched, and the improved model running the Waymo is distributed to all cars on the road.
There is no equivalent in humans. There are millions of human drivers currently driving who drive distracted, drunk, recklessly, or aggressively. Every one of them who is replaced with a Waymo is potentially many lives saved.
Approximately 1 in 100 deaths in the US is due to a car crash. Every year autonomous drivers aren't rapidly deployed is just unnecessary deaths.
That's not exactly right. You need to take into account how likely it is for accidents to happen, not just the number of miles travelled. If the low probability of accidents is taken into account it turns out it takes many more millions or even billions of miles than already travelled for self-driving cars to be considered safe. See:
Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

> Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability.
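A back-of-the-envelope version of that argument (my own sketch, not from the report): if fatalities are Poisson-distributed and you observe zero of them, demonstrating at 95% confidence that your rate is below the human benchmark of about 1.09 per 100 million miles takes roughly ln(20)/rate miles:

    import math

    human_rate = 1.09e-8   # fatalities per mile (~1.09 per 100M miles)

    # With zero fatalities over n miles, a Poisson rate r is rejected
    # at 95% confidence when exp(-r * n) <= 0.05, i.e. n >= ln(20) / r.
    miles_needed = math.log(20) / human_rate
    print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")  # ~275

And that only bounds fatalities; injuries and crashes are more common, so they can be bounded with fewer miles. Fatalities are the hard case.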
Waymo hit 100 million driven miles as of July without a death (in the US the death rate is 1.26 per 100 million vehicle miles in cars). Likewise, the crash rate is lower across the board:
https://www.theavindustry.org/blog/waymo-reduces-crash-rates...
I assume that study based its assumptions on the limited testing performed so far on the extant self-driving cars of the day, which of course if you have only 10 test cars would take many decades, but at scale that isn't relevant anymore given Waymo's success.
* lane keeping with optional steering
* pedestrian and obstacle detection at the front
* pedestrian and obstacle detection when reversing with automatic braking
* assisted driving with lane keeping and full stop / driving on in case of traffic jam
Waymos are just fancy taxis. And taxis haven’t replaced all human drivers or solved traffic accidents.
> And taxis haven’t replaced all human drivers or solved traffic accidents.
That comparison is irrelevant. The point is that Waymos are superior to human drivers with respect to safety, thus they would also be superior to taxis in that dimension and would be a justifiable replacement. Also self-driving tech, if deployed in all cars, would offer benefits beyond taxis since the car belongs to the user of the tech itself.
The bots aren't superior either; they're just not able to function in many countries and situations, so in their little box they have good stats because they're super-careful. This is fine for a taxi, hence that is the comparison that makes the most sense.
"Uber reported 0.87 fatalities per 100 million vehicle miles traveled (VMT) in 2021–2022"
https://insurify.com/car-insurance/insights/rideshare-driver...
You could improve driver training. American drivers are absolutely terrifying.
I suspect most problematic American drivers already know they aren’t supposed to text, drink, or watch or record TikToks while they drive, but simply do it anyway because they are aware these laws are under-enforced.
Unsurprisingly, the rollout was quickly followed by news of 40+ false tickets from buses that were parked at a school. My understanding is that they were not loading or unloading kids and did not have their stop signs extended or blinking lights on, but just happened to be close enough to the adjacent street for the ticket cameras to think the bus was stopped on the street and issue tickets to the innocently passing cars.
Those tickets were dropped and they're apparently fixing that, but not a confidence-inducing start to say the least.
B) the system can be set up to purge and/or record only at relevant times or during infractions
I propose they be made actually useful instead of merely surveillance for surveillance sake, but I can see how that would feel oppressive to drivers accustomed to getting away with murder.
This is only an issue because traffic code violations are treated like criminal acts instead of... code violations. We don't have this issue with parking tickets, there's no reason we should have it with automated red light and school bus cameras.
Answers like this are what drives the populace to support domestic terrorism.
Other countries have no issues with camera based traffic law enforcement.
It seems the way the law works, it needs some piece of two-way communication; it doesn't seem to work on a one-way basis like it might in other countries. Maybe it's because most of our laws concerning technology are still structured for an analog world. E.g., in this case the old ritual of being identified as having acknowledged the ticket (the cop writing it and handing it to you) is preserved by you having to show you've actually received the ticket and consented to its validity by viewing its status online.
[1] https://www.wwnytv.com/2025/02/12/absolutely-terrifying-grow...
Here (Australia) the bus just pulls over and you get off onto the sidewalk, even children. Why is that not the case in the US?
Also, there's generally an exception for divided highways - if the road has a physical median or barrier, the oncoming traffic doesn't have to stop. I assume the bus route accounts for this and drops kids off on the correct side of the road.
In my case, a rural highway where traffic goes 55mph.
It's better to stop all traffic than to force kids to figure out how to Frogger through it.
That's pretty different from my experience.
Almost all the school bus stops around here are on small low-speed residential streets.
And while there are surely some stops on faster 2-lane roads...
A stroad or major road would mean 4+ lanes, which in my state means the school bus only stops traffic on one side. No kids will be crossing at those bus stops.
The closest crosswalk to my bus stop as a kid was about 45 miles.
Growing up, our school bus stop was on a service road off a 100km/h highway, but it had good visibility in both directions and most of the kids over the other side got dropped off by their parents while they were young.
It's a long video, but the tl;dr is that Americans don't have footpaths. You would think they would, but nope. It's not like Australia, where everywhere you walk has a path, with paths down to the road.
Even directly around schools there are no footpaths, and it's all because it's no one's responsibility other than the homeowner's.
Now, how does a robotaxi comply with that? Does it go to the district website and look up the current school year calendar? Or does it work like a human, and simply observe the patterns of the school traffic, and assume the general school calendar?
I suspect it continues in Mad Max mode.
It's so silly, when the obvious solution is to make school zones 40km/hr (25mi/hr) at all times, or to fix the road design. Typical speeds here are 60km/hr (40mi/hr), so anyone making the argument that it would 'slow traffic' is being dramatic.
(There is one exception that I know of - our east coast highway used to go near a school, which forced a change from 110km/hr (70mi/hr) to 40km/hr. In this case I will concede the speed is not the issue, the highway location is the issue)
They couldn't just put up a fence?
Unlike the sibling comment, there are no lights or indications of when school is in session. You must memorize the academic calendar of every school you drive past in order to know the speed limit. In practice, this means being conservative and driving more slowly in unfamiliar areas.
It's pretty simple.
You don't need clever software or self-driving cars, you just need to lift your right foot a little near schools.
https://maps.app.goo.gl/34QgN2KTQmGML2Ae8
Here is an example of one that just lights up with a 20mph limit when it's needed, from near where I grew up. Pretty high-tech for a remote part of the world, eh?
Needless to say, most people regularly violate some kind of traffic law, we just don't enforce it.
The answer is encoded in the map data in this case, but it's an interesting category of problems for autonomous vehicles.
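For a flavor of what "encoded in the map data" can look like: OpenStreetMap, for example, has a conditional-restriction syntax along the lines of maxspeed:conditional = 25 mph @ (Mo-Fr 07:00-09:00,14:00-16:00). Here's a toy lookup with that schedule pre-parsed (the real condition grammar is richer, and this still dodges the "is school actually in session today" question, which needs a calendar feed or the conservative default described above):

    from datetime import datetime

    SCHOOL_ZONE_LIMIT_MPH = 25
    DEFAULT_LIMIT_MPH = 40
    # Mon-Fri windows, pre-parsed from the conditional tag above.
    SCHOOL_HOURS = [((7, 0), (9, 0)), ((14, 0), (16, 0))]

    def speed_limit_mph(now: datetime) -> int:
        if now.weekday() < 5:                  # Monday=0 .. Friday=4
            t = (now.hour, now.minute)
            for start, end in SCHOOL_HOURS:
                if start <= t < end:
                    return SCHOOL_ZONE_LIMIT_MPH
        return DEFAULT_LIMIT_MPH

    print(speed_limit_mph(datetime(2025, 9, 8, 8, 15)))  # Monday 08:15 -> 25
    print(speed_limit_mph(datetime(2025, 9, 6, 8, 15)))  # Saturday     -> 40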
I was very impressed about the decision making in this situation. Seems very intuitive (at least superficially).
in my estimation the robo driver has reached a median-human level of driving skill. it still doesn’t quite know how to balance the weight of the car through turns and it sometimes gets fussy with holding lanes at night but otherwise it mimics human behaviors pretty well except where they’re illegal like rolling through the first stop at a stop sign.
Many other states set up a flashing yellow light and program it with the school schedule. Then the limit only applies "when light is flashing." Far more sensible.
The difference is usually 5 or maybe 10 mph.
Which over the distance of a school zone is nothing.
A building that looks the same with or without children inside it.
Then it really would be as simple as looking up the calendar or simply erring on the safe side that all weekdays are schooldays.
Waymo only operates on fully mapped roads anyway so I think that Waymo could be reasonably expected to include such abilities.
Perhaps allowing them to drive around school buses is not a good idea, although personally I have felt far safer biking or walking in front of a Waymo than in front of a human driver. But permitting behavior most humans engage in anyway, like rolling stops, and allowing them to go 5 over, seems like a no-brainer. We have a real opportunity here to be more sensible with road rules; let's not mess it up by limiting robots to our human laws.
It hands control of public spaces to corporations by letting them cast other road users as incompetent, much the same as GM, etc., did with jaywalking laws in the US.
Distinguishing between human and robot drivers in this way benefits only corporations and the politicians they pay.
You are not making this point in good faith.
I am curious though. These services are running in places like San Francisco and Austin. How many of these are operating outside of Bakersfield?
The future is gonna be awesome. I fricking love science! Once we unlock self driving car technology, we will finally be able to move people and things from one place to another. All we have to do is force everyone on the road to install a transponder in their car that allows the government to track their location at all times, and develop a massively complex computer-camera system inside of the car that phones home and controls what the car is allowed to do.
Are we heading into another dark age after peak technology?
I call bullshit on that. Yes the stop sign is only on the left side but the flashing lights are on all four corners of the bus. You'd need to be approaching the side of the bus from a direct right angle to not see the flashing lights.
> a Waymo did not remain stationary when approaching a school bus with its red lights flashing and stop arm deployed.
Because it's physically possible to approach something while remaining stationary?
In this case it may well have been safe for the Waymo to pass the bus, but the rule says not to pass a bus because humans will assume that if the Waymo can pass a bus, so can they, and that's false.