It is often an indicator of true honesty, provided there is no government intervention. Governments intervene in insurance/risk markets when they do not like the truth.
I tried to arrange insurance for an obese western expatriate several years ago in an Asian country, and the (western) insurance company wrote a letter back saying the client was morbidly obese and statistically likely to die within 10 years, and that they should lose x weight before it could consider insuring them.
As an example, let's say most people use FSD on straight US Interstate driving, which is very easy. That could artificially make FSD seem safer than it really is.
My prior on this is that supervised FSD ought to be safer, so the 52% number kind of surprised me, however it's computed. I would have expected more like a 90-95% reduction in accidents.
1) it lets Lemonade reward you for taking safer driving routes (or living in a safer area to drive, whatever that means)
2) it (for better or worse) encourages drivers to use it more. This will improve Tesla's training data but also might negatively impact the FSD safety record (an interesting experiment!)
As a father of kids in a neighborhood with a lot of Teslas, how do I opt out of this experiment?
I'm sceptical of Robotaxi/Cybercab. I'm less sceptical that FSD, supervised, is safer than fully-manual control.
Most notably, my driveway meets the road at a blind Y intersection, and my Model 3 just blasts out into the road even though you cannot see cross traffic.
FSD stresses me out. It's like I'm monitoring a teenager with their learner's permit. I can probably count the number of trips where I haven't had to take over on one hand.
You meant “I disable FSD because it does silly things”
I read “I disable FSD so I can do silly things”
My neighbor joked that I should install a stop sign at the end of my driveway to make it safer.
Still not paying $8k for it. Or $100 per month. Maybe $50 per month.
The average includes people who are sleep deprived, driving way over the speed limit, at night, in bad weather, drunk, or talking to someone. FSD is very likely situationally useful.
But you can know most of those adverse conditions don’t apply when you engage FSD on a given trip. As such the standard needs to be extremely high to avoid increased risks when you’re sober, wide awake, the conditions are good, and you have no need to speed.
Tesla's FSD still goes full-throttle dumbfuck from time to time. Like, randomly deciding it wants to speed into an intersection despite the red light having done absolutely nothing. Or swerving because of glare that you can't see, and that a Toyota Corolla could discern with its radar, but which hits the cameras and so fires up the orange cat it's simulating on its CPU.
I'm very skeptical that the average human driver properly supervises FSD or any other "full" self driving system.
Insurance companies can let marketing influence rates to some degree, with programs that tend to be tacked on after the initial rate is set. This self-driving car program sounds an awful lot like safe-driver programs such as GEICO's Clean Driving Record, State Farm's Good Driver Discount, Progressive's Safe Driver and Snapshot, and Allstate's Drivewise. The risk assessment seems to be less thorough than the general underwriting process, and to fall within some sort of risk margin, so to me it seems gimmicky and not a true innovation at this point.
It's so jarring at times that I'll often skip using the Cruise Control if I have my wife in the car (so as not to give her car sickness) or other passengers (so as not to make them think I'm a terrible driver!).
I've now developed a totally new skill, which is to temporarily disengage it when I see a mistake incoming, then re-engage it immediately after the moment passes.
NB I am in Australia and don't have FSD so this is all just using Adaptive Cruise Control. Perhaps the much harder challenge of FSD (or near-FSD) is executed a lot better, but you wouldn't assume so.
Also, even if a system is fully automated, that doesn’t necessarily legally isolate the person who owns it or set it into motion from liability. Vehicle law would generally need to be updated to change this.
If your local legal system does not absolve you from liability when operating an autonomous vehicle, you can still be sued, and Mercedes has no say in this… even though they could reimburse you.
This analogy may be more apt than Tesla would like to admit, but from a liability perspective it makes sense.
You could in turn try to sue Tesla for defective FSD, but the now-clearly-advertised "(supervised)" caveat, plus the lengthy agreement you clicked through, plus lots of lawyers, makes you unlikely to win.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's any law that would allow Tesla (or anyone else) to sell a passenger car with an unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate on limited state laws that allow them to provide such service but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
So yes, carmakers would pay in a hit-and-run.
Why? That's not their fault. If a car hits and runs my uninsured bicycle, the manufacturer isn't liable. (My personal umbrella or other insurance, on the other hand, may cover it.)
If you run into someone on your bike and are at fault then you generally would be liable.
They're talking about the hypothetical where you're on your bike, which was sold as an autonomous bike and the bike manufacturer's software fully drives the bike, and it runs into someone and is at fault.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
FSD isn't perfect, but it is everyday amazing and useful.
I'd guess my Subaru's lane-keeping utilisation is in the same ballpark. (By miles, not minutes. And yes, I'm safer when it and I are watching the road than when I'm watching the road alone.)
If the company required a representative to sit in the car with you and participate in the driving (e.g. by monitoring and taking over before an accident), then there's a case to be made that you're not fully autonomous.
I think you're mixing some concepts.
There's car insurance paid by the owner of the car, for the car. There's workplace accident insurance, paid by the employer for the employee. The liability isn't assigned by default, but by determining who's responsible.
The driver is always legally responsible for accidents caused by their negligence. If you play with your phone behind the wheel and kill someone, even while working and driving a company car, the company's insurance might pay for the damage but you go to prison. The company will recover the money from you. Their work accident insurance will pay nothing.
The test you can run in your head: will you get arrested if you fall asleep at the wheel and crash? If yes, then it's not autonomous or self driving. It just has driver assistance. It's not that the car can't drive itself at all, just that it doesn't meet the bar for the entire legal concept of "driver/driving".
"Almost" self driving is like jumping over a canyon and almost making it to the other side. Good effort, bad outcome.
Also, self-driving is a feature of a vehicle someone owns; I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.
They are as self-driving as a car can be.
This is different than the one where they had a human supervisor in the passenger seat (which they still do elsewhere).
And different than the one where they didn't have a human supervisor but did have a follow car.
Now they have a few robotaxis that are self driving.
Why surely? Turning on cruise control doesn't absolve motorists of their insurance requirement.
And the premise is false. While Tesla does "not maintain as much insurance coverage as many other companies do," there are "policies that [they] do have" [1]. (What it insures is a separate question.)
[1] https://www.sec.gov/ix?doc=/Archives/edgar/data/0001318605/0...
And I’d include “AI driver” as an example.
The assumption there is that the remaining human drivers would be the higher risk ones, but why would that be the case?
One of the primary causes of high-risk driving is that someone goes to the bar, has too many drinks, then needs both themselves and their car to get home. Autonomous vehicles can obviously improve this by getting them home in their car without them driving it, but if they do, the risk profile of the remaining human drivers improves. At worst they're less likely to be hit by a drunk driver, at best the drunk drivers are the early adopters of autonomous vehicles and opt themselves out of the human drivers pool.
1. People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
2. Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
3. Older people who don't trust technology.
None of those are good risk pools to be in. Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight, so whatever accidents/crashes happen afterwards are covered by a much smaller and "active" risk pool. Oh, and those self driving cars are expensive:
* If you hit one and are at fault, you might pay out $100k-$200k; most states only require $25k-$50k of coverage... so you need more coverage or expect to pay more per incident.
* Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often isn't (they have evidence that you were at fault). Whereas before fault might have been much more hazy (both at fault, or both no fault).
The biggest factor comes if self driving cars really are much safer than human drivers. They will basically disappear from the insurance market, or somehow be covered by product liability instead of insurance...and the remaining drivers will be in a pool of the remaining accidents that they will have to cover on their own.
It kind of is. They're responsible for something like 30% of traffic fatalities despite being a far smaller percentage of drivers.
> People who can't afford self driving cars (now the insurance industry has a good proxy for income that they couldn't tap into before)
https://pubmed.ncbi.nlm.nih.gov/30172108/
But also, wouldn't they already have this by using the vehicle model and year?
> Enthusiasts who like driving their cars (cruisers, racers, Hellcat revving, people who like doing donuts, etc...)
Again something that seems like it would already be accounted for by vehicle model.
> Older people who don't trust technology.
How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age.
> Also, if self driving cars go mainstream, they are bound to include the safest drivers overnight
Are they? They're more likely to include the people who spend the most time in cars, which is another higher risk pool, because it allows those people to spend the time on a phone/laptop instead of driving the car, which is worth more to people the more time they spend doing it and so justifies the cost of a newer vehicle more easily.
> Oh, and those self driving cars are expensive
Isn't that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?
> Self driving cars have a lot of sensors/recorders. While this could work to your advantage (proving that you aren't at fault), it often isn't (they have evidence that you were at fault). Whereas before fault might have been much more hazy (both at fault, or both no fault).
Which is only a problem for the worse drivers who are actually at fault, which makes them more likely to move into the self-driving car pool.
> The biggest factor comes if self driving cars really are much safer than human drivers.
The biggest factor is which drivers switch to self-driving cars. If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected. How safe they are is only relevant insofar as it affects your chances of getting into a collision with another vehicle, and if they're safer then it would make that chance go down to have more of them on the road.
> How sure are we that the people who don't trust technology are older? And again, the insurance company already knows your age
Boomers are already the primary anti-EV demographic, with the complaint that real cars have engines. It doesn't matter if they know your age if state laws keep them from acting on it.
> Isn't that more of a problem for the self-driving pool? Also, isn't most of the cost that the sensors aren't as common and they'd end up costing less as a result of volume production anyway?
I think you misunderstood me: if you get into an accident and are found at fault, you are responsible for damage to the other car. Now, if it's a clunker Toyota, that will be a few thousand dollars; if it's a Rolls-Royce, it's a few hundred thousand dollars. The reason insurance rates are increasing lately is that the average car on the road is more expensive than it was ten years ago, so insurance companies are paying out more. If most cars are $250k Waymo cars, and you hit one... and you are at fault, ouch. And we will know whether it is your fault or not, since the Waymo is constantly recording.
> If half of human drivers switched to self-driving cars but they were chosen completely at random then the insurance rates for the remaining drivers would be essentially unaffected.
That’s not how the math works out (smaller risk pools are more expensive per person period). And it won’t be people switching at random to self driving cars (the ones not switching will be the ones that are more likely to have accidents).
https://www.roadandtrack.com/news/a39481699/what-happens-if-...
It was way too limited to be useful to anyone.
Suppose ACME Corporation produces millions of self-driving cars and then goes out of business because the CEO was embezzling. They no longer exist. But the cars do. They work fine. Who insures them? The person who wants to keep operating them.
Which is the same as it is now. It's your car so you pay to insure it.
I mean, think about it. If you buy an autonomous car, would the manufacturer have to keep paying to insure it forever, as long as you can keep it on the road? The only real options for making the manufacturer carry the insurance are: the answer is no, and they turn off your car after e.g. 10 years, which is quite objectionable; or the answer is yes, but then you pay a "subscription fee" to the manufacturer which is really the insurance premium, which is also quite objectionable because you're then locked into the OEM instead of having a competitive insurance market.
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well known challenge in the nuclear energy sector and airline industry (Air France 447) - how do you keep operators fully engaged even though they almost never need to intervene, because otherwise they’re likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that’s often - but not always - correct.
Really? That's crazy.
The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to go.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
it's why young drivers pay more for insurance
In reality, you acquired a license to use it. Your liability should only go as far as you have agreed to indemnify the licensor.
Companies exist that buy cars just to tear them down and publish reports on what they find.
What does it mean to tear down software, exactly? Are you thinking of something like decompilation?
You can do that, but you're probably not going to learn all that much, and you still can't use it in any meaningful sense as you never bought it in the first place. You only licensed use of it as a consumer (and now that it is subscription-only, maybe not even that). If you have to rebuild the whole thing yourself anyway, what have you really gained? It's not exactly a secret how the technology works, only costly to build.
> Except that they could just buy one themselves.
That is unlikely, unless you mean buying Tesla outright? Getting a license to use it as a manufacturer is much more realistic, but still a license.
In case you have forgotten, the discussion is about self-driving technology, and specifically Tesla's at that. The original questioner asked why he is liable when it is Tesla's property that is making the decisions. Of course, the most direct answer is because Tesla disclaims any liability in the license agreement you must agree to in order to use said property.
Which has nothing to do with an independent consulting firm or "the whole car" as far as I can see. The connection you are trying to establish is unclear. Perhaps you pressed the wrong 'reply' button by mistake?
Cars are traditionally sold with the customer bearing liability. Nothing stops a car maker (or even an individual dealer) from selling cars today while taking on all the insurance liability, in any country I know of. They don't, for what I hope are obvious reasons (bad drivers will be sure to buy those cars since it is a better deal for them, and in turn a worse deal for good drivers), but they could.
Self-driving is currently sold with the customer bearing liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth it for someone else to take on the liability. But I could be wrong.
and Musk for removing lidar, so it keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth
99% of the people on this website are coders and know how even one small typo can cause random fails, yet you trust them to make you an alpha/beta tester at high speed?
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
Doesn't come close to the safety I feel in the Tesla. Not even close. I know, it's anecdotal.
If Tesla didn't want Lemonade to provide this, they could block them.
Strategically, Tesla doesn't want to be an insurer. They started the insurance product years ago, before Lemonade also offered this, to make FSD more attractive to buyers.
But the expansion stalled, maybe because the state bureaucracy or maybe because Tesla shifted priority to other things.
In conclusion: Tesla is happy that Lemonade offers this. It makes Tesla cars more attractive to buyers without Tesla doing the work of starting an insurance company in every state.
If the math was mathing, it would be malpractice not to expand it. I'm betting that their scheme simply wasn't workable, given the extremely high costs of claims (Tesla repairs aren't cheap) relative to the low rates that they were collecting on premiums. The cheap premiums are probably a form of market dumping to get people to buy their FSD product, the sales of which boosts their share price.
They released the Tesla Insurance product because their cars were excessively expensive to insure, increasing ownership costs, which was impacting sales. By releasing the unprofitable Tesla Insurance product, they could subsidize ownership costs, making the cars more attractive to buy right now, which pumped revenues immediately in return for an "accidental" write-down in the future.
[1] https://peakd.com/tesla/@newageinv/teslas-push-into-insuranc...
Remember with their own insurance they also have access to the parts at cost.
The people paying were actually the retirement funds who fronted Tesla's cash reserves when they purchased Tesla stock and the US government paying for it in the form of more tax credits on sales that would not have otherwise materialized without this financial fraud. But do not worry, retirement funds and the US government may have lost, but it boosted Tesla sales and stock valuation so that Elon Musk could reach his KPIs to get his multiple tens of billions of dollars of payout.
It'll come back.
Lemonade or Tesla, if you find this, let's pilot. I'm a founder in Sunnyvale, insurtech vertical at pnp
The two are measuring different sources of losses for carriers.
I believe, at the end of the day, insurance companies will be the ones driving FSD adoption. The media will sensationalize the outlier issues of FSD software, but insurance companies will set the incentives for humans to stop driving.
Are Teslas still ridiculously-expensive to repair? (I pay $1,100 a year (~$92/month) to insure my Subaru, which costs more than a Model 3.)
Now that they are offering this program, they should start getting much better data by being able to correlate claims with actual FSD usage. They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future.
In fact, Tesla Insurance, the people who already have direct access to the data, loses money on every claim [1].
[1] https://peakd.com/tesla/@newageinv/teslas-push-into-insuranc...
What do you mean?
It's their own bet to make
Teslas only do FSD on motorways where you tend to have far fewer accidents per mile.
Also, they switch to manual driving if they can't cope, and because the driver isn't paying attention this usually results in a crash. But hey, it's in manual driving, not FSD, so they get to claim FSD is safer.
FSD is not and never will be safer than a human driver.
They have been end to end street level for the past two years.
It’s not perfect but I’d consider it a smashing success for something I rely on for safely transporting my family every day.
They are not safe and they will never be safe.
Debatable, but you won't be convinced.
> they will never be safe.
Define safe? Would be interested to see you provide a benchmark that is reasonable, and lock it in now so we can see if this statement is falsified in the future.
Why will they never be safe?
Also can you define safe?
I also like how you completely avoided addressing my argument in favor of a attempted ad hominem.
Why haven't you acknowledged this video as fake? https://www.youtube.com/watch?v=Tu2N8f3nEYc
I'd like to know what data this is based on, and if Tesla is providing any kind of subsidy or guarantee.
There's also a big difference between the value of car damages and, well, death. E.g. what if FSD is much less likely to get into otherwise common fender benders that don't harm you, but more likely to occasionally accidentally drive you straight into a divider, killing you?
Analysts saying tortilla industry in shambles.
It may not be on the marketing copy but it’s almost certainly present in the contract.
Half-jokes aside, if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car. This is all but guaranteed based on all SaaS services so far.
Maybe for you, I already don't own it and have not found that to be true. I pretty much order an uber whenever I don't feel like riding my bike or the bus, and that costs <$300 most months. Less than the average used car payment in the US before you even consider insurance, fuel, storage, maintenance, etc.
I also rent a car now and then for weekend trips, that also is a few hundred bucks at most.
I would be surprised if robotaxis were more expensive long term.
The point of a car is that it takes you door to door. There's no expectation of walking three blocks from a stop; many US places are not intended for walking anyway. Consider heavy bags from grocery shopping, or similar.
Public transit works in proper cities, those that became cities before the advent of the car, and were not kept in the shape of large suburban sprawls by zoning. Most US cities only qualify in their downtowns.
Elsewhere, rented / hailed self-driving cars would be best. First of all, fewer of them would be needed.
If all of those people switch to cars, you end up with it taking an hour to travel 1 mile by car.
It's almost as if they have buses for a reason.
But anyway, your statement is actually not true anywhere in the US except NYC. Even in Chicago, removing ALL the local transit and switching to 6-seater minivans will eliminate all the traffic issues.
Efficient for whom, is the problem
Cars are mostly idle and could be cheaper if shared. But why make them significantly cheaper when you can match the price and extract more profits?
And with just 6 people, the overhead of an imperfect route and additional stops will be measured in minutes.
And of course, it's pretty easy to imagine an option to pay a bit more for a fully personal route.
For example, in Seattle I can get a shared airport shuttle for $40 with the pick-up/drop-off at my front door. And this is a fully private ADA-compliant commercial service, with a healthy profit margin, not a rideshare that offloads vehicle costs onto the driver. And a self-driving van can be even cheaper than that, since it doesn't need a driver.
Meanwhile, transit also costs around $40 per trip and takes at least 1 hour more. And before you tell me: "no way, the transit ticket is only $2.5", the TRUE cost of a transit ride in Seattle is more than $20. It's just that we're subsidizing most of it.
So you can see why transit unions are worrying about self-driving. It'll kill transit completely.
The _only_ issue with the old "microtransit" is the _driver_. Each van ends up needing, on average, MORE drivers than the passengers it moves. It does solve the problem of throughput, though.
But once the driver is removed, this problem flips on its head. Each regular bus needs around 4 drivers for decent coverage. It's OK-ish only when the average bus load is at least 15-20 people. It's still much more expensive and polluting than cars, but not crazily so.
Self-driving changes some things, but there are a lot of other points in the many articles linked from there that don't change.
Can you guess why?
Hint: think about the intervals between buses and how you should represent them to stay truthful. And that buses necessarily move slower than cars. And that passengers will waste some time due to non-optimal routes and transfers. And that passengers will waste some time because they need to walk to the station.
So back to my point, can you tell me EXACTLY what I should read in that article? Point out the paragraph, please.
Even better — charge 10% less and corner the market! As long as nobody charges 10% less than you…
Yeah, this would rely on robust competition.
For the rest - many of them live in a place where not enough others will follow the same system and so they will be forced to own a car just like today. If you live in a not dense area but still manage to walk/bike almost everywhere (as I do), renting a car is on paper cheaper the few times when you need a car - but in practice you don't know about that need several weeks in advance and so they don't have one they can rent to you. Even if you know you will need the car weeks in advance, sometimes they don't have one when you arrive.
If you live in a very dense area such that you almost regularly use transit (but sometimes walk, bike), but need a car for something a few times per year, then not owning a car makes sense. In this case the density means shared cars can be a viable business model despite not being used very much.
In short, what you say sounds insightful, but the reality of how cars are used means it won't happen for most car owners.
Or, if they are Hertz, they might have one but refuse to give it to you. This happened to my wife. In spite of payment already having been made to Hertz corporate online, the local agent wouldn't give up a car for a one-way rental. Hertz corporate was less than useless, telling us their system said there was a car available, and suggesting we pay them hundreds of dollars again and go pick it up. When I asked the woman from corporate whether she could actually guarantee we would be given a car, she said she couldn't. When I suggested she call the local agent, she said she had no way to call the local office. Unbelievable.
Since it was last minute, there were... as you said, no cars available at any of the other rental companies. So we had to drive 8 hours to pick her up. Then 8 hours back, which was the drive she was going to make in the rental car in the first place.
Hertz will hurt you.
A subscription for self-driving will almost certainly be a given with so many bad actors in tech nowadays, but never even being allowed to own the car is even worse.
And what do you even mean by subscription to changes to the law?
Law - when a government changes the driving laws. Government can be federal (I have driven to both Canada and Mexico. Getting to Argentina is possible though I don't think it has ever been safe. Likewise it is possible to drive over the North Pole to Europe) or state (or whatever the country calls their equivalent). When a city changes the law they put up signs, but if a state passes a law I'm expected to know it even if I have never driven in that state before. Right-turn-on-red laws are the only ones I can think of where states differ, but there are likely others.
Laws also cover new traffic control systems that may not have been in the original program. If the self driving system can't figure out the next one (think roundabout) then it needs to be updated.
This is about a self-driving car you own.
If FSD is going to be a subscription and you will never own your fancy autopilot feature, why should the user pay for insurance?
The user is paying for a service that they do not control and whose workings are completely opaque. How can responsibility ever lie with the user in such a situation?
Anybody know??
Tesla FSD is still a supervised system (= ADAS), afaik.
> Fair prices, based on how you drive [...] Get a discount, and earn a lower premium as you drive better.
I did get lots of traction issues with a FWD EV; in any sort of wet, you need to baby it.
Booting the go pedal at every stop sign or light just feels like being a bit of a childish jerk after a short while on public roads once the novelty wears off.
And yet people are skeptical. I mean, they should be skeptical, given that the company is using this for marketing purposes. It doesn't make sense to just believe them.
But it is strange to see this healthy skepticism juxtaposed with the many unskeptical comments attached to recent Electrek articles with outlandish claims.
As an extreme end of a spectrum example, there's been worry and debate for decades over automating military capabilities to the point where it becomes "push button to win war". There used to be, and hopefully still is, lots of restraint towards heading in that direction - in recognition of the need for ethics validation in automated judgements. The topic comes up now and then around Teslas, and the impossible decisions that FSD will have to make.
So at a certain point, and it may be right around the point of serious physical harm, the design decision to have or not have human-in-the-middle accountability seems to run into ethical constraints. In reality it's the ruthless bottom line focused corps - that don't seem to be the norm, but may have an outsized impact - that actually push up against ethical constraints. But even then, I would be wary as an executive documenting a decision to disregard potential harms at one of them shops. That line is being tested, but it's still there.
In my actual experience with automations, they've always been derived from laziness / reducing effort for everyone, or "because we can", and sometimes a need to reduce human error.
Are you saying that the investments in FSD by Tesla have been made with the goal of letting drivers get away with accidents? The law is black and white.
No thanks. I unplugged the cellular modem in my car precisely because I can't stand the idea that the manufacturer/dealer/insurance company or other unauthorized third parties could have access to my location and driving habits.
I also generally avoid dealers like the plague and only trust the kind of shops where the guy who answers the phone is the guy doing the work.
On the surface, this looks like an endorsement of Tesla's claims about FSD safety.