Google/Alphabet are so vertically integrated for AI when you think about it. Compare what they're doing: their own power generation, their own silicon, their own data centers, Search, Gmail, YouTube, Gemini, Workspace, Wallet, billions and billions of Android and Chromebook users, their ads everywhere, their browser everywhere, Waymo, probably buying back Boston Dynamics soon enough (they've recently partnered), fusion research, drug discovery... and then look at ChatGPT's chatbot or Grok's porn. Pales in comparison.
It’s kind of crazy that they have been slow to create real products and competitive large scale models from their research.
But they are in full gear now that there is real competition, and it’ll be cool to see what they release over the next few years.
[1]: https://research.google/blog/towards-a-conversational-agent-...
It'll be interesting to see which pays off and which becomes Quibi
Google's been thinking about world models since at least 2018: https://arxiv.org/abs/1803.10122
It sounds like they removed Lidar due to supplier issues and availability, not because they're trying to build self-driving cars and have determined they don't need it anymore.
0: https://techcrunch.com/2019/04/22/anyone-relying-on-lidar-is...
1: https://static.mobileye.com/website/corporate/media/radar-li...
2: https://www.luminartech.com/updates/luminar-accelerates-comm...
3: https://www.youtube.com/watch?v=Vvg9heQObyQ&t=48s
4: https://ir.innoviz.tech/news-events/press-releases/detail/13...
Um, yes they did.
No idea if it had any relation to Tesla though.
Having a self-driving solution that can be totally turned off by a speck of mud, heavy rain, morning dew, or bright sunlight at dawn and dusk... you can't engineer your way out of sensor blindness.
I don't want a solution that is available to use 98% of the time, I want a solution that is always-available and can't be blinded by a bad lighting condition.
I think he did it because his solution always used the crutch of "FSD Not Available, Right hand Camera is Blocked" messaging and "Driver Supervision" as the backstop to any failure anywhere in the stack. Waymo had no choice but to solve the expensive problem of "Always Available and Safe" and work backwards on price.
Using vision only is so ignorant of what driving is all about: sound, vibration, vision, heat, cold... these are all clues to road conditions. If the car isn't feeling all these things as part of the model, you're handicapping it. Lidar, in a brilliant way, is the missing piece of information a car needs without relying on multiple sensors; it's probably superior to what a human can do, whereas vision only is clearly inferior.
https://www.yellowscan.com/knowledge/how-weather-really-affe...
Seeing how it's by a lidar vendor, I don't think they're biased against it. It seems Lidar is not a panacea: it struggles with heavy rain and snow much more than cameras do, and it is affected by cold weather or any contamination on the sensor.
So lidar will only get you so far. I'm far more interested in mmWave radar, which, while much worse in spatial resolution, isn't affected by light conditions or weather, and can directly measure properties of the thing it's illuminating, like its material, the speed it's moving at, and its thickness.
Fun fact: mmWave-based presence sensors can measure your heartbeat, as the micro-movements show up as a frequency component. So I'd guess it would have a very good chance of detecting a human.
I'm pretty sure that even with much more rudimentary processing, it'll be able to tell if it's looking at a living being.
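For a sense of how that works: chest micro-movements phase-modulate the reflected signal, so a heartbeat shows up as a low-frequency peak in the phase spectrum of the range bin containing the person. A minimal sketch, assuming you already have complex range-bin samples from the radar (the names and rates here are made up for illustration):

    # Toy sketch: find a heartbeat-like periodicity in the phase of one
    # mmWave range bin. `range_bin` is a 1-D array of complex samples for
    # the bin containing the person, sampled at `fps` frames per second
    # (hypothetical data and rates).
    import numpy as np

    def dominant_vital_freq(range_bin, fps):
        # Chest micro-movements phase-modulate the reflected signal.
        phase = np.unwrap(np.angle(range_bin))
        phase -= phase.mean()                        # drop the static offset
        spectrum = np.abs(np.fft.rfft(phase))
        freqs = np.fft.rfftfreq(len(phase), d=1.0 / fps)
        band = (freqs > 0.8) & (freqs < 3.0)         # ~50-180 bpm
        return freqs[band][np.argmax(spectrum[band])]

    # e.g. dominant_vital_freq(samples, fps=20) returning ~1.2 Hz means ~72 bpm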
By the way: what happened to the idea that self-driving cars would be able to talk to each other and combine each other's sensor data, so that if multiple cars are looking at the same spot, you'd get a much improved chance of not making a mistake?
7 cameras x 36fps x 5Mpx x 30s
48kHz audio
Nav maps and route for next few miles
100Hz kinematics (speed, IMU, odometry, etc)
Source: https://youtu.be/LFh9GAzHg1c?t=571

Also, integration effort went down but it never disappeared. Meanwhile, opportunity cost skyrocketed when vision started working. Which layers would you carve resources away from to make room? How far back would you be willing to send the training + validation schedule to accommodate the change? If you saw your vision-only stack take off and blow past human performance on the march of 9s, would you land the plane just because red paint became available and you wanted to paint it red?
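For a sense of why the opportunity cost is so steep, here's a rough back-of-envelope of the raw camera bandwidth the list above implies (assuming ~1 byte per pixel before any compression; the other constants are just the ones quoted):

    cams, fps, mpx, clip_s = 7, 36, 5e6, 30
    bytes_per_px = 1                                # assumption: raw-ish, pre-compression

    raw_rate = cams * fps * mpx * bytes_per_px      # bytes per second
    print(f"{raw_rate / 1e9:.2f} GB/s raw")         # ~1.26 GB/s
    print(f"{raw_rate * clip_s / 1e9:.0f} GB per 30 s clip")   # ~38 GB

    # Audio is negligible by comparison: 48 kHz * 2 bytes ~= 0.1 MB/s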
I wouldn't completely discount ego either, but IMO there's more ego in the "LIDAR is necessary" camp than the "LIDAR isn't necessary" one at this point. FWIW, I used to be an outspoken LIDAR-head before 2021, when monocular depth estimation became a solved problem. It was funny watching everyone around me convert in the opposite direction at around the same time, probably driven by politics. I get it, I hate Elon's politics too, I just try very hard to keep his shitty behavior from influencing my opinions on machine learning.
It's still rather weak, and true monocular depth estimation really wasn't spectacular in 2021. It's fundamentally ill-posed, and any priors you use to get around that will come back to bite you in the long tail of things some driver will encounter on the road.
The way it got good is by using camera overlap in space and over time while in motion to figure out metric depth over the entire image. Which is, humorously enough, sensor fusion.
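The underlying math is plain triangulation: two views of the same point separated by a known baseline (a second camera, or the same camera after the car has moved a known distance) give metric depth from disparity. A minimal sketch with made-up numbers:

    # Metric depth from two views with a known baseline (numbers made up).
    # The baseline can come from a second camera or from ego-motion between frames.
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        # Standard pinhole/stereo relation: Z = f * B / d
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(1000, 0.5, 10))       # -> 50.0 metres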
None of these technologies can ever be 100%, so we’re basically accepting a level of needless death.
Musk has even shrugged off FSD related deaths as, “progress”.
FSD: 2 deaths in 7 billion miles
Looks like FSD saves lives by a margin so fat it can probably survive most statistical games.
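For scale, taking the quoted figures at face value and assuming the commonly cited US average of roughly 1.3 traffic deaths per 100 million vehicle miles:

    fsd_deaths, fsd_miles = 2, 7e9                  # figures quoted above (assumption)
    us_rate = 1.3 / 1e8                             # ~1.3 deaths per 100M miles (assumption)

    print(us_rate * fsd_miles)                      # ~91 expected deaths at the human rate
    print(us_rate / (fsd_deaths / fsd_miles))       # ~45x lower rate, if the numbers hold

Of course, FSD miles are supervised and not a like-for-like comparison, which is exactly the kind of statistical game being alluded to.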
I will never trust 2D camera-only; the cameras can be covered or blocked physically, and when that happens FSD fails.
As cheap as LIDAR has gotten, adding it to every new tesla seems to be the best way out of this idiotic position. Sadly I think Elon got bored with cars and moved on.
I thought it was the Nazi salutes on stage and backing neo-nazi groups everywhere around the world, but you know, I guess the lidar thing too.
The issue with lidar is that many of the difficult edge cases of FSD are all visible-light vision problems. Lidar might be able to tell you there's a car up front, but it can't tell you that the car has its hazard lights on and a flat tire. Lidar might see a human-shaped thing in the road, but it cannot tell whether it's a mannequin leaning against a bin or a human about to cross the road.
Lidar gets you most of the way there when it comes to spatial awareness on the road, but you need cameras for most of the edge-cases because cameras provide the color data needed to understand the world.
You could never have FSD with just lidar, but you could have FSD with just cameras if you can overcome all of the hardware and software challenges with accurate 3D perception.
Given Lidar adds cost and complexity, and most edge cases in FSD are camera problems, I think camera-only probably helps force engineers to focus their efforts in the right place rather than hitting bottlenecks from over-depending on Lidar data. This isn't an argument for camera-only FSD, but from Tesla's perspective it does drive down costs and allows them to continue to produce appealing cars – which is obviously important if you're coming at FSD from the perspective of an automaker trying to sell cars.
Finally, adding lidar as a redundancy once you've "solved" FSD with cameras isn't impossible. I personally suspect Tesla will eventually do this with their robotaxis.
That said, I have no real experience with self-driving cars. I've only worked on vision problems and while lidar is great if you need to measure distances and not hit things, it's the wrong tool if you need to comprehend the world around you.
But the Tesla engineers are "in the right place rather than hitting bottlenecks from over depending on Lidar data"? What?
The real question is whether doing so is smart or dumb. Is Tesla hiding big show-stopper problems that will prevent them from scaling without a safety driver? Or are the big safety problems solved and they are just finishing the Robotaxi assembly line that will crank out more vertically-integrated purpose-designed cars than Waymo's entire fleet every day before lunch?
What good is a huge fleet of Robotaxis if no one will trust them? I won't ever set foot in a Robotaxi, as long as Elon is involved.
As soon as Waymo's massive robotaxi lead became undeniable, he pivoted from robotaxis to humanoid robots.
But Codex/5.2 was substantially more effective than Claude at debugging complex C++ bugs until around Fall, when I was writing a lot more code.
I find Gemini 3 useless. It has regressed on hallucinations from Gemini 2.5, to the point where its output is no better than a random token stream despite all its benchmark outperformance. I would use Gemini 2.5 to help write papers and such, but I can't seem to use Gemini 3 for anything. Gemini CLI is also very non-compliant and erratic.
I don't think Google is targeting developers with their AI, they are targeting their product's users.
They should be bought by a rocket company. Then they would stand a chance.
Boston Dynamics is working on a smaller robot that can kill you.
Anduril is working on even smaller robots that can kill you.
The future sucks.
[1] https://www.wsj.com/tech/personal-tech/i-tried-the-robot-tha...
[2] https://futurism.com/advanced-transport/waymos-controlled-wo...
If that doesn't make it obvious what they can and cannot do then I can't respect the tranche of "hackers" who blindly cheer on this unchecked corporate dystopian nightmare.
Erm, a dishwasher, washing machine, or automated vacuum can be considered a robot. I'm confused by this obsession with the term - there are many robots that already exist. Robots have been involved in the production of cars for decades.
Dictionary def: "a machine controlled by a computer that is used to perform jobs automatically."
Even if that definition were universally agreed upon, though, that's not really enough to understand what the parent comment was saying. Being a robot "in the same way" as something else is even less objective. Humans are humans, but they're also mammals; is a human a mammal "in the same way" as a mouse? Most humans probably have a very different view of the world than most mice, and the parent comment was specifically addressing the question of whether it makes sense for an autonomous car to model the world the same way as other robots or not. I don't see how you can dismiss this as "irrelevant" because both humans and mice are mammals (or even animals; there's no shortage of classifications out there) unless you're having a completely different conversation than the person you responded to. You're not necessarily wrong because of that, but you're making a pretty significant misjudgment if you think that's helpful to them or to anyone else involved in the ongoing conversation.
But in my mind a Waymo was always a "car with sensors"; more recently (especially having used them a bunch in California) I've come to think of them as truly robots.
Maybe we need to nitpick about what a job is exactly? Or we could agree to call Waymos (semi)autonomous robots?
Subtle brag that Waymo could drive in camera-only mode if they chose to. They've stated as much previously, but that doesn't seem widely known.
(edit - I'm referring to deployed Tesla vehicles, I don't know what their research fleet comprises, but other commenters explain that this fleet does collect LIDAR)
https://youtu.be/LFh9GAzHg1c?t=872
They've also built it into a full neural simulator.
https://youtu.be/LFh9GAzHg1c?t=1063
I think what we are seeing is that they both converged on the correct approach, one of them decided to talk about it, and it triggered disclosure all around since nobody wants to be seen as lagging.
Humans do this, just in the sense of depth perception with both eyes.
And I'll add that in practice it is not even that much unless you're doing some serious training, like a professional athlete. For most tasks, the accurate depth perception from this fades around arm's length.
There have been a few attempts at solving this, but I assume that for some optical reason actual lenses need to be adjusted and it can't just be a change in the image? Meta had "Varifocal HMDs" being shown off for a bit, which I think literally moved the screen back and forth. There were a couple of "Multifocal" attempts with multiple stacked displays, but that seemed crazy. Computer Generated Holography sounded very promising, but I don't know if a good one has ever been built. A startup called Creal claimed to be able to use "digital light fields", which basically project stuff right onto the retina, which sounds kinda hogwashy to me but maybe it works?
Also subtle head and eye movements, which is something a lot of people like to ignore when discussing camera-based autonomy. Your eyes are always moving around which changes the perspective and gives a much better view of depth as we observe parallax effects. If you need a better view in a given direction you can turn or move your head. Fixed cameras mounted to a car's windshield can't do either of those things, so you need many more of them at higher resolutions to even come close to the amount of data the human eye can gather.
More subtly, a lot of depth information comes from how big we expect things to be, since everyday life is full of things we intuitively know the sizes of: frames of reference in the form of people, vehicles, furniture, etc. This is why the forced perspective of theme park castles is so effective: our brains want to see those upper windows as full sized, so we see the thing as 2-3x bigger than it actually is. And in the other direction, a lot of buildings in Las Vegas are further away than they look, because hotels like the Bellagio have large black boxes on them that group a 2x2 block of the actual room windows.
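That size prior is exactly the cue a monocular system leans on: under a pinhole model, an assumed real-world height plus the height something subtends in the image gives a distance estimate, and if the assumed size is wrong (half-scale castle windows, oversized facade boxes), the distance is wrong in proportion. A toy sketch with illustrative numbers:

    # Toy size-prior depth cue: distance inferred from an assumed real-world
    # height and the height it subtends in the image (pinhole model,
    # illustrative numbers only).
    def distance_from_size(assumed_height_m, pixel_height, focal_px):
        return assumed_height_m * focal_px / pixel_height

    print(distance_from_size(1.5, 30, 1000))    # "full-size" 1.5 m window, 30 px tall -> 50.0 m
    print(distance_from_size(0.75, 30, 1000))   # if it's really half scale, the true answer is 25.0 m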
It's possible they get headaches from the focal length issues but that's different.
The next generation of that, the ATX, is the one they have said would be half that cost. According to regulatory filings in China, BYD will be using it on entry-level $10k cars.
Hesai got the price down for their new generation by several optimizations. They are using their own designs for lasers, receivers, and driver chips which reduced component counts and material costs. They have stepped up production to 1.5 million units a year giving them mass production efficiencies.
That was 2 generations of hardware ago (4th gen Chrysler Pacificas). They are about to introduce 6th gen hardware. It's a safe bet that it's much cheaper now, given how mass produced LiDARs cost ~$200.
> Then, in December 2016, Waymo received evidence suggesting that Otto and Uber were actually using Waymo’s trade secrets and patented LiDAR designs. On December 13, Waymo received an email from one of its LiDAR-component vendors. The email, which a Waymo employee was copied on, was titled OTTO FILES and its recipients included an email alias indicating that the thread was a discussion among members of the vendor’s “Uber” team. Attached to the email was a machine drawing of what purported to be an Otto circuit board (the “Replicated Board”) that bore a striking resemblance to – and shared several unique characteristics with – Waymo’s highly confidential current-generation LiDAR circuit board, the design of which had been downloaded by Mr. Levandowski before his resignation.
The presiding judge, Alsup, said, "this is the biggest trade secret crime I have ever seen. This was not small. This was massive in scale."
(Pronto connection: Levandowski got pardoned by Trump and is CEO of Pronto autonomous vehicles.)
https://arstechnica.com/tech-policy/2017/02/waymo-googles-se...
Tesla told us their strategy was vertical integration and scale to drive down all input costs in manufacturing these vehicles...
...oh, except lidar, that's going to be expensive forever, for some reason?
Humans do this with vibes and instincts, not just depth perception. When I can't see the lines on the road because there's too much snow, I can still interpret where they would be based on my familiarity with the roads and my implicit knowledge of how roads work. We do similar things for heavy rain or fog, although sometimes those situations truly necessitate pulling over or slowing down and turning on your 4-ways - lidar might genuinely give an advantage there.
So...nowhere?
Why should you be able to do that, exactly? Human vision is frequently tricked by its lack of depth data.
As soon as a mode of transport actually has to compete in a market for scarce & valuable land to operate on, trains and other forms of transit (publicly or privately owned) win every time.
IMO, access to DeepMind and Google infra is a hugely understated advantage Waymo has that no other competitor can replicate.
(simulations) -> (real world data) -> (simulations)
Seems like it, no?

We started with physics-based simulators for training policies. Then we put them in the real world using modular perception/prediction/planning systems. Once enough data was collected, we went back to making simulators. This time, they're physics-"informed" deep learning models.
Seems like there ought to be a name for this, like so-and-so's law.
A power outage feels like a baseline scenario—orders of magnitude more common than the disasters in this demo. If the system can’t degrade gracefully when traffic lights go dark, what exactly is all that simulation buying us?
That is, both are true: this high-fidelity simulation is valuable and it won't catch all failure modes. Or in other words, it's still on Waymo for failing during the power outage, but it's not uniquely on Waymo's simulation team.
https://www.reddit.com/r/SelfDrivingCars/comments/1pem9ep/hm...
https://deepmind.google/blog/genie-3-a-new-frontier-for-worl...
Discussed here, e.g.:
Genie 3: A new frontier for world models (1510 points, 497 comments)
https://news.ycombinator.com/item?id=44798166
Project Genie: Experimenting with infinite, interactive worlds (673 points, 371 comments)
2. No seriously, is the Filipino driver thing confirmed? It really feels like they're trying to bury that.
For context, my "driver's test" was going to the back of the office, and driving some old car backwards and forwards a few meters.
But eventually I think we will get there. Human drivers will be banned, the roads will be exclusively used by autonomous vehicles that are very efficient drivers (we could totally remove stoplights, for example. Only pedestrian crossing signs would be needed. Robo-vehicles could plug into a city-wide network that optimizes the routing of every vehicle.) At that point, public transit becomes subsidized robotaxi rides. Why take a subway when a car can take you door to door with an optimized route?
So in terms of why it isn’t a waste of time, it’s a step along the path towards this vision. We can’t flip a switch and make this tech exist, it will happen in gradual steps.
Automation makes public transit better. There will be automated minibuses that are more flexible and frequent than today's buses. Automation also means that buses get a virtual bus lane. Taxis solve the last-mile problem: take a taxi to the station, ride the train with thousands of people, and then take more transit.
Also, we might discover the advantages of human-powered transit. Ebikes are more efficient than cars and give health benefits. They will be much safer than automated cars. We could use the extra capacity for bike and bus lanes.
I basically agree with your premise that public transit as it exists today will be rendered obsolete, but I think this point here is where your prediction hits a wall. I would be stunned if we agreed to eliminate human drivers from the road in my lifetime, or the lifetime of anyone alive today. Waymo is amazing, but still just at the beginning of the long tail.
It basically happened for horses.
- I would be stunned if we agree to eliminate human drivers from 100% of roads in the lifetime of anyone alive today.
or
- I would be stunned if we agree to eliminate human drivers from 10% of roads...
...or is there some other percentage to qualify this? I guess I wouldn't expect there to be a decree that makes it happen all at once for a country, especially a large country like the U.S. More like, some really dense city will decide to make a tiny core autonomous vehicles only, and then some other cities do the same years later. And then maybe it expands to something larger than just the core after 5 or 10 years. And so on...
Once it gets unstuck, it runs autonomously.
Anyway you can think it's a waste but they're wasting their money, not yours. If you want a train in your town, go get one. Waymo has only spent, cumulatively, about 4 months of the budgets of American transit agencies. If you had all that money it wouldn't amount to anything.
As always, though, the devil lies in the details: is an LLM-based generation pipeline good enough? What even is the definition of "good enough"? Even with good prompts, will the world model output something sufficiently close to reality that it can be used as a good virtual driving environment for further training/testing of autonomous cars? Or do the kinds of limitations you mentioned still mean subtle but dangerous imprecisions will slip through and produce too poor a data distribution for this to be a truly viable approach?
My personal feeling is that we will land somewhere in between: I think approaches like this one will be very useful, but I also don't think the current state of AI models means we can have something 100% reliable with this.
The question is: is 100% reliability a realistic goal? Human drivers are definitely not 100% reliable. If we come up with a solution 10x more reliable than the best human drivers, that maybe also has some hard proof that it cannot have certain classes of catastrophic failure modes (probably with verified-code-based approaches that, for instance, guarantee that even if the NN output is invalid the car doesn't try to make moves outside a verifiably safe envelope), then I feel like the public and regulators would be much more inclined to authorize full autonomy.
> there's probably no examples in the training data where the car is behind a stopped car, and the driver pulls over to another lane and another car comes from behind and crashes into the driver because it didn't check its blindspot
This specific scenario is in the examples: https://videos.ctfassets.net/7ijaobx36mtm/3wK6IWWc8UmhFNUSyy...
It doesn't show the failure mode, it demonstrates the successful crash avoidance.
[1] https://people.com/waymo-exec-reveals-company-uses-operators...
edit: fixed kill -> hit
Under the same circumstances (kid suddenly emerging between two parked cars and running out onto the street), it could be debated that the outcome could have been worse if a human were driving.
[1] https://people.com/waymo-car-hits-child-walking-to-school-du...
[1] I've seen a couple of them but they're not available to hire yet and are still very rare.
Or the most realistic game of SimCity you could imagine.
Also, we record body position, actuation, and self-speech as output. Then we put this on thousands of people to get as much data as Waymo gets.
I mean, that's what we need to imitate AGI, right? I guess the only thing missing is the memory mechanism. We train everything as if it's an input-output function without accounting for memory.
Not for the rendering (that's still way too expensive), but for the initial world generation that gets iteratively refined and then still ultimately gets converted into textured triangles.
[*] https://futurism.com/advanced-transport/waymos-controlled-wo...
Listen to the statement.
The operators help when the Waymo is in a "difficult situation".
The car drives itself 99% of the time; for the long tail of issues not yet fixed, a human intervenes.
Everyone is making it out like it's an RC car, which is completely false.
And apparently some people still haven't caught on.
Have a look if you don't believe me:
https://hn.algolia.com/?dateRange=custom&page=0&prefix=false...
Having humans in the loop at some level is necessary for handling rare edge cases safely.
It's much easier to build everything into the compressed latent space of physical objects and how they move, and operate from there.
Everyone jumped on the end-to-end bandwagon, which then locks you into vision being the input to your driving model, which means you have to have things like Genie to generate vision data, which is wasteful.
This is legit hilarious to read from some random HN account.
Anyway, we'll see how the London rollout goes, but I get the impression London's got a lot more of those kinds of roads.
That is extremely narrow, I wonder why the city has not designated it as a one-way street? They've done that for other similarly narrow sections of the same street farther north.
"we’re excited to continue effectively adapting to Boston’s cobblestones, narrow alleyways, roundabouts and turnpikes."
edit: Case in point:
https://maps.app.goo.gl/xxYQWHrzSMES8HPL8
This is an alley in Coimbra, Portugal. A couple of years ago I stayed at a hotel on this very street and took a cab from the train station. The driver could have stopped in the praça below and told me to walk 15m up. Instead the guy went all the way up and then curved through 5-10 alleys like that to drop me off right in front of my place, at significant speed as well. It was one of the craziest car rides I've ever experienced.
Human drivers routinely do worse than Waymo, which I take 2 or 3 times a week. Is it perfect? No. Does it handle the situation better than most Lyft or Uber drivers? Yes.
As a bonus: unlike some of those drivers the Waymo doesn't get palpably angry at me for driving the route.
Not taking paying passengers yet though!
It probably doesn't matter though, "this general blob over there"
Talk about edge cases.
But, what would you do? Trust the Waymo, or get out (or never get in) at the first sign of trouble?
This does bring up something, though: Waymo has a "pull over" feature, but it's hidden behind a couple of touch screen actions involving small virtual buttons and it does not pull over immediately. Instead, it "finds a spot to pull over". I would very much like a big red STOP IMMEDIATELY button in these vehicles.
It was on the home screen when I've taken it, and when I tested it, it seemed to pull over at the first safe place. I don't trust the general public with a stop button.
I started working heavily on realizing them in 2016, and it is unquestionably (finally) the future of AI.
Edit: or are you talking about the allegations of workers in the Philippines controlling the Waymos: https://futurism.com/advanced-transport/waymos-controlled-wo... I guess both are valid.
Vivaldi 7.8.3931.63 on iOS 26.2.1 iPhone 16 pro
I think you meant, "Attempt" to limit what people can do.
Driving in SF (for example) provides many opportunities to see "free will" exerted in the most extreme ways -- laws be damned.
> After being pressed for a breakdown on where these overseas operators operate, Peña said he didn’t have those stats, explaining that some operators live in the US, but others live much further away, including in the Philippines.
> “They provide guidance,” he argued. “They do not remotely drive the vehicles. Waymo asks for guidance in certain situations and gets an input, but the Waymo vehicle is always in charge of the dynamic driving tasks, so that is just one additional input.”
“When the Waymo vehicle encounters a particular situation on the road, the autonomous driver can reach out to a human fleet response agent for additional information to contextualize its environment,” the post reads. “The Waymo Driver [software] does not rely solely on the inputs it receives from the fleet response agent and it is in control of the vehicle at all times.” [from Waymo's own blog https://waymo.com/blog/2024/05/fleet-response/]
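A toy sketch of what that division of responsibility might look like structurally - none of this is Waymo's actual API, just an illustration of "guidance as one more input" where the onboard planner always stays in control (all names here are hypothetical):

    # Hypothetical names throughout; this is not Waymo's API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Hint:
        label: str                       # e.g. "blocked lane is a construction zone"
        suggested_route: Optional[list] = None

    def plan_step(scene, onboard_planner, fleet_response=None):
        hint = None
        if fleet_response and onboard_planner.confidence(scene) < 0.5:
            # The car asks a human to contextualize the scene; it never hands over control.
            hint = fleet_response.request_guidance(scene)
        # The onboard planner stays in charge of the dynamic driving task;
        # the hint is just one more input it is free to discount.
        return onboard_planner.plan(scene, hint=hint)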
What's the problem with this?
We've simply relabeled the "Mechanical Turk" into "AI."
The rest is built on stolen copyrighted data.
The new corporate model: "just lie, the government clearly doesn't give a shit anymore."
Self-driving cars are a dead-end technology that will introduce a whole host of new problems which are already solved with public transit, better urban planning, etc.
Trains need tracks; cars already have infrastructure to drive on.
> Self-driving cars are a dead-end technology that will introduce a whole host of new problems which are already solved with public transit, better urban planning, etc.
Self-driving cars will literally become a part of public transit.
I've been hearing people say that for almost 15 years now. I'll believe it when I see it.
I'm willing to wager that you might not actually believe it at that point either.
It will prove disruptive to the driving industry, but I think we’ve been through worse disruptions and fared the better for it.
I would be happy to bet on some strict definition of your claim.
The US already did it once (just in the wrong direction) by redesigning all cities to be unfriendly to humans and only navigable by cars. It should be technically possible to revert that mistake.
> Redesigning and rebuilding city transportation infrastructure isn't happening, look around.
We have been redesigning and rebuilding city transportation infrastructure since we have had cities. Where I live (Seattle), they are opening a new light rail bridge crossing just next month (the first rail over a floating bridge, which is technologically very interesting), and two new rail lines are being planned. In the 1960s the Bay Area completely revolutionized their transit system when they opened BART.
I think you are simply wrong here.
66 years later we see California struggling terribly with implementation of a high-speed rail system -- where the placement/location of the infrastructure largely is targeted for areas far less dense than the Bay Area.
I don't think there is any single reason why this is so much more difficult now than it was in 1960 -- but clearly things have changed quite a lot in that time.
Same was said about electricity, or the internet.
As to the revolt, America doesn't do that any more. Years of education have removed both the vim and vigor of our souls. People will complain. They will do a TikTok dance as protest. Some will go into the streets. No meaningful uprising will occur.
The poor and the affected will be told to go to the trades. That's the new learn to program. Our tech overlords will have their media tell us that everything is ok (packaging it appropriately for the specific side of the aisle).
Ultimately the US will go downhill to become a Belgium. Not terrible, but not the world-dominating, hand-cutting entity it once was.
Sharing one's opinion in a respectful way is possible. Less spectacle, so less eyeballs, but worth it. Try it.
The original Luddite movement arose in response to automation in the textile industry.
They committed violence. Violence was committed against them. All tragic events when viewed from a certain perspective.
My rhetorical question is this: did any of this result in any meaningful impedance of the "march of technological progress"?
I'm curious why you say this given you start by highlighting several characteristics that are not like Belgium (to wit, poor education, political media capture, effective oligarchy). I feel there are several other nations that may be better comparators, just want to understand your selection.