"We" have been mainstream (?) talking about AI killing since (at least) the first Terminator movie in 1984. The geeks/nerds have much earlier: Frank Herbert talked about humans outsourcing their thinking and being 'enslaved' in Dune with the Butlerian Jihad in 1965. Isaac Asimov's Three Laws of Robotics are from 1942.
The US can’t even make Patriot missiles fast enough [1]. If a nation-state wanted to, it could simply kill SpaceX’s supply chain (either by targeting companies or by targeting key personnel SpaceX relies on).
[1] https://www.businessinsider.com/patriot-missiles-fired-in-ir...
If Russia or China is targeting the SpaceX supply chain, then we're in WWIII, and nobody is building spaceships for a long time.
It is actually Palantir that is using Claude in its "Maven Smart System" for real-time battlefield analysis, which is being used by the US military.
More details at - https://news.ycombinator.com/item?id=47275936
Also see Palantir’s Double Conflict of Interest in the War Against Iran - https://bylinetimes.com/2026/03/05/palantirs-double-conflict...
After Trump's tantrum with Anthropic, no doubt Palantir will be switching to OpenAI-based models/agents/chatbots.
From the POV of data analysis and inference, the two should be comparable, though Anthropic's AI predictions _might_ be better than OpenAI's (maybe the reason Palantir chose them in the first place).
Iran slaughtered 30k people in a matter of days for the crime of "protesting". No tears shed for the Mullahs here; IMHO Israel and the US are doing the world a service by finally cleaning up the last terrorist regime keeping the region in a constant state of aggression. Note that before and after Oct 7th, it was only Iranian-backed forces stirring shit (the Houthis, Hezbollah, Gaza's Hamas), while everyone else stayed put.
The leader of Syria is a Sunni terrorist but he stays in place because he blocks arms shipments through Syria from Iran to Hezbollah. That suits Israel. Israel doesn't care how the Syrian leader treats his own people so long as he continues to disallow arms to Hezbollah from Iran. And the USA goes along with whatever Israel wants.
It has nothing to do with how the Iranian regime treats its own people. Just look at how the Saudi regime treats its people.
That's one thing; the more important (and sad) thing is that the Islamists are the ones who eventually won the war, and there is no contender left to govern the country. The Kurds alone are too small.
In Iran the situation is different, as Iran has always had a vibrant civil society that is only held back by the Mullahs' sheer military and police gun power. I'm confident they will manage something decent once Israel and the US have bombed enough of the Mullahs, IRGC, Basij, etc. to cause the rest of them to flee to Moscow.
Blame Palantir if you want to vent; Dario is literally putting Anthropic's future at risk by not kowtowing to DoW. Also, when Anthropic and Palantir finalized their partnership in 2024, many Anthropic employees raised concerns, which the company addressed by holding AMA meetings.
Anthropic and Google (to a certain extent) are far better when it comes to principles around AI usage in the context of "Realpolitik" than OpenAI and xAI, both of whom have zero scruples, as personified by their CEOs.
Palantir partnership is at heart of Anthropic, Pentagon rift - https://www.semafor.com/article/02/17/2026/palantir-partners...
Palantir CEO’s rant about the Anthropic-Pentagon feud threatening his company was about a lot more than a dirty word - https://fortune.com/2026/03/05/palantir-ceo-alex-karp-anthro...
Anthropic-Palantir Partnership at Risk After Pentagon Ruling - https://archive.ph/EWmay#selection-993.0-993.60
Anthropic is selling a model, not applications built on that model. The latter are not in their hands, but they have drawn two specific red lines that they are willing to defend even against the mighty DoW.
If you think that no AI model should be allowed to be used by the military, then you are living in clueless la-la land. There are perfectly justified military/law-enforcement uses of AI. What we can demand is human control and oversight over its usage in a lawful manner. Anthropic has done its part by drawing two red lines and cannot be expected to do more.
It is companies like Palantir, who build applications for warfare using Anthropic's (and others') models, enabling features like "shortening the kill chain" and "enabling decision compression", that need major oversight.