This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.
OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]
Many people here on HN who develop software prefer Claude because they think it's a better product.[c]
Is your understanding of OpenAI's current competitive position similar?
---
[a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...
[b] https://www.latimes.com/business/story/2026-04-01/openais-sh...
[c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.
I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.
If you have an opinion about that, everyone here would love to hear about it.
As it turns out, what I'm kind of going with for this LLM shit is that it'll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.
He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.
I'm curious.
No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It produces better-quality code.
Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.
For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point, just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.
Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi k2.5). Cursor is still pretty good, and Copilot just really, really sucks.
I enjoy using CC more and use it for non coding tasks primarily, but for anything complex (honestly most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.
LLMs aren't able to achieve 100% correctness of every line of code. But luckily, 100% correctness is not required for debugging. So it's better at that sort of thing. It's also (comparatively) good at reading lots and lots of code. Better than I am - I get bogged down in details and I tire quickly.
An example of broad work is something like: "Compile this C# code to WebAssembly, then run it from this Go program. Write a set of benchmarks of the result, and compare it to the C# code running natively, and this Python implementation. Make a chart of the data and add it to this LaTeX code." Each of the steps is simple if you have expertise in the languages and tools. But it's a lot of work otherwise. For me to do that, I'd need to figure out C# WebAssembly compilation and Go wasm libraries. I'd need to find a good charting library. And so on.
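To give a flavor of the "run it from this Go program" step, here's a minimal sketch using the wazero runtime, assuming a hypothetical app.wasm emitted by a WASI-targeting C# toolchain (the file name and build setup are illustrative, not something from my actual project):

    // Load a C#-compiled wasm module in Go and time one end-to-end run.
    // Assumes github.com/tetratelabs/wazero and a hypothetical WASI-style app.wasm.
    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "time"

        "github.com/tetratelabs/wazero"
        "github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
    )

    func main() {
        ctx := context.Background()

        r := wazero.NewRuntime(ctx)
        defer r.Close(ctx)

        // Most C#-compiled modules target WASI for startup and I/O.
        wasi_snapshot_preview1.MustInstantiate(ctx, r)

        wasmBytes, err := os.ReadFile("app.wasm") // hypothetical artifact
        if err != nil {
            log.Fatal(err)
        }

        // Instantiating a WASI command module invokes its _start
        // function, i.e. the compiled program's Main.
        start := time.Now()
        if _, err := r.Instantiate(ctx, wasmBytes); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("wasm run took %v\n", time.Since(start))
    }

From there the benchmark comparison is just running the native and Python versions under the same timer and feeding the numbers to whatever charting tool you prefer. Knowing roughly this much glue exists is easy; writing it all from scratch across four toolchains is the part I'd rather delegate.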
I think it's decent at debugging because debugging requires reading a lot of code. And there are lots of weird tools and approaches you can use to debug something. And it's not mission critical that every approach works. Debugging plays to the strengths of LLMs.
https://xcancel.com/RonanFarrow/status/2041127882429206532#m
Unfortunately it probably doesn't even matter here on HN considering how brigaded down this story is predictably getting.
But yeah, it was a fantastic piece.
My question is, how do you know when an enormous project like this, conducted over an 18-month time span is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?
Did you do any extra investigation into Annie's allegations? It feels to me like the unstated conclusion is that recovered memories can't be trusted, which is a popular understanding but a very wrong one, put out by the now-defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.
Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.
As noted in the piece, we spent months talking to Altman's partners, and what we found and didn't find is as described.
All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.
FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.
Both of them tell me that this is not just marketing, that the company actually is ethical and safety conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine which is practically unicorn rarity in corporate America.
We have worked for FAANG so I know where they're coming from; this got me to drop my cynicism for once and I plan on interviewing with them soon. Hopefully I can answer this question for myself.
From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self serving.
You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.
But just because that's true doesn't mean this article isn't very much relevant and needed.
Because it is.
That would be irrational.
We should give air time to other problems?
I think everyone agrees with that.
You have managed to distill a surprisingly pure vintage of false dichotomy, from a near Platonic varietal of whataboutism.
Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.
So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.
As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?
For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.
Do you see a path through this?
I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.
In developer communities often you can support individual developers or groups through a monthly subscription / donation on their github page or similar.
> “Investors are, like, I need to know you’re gonna stick with this when times get hard,”
Should be:
> “Investors are like, I need to know you’re gonna stick with this when times get hard,”
Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?
My own impression after many hours of conversation is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.
However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.
Please try to give people the benefit of the doubt, though I know it's hard in today's society.
Or, Mr. Farrow, can you post some evidence somewhere we can see it?
I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusion with AI is omitted.
On the positive side, the Kushner and Abu Dhabi involvements (and threats from Kushner) deserve a wider audience.
My personal opinion is that "who should control AI" is the wrong question. In the current state, it is an IP laundering device, and I wonder why publications fall silent on this. For example, the NYT has abandoned their star witness, Suchir Balaji, who literally perished for his convictions (murder or not).
I would love to read your piece and pay you and The New Yorker for it, but I am not interested in paying a subscription. If I could press a button and pay a reasonable one-time license fee, such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.
However I'm not going to pay for yet another subscription to access one article I'm interested in.
I'm sure you can't do anything about this, but I just wanted you to know.
You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.
> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."
This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.
The people shaping the future have no taste.
Is it cynical to want your <art project> to make a profit? Or for it to make enough profit to subsidize other projects?
Is it cynical to make something accessible so more people who watch it are able to enjoy it?
I agree that it's embarrassing and feels crass when movies both try to be broadly appealing and simultaneously fail to be entertaining or well executed ... but many of the marvel movies clearly surpass that bar.
No one wants to make a bad movie that does poorly with critics and paying customers - but it does happen because making a movie is expensive and complicated and requires a lot of skilled people working together towards the same goal.
Regarding taste: do you think a Michelin-starred chef swears off cheap food like hot dogs or fish and chips? Doubtful - because those foods have their place, and the chef is able to enjoy them for what they are rather than use them as an excuse to display a superiority complex.
Yeah, I'm saying professional communication isn't the place for Marvel references, and that those who choose to include references to those movies in their professional communications are revealing something about their media tastes.
If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.
I know a lot of people are critical of the Rotten Tomatoes score, but I find that when a high enough percentage of reviews are positive, it is likely I will enjoy the movie. Some of the Marvel movies have a very high proportion of positive reviews (admittedly, those reviews could be just positive, not very positive). And for most in this list with a very high score, I think it's deserved.
https://en.wikipedia.org/wiki/List_of_Marvel_Cinematic_Unive...
Arguably, one indication of the limitations of the Rotten Tomatoes score is the number of these Marvel movies with high scores :)
Btw, I'm not trying to convince you that if you watch the movies you'll like them. Just that they may not all be as bad as you think.
Thus it is a writer's job not to make references they find appealing to reveal their good taste, but to know what references their audience will find appealing and use them to help communicate concepts. If this bothers you it's because they're insulting you by saying you might be part of the audience that watches Marvel, and you had hoped reading the New Yorker would signal that you aren't.
Fantastic reporting.
For anyone unfamiliar with this process, the New Yorker documentary is well worth the watch: https://www.netflix.com/title/81770824
Perhaps you mean to distinguish social groups without much intimacy? To which I'm sure we could provide some convincing cases, but this seems like a silly heuristic generally.
Someone cheating regularly on their partner, flagrant substance use problems, controlling people who ostracize anyone who doesn't agree with their sometimes insane perspectives...
People will go along with quite a lot to avoid friction, especially as they get older and picking up new social circles becomes higher cost.
It's possibly the most telling thing: what people say is a hard line versus how they actually respond when it's crossed.
In your investigation were you able to determine if Altman has similar proxies?
How common would you say that this is? Do these kinds of people generally have teams of people who sling mud for them?
Can you speculate on how that manifests on a site like Hackernews?
This statement rings true.
JL, as PG has often mentioned, is his instrument for testing the "people"/integrity side of YC and its startups. It's not lost on me that Altman and Thiel, both associated with YC, were useful only in the short term, highlighting how regular "character" evaluations are required at higher levels of responsibility.
I liked that mental image a lot! (I try to stay uncertain about whether Deckard was a replicant.)
https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.
And not intending to defend the motives of anyone involved, but I'm hoping we can not worry about literally all jobs being destroyed, and AI companies amassing all the wealth in the world.
Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?
I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.
Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.
>Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals.
(plus it finally resolves the mystery of "what Ilya saw" that day)
Also, since it wasn't stated clearly:
>“the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India
That was Sydney if I understand correctly.
I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.
The difference isn't that the average techie doesn't dream of making a billion by any means necessary; it's that most of us don't think we have a shot, so we stick to enabling lesser evils to retire with mere millions in the bank.
That's actually the difference: most people don't want a billion.
I hope that's not true. If it is, we live in a bleak world indeed.
I can confidently say I've never once dreamed of having billions. I've never wanted billions. Not even in a fanciful manner. What would I do with that money? Buy mansions and megayachts? That's loser stuff
Most of what I want out of life cannot be bought. The pieces that come with a price tag, like a comfortable home, do not require billions
I think only sociopaths want billions because they don't understand spending your life seeking things that actually matter, like family and human connection
You need to accept that every generation some people are going to try and fuck things up.
Then you get to decide whether to stop them or to help them.
No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the term and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, the degree to which Sam is a liar (versus visible).
It's up to the rest of society to keep them in check, since classic morals are highly optional and considered a nuisance blocking those games. And here the rest of us fail pretty miserably, while having an on-paper perfect tool: the majority vote.
> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.
In the opinion of multiple models, including OpenAI's, and with nothing to do with me (just prompt jockeying), they heretofore present:
Sam I Am: The Shell Game of Sam Altman
There's a carefully cultivated persona around Sam Altman — the affable, visionary builder, the pragmatic optimist, the guy who just wants to "ship the future." But look closely, and that image starts to feel less like substance and more like choreography. What emerges, across a documented pattern spanning three companies and nearly two decades, is a figure whose reputation for candor has been challenged at every major stop in his career — not by political enemies or jealous rivals, but by colleagues, executives, and board members who worked alongside him closely enough to know.
The pattern starts early. At Loopt, the location-sharing startup Altman co-founded after dropping out of Stanford, senior employees went to the board — not once but twice — and asked that Altman be removed as CEO. Their description of his conduct: "deceptive and chaotic behavior." [Wikipedia] One former Loopt COO, Mark Jacobstein, later offered a more charitable formulation to the Wall Street Journal: "If he imagines something to be true, it sort of becomes true in his head ... It may or may not lead one to stretch, and that can make people uncomfortable." [The OpenAI Files] That framing — a man whose internal reality reshapes itself around convenience — would prove to be a preview.
At Y Combinator, which Altman led from 2014 to 2019, the mythology is of a smooth and successful tenure. That mythology has since frayed. Former board member Helen Toner, speaking publicly for the first time in 2024 about her role in removing Altman from OpenAI, stated that Altman had actually been fired from Y Combinator, though the departure was "hushed up at the time." [Benzinga] Y Combinator's leadership has disputed this characterization, saying Altman was asked to choose between YC and OpenAI rather than forced out — but the competing accounts illustrate, at minimum, that the departure was not the clean transition it was publicly presented as.
OpenAI was founded in 2015 as a nonprofit, with an explicit mission to develop artificial general intelligence "for the benefit of all of humanity." The emphasis on openness and safety was not incidental to the organization's identity — it was the whole point. Under Altman's leadership, that founding premise has been systematically revised. OpenAI is now a for-profit entity. Microsoft became the financial engine powering OpenAI, but the nonprofit's board of directors still called all the shots — despite Microsoft having no seat on the board. [NPR] The resulting tension was not a bug but a fixture: two competing missions occupying the same organization, with Altman threading between them.
The crisis came in November 2023. On November 17, OpenAI's board ousted Altman, stating that he "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." [Wikipedia] The board's specific grievances, which emerged in subsequent months, were not vague. When OpenAI released ChatGPT in November 2022, the board was not informed in advance and found out about it on Twitter. [CNBC] One of the most significant products in the company's history — perhaps in the history of technology — was announced to its own governing body via social media.
There was also the matter of the OpenAI Startup Fund. An unidentified board member learned by chance at a dinner party that the Startup Fund was not disbursing funds to intended investors, and after months of obfuscation, the board discovered that Altman himself owned the fund — a finding that deepened mounting doubts about the CEO's leadership. [Futurism] Board members had not been informed of Altman's personal ownership, despite the fund using OpenAI resources and trading on the company's name.
The internal testimony that preceded Altman's firing was striking in its specificity. Two executives reported to the board "psychological abuse" from Altman, providing screenshots and documentation of "lying and being manipulative in different situations." They said they had no belief that Altman could or would change. Many employees, Toner said, feared retaliation if they didn't support Altman. [CNBC] The former CTO Mira Murati described Altman's approach in terms that a Wall Street Journal reporter later characterized bluntly: Altman would "tell him one thing, then say another and act as if the difference was an accident. 'Oh, I must have misspoken,' Altman would say." [The OpenAI Files]
What happened next is itself revealing. When the board did fire Altman, over 700 out of 770 employees threatened to resign, and Microsoft offered to hire them all. [Felix Krueckel] The board had the formal authority to remove the CEO but lacked the practical power to make it stick. Altman was reinstated within five days. The board members who had voted to remove him — Toner and Tasha McCauley — were gone. They were replaced with Altman allies, including economist Larry Summers and former Facebook CTO Bret Taylor. Toner later said the reinstatement was framed as a binary choice for employees: bring Altman back, or OpenAI is destroyed. [Quartz] She also noted: "The second thing that is really important to know, that has really gone underreported, is how scared people are to go against Sam. They experienced him retaliating against people, retaliating against them, for past instances of being critical."
The governance failures ran deeper than one boardroom confrontation. In May 2024, it emerged that OpenAI had been requiring departing employees to sign exit agreements containing non-disparagement provisions that would eternally forbid them from criticizing their former employer, and non-disclosure provisions that prevented them from even mentioning the agreement's existence. [Time] Former employee Daniel Kokotajlo disclosed that he refused to sign, forfeiting equity worth roughly 85% of his family's net worth to preserve his right to speak. When the story broke publicly, Altman claimed on X that he had been unaware of the provision: "This is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have." [HR Grapevine USA]
The problem with that statement: incorporation documents from April 2023 bearing Altman's signature explicitly authorized the equity clawback provisions, directly contradicting his claim of ignorance. [The OpenAI Files] Altman had signed the documents. OpenAI backtracked on the agreements only after public backlash made the position untenable.
No incident illustrated the broader pattern of advance and retreat more vividly than the Scarlett Johansson episode. Johansson says she was approached multiple times by OpenAI to be the voice of ChatGPT, and that she declined. [NPR] When OpenAI unveiled a new voice assistant called "Sky" for GPT-4o, Johansson said she was "shocked, angered and in disbelief" that Altman would pursue a voice "so eerily similar to mine that my closest friends and news outlets could not tell the difference." [NBC News] She noted that on the day of the product launch, Altman posted a single word to X: "her" — a direct reference to the 2013 Spike Jonze film in which Johansson voiced a romantic AI assistant. Johansson said Altman used this post to insinuate "the similarity was intentional." [NBC News] OpenAI pulled the voice only after Johansson hired legal counsel and sent two formal letters demanding an explanation.
The recurring structure in all of this is consistent. An action is taken. Questions arise. Altman expresses ignorance, regret, or bafflement. Then documentation emerges that complicates or contradicts the expressed ignorance. The cycle resets.
Some Microsoft senior executives, with whom OpenAI has had a long and lucrative partnership, privately described Altman as someone who "misrepresented, distorted, renegotiated, reneged on agreements." One senior executive went further, saying: "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." [Gizmodo] That's an extreme assessment — and notably an anonymous one — but it reflects a real undercurrent of distrust that runs through Altman's professional relationships, even among those who have benefited most from the partnership. None of this is to say Altman hasn't been effective. By conventional metrics — capital raised, products shipped, cultural influence — he has been extraordinary. ChatGPT reshaped how the world relates to technology. OpenAI sits at the center of the most consequential technological development in decades. These are not small things.
But effectiveness and integrity are not the same thing. Nor are visibility and accountability. A leader who withholds information from his own board, who is described by his closest collaborators as manipulative, who claims ignorance of documents he personally signed, who releases a product voice that his own post suggests was meant to evoke someone who said no — that leader is not simply navigating complexity. He is, in a very specific and documented way, running a shell game.
The story keeps moving. The substance remains, as ever, just slightly out of reach.
Of course, (despite the fact that Altman previously publicly stated that it was very important that the board can fire him) he got himself unfired very quickly.
(* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)
This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.
If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".
It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.
That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.
And we’re just talking about cognition - it completely ignores the automatic processes such as maintaining and regulating the body and its hormones, coordinating and maintaining muscles, visual/spatial processing taking in massive amounts of data at a very fine scale and informing the body what to do with it - I could go on.
One of the more annoying things about this conversation is you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci fi sounding idioms.
It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.
I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.
What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?
Or if the person lying is in a position of power?
Articles critical of Airbnb, one of yc's biggest wins, also get flagged and taken down.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
As those comments explain, this has been the #1 rule of HN moderation from the beginning. See also https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....
The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.
This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.
All the downsides without much upside...
Sergey Brin is trying to change that lately, but Altman still has a sizable head start.
The fact that some (usually toxic) individuals get there shows that the system is flawed.
The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.
We shouldn't follow billionaires, we should redistribute their money.
I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?
I want skilled institutional investors who have a track record of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.
I know China gets a bad rep, but their birdcage market economy seems a lot more stable and predictable than this Wild West pyramid-scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.
> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?
The fact that the chatbots are now used for plausible deniability and blame shifting in Gaza or Iran wasn't known then.
I get that this is the claimed ideal of journalism, at least for straight reporting. The problem is that it's impossible.
There isn't time or space to present all the information; the journalist has to filter. And filtering is never unbiased. Even the attempt to be "balanced" is a bias--see next item.
"Balanced" always seems to mean "give equal time and space to each side". But what if the two sides really are unbalanced? What if there's a huge pile of information pointing one way, and a few items that might point the other way if you believe them--and then the journalist insists on only showing you a few items from the first pile, so that the presentation is "balanced"? You never actually get a real picture of the facts.
There's a story that I first encountered in one of Douglas Hofstadter's books, about two kids fighting over a piece of cake: Kid A wants all of it for himself, Kid B wants to split it equally. An adult comes along and says, "Why don't you compromise? Kid A gets three-quarters and Kid B gets one-quarter." To me, the author of this article comes off like that adult.
In any case, all that assumes that this article is supposed to be just straight reporting, no opinion. For which, see the next item.
> It can be debated whether the title should be such a question.
Yes, it certainly can. If this article is just supposed to be straight reporting--no editorializing--then that title is definitely out of place. That title is an editorial--and the article either needs to own that and state the conclusion it's trying to argue for, or it shouldn't have had that title in the first place.
We have to deal with it. Or are you suggesting we should purchase a controlling interest and vote him off the board?
It's...weirdly a valid question. If Sam fibs as much as the next guy, we don't have a Sam problem. Focusing on him alone is, best case, a waste of resources. Worst case, it's distracting from real evil. If, on the other hand, as this reporting suggests, Sam is an outlier, then focusing on him does make sense.
And when you're dealing with a potential existential threat, this is an existential problem.
I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.
Some concepts from the book:
> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.
> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.
> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.
> Trust your instincts over a person's social role (e.g., doctor, leader, parent)
Check and check.
OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.
> OpenAI is too important to trust sama with.
...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.
The whole "super serious what-ifs" game is just marketing.
We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.
E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.
Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.
Yes that is the core trait I highlighted in the 1st bullet.
There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.
I only saw this thread by chance and almost didn't look, because the title made the piece sound like a flamebait blog post. Fortunately I saw newyorker.com beside the title and looked more closely.
Thank you for looking. Please do spread this kind of reporting in your communities, and subscribe to investigative outlets when you can.
A paper with "ideas to keep people first" was (coincidentally?) published today:
• Worker perspectives
• AI-first entrepreneurs
• Right to AI
• Accelerate grid expansion
• Accelerate scientific discovery and scale the benefits.
• Modernize the tax base
• Public Wealth Fund
• Efficiency dividends
• Adaptive safety nets that work for everyone
• Portable benefits
• Pathways into human-centered work
https://openai.com/index/industrial-policy-for-the-intellige...
But the discussion is generally pretty low quality with these sorts of posts. People react without having read the story, or with whatever was on their mind already, or are insubstantive, or simply low effort. I don't think you'll lose k-factor by not having a bigger post here.
Sometimes if you talk to the mods, they'll let you know their perspective. I generally find they're correct that people are much better at contributing/disseminating new knowledge to the world on more technical topics here.
We do tend to devalue titles like this, or more likely change them to something more substantive (preferably using a representative phrase from the article body), but I'm worried that if I did that here we would get howls of protest, since YC is part of the story.
It's an interesting dilemma. Many very respected publications use provocative titles because of the attention economy. And I'm sure you have good data that provocative titles lead to drive-by comments and flame wars.
But I don't think big_toast was entirely wrong that there is a side effect of sometimes burying articles that are by their nature provocative. And how do you distinguish a flame war over a title from a flame war over content? That's not a leading question. I don't know.
I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.
I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.
Therefore, I feel like “Sam Altman may control our future” is a far stretch.
You might be. Or at least I feel like Gemini is actually dumber than a house of bricks - I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just from trying to work on an electronics project and asking Gemini for advice based on pictures and schematics - it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works or I would have easily hurt myself.
It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now, like it's some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.
I also use claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords, but has never actually written anything in C++ - the code it produces is barely usable, and only in very very small portions.
Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and I got back into it, but to remind myself of the plot I asked Gemini to summarise the plot only up to chapter 14, and I specifically included the instruction that it should double-check it's not spoiling anything from the rest of the book. Lo and behold, it just printed out the summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was in the "Pro" setting too. Absolute travesty. If a real-life person did that I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
The overall response, and particularly the body language, says a lot.
If for no other reason, given what happened when the board fired him... no. I'd say not.
https://www.reuters.com/technology/openai-signs-deal-with-co...
This is a damage control piece, and you see that the most stinging comments here get downvoted.
The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.
Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.
What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...
lol do you think these guys have ever been hit? Let alone in the face. They’d probably be less eager to mouth off as much as they do if so.
The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long until the truth emerged.
These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".
Also very weird how many of these people are so deeply-linked that they'll drop everything they're doing just to get this guy back in power? Terrifying cabal.
No, he cannot.
TLDR, but just the heading is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts; what fucking trust. We are supposed to be a democratic society (well, looking at what is going on around us, this is becoming laughable).
2. You cannot "control" superintelligent AI.
FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.
It wouldn't particularly surprise me if Sam Altman were racist, but I'm curious what the specific incident you observed was.
1. I could have hidden my identity behind a throwaway. I did not feel that would be appropriate when making this claim.
2. I am not looking for anything, literally at all. Any follow ups for blogs; anything that would benefit I will not answer.
3. This is NOT a new account, I am very easy to find; I am 6'1 140lbs
I was working for a company called NationBuilder and I had the opportunity to go on a work trip. Outside of a talk he had just given, I was waiting for my ride and I looked over like...damn, that's the speaker. I wanted to say hi; he damn near flagged down the police. I apologized and just decided to move on.
Note: It was in Reno, and no, I don't want to go into details; the others are not hard to find because I happened upon them via blog posts, so I'm sure if someone with the acumen of RF wants to know, he will find them.
I have heard similar stories from several people in the years since. I AM NOT CALLING THIS PERSON RACIST. I am saying: he is observably scared of black people, and that is not someone I want making decisions about how the world moves forward.
If you don't believe what I shared is true, address that directly. But seeing my post sitting at 1 point and [flagged] after 2 hours is not OK. Just as DJT can't flag away his issues, you shouldn't be able to do so on HN.
One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings. I really hope that what happened to my post is not the beginning or a continuance of the end for that ethos.
The biggest flaw is that it spends way too much time on high-school level drama and "he-said-she-said" gossip about Sam Altman’s personal life instead of focusing on the actual technical and corporate capture of OpenAI.
The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute monopolies are actually forming (MSFT, AMZN, NVDA, and circular debt dealing inflating the AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."
Who cares???????
The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.
I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.
"the local drug-dealing pimp is so passe, we need to investigate the most upstanding members of the community just to be sure" is a frankly insane strategy