▲It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on. What exactly is their mission now other than generating big $?
reply▲motorest46 minutes ago
[-] > It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on.
What I find most troubling in this reaction is how hostile it is to the actual talent. It accuses anyone who is even considering joining Meta in particular, or any competitor in general, of being a mercenary. It's using the poisoning-the-well fallacy to shield OpenAI from any competition. And why? Because he believes he is on a personal mission? This gives off "some of you may die, but it's a sacrifice I am willing to make" energy. Not cool.
reply▲It's absolutely ridiculous that investors are driven (and are expected to be driven) by maximisation of return on investment, and that alone, but when labour/employees exhibit that same behaviour, they are labeled "mercenaries" or "transactional".
reply▲misterhill18 minutes ago
[-] He was very happy when money caused them all to back him, despite the fact that he obviously isn't a safe person to have in a position of power. But if they realize they have better money options than turning OpenAI into a collusion against its original foundation, mostly for his benefit, well then they are mercenaries.
reply▲He claims to be advancing progress. He believes that progress comes from technology plus good governance.
Yet our government is descending into authoritarianism and AI is fueling rising data center energy demands exacerbating the climate crisis. And that is to say nothing of the role that AI is playing in building more effective tools for population control and mass surveillance. All these things are happening because the governance of our future is handled by the ultra-wealthy pursuing their narrow visions at the expense of everyone else.
Thus we have no reason to expect good “governance” at the hands of this wealthy elite and we only see evidence to the opposite. Altman’s skill lies in getting people to believe that serving these narrow interests is the pursuit of a higher purpose. That is the story of OpenAI.
reply▲It is a widely accepted definition of AGI that it is something that is either really smart or generates more than $100B in revenue.
It is also clear Sam Altman and OpenAI’s core values remain intact.
reply▲spencerflem27 minutes ago
[-] Lol, I'm sure Sam Altman's ideals haven't changed but you're a fool if you think OpenAI is aiming for anything loftier than a huge pile of money for investors.
reply▲Exactly. He says missionaries and immediately follows it by talking about compensation (ie a mercenary incentive)
reply▲They haven't released much in the way of open weights compared to their competitors, but they made their Codex agent open source while Claude Code is still closed source.
reply▲jonplackett14 minutes ago
[-] That’s just a wrapper though isn’t it. The secret sauce is still secret.
reply▲A company's mission is not an individual's mission. I personally would never hire an engineer whose main pursuit is money or promotions. These are the laziest engineers that exist and are always a liability.
reply▲quantified9 minutes ago
[-] Everyone is the chairman of the board of their lives, with a fiduciary duty to their shareholder, namely themselves. You can decide to hire only employees who either believe in mission over pay or who are willing to mouth the words, but you will absolutely miss out on good employees.
I remember defending a hiring candidate who had said he got into his specialty because it paid better than others. We hired him and he was great, worth his pay. No one else on the hiring team could defend a bias against someone looking out for themselves.
reply▲What is your opinion on managerial virtue signaling?
reply▲this is so, so out of touch.
reply▲cubancigar1142 minutes ago
[-] "I would never give my money to someone who wants money."
reply▲noufalibrahim1 minute ago
[-] One variable that I think is missing here is that Meta is profitable whereas OpenAI is not.
reply▲“I don’t think Sam is the guy who should have the finger on the button for AGI.”
- Ilya Sutskever, co-founder and co-lead of the Superalignment team; departed early 2024
- May 15, 2025, The Atlantic
Anyway, I concur it's a hard choice as one other comment mentions.
reply▲godelski34 minutes ago
[-] Is this a button any one person should have their finger on?
reply▲Exactly. AGI is something that will significantly affect all of humanity. It should be treated like nuclear weapons.
reply▲> It should be treated like nuclear weapons.
Seeing how currently nuclear weapon holders are elected, that would be a disaster
reply▲Effectively kept secret and in the shadows by those working on it, until a world-altering public display makes it a hot politically charged issue, unaltering even 80 years later?
Edit: Honestly, I bet that "Altman", directed by Nolan's simulacrum and starring a de-aged Cillian Murphy (with or without his consent), will in fact deservedly win a few Oscars in 2069.
reply▲jonplackett11 minutes ago
[-] And quickly proliferated around the world to other superpowers and rogue states…
Remember, the Soviets got the nuke so quickly because they just exfiltrated the US plans.
reply▲How many ethnic cleansings has Sam Altman facilitated?
reply▲Even if it's zero, he could still be a shitty person who shouldn't have access to that button. If anyone should have such access at all, of course.
reply▲Gigablah31 minutes ago
[-] Is that the du jour unit of measurement for morality now?
reply▲linotype28 minutes ago
[-] gestures broadly at every other thing we've known about Mark Zuckerberg since "Dumb Fucks" in college
reply▲renewiltord9 minutes ago
[-] I suppose as an American taxpayer and American voter, he is responsible for as many ethnic cleansings as anyone else. Supposedly, Armenians leaving Nagorno-Karabakh is ethnic cleansing, and the US did give aid to Azerbaijan, so that makes Americans facilitators of ethnic cleansing, though admittedly so are the Canadians.
reply▲Big picture, I'll always believe we dodged a huge bullet in that "AI" got big in a nearly fully "open-source," maybe even "post open-source" world. The fact that Meta is, for now, one of the good guys in this space (purely strategically and unintentionally) is fortunate and almost funny.
reply▲burroisolator7 hours ago
[-] AI only got big, especially for coding, because they were able to train on a massive corpus of open source code. I don't think it is a coincidence.
reply▲Another funny possibly sad coincidence is that the licenses that made open source what it is will probably be absolutely useless going forward, because as recent precedent has shown, companies can train on what they have legally gained access to.
On the other hand, AGPL continues to be the future of F/OSS.
reply▲MIT is also still useful; it lets me release code where I don't really care what other people do with it as long as they don't sue me (an actual possibility in some countries)
reply▲Which countries would these be?
reply▲The US, for one. You can sue nearly anyone for nearly anything, even something you obviously won't win in court, as long as you find a lawyer willing to do it; you don't need any actual legal standing to waste the target's time and money.
Even the most unscrupulous lawyer is going to look at the MIT license, realize the target can defend it for a trivial amount of money (a single form letter from their lawyer) and move on.
reply▲You can sue for damages if they have malware in the code, there is no license that protects you from distributing harmful products even if you do it for free.
reply▲Yup. The book torrenting case is pretty nuts.
If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Pants-on-head idiotic judge.
reply▲hardwaresofton47 minutes ago
[-] This was immediately my reaction as well, but I'm not a judge so what do I know. In my own mind I mark it as a "spice must flow" moment -- it will seem inevitable in retrospect but my simple (almost surely incorrect) take is that there just wasn't a way this was going to stop AI's progress. AI as a trend has incredible plot armor at this point in time.
Is the hinge that the tools can recall a huge portion (not perfectly, of course) but usually don't? What seems even more straightforward is the substitute-good idea; it seems reasonable to assume people will buy fewer copies of book X when they start generating books heavily inspired by book X.
But, this is probably just a case of a layman wandering into a complex topic, maybe it's the case that AI has just nestled into the absolute perfect spot in current copyright law, just like other things that seem like they should be illegal now but aren't.
reply▲fragmede41 minutes ago
[-] I didn't see the part of the trial where they got the "entirety of most books" out of Llama. What did you see that I didn't?
reply▲Oh, they're actually the bad guys, just folks haven't thought far enough ahead to realise it yet.
reply▲This is an instance of bad guys fighting bad guys.
reply▲OK, lay it on us.
reply▲It’s not unreasonable given the mountain of evidence of their past behaviour to just assume they are always the “bad guy”.
reply▲I would normally agree, but in this instance we're talking about the company that made PyTorch and played an instrumental role in proliferating usable offline LLMs.
If you can make that algebra add up to "bad guy" then be my guest.
reply▲Just read Careless People.
reply▲fragmede39 minutes ago
[-] Don't call them open source when they're not. It's a shared model.
reply▲TheRoque21 minutes ago
[-] That's just what they call them... Hence the quotes.
reply▲They're involved in genocide and enable near-global tyranny through their surveillance and manipulation. There are no excuses for working for or otherwise enabling them.
reply▲> bad guys
You imply there are some good guys.
What company?
reply▲Google circa 2005?
Twitter circa 2012?
In 2025? Nobody, I don't think. Even Mozilla is turning into the bad guys these days.
reply▲jsrozner54 minutes ago
[-] Google was bad the moment it chose its business model. See The Age of Surveillance Capitalism for details. Admittedly there was a nice period after it chose its model when it seemed good because it was building useful tools and hadn't yet accrued sufficient power / market share for its badness to manifest overtly as harm in the world.
reply▲rTX5CMRXIfFG2 hours ago
[-] Depends. Does your definition of “good” mean “perfect”? If so, cynical remarks like “no one is good” would be totally correct.
reply▲Don't make the mistake of anthropomorphizing Mark Zuckerberg. He didn't open source anything because he's a "good guy"; he's just commoditizing the complement.
The "good guy" is a competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.
reply▲somenameforme1 hour ago
[-] The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass, but the awful things done in pursuit of them certainly do. So the motivation behind those means doesn't excuse them. And I see no reason the inverse of this doesn't hold true. I couldn't care less if Zuckerberg thinks open sourcing Llama is some grand scheme to let him take over the world and become its god-king emperor. In reality, that almost certainly won't happen. But what certainly will happen is the world getting free and open source access to LLM systems.
When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sort of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.
reply▲> The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass
That doesn't feel quite right as an explanation. If something fails 10 times, that just makes the means 10x worse. If the ends justify the means, then doesn't that still fit into Machiavellian principles? Isn't the complaint closer to "sometimes the ends don't justify the means"?
reply▲Oh, absolutely -- I definitely meant that in the least complimentary way possible :). In a way, it's just the triumph of the ideals of "open source" -- sharing is better for everyone, even Zuck, selfishly.
reply▲Quarrelsome10 hours ago
[-] they did say "accidentally". I find that people doing the right thing for the wrong reasons is often the best case outcome.
reply▲The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
Don’t let the perfect be the enemy of the good.
reply▲petesergeant1 hour ago
[-] > The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
I feel like we right now live in that perfect competition environment though. Inference is mostly commoditized, and it’s a race to the bottom for price and latency. I don’t think any of the big providers are making super-normal profit, and are probably discounting inference for access to data/users.
reply▲Only because everyone believes it’s a winner takes all game and this perfect competition will only last for as long as the winner hasn’t come out on top yet.
reply▲petesergeant42 minutes ago
[-] > everyone believes it’s a winner takes all game
Why would anyone think that, and why do you think everyone thinks that?
reply▲moralestapia4 hours ago
[-] >he's just commoditizing the complement
That's a cool smaht phrase but help me understand, for which Meta products are LLMs a complement?
reply▲A continuous stream of monetizable live user data?
The entire point of Meta owning everything is that it wants as much of your data stream as it can get, so it can then sell more ad products derived from that.
If much of that data begins going off-Meta, because someone else has better LLMs and builds them into products, that's a huge loss to Meta.
reply▲moralestapia4 hours ago
[-] Sorry, I don't follow.
>because someone else has better LLMs and builds them into products
If that were true they wouldn't be trying to create the best LLM and give it for free.
(Disclaimer: I don't think Zuck is doing this out of the good of his heart, obv. but I don't see the connection with the complements and whatnot)
reply▲Meta has ad revenue. I think Meta’s play is to make it difficult for pure AI competitors to make revenue through LLMs.
reply▲That’s not commoditising the complement!
reply▲I think its about sapping as much user data from competitors. A company seeking to use an LLM has a choice between OpenAI, LLaMA, and others. If they choose LLaMA because it's free and host it themselves, OpenAI misses out on training data and other data like that
reply▲Well, is the loss of training data from customers using self-hosted Llama that big a deal for OpenAI or any of the big labs at this point? Maybe in late 2022/early 2023, during the early stages of RLHF'd mass models, but not today, I don't think. Offerings from the big labs have pretty much settled into specific niches, and people have started using them in certain ways across the board. The early land grab is over and consolidation has started.
reply▲Your ability to use a lesser version of this AI on your own hardware will not save you from the myriad ways it will be used to prey on you.
reply▲Why not? Current open models are more capable than the best models from 6 months back. You have a choice to use a model that is 6 months old - if you still choose to use the closed version that’s on you.
reply▲And an inability to do so would not have saved you either.
reply▲Most of Meta's models have not been released as open source. Llama was a fluke, and it helps to commoditize your complement when you're not the market leader.
There is no good or open AI company of scale yet, and there may never be.
A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.
reply▲Llama is not open source. It is at best weights available. The license explicitly limits what kind of things you are allowed to use the outputs of the models for.
reply▲Is that easier to enforce than having AI only trained in a legal way (=obeying licenses / copyright law)?
reply▲Which, given what it was trained on, is utterly ridiculous.
reply▲Yup, but that being said, Llama is GPLv3 whether Meta likes it or not. Same as ChatGPT and all the others. ALL of them can perfectly reproduce GPLv3-licensed works and data, making them derivative works, and the license is quite clear on that matter. In fact, up until recently you could get ChatGPT to info-dump all sorts of things with that argument, but now when you try you will hit a network error, and afterwards it seems something breaks and it goes back to parroting a script about how it's under a proprietary license.
reply▲That's not true; the Llama that's open source is pretty much exactly what's used internally.
reply▲[flagged]
reply▲reply▲I agree, nobody should call anyone an idiot. However, naivity isn't a slur, its a personality trait.
reply▲It's not ok here. The comment loses nothing if that sentence is removed.
reply▲Can someone make an honest argument for how OpenAI staff are missionaries, after the coup?
I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.
(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)
reply▲That is just an act of a corpo CEO bullshitting employees and press about high moral standards, mission, etc. Don't trust any of his words.
reply▲Anytime someone tells you to be in it for the mission, you are expendable and underpaid.
reply▲chaosharmonic6 hours ago
[-] I don't at all disagree with you, but at the kind of money you'd be making at an org like OAI, it's easy to envision there being a ceiling, past which the additional financial compensation doesn't necessarily matter that much.
The problem with the argument is that most places saying this are paying more like a sub-basement, not that there can't genuinely be more important things.
That said, Sam Altman is also a guy who stuck nondisparagement terms into their equity agreement... and in that same vein, framing poaching as "someone has broken into our home" reads like cult language.
reply▲We shouldn’t even be using the offensive word “poaching.” As an employee, I am not a deer or wild boar owned by a feudal lord by working for his company. And another employer isn’t some thief stealing me away. I have agency and control over who I enter into an employment arrangement with!
reply▲So then, is "headhunting" more or less bad?
reply▲I think anything that evokes “hunting on someone else’s land for his property” is equally inappropriate.
reply▲DanielHB11 minutes ago
[-] Crazy that this proves that engineers making >1 million USD /year can still be underpaid
reply▲Could Facebook hire away OpenAI people just by matching their comp? Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.
And if someone at OpenAI says hey Facebook just offered me more money to jump ship, that's when OpenAI says "Sorry to hear, best of luck. Seeya!"
In this scenario, you're only underpaid by staying at OpenAI if you have no sense of shame.
reply▲I wonder what it is that Facebook offered. It can't be money, so I think it's more responsibility or freedom. Or did they have some secret breakthroughs?
reply▲It's money. It's also a fresh, small org and a new project, which is exciting for variety of reasons.
reply▲I can't explain why, but I don't think money is it. Nor can a new project or whatever be it. It's just too small of a value proposition when you are already at OpenAI making banger models used by the world.
reply▲Those could be genuine words. The mission is to be expendable and make them rich.
Don't forget about the mission during next round of layoffs and record high quarterly profits.
reply▲casualscience9 hours ago
[-] Yeah, I used to work in the medical tech space. They love to tell you how much you should be in it for the mission, and that's why your pay is 1/3 what you could make at FAANG... of course, when it came to our sick customers, they need to pay market rates.
reply▲Missionary (from wikipedia):
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary
Post coup, they are both for-profit entities.
So the difference seems to be that when meta releases its models (like bibles), it is promoting its faith more openly than openai, which interposes itself as an intermediary.
reply▲Altman has to be the most transparently two-faced tech CEO there is. I don't understand why people still lap up his bullshit.
reply▲Money.
reply▲What money is in it for the "rationalist", AI-doom crowd that builds up the narrative Altman wants for free?
reply▲mitthrowaway21 hour ago
[-] Suggesting that the AI doom crowd is building up a narrative for Altman is sort of like saying the hippies protesting nuclear weapons are in bed with the arms makers because they're hyping up the destructive potential of hydrogen bombs.
reply▲That analogy falls flat. For one, we have seen the destructive power of hydrogen bombs through nuclear tests. Nuclear bombs are a proven, real threat that exists now. AGI is the boogeyman under the bed that somehow ends up never being there when you look for it.
reply▲It's a real negotiating tactic:
https://en.wikipedia.org/wiki/Brinkmanship
If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree to outrageous, unnecessary investments to reach the perceived goal first. This is exactly what happened during the Cold War, when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap
reply▲mitthrowaway237 minutes ago
[-] Chief AI doomer Eliezer Yudkowsky's latest book on this subject is literally called "If Anyone Builds it, Everyone Dies". I don't think he's secretly trying to persuade people to make investments to reach this goal first.
reply▲Dumb people need symbols. Same reason Elon gets worship.
reply▲Tim Cook is right there. If I say "Vision Pro" I'll probably get downvoted out of a mere desire to not want to talk about that little excursion.
reply▲kenjackson11 hours ago
[-] I'm not very informed about the coup -- but doesn't it just depend on what side most of the employees sat/sit on? I don't know how much of the coup was just egos or really an argument about philosophy that the rank and file care about. But I think this would be the argument.
reply▲There was a petition with a startlingly high percentage of employees signing it, but there's no telling how many of them felt pressured to in order to keep their jobs.
reply▲They didn't need pressuring. There was enough money involved that was at risk without Sam that they did what they thought was the best way to protect their nest eggs.
reply▲The thing where dozens of them simultaneously posted “OpenAI is nothing without its people” on Twitter during the coup was so creepy, like actual Jonestown vibes. In an environment like that, there’s no way there wasn’t immense pressure to fall into line.
reply▲That seems like kind of an uncharitable take when it can otherwise be explained as collective political action. I’d see the point if it were some repeated ritual but if they just posted something on Twitter one time then it sounds more like an attempt to speak more loudly with a collective voice.
reply▲Honest answer*:
I think building superintelligence for the company that owns and will deploy the superintelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.
Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.
In that way, as a missionary chasing superintelligence, I'd prefer OpenAI.
*Because I don't have an emotional connection to OpenAI's changing corporate structure away from being a non-profit.
reply▲> I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is a 100x worse than whatever OpenAI is doing,
OpenAI announced in April they'd build a social network.
I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.
reply▲makeitdouble9 hours ago
[-] As a thought exercise, OpenAI can partner to apply the technology to:
- online gambling
- kids gambling
- algorithmic advertising
Are these any better? All of these are of course money wells and a logical move for a for-profit, IMHO.
And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.
All in all, I'm not seeing them having any moral high ground, even purely hypothetically.
reply▲Wait if an online gambling company uses OpenAI API then hosts it all on AWS, somehow OpenAI is more morally culpable than AWS? Why?
reply▲makeitdouble8 hours ago
[-] I saw the discussion as whether OpenAI is on a better moral ground than Meta, so this was my angle.
On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.
Inherently that could have the most impact on what happens when that company succeeds: if those become OpenAI's biggest clients, it wouldn't be surprising that they put more and more weight in being well suited for online gambling companies.
Does AWS get specially impacted by hosting online gambling services? I honestly don't expect them to, not more than by community sites or concert ticket sellers.
reply▲There is no world in which online gambling beats other back-office automation in pure revenue terms. I'm comfortable saying that OpenAI would probably have to spend more money
policing to make sure their API's aren't used by gambling companies than they'd make off of them. Either way, these are all imagined horrors, so it is difficult to judge.
I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.
reply▲makeitdouble7 hours ago
[-] > There is no world in which online gambling beats other back-office automation in pure revenue terms.
A massive share of Apple's revenue comes from in-app purchases, which are mainly games, and online betting has also entered the picture. We had Tim Cook on the stand explaining that they need that money and can't let Epic open that gate.
I think we're already there in some form or another, the question would be whether OpenAI has any angle for touching that pie (I'd argue no, but they have talented people)
> I am judging the two companies for what they are, not what they could be
Thing is, AI is mostly nothing right now. We're only discussing it because of its potential.
reply▲My point exactly. The App Store has no play in back office automation so the comparison doesn’t make sense. AFAICT, OpenAI is already making Billions on back office automation. I just came from a doctors visit where the she was using some medical grade ChatGPT wrapper to transcribe my medical conversation meanwhile I fight with instagram for the attention of my family members.
AI is already here [1]. Could there be better owners of superintelligence? Sure. Is OpenAI better than Meta? 100%.
[1] https://www.cnbc.com/amp/2025/06/09/openai-hits-10-billion-i...
reply▲eli_gottlieb5 hours ago
[-] If you have "superintelligence" and it's used to fine-tune a corporate product that preexisted it, you don't have superintelligence.
reply▲ASalazarMX12 hours ago
[-] I'd bet 100 quatloos that your comment will not have honest arguments below. You can't nurture missionaries in an exploitative environment.
reply▲Eh? There are plenty of cults, like Jehovah's Witnesses, that are exploitative as hell.
reply▲CamperBob210 hours ago
[-] Not to mention, missionaries are exploitative. They're trying to harvest souls for God or (failing the appearance of God to accept their bounty) to expand the influence of their earthbound church.
The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .
Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.
reply▲This is just a CEO gaslighting his employees to "think of the mission" instead of paying up
No different than "we are a family"
reply▲> “I have never been more confident in our research roadmap,” he wrote. “We are making an unprecedented bet on compute, but I love that we are doing it and I'm confident we will make good use of it. Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems.”
tl;dr: knife fights in the hallways over the remaining lifeboats.
reply▲Sam vs Zuck... tough choice. I'm rooting for neither. Sam is cleverly using words here to make it seem like OpenAI are 'the good guys' but the truth is that they're just as nasty and power/money hungry as the rest.
reply▲nwmcsween20 minutes ago
[-] Sam Altman complaining about "unethical" corporate behavior is pure gold
reply▲then why convert to for-profit?
reply▲Do I "poach" a stock when I offer more money for it than the last transaction value?
"Poaching" employees is just price discovery by market forces. Sounds healthy to me. Meta is being the good guys for once.
reply▲jimmywetnips11 hours ago
[-] [flagged]
reply▲The elderly couple showed up with baseball bats?
reply▲Sounds like some tariffs should be applied as as well considering there's now a trade imbalance!
reply▲datavirtue10 hours ago
[-] You must be new here. No joking allowed.
reply▲AFAIU, that is basically true? Isn't it in the guidelines somewhere? Sarcasm or (exclusive-or!) really good humor get a pass in practice.
reply▲aspenmayer2 minutes ago
[-] I think it’s a matter of style or finesse. If you can make it look good, even breaking the rules is socially acceptable, because a higher order desire is to create conditions in individuals where they break unjust rules when the greater injustice would be to censor yourself to comply with the rules in a specific case.
Good artists copy, great artists steal.
Good rule followers follow the rules all the time. Great rule followers break the rules in rare isolated instances to point at the importance of internalizing the spirit that the rules embody, which buttresses the rules with an implicit rule to not follow the rules blindly, but intentionally, and if they must be broken, do so with care.
> I have spread my dreams under your feet;
> Tread softly because you tread on my dreams.
https://en.wikipedia.org/wiki/Aedh_Wishes_for_the_Cloths_of_...
reply▲See.
reply▲I can fully believe one can be funny in a way that isn’t validated or understood, or even perceived as humorous. I’m not sure HN is a good bellwether for comedic potential.
reply▲If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.
reply▲> If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.
That’s so weird, you’re on! That makes two of us! When I don’t adhere to the guidelines, I also send mean and angry emails to dang. Apologies in advance, dang.
reply▲Pretty telling that OpenAI only now feels like it has to reevaluate compensation for researchers while just weeks ago it spent $6.5 billion to hire Jony Ive. Maybe he can build your superintelligence for you.
reply▲An observation: most articles with titles of the form "A SLAMS B" put forward a narrow, one-sided view of the issue they report on. Oftentimes they're shallow attempts to stir outrage for clicks. This one is just giving a CEO a platform to promote how awesome he thinks his company is.
All these articles and videos of people "slamming" each other; it doesn't move the needle, and it's not really news.
reply▲alex_young8 minutes ago
[-] It’s kind of rich that he’s complaining about Facebook paying engineers ’too much’, given the history here.
A decade ago Apple, Google, Intel, Intuit, and Adobe all had anti-poaching agreements, and Facebook wouldn't play ball, paid people more, won market share, and caused the salary boom in Silicon Valley.
Now Facebook is paying people too much and we should all feel bad about it?
reply▲Why is it still called Meta? Do they still do the Metaverse thing?
reply▲Does he have the same conviction when people from other companies decide to join OpenAI?
reply▲"Apostates who turned to darkness" vs "Converts who saw the light".
reply▲The game theoretic aspect of this is quite interesting. If Meta will make OpenAI's model improvements open source, then the value of every poached employee will be worth significantly less as time goes on. That means it's in the employees best interest to leave first, if their goal is to maximize their income.
reply▲Open source could also be a bait and switch.
i.e., Zuck has no intention of continuing to open up the models he creates. Thus he knows he can spend the money to get the talent, because he has every intention of making it back.
reply▲HardCodedBias11 hours ago
[-] Zuck has the best or the second best distribution on the planet.
If he neutralizes the tech advantage of other companies his chances of winning rise.
reply▲jekwoooooe11 hours ago
[-] Allegedly they were offered 100m just in the first year. I think they will be fine
reply▲That was immediately proven to be false, both by Meta leadership and the poached researchers themselves. Sam Altman just pulled the number out of his ass in an interview.
reply▲That's my point. The ones that left early got a large sum of money. The ones that leave later will get less. That would incentivize people to be the first to leave.
reply▲> OpenAI is the only answer for those looking to build artificial general intelligence
Let’s assume for a moment that OpenAI is the only company that can build AGI (specious claim), then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:
> And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.
If building AGI is OpenAI’s only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?
reply▲darth_avocado11 hours ago
[-] OpenAI’s only goal isn’t building AGI. It is to build it first and make money off it.
reply▲What even is the monetization plan for AI. Seems like the cutting edge tech becomes immediately devalued to nothing after a few months when a new open source modal is released.
After spending so many billions on this stuff, are they really going to pay it all off selling API credits?
reply▲Exactly! The Microsoft-OpenAI agreement states that AGI is whatever makes them 100 billion in profits. Nothing in there about anything intelligence related.
>The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
reply▲Is there even a point to money post AGI?
reply▲Something tells me food and water supplies, weapons and private security forces aren't going to be paid for in OAI compute credits.
reply▲The profit cap was supposed to be for first to acheive AGI being end game, and would ensure redistribution (though with apparently some kind of Altman tax through early World Coin ownership stake). When they realized they wouldn't reach AGI with current funding and they were so close to $100 billion market cap they couldn't entice new investors on $100 billion in profits, why didn't they set it to, say, $10 trillion instead of infinity? Because they are missionaries?
A leaked email from Ilya early on even said they never planned to open source stuff long term, it was just to entice researchers at the beginning.
Whole company is founded on lies and Altman was even fired from YC over self detailing or something in I think a deleted YC blog post if I remember right.
reply▲I wouldn't worry, I forecast we'll have peace in the Middle East before we have true AGI.
reply▲blueblisters10 hours ago
[-] Nope AGI is not the end goal -
https://blog.samaltman.com/the-gentle-singularity
> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand limits of "intelligence" first.
reply▲Superintelligence has always been rhetorical slight of hand to equate "better than human" with "literally infinitely improving and godlike" in spite of optimization always leveling off eventually for one reason or another.
reply▲This looks similar to what Meta (then Facebook) did a decade ago and basically broke the agreements between Apple, Google, etc. to not poach each others employees
reply▲zbyforgotp55 minutes ago
[-] Zuck poaches AI devs and places them under Wang - how does that work? Wang doesn't give the impression of being a brilliant researcher or coder, just a great deal-maker (to put it diplomatically).
reply▲OpenAI's tight spot:
1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) Deepseek/Baidu/etc are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less like-able with every unnecessary episode of drama; and OpenAI has most of the stink from the initial (valid) grievance of "AI-companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as well as the flip flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) and, related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.
Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is they drop the frontier model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:
Brand loyalty from the average person to ChatGPT is the best bright spot, and OpenAI successfully eating Google's search market. Their numbers there have been truly massive from the beginning, and are I think the most defensible. Google AI Overviews continue to be completely awful in comparison.
reply▲They have majority of the attention and market cap. They have runway. And that part is the most important thing. Others don’t have the users to grand test developments.
reply▲ninininino11 hours ago
[-] I'm not so sure they have runway.
XAI has Elon's fortune to burn, and Spacex to fund it.
Gemini has the ad and search business of Google to fund it.
Meta has the ad revenue of IG+FB+WhatsApp+Messenger.
Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using their APIs.
To stay at the forefront of frontier models, you need to keep burning money like crazy. That requires raising rounds repeatedly for OpenAI, whereas the tech giants can just draw on their existing fortunes.
reply▲They definitely have a very valuable brand name even if the switching costs are low. To many people, AI == ChatGPT
reply▲Maybe employees realised this and left OpenAI for this reason.
reply▲> how do they prevail through all of this and become a sustainable frontier AI lab and company?
I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.
reply▲The biggest problem OAI has is that they don't own a data source. Meta, Google, and X all have existing platforms for sourcing real time data at global scale. OAI has ChatGPT, which gives them some unique data, but it is tiny and very limited compared to what their competitors have.
LLMs trained on open data will regress because there is too much LLM generated slop polluting the corpus now. In order for models to improve and adapt to current events they need fresh human created data, which requires a mechanism to separate human from AI content, which requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human created content.
reply▲OAI has a deal with reddits corpus of data to use.
They will either have to acquire a data source or build their own moving forward imo. I could see them buying reddit.
Sam Altman also owns something like ~10% of reddits stock since they went public.
reply▲blueblisters10 hours ago
[-] If they can turn ChatGPT into a free cash flow machine, they will be in a much more comfortable position. They have the lever to do so (ads) but haven't shown much interest there yet.
I can't imagine how they will compete if they need to continue burning and needing to raise capital until 2030.
reply▲storgendibal10 hours ago
[-] The interest and actions are there now: Hiring Fidji Simo to run "applications" strongly indicates a move to an ad-based business model. Fidji's meteoric rise at Facebook was because she helped land the pivot to the monster business that is mobile ads on Facebook, and she was supposedly tapped as Instacart's CEO because their business potential was on ads for CPGs, more than it was on skimming delivery fees and marked up groceries.
reply▲VirusNewbie8 hours ago
[-] Good analysis. My counter to it is that OpenAI has one of the leading foundational models, while Meta, despite being a top-paying tech company, continued to release subpar models that don't come close to the other big three.
So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just because Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?
reply▲For one thing, all the trade secrets going from openai and anthropic to meta.
reply▲jekwoooooe11 hours ago
[-] OpenAI has no shot without a huge cash infusion and to offer similar packages. Meta opened the door.
reply▲I think that leaks like this have negative information value to the public.
I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
reply▲makeitdouble9 hours ago
[-] The other side of it: some statements made internally can be really bad but employees brush over them because they inherently trust the speaker to some degree, they have additional material that better aligns with what they want to hear so they latch on the rest, and current leaders' actions look fine enough to them so they see the bad parts as just communication mishaps.
Until the tide turns.
reply▲Worse: employees are often actively deceived by management. Their “close relationship” is akin to that of a farmer and his herd. Convinced they’re “on the inside” they’re often blind to the truth that’s obvious from the outside.
Or simply they don’t see the whole picture because they’re not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
reply▲Okay, but I've also heard insiders at companies I've worked completely overlook obvious problems and cultural/management shortcomings issues. "Oh, we don't have a low-trust environment, it's just growing pains. Don't worry about what the CEO just said..."
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
reply▲> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
This is so true. And not confined to HN.
reply▲> Btw Sam has tweeted about an open source model. Stay tuned...
https://x.com/sama/status/1932573231199707168
Sneaky wording, but it seems like no: Sam has only talked about an "open weights" model so far, so most likely not "open source" by any existing definition of the word, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall, many stories are sensationalized, parts-of-stories always lack a lot of context and large parts of HN users comments about stuff they maybe don't actually know so much about, but put in a way to make it seem so.
reply▲impossiblefork8 hours ago
[-] Open weights is unobjectionable. You do get a lot.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
reply▲There are ten measures by which a model can/should be open:
1. The model code (pytorch, whatever)
2. The pre-training code
3. The fine-tuning code
4. The inference code
5. The raw training data (pre-training + fine-tuning)
6. The processed training data (which might vary across various stages of pre-training and fine-tuning)
7. The resultant weights blob
8. The inference inputs and outputs (which also need a license; see also usage limits like O-RAIL)
9. The research paper(s) (hopefully the model is also described in literature!)
10. The patents (or lack thereof)
A good open model will have nearly all of these made available. A fake "open" model might only give you two of ten.
reply▲I agree with the sentiment.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers, the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favour of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
reply▲> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather not prone to people confidently posting wrong stuff about on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.
reply▲I think its more the site's architecture that promotes this behavior.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.
reply▲Karrot_Kream9 hours ago
[-] It's hard to have an informed opinion on Algebraic Geometry (requires expertise) and not many people are going to upvote and engage with you about it either. It's a lot easier to have an opinion on tech execs, current events, and tech gossip. Moreover you're much more likely to get replies, upvotes, and other engagement for posting about it.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
reply▲> There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency, in the capital sense, in that it can be leveraged and has a time horizon for value of exchange, and in the timeliness/recency biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
reply▲furyofantares4 hours ago
[-] I've experienced that. Absolutely.
But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.
reply▲bboygravity9 hours ago
[-] I totally agree that most articles (pretty much all news/infotainment) are devoid of any information.
At the same time, all I need to know about Sam is in the company/"non-profit's" name, which in itself is now simply a lie.
reply▲> I think that leaks like this have negative information value to the public.
To most people I'd think this is mainly for entertainment purposes, i.e. 'palace intrigue', and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's a good spin but coming from someone who has an anonymous profile how do we know it's true (this is a general thing on HN people say things but you don't know how legit what they say is or if they are who they say they are).
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions exactly? Again do most people really care about this (reading the story) and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
This is a well known trope and is discussed in other forms ie 'NY Times story is wrong move to the next story and you believe it' ie: https://www.epsilontheory.com/gell-mann-amnesia/
reply▲> coming from someone who has an anonymous profile how do we know it's true
My profile is trivially connected to my real identity, I am not anonymous here.
reply▲> That's a good spin but coming from someone who has an anonymous profile how do we know it's true (this is a general thing on HN people say things but you don't know how legit what they say is or if they are who they say they are).
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
reply▲> How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
I agree with that of course.
reply▲This is a strangely defensive comment for a post that, at least on the surface, doesn't seem to say anything particularly damning. The fact that you're rushing to defend your CEO sort of proves the point being made, clearly you have to make people believe they're a part of something bigger, not just pay them a lot.
The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self serving in his actions to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission and that clearly is different than just paying them a lot (which of course, he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
reply▲The headline makes it sound like he's angry that Meta is poaching his talent. That's a bad look that makes it seem like you consider your employees to be your property. But he didn't actually say anything like that. I wouldn't consider any of what he said to be "slams," just pretty reasonable discussion of why he thinks they won't do well.
I'd say this is yet another example of bad headlines having negative information content, not leaks.
reply▲makeitdouble8 hours ago
[-] With no dogs in the fight, the very fact he's talking to his employees about a competitor's hiring practices is noteworthy.
The delivery of the message can be milder and better than how it sounds in the chosen bits, but the overall picture kinda stays the same.
reply▲To me, there’s an enormous difference between “they pay well but we’re going to win the race” and “my employees belong to me and they’re stealing my property.”
Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.
reply▲Your comment comes across dangerously close to sounding like someone that has drunk the kool-aid and defends the indefensible.
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you can hear what was needed whether it was out of context or not. Sam is not a great human being to be placed on a pedestal that never needs anything he says questioned. He's just a SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
reply▲Technically, after being labelled a missionary you can't really blame people for spreading the word of the almighty.
reply▲threetonesun9 hours ago
[-] Little Miyazaki knock-offs posted on the Nazi hellsite formerly known as Twitter aren't really helping how the "public" feels about OAI either.
reply▲hilarious seeing that he views it this way when his company is so very well known for taking (strong arguments say stealing) everything from everyone.
i’m noticing more and more lately that our new monarchs really do have broken thought patterns. they see their own abuse towards others as perfectly ok but hilariously demand people treat them fairly.
small children learn things that these guys struggle to understand.
reply▲I think they understand that it's all performative
reply▲Sam comes across as an extremely calculating person to me. I'm not suggesting he's necessarily doing this for bad reasons, but it's very clear to me the public facing communications he makes are well considered and likely not fully reflective of his actual views and positions on things, but instead what he believes to be power maximising.
He's very good at creating headlines and getting people talking online. There's no doubt he's good at what he does, but I don't know why anyone takes anything he says seriously.
reply▲unfitted25458 hours ago
[-] This interview with Karen Hao is really good (
https://www.youtube.com/watch?v=8enXRDlWguU), she interviews people that have had 1 on 1 meetings with Sam, and they always say he aligned with them on everything to the point where they don't actually know what he believes. He will tailor his opinions to try and weave in trust.
reply▲Even more blatantly and directly, "Don't you dare use our model, trained on other people's work, to train yours".
reply▲codingwagie12 hours ago
[-] The value of these researchers to meta is surely more than a few billion. Love seeing free markets benefit the world
reply▲Im a bit torn about this. If it ends up hurting OpenAI so much that they close shop, what is the incentive for another OpenAI to come up?
You can spend time making a good product and get breakthroughs and all it takes is for meta to poach your talent, and with it your IP. What do you have left?
reply▲How do you figure? If you assume that Meta gets the state of the art model, revenue is non-existent, unless they start a premium tier or put ads. Even then, its not clear if they will exceed the money spent on inference and training compute.
reply▲It's worth a few billion (easily) to keep people's default time sink as aimlessly playing on FV/IG as opposed to chatting with ChatGPT. Even if that scroll is replaced by chatting with llama as opposed to seeing posts.
reply▲amarcheschi10 hours ago
[-] Had he been doing the poaching, he would be saying mercenaries will beat missionaries. Why believe in ceos words at this point
reply▲He says, while driving $4 million car. :-)
reply▲joshdavham11 hours ago
[-] I think Meta already has very deep cultural problems.
If you've ever browsed teamblind.com (which I strongly recommend against as I hate that site), you'll see what the people who work at Meta are like.
reply▲That's true with every Tech company employees. They all want $1 M TC and work 10 hours / week
reply▲FooBarBizBazz7 hours ago
[-] The 4% Rule means everybody with $25M is getting $1M per year for zero hours of work per week. Google tells me Sam has $1.7B.
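The arithmetic behind that claim, as a minimal sketch (the function name and flat-rate assumption are mine; the actual 4% rule also involves inflation adjustment and sequence-of-returns risk):

```python
# Rough sketch of the 4% rule: a safe-withdrawal heuristic, not a guarantee.
# Assumes a flat withdrawal rate; real retirement planning is more nuanced.
def annual_withdrawal(portfolio: float, rate: float = 0.04) -> float:
    """Yearly income implied by a fixed withdrawal rate."""
    return portfolio * rate

print(annual_withdrawal(25_000_000))  # 4% of $25M -> 1000000.0
```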
reply▲What are they like? since you recommend not browsing this site?
reply▲The posts from Meta employees on teamblind are generally cynical, status/wealth-obsessed and mean.
reply▲thomassmith6510 hours ago
[-] Off topic, but the existence of this teamblind.com site escaped my notice till now.
Is there a particular reason to hate it (aside from it being social media)?
reply▲Mainly because it brings out the worst in people. It’s easy to read Blind too much and take on a very cynical, money-driven view of everything in your life, which of course a Blind addict would justify as clear-eyed and pragmatic.
reply▲It has obvious pros, but since you asked about the cons —- anonymity brings the worst out of people; TC chasing leads to a reductionist view of people’s values and skills.
For example, unlike HN, you don't often see technical discussions on Blind, by design. So it is a "meta"-level strategy discussion of the job, and it skews toward politics, gossip, stock price, etc.
This is compounded by it being social media, where negativity can be amplified 5-10x.
reply▲I hate teamblind because it makes me feel really negative about our industry.
I actually really like tech - the problems we get to work on, the ever-changing technological landscape, the smart and passionate people, etc, etc. But teamblind is just filled with cynical, wealth-obsessed and mean careerists. It's like the opposite of HN in many ways.
And if you ever wondered where the phrase "TC or GTFO" originated... it's from teamblind.
reply▲I looked at it once, seemed full of young men discussing hair loss issues and how to get a girlfriend.
reply▲Yeah, yeah, typical rich guy whining when labor makes some gains.
reply▲Isn't Meta's open model closer to OpenAI's mission then OpenAI.
reply▲Ironically, Altman's statement wasnt all that wrong, in a sense.
He just mixed up who the "Missionaries" and who the "Mercenaries" were.
reply▲philosophty12 hours ago
[-] Two rich kids who have mostly paid-to-win their way into the game are predictably fighting using money, because that's all they bring to the table.
reply▲At least from the outside, OpenAI's messaging about this seems obnoxiously deluded, maybe some of those employees left because it starts feels like a cult built on foundations of self importance? Or maybe they really do know some things we don't know, but it seems like a lot of people are eager to keep giving them that excuse.
But then again, maybe they have such a menagerie of individuals with their heads in the clouds that they've created something of an echo chamber about the 'pure vision' that only they can manifest.
reply▲Yeah it's a tough spot he's found himself in. How do you convince people who know more about this stuff than anybody that you're barreling towards something that's an improbability? It seems that most of them have made their choice to turn more towards reality, the material reality, and register their skill with an organization that holds that in higher regard. I can't blame them, and neither can he, but he also can't help himself when it comes to reiterating the hype. He might be projecting about that 'deep-seated cultural issue' he's prescribing to meta, and lashing out against those who don't accept it.
reply▲lenerdenator12 hours ago
[-] > I can't blame them, and neither can he
He's certainly trying with statements like this.
To be fair, he's hardly alone. Business is built on dupers and dupees. The duper talks about how important the mission of the business is while taking the value of the labor of the dupee. If he had to work for the money he pays the dupee, he would be a lot less interested in the mission.
reply▲reactordev12 hours ago
[-] I think it’s more of the latter. We’ve already seen others beat them at their own game, only for them to come back with a new model.
In the end, this is the same back and forth that Apple and Sun shared in the late 90s or Meta and Google in 2014. We could have made non-competes illegal today but we didn’t.
reply▲(Post-employment) non-competes have been unenforceable in California since 1872. They became illegal in California last year.
A federal rule would be nice, but the state rule where a lot of the development happens could be sufficient.
reply▲Ehh, this take feels ungenerous to me. You don't have to believe a private firm is a holy order for it to benefit from a culture filled with "we believe this specific project is Important" people vs "will work at whatever shop drops the most cash" people.
Mercenaries by definition select for individual dollar outcomes, and it's impossible for that not to impact the way they operate in groups, which is generally to the group's detriment unless management is incredibly good at building group-first incentive structures that don't stomp individual outcomes.
That said, mercenary-missionaries are definitely a thing. They're unstoppable forces culturally and economically, and that could be who we're seeing move around here.
reply▲In our times, every narcissist sees himself as a saint and a messiah on a mission to save the world, while doing the complete opposite of that. And they get very angry when they see other narcissists trying to do the same.
reply▲The modern iPhone vs. Android battle
reply▲It's weird to hear Sam Altman call the employees of OpenAI 'missionaries' based on their intense policies that seem determined to control how people think and act.
Imagine if in 2001 Google had said "I'm sorry, I can't let you search that" if you were looking up information on medical symptoms, or doing searches related to drugs, or searching for porn, or searching for Disney themed artwork.
It's hard for me to see anyone with such a strong totalitarian control over how their technology can be used as a good guy.
reply▲I understand the massive anti-OpenAI sentiment here, but OpenAI makes a really great product. ChatGPT and its ecosystem are widely used by millions every day to make them more productive. Losing these employees doesn’t bode well for users.
Meta doesn’t really have a product unless you count the awful “Meta AI” that is baked into their apps. Unless these acquisitions manifest in frontier models getting open sourced, it feels like a gigantic brain drain.
reply▲jstummbillig10 hours ago
[-] Meta's real AI product is actually even worse than that, and insidious: they try to run over companies that are (in contrast) successfully advancing AI, using money they made by hooking teens on IG, and then just use the resulting inferior product as a marketing tool.
reply▲agnosticmantis6 hours ago
[-] What’s the profile of these talents like?
And what are the skills that are most highly sought after?
Is it the researchers or the system engineers that scale the prototypes? Or other skills/expertise?
reply▲When even Scam Altman dislikes Zuck, we have reached AI bottom.
reply▲“First comes the Missionary, then comes the Mercenary, then comes the Army”
Wonder if that applies here.
reply▲I don't know which pedestal Sam is standing on to point fingers at others. Who are the missionaries and who are the mercenaries? What part of OpenAI is open?
reply▲ Another said: “Yes we’re quirky and weird, but that’s what makes this place a magical cradle of innovation,” wrote one. “OpenAI is weird in the most magical way. We contain multitudes.”
i thought i was reading /r/linkedinlunatics
reply▲Sam Altman complaining about mercenary behavior from competitors... Talk about the pot calling the kettle black. Guess he's unhappy he's not the one being mercenary in this situation.
reply▲this open competition for talent is better than that time all the big tech firms were working to actively suppress wages.
reply▲We're talking about a few hundred people here, globally. The entire goal is to suppress everyone else's wages to zero.
reply▲This is a bad year to talk about missions and ideologies. Just take the money and run
reply▲> hinting that the company is evaluating compensation for the entire research organization.
TL ; DR
Some other company paid more and got engineers to join them because the engineers care more about themselves and their families than some annoying guy's vision.
reply▲sashank_15099 hours ago
[-] Do we know what numbers we are talking about here? I’ve heard:
1. “So much money your grandchildren don’t need to work”
2. 100M
3. Not 100M
So what is it? I’m just curious; I find 100M hard to believe, but Zuck is capable of spending a lot.
reply▲That didn't work for the American colonies: Portugal and Spain were very focused on being missionaries and were beaten by the Dutch and Brits, who just wanted to make money.
reply▲0xbadcafebee11 hours ago
[-] 90% of the reason Spain and Portugal explored the new world was for wealth (spices, gold/silver, sugar, brazilwood). The rest of the reason was to spread their religion and increase their national power. Missions only popped up 30 years after they first began colonization.
The Dutch, British, and French were initially brought to the new world because they'd heard how rich it was and wanted a piece of the pie. It took them a while to establish a hold because the Spanish defended it so well (incumbents usually win) and also they kept settling frozen wastelands rather than tropical islands.
The religiously persecuted groups (who were in no way state-sponsored) came 120 years after Spain's first forays.
reply▲The motive for settling the colonies in New England was emphatically not to make money.
reply▲MangoToupe12 hours ago
[-] That really depends on the time period. The puritanical core of the Massachusetts Bay Colony was certainly replaced by commercial/trade interests long before their war with the crown.
reply▲dragonwriter11 hours ago
[-] The idea that the Spanish and Portuguese colonial effort wasn't driven by economic gain above all else is also beyond silly.
reply▲I believe Portuguese got there looking for a shorter route to India (money) and eventually settled the land for gold, silver, brazilwood, diamonds and sugarcane (money).
reply▲Nah, they very much wanted to do missionary work and find Prester John; they invested in a lot of shitty missions for absolutely no reason other than to try to convert people to the church.
Conquerors is a great read on the subject: https://en.wikipedia.org/wiki/Conquerors:_How_Portugal_Forge...
And don't get me wrong, they were very successful at filling their pockets with gold, but they could have been even more successful if they had been mostly mercenaries like the Brits and the Dutch.
reply▲In what way did the Spanish lose out to the Dutch or the Brits? Did you only think of North America and forget everything south of the Rio Grande (and a good deal north of it)?
reply▲In any case, this is business, and this is how business often operates. Nice try on Sam's part to make it seem like a bad thing and as if everybody were in it for the good of the cause.
reply▲Poaching. Such a nasty word for merely offering an employee a better deal. A place where his work is not underpaid.
reply▲Missionaries vs mercenaries? Which company is releasing open source models? Please remind me I forgot.
reply▲throwawayq342312 hours ago
[-] I'm sorry, how is the mission of OpenAI any different from their competitors'? They are for-profit, they offer absurd salaries, etc.
reply▲jekwoooooe11 hours ago
[-] No no no don’t you get it they have this multi entity “non profit” and something something “capped profit” yet everyone is employed by the for profit. But they just want to give AGI for free right
reply▲AI as a religion should scare any investor. Where are the products that you can sell for moolah?
reply▲Religion delivers a recurring revenue model that isn't taxed and where criminal confessions can't be used in court if made to a high-ranking company officer. It's the perfect business.
reply▲lenerdenator12 hours ago
[-] The product is a model that takes knowledge and puts it in a form that can act on it without a human doing the acting.
You sell it to people who don't want to pay other people while getting the same productivity.
reply▲Religions and cults are actually extremely lucrative.
For many investors the product is the hype.
reply▲They have product that they derive revenue from.
reply▲The most popular religions are super wealthy and lucrative
reply▲928340923212 hours ago
[-] Is he comparing working at OpenAI to religion? Is that not a crazy analogy to make? Cult like behavior to say the least. It's also funny the entire business of AI is poaching content from people.
reply▲So he believes OpenAI is in some kind of moral or humanitarian mission? Is he lying or just delusional?
reply▲That's rich. Almost as rich as Sam.
reply▲0xbadcafebee11 hours ago
[-] "Talent" doesn't make a business successful, and paychecks aren't the reason most people switch jobs. This is like Sam announcing to the world "it sucks working for our company, don't come here".
reply▲Hmm on the one hand somebody could have unimaginable wealth but on the other hand they could be in a religion started by a former reddit ceo, it is truly an unsolvable riddle
reply▲lenerdenator12 hours ago
[-] For a group of people who talk incessantly about the value of unrestricted markets, tech bros sure hate having to participate in free labor markets.
Being a missionary for big ideas doesn't mean dick to a creditor.
reply▲reverendsteveii12 hours ago
[-] Capitalists don't like markets, or at least not the markets that we're told capitalism will bring about. Those markets are supposed to increase competition and drive down prices until companies are making just barely enough to survive. What capitalist wants that for himself? He wants decreased competition and sky-high prices for himself, and increased competition and lower prices for his competitors and suppliers.
reply▲Dracophoenix9 hours ago
[-] > Capitalists don't like markets, or at least not the markets that we're told capitalism will bring about.
The "markets" most people learn about are artificial Econ 101 constructions. They're pedagogical tools for explaining elasticity and competition under the assumption that all widgets are equally and infinitely fungible, an assumption which ignores marginal value, individual preferences, innovation, and other things that make up real markets.
> What capitalist wants that for himself? He wants decreased competitions and sky high prices for himself, and increased competition and lower prices for his competitors and suppliers.
The capitalist wants to be left to trade as he sees fit without state intervention.
reply▲Like the culture of OpenAI where Microsoft threatened to poach the entire staff so they caved?
reply▲I like sama and many other folks at OpenAI, but I have to call things how I see them:
"What Meta is doing will, in my opinion, lead to very deep cultural problems. We will have more to share about this soon but it's very important to me we do it fairly and not just for people who Meta happened to target."
Translation from corporate-speak: "We're not as rich as Meta."
"Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems."
Translation from corporate-speak: "We're not as rich as Meta."
"And maybe more importantly than that, we actually care about building AGI in a good way." "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be." "Missionaries will beat mercenaries."
Translation from corporate-speak: "I am high as a kite." (All companies building AGI claim to be doing it in a good way.)
reply▲ASalazarMX12 hours ago
[-] The perfect corollary is that Altman is as mercenary, if not more, than Zuckerberg, given all the power grabs he did in OpenAI. Even the "Open" in OpenAI is a joke.
He just has less options because OpenAI is not as rich as Meta.
reply▲> "But we have the core right in a way that I don't think anyone else quite does"
Translation from corpospeak: "I think my pivot to for-profit is very clever and unique" :)
reply▲elzbardico11 hours ago
[-] Said the guy whose life mission seems to be to convert a non-profit into a for-profit entity.
reply▲look around - CA - missionaries have the best real estate. And another related note on connection between strong promotion of devotion to ideas and it is being a good business - the Abrahamic monotheism was a result of the successful marketing campaign "only the donations made here are donations to the real god" of that Temple back then against several other competing ones. (Curiously that the current historic stage of AI, on the cusp, be it 3 or 30 years, of emergence of AGI is somewhat close to that point in history back then. Thus in particular a flood of messiahs and doom sayers would only increase.)
reply▲Quarrelsome10 hours ago
[-] > the Abrahamic monotheism was a result of the successful marketing campaign
I thought it was because everyone was accepted, technically equal, and sins were seen as something inherent and forgivable (at least in Christianity), whereas paganism and polytheisms can tend toward rewarding those with greater resources (who can afford to sacrifice an entire bull every religious cycle), thereby creating a form of religious inequality. At least that was one of the more compelling arguments I've heard for the spread of Christianity throughout the Roman Empire.
reply▲timewizard10 hours ago
[-] In short: "Liars prosper in the short term."
reply▲They can prosper longer than you can stay liquid or even alive :)
reply▲That depends on the regulatory environment and the degree of market monopolization.
reply▲unit_circle11 hours ago
[-] Side note: I'm noticing more and more of these simple, hyperbolic headlines specifically of statements that public figures make. A hallmark of the event being reported is a public figure making a statement that will surely have little to no effect whatsoever.
Calling these statements "slamming" (a specific word I see with curious frequency) is so riling to me because they are so impotent but are described with such violent and decisive language.
Often it's a politician, usually liberal, and their statement is such an ineffectual waste of time, and outwardly it appears wasting time is most of what they do. I consider myself slightly left of center, so seeing "my group" dither and waste time rather than organize and do real work frustrates me greatly. Especially so since we are provided with such contrast from right of center where there is so much decisive action happening at every moment.
I know it's to feed ranking algorithms, which causes me even more irritation. Watching the brain rot get worse in real time...
reply▲neuroelectron11 hours ago
[-] "Do Not Be Explicitly Useful"—Strategic Uselessness as Liability Buffer
This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), it triggers legal, regulatory, and reputational red flags.
a. Utility → Responsibility
If a system is predictably effective, users will reasonably rely on it.
And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.
This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.
b. Avoid “Known Use Cases”
Some companies will actively scrub capabilities once they’re discovered to work “too well.”
For instance:
A model that reliably interprets radiology scans might have that capability turned off.
A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish, or insert nonsense case-law citations.
I think we see this a lot with ChatGPT. It's constantly getting worse in real-world use while excelling at benchmarks. They're likely cheating on benchmarks, and probably forced to, by using "leaked" data.
reply▲I hope xAI wins. I think Sam's self-portrayal as a missionary has a lot of irony - I see him as the ultimate mercenary.
It's always challenging to judge based entirely on public perceptions, but at some point public evidence adds up. The board firing, getting maybe fired from YC (disputed), people leaving to start Anthropic because of him, people stating they don't want him in charge of AGI, all the other execs leaving, his lying to Congress, his lying to the board; his general affect just seems off, not in an aspie way, but in some dishonest way. Yeah, it's subjective, but it's a data point, and it's different from Zuckerberg, Musk, etc., who come across as earnest. Even PG said that if Sam were dropped on an island of cannibals, you'd come back to find him king.
I'm rooting for basically any of the other (American) players in the game to win.
At least Zuck is paying something close to the value these people might generate, instead of having them sign hostile agreements to claw back their equity and then feigning ignorance. If NBA all-stars get $100M+ contracts, it's not crazy for a John Carmack type to command the same or more; the hard part is identifying the talent, not justifying the value created by the leverage of the right talent (which is huge).
reply▲Couldn't think of a worse steward of AI than Meta/Zuck (not a fan of OpenAI either). One of the most insidious companies out there.
Sad to see Nat Friedman go there. He struck me as "one of the good ones" who was keen to use tech for positive change. I don't think that is achievable at Meta
reply▲But if they give you tens of millions of reasons to go there . . .
reply▲For Sam to claim that Meta is hiring mercenaries while OpenAI is hiring missionaries seems a bit counter to OpenAI's stated mission, given its closed-weight models versus Meta's open weights.
I could definitely see those who are 'missionaries' wanting to give it away. ¯\_(ツ)_/¯
reply