- He's on track to become a top-tier AI researcher. Despite having only one year of a PhD under his belt, he has already received two top awards as a first author at major AI conferences [1]. Typically, it takes many more years of experience to do research that receives this level of recognition. Most PhDs never get there.
- Molmo, the slate of open vision-language models that he built & released as an academic [2], has direct bearing on Zuck's vision for personalized, multimodal AI at Meta.
- He had to be poached from something, in this case, his own startup, where in the best case, his equity could be worth a large multiple of his Meta offer. $250M likely exceeded the expected value of success, in his view, at the startup. There was also probably a large premium required to convince him to leave his own thing (which he left his PhD to start) to become a hired hand for Meta.
Sources:
Exactly. What's the likelihood of that?
Sufficiently high that Meta is willing to pay such an amount of money. :-)
I'd forget the word shareholder even exists.
Wouldn't you yearn for more impact, given how much that amount of resources could improve the lives of many if used wisely?
- Percy Bysshe Shelley
Zuck's advantage over Sir Isaac (Newton) is that the market for top AI researchers is much more volatile than the market for South Sea tradeables was before the bubble burst?
Either that, or $250M is cheap for cognitive behavioral therapy.
It's fine to think that we're in a bubble and to post a comment explaining your thoughts about it. But a comment like this is a low-effort, drive-by shoot-down of a comment that took at least a bit of thought and effort, and that's exactly what we don't want on HN.
>"The first is enabling business AIs within message threads ... We’re expanding business AIs to more businesses in Mexico and the Philippines. And we expect to broaden availability later this year as we keep refining the product."
>"The second area of business AI development is within ads ... We’re currently testing this with a small number of businesses across Feed and Reels on Facebook and Instagram as well as Instagram Stories."
>"And then the final area that we are exploring is business AIs on business websites to help better support businesses across all platforms ... and we’re starting to test that with a few businesses in the US."
So it's just very small scale tests so far - not the sort of thing that would have any measurable impact on their revenue.
[0]: https://s21.q4cdn.com/399680738/files/doc_financials/2025/q2...
Hitmen get what, $5-50k? And that’s for murder.
Mine is a hell of a lot lower than $250M, and I would bet half of that that yours is too.
Meta will have more AI compute than he ever hoped to get at his startup - or at most other startups.
But if he's getting real, non-returnable, actual money from Meta on the basis of a back-of-envelope valuation of his own startup, driven by Meta's need to satiate Mark Zuckerberg's FOMO, then good for him.
This bubble cannot burst soon enough, but I hope he gets to keep some of this; he deserves it simply for the absurd comedy it has created.
I hope for all of our sakes that you're right. I feel confident that you're not :(
I have two questions about this, really:
- is he going to be the last guy getting this kind of value out of a couple of research papers and some intimidated CEO's FOMO?
- are we now entering a world where people are effectively hypothetical acquihires?
That is, instead of hiring someone because they have a successful early stage startup that is shaking the market, you hire someone because people are whispering/worried that they could soon have a successful early stage startup?
The latter of these is particularly worrisomely "bubbly" because of something that people don't really recognise about bubbles unless they worked in one. In a bubble, people suspend their disbelief about such claims and they start throwing money around. They hire people without credentials who can talk the talk. And they burn money on impossible ideas.
The bubble itself becomes increasingly intellectually dishonest, increasingly unserious, as it inflates. People who would be written off as fraudsters at any other time are taken seriously as if they are visionaries and ultra-productive people, because everyone's tolerance for risk increases as they become more and more desperate to ride the train. People start urgently taking impossible things at face value, weird ideas get much further advanced much more quickly, and grifters get closer to the target -- the human source of the cash -- faster than due diligence would ordinarily allow them.
"This guy is so smart he could have a $1bn startup just like that" is an obvious target for con artists and grifters. And they will come.
For clarity I am ABSOLUTELY NOT saying that the subject of this article is such a person. I am perfectly happy to stipulate that he's the real deal.
But he is now the template for a future grift that is essentially guaranteed to happen. Maybe it'll be a team of four or five people who get themselves acquihired because there's a rumour they are going to have billions of dollars of funding for an idea. They will publish papers that in a few months will be ridiculed. And they will disappear with a lot of money.
And that could burst your bubble.
You're starting to sound like Dario, who likes to accuse others of being intellectually dishonest and unserious. Anyway, perhaps the strict wage structure of Anthropic will be its downfall in this crazy bubble?
The same thing happened in the dotcom era, the same thing happened in the run-up to the subprime mortgage crisis. Every single bubble displays these characteristics.
Nightmare Future!
Yes, I want them to excel in sports, but these articles provide a crucial counterweight to the all-too-common narrative that becoming a pro athlete is the ultimate dream. Instead, these stories show that being exceptional in STEM isn’t just something you do because you are curious, you find it interesting, you enjoy it (all great motivators), or to please parents and teachers (generally, probably, lesser quality motivators): these stories show that being exceptional in STEM can open doors to exciting, high-impact careers.
It’s been amazing to watch my kids begin to reframe STEM not as the “sensible” thing to do, but as something genuinely cool, aspirational, and full of opportunity.
It's the same reason that sports stars, musicians, and other entertainers that operate on a global scale make so much more money now than they did 100 years ago. They are serving a market that is thousands of times larger than their predecessors did, and the pay is commensurately larger.
1) The winner immediately becomes a monopoly
2) All investments are directed from competitors, to the winner
3) Research on AGI/ASI ceases
I don't see how any of these would be viable. Right now there's an incremental model arms race, with no companies holding a secret sauce so powerful that they're miles above the rest.
I think it will continue like it does today. Some company will break through with some sort of AGI model, and the competitors will follow. Then open source models will be released. Same with ASI.
The things that will be important and guarded are: data and compute.
Yes, but just like in an actual arms race, we don't know whether this could evolve into a winner-takes-all scenario, very quickly and quite literally.
So maybe the issue is more about staying in the top N, and being willing to pay tons to make sure that happens.
That's probably true, but at the moment the only thing that creates something resembling a moat is the fact that progress is rapid (i.e. the top players are ~6-12 months ahead of the already-commoditized options, but the gap in capabilities is quite large). If progress plateaus at all, the barrier to being competitive with the top dogs is going to drop a lot, and anyone trying to extract value out of their position is going to attract a ton of competition, even from new players.
We are already seeing diminishing returns from compute and training costs going up, but as more and more AI is used in the wild and pollutes training data, having validated data becomes the moat.
If this were a winner-take-all market with low switching costs, we'd be seeing instant majority market domination whenever a new SOTA model comes out every few weeks. But this isn't happening in practice, even though it's much easier to switch models on OpenRouter than with many other inference providers.
I get that the perception of "winner-take-all" is why the salaries are shooting up, but it's at odds with the reality.
At work we are optimising cost by switching in different models for different agents based on use case, and where testing has demonstrated a particular model's output is sufficient.
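To give a sense of how low the switching cost is, here's a minimal sketch assuming OpenRouter's OpenAI-compatible endpoint; the model slugs and the task-to-model mapping are illustrative, not our actual setup:

    # A rough sketch, assuming OpenRouter's OpenAI-compatible API.
    # The model slugs and task routing below are illustrative only.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",  # placeholder
    )

    # Hypothetical mapping: cheapest model that passed our evals per use case.
    MODEL_FOR_TASK = {
        "summarize": "anthropic/claude-3.5-haiku",
        "code_review": "openai/gpt-4o",
        "triage": "meta-llama/llama-3.1-8b-instruct",
    }

    def run(task: str, prompt: str) -> str:
        # Switching providers/models is literally just a different string here.
        resp = client.chat.completions.create(
            model=MODEL_FOR_TASK[task],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

The point being: once your evals are in place, moving an agent from one vendor's model to another is a config change, not a migration.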
> If the very best LLM is 1.5x as good as the next-best, then pretty much everyone in the world will want to use the best one
Is it? Gemini is arguably better than OAI in most cases, but I'm not sure it's as popular among the general public.
I think what we're seeing here is superstar economics, where the market believes the top players are disproportionately more valuable than average. Typically this is bad, because it leads to low median compensation but in this rare case it is working out.
It would be unfortunate if something like Grok takes the cake here.
Well only if the price is the same. Otherwise people will value price over quality, or quality over price. Like they do for literally every other product they select...
You are not one random hyperparameter away from the SciFi singularity. You are making iterative improvements and throwing more compute at the problem, as are all your competitors, all of which are to some degree utterly exchangeable.
Yes, the figures are nuts. But compare them to F1 or soccer salaries for top athletes. A single big name can drive billions in that context at least, and much more in the context of AI. $50M-$100M/year, particularly when some or most is stock, is rational.
Like, I definitely think it is better for society if the economic forces incentivize the pursuit of knowledge more than the pursuit of pure entertainment [0]. But I think we also need to be a bit careful here. You need some celebrities to be the embodiment of an idea, but the distribution can be too sharp and undermine what I think we both agree is the goal.
Yeah, I think, on average, a $100M researcher is generating more net good for society (and the world) than a $100M sports player or actor. Maybe not in every instance, but I feel pretty confident about this on average. But at the same time, do we get more from one $100M researcher or from 100 $1M researchers? It's important to recognize that we're talking about such large sums of money that at any of these levels people would be living in extreme luxury. Even in SV the per capita income is <$150k/yr, while the median income is about half that. You'd easily be in the top 1%. (The top 10% threshold for San Jose is $275k/yr.)
I think we also need to be a bit careful in recognizing how money can misalign incentives and goals. Is the money encouraging more people to do research and push humanity's knowledge forward? Or is the money now just another thing for people who only want money to exploit, people with no interest in advancing humanity's knowledge? Obviously it is a lot more complicated and both are happening, but it is worth recognizing that if things shift towards the latter, then it actually becomes harder to achieve the original goals.
So on paper, I'm 100% with you. But I'm not exactly sure the paper is matching reality.
[0] To be clear, I don't think entertainment has no value. It has a lot and it plays a critical role in society.
For whatever reason, remuneration seems more concentrated than fundamentals. I don't begrudge those involved their good luck, though: I've had more than my fair share of good luck in my life, it wouldn't be me with the standing to complain.
“Our Rock Stars Aren't Like Your Rock Stars”
Locking up more of the world's information behind their login wall, or increasing their ad sales slightly, is not enough to make that kind of money. We can only speculate, of course, but at the same time I think the general idea is pretty clear: AI will soon have a lot of power, and control over that power is thought to be valuable.
The bit about "building great things" certainly rings true. Just not in the same way artists or scientists do.
If you do not believe this narrative, then your .com era comment is a pretty good analysis.
> There is a group of wealthy individuals who have bought in to the idea that the singularity is months away.
My question is: how many months need to pass until they realize it isn't months away? It used to be 2025, then 2027, now 2030. I know these are not all the same people, but the trend is to keep pushing it back. I guess Elon has been saying full self-driving is a year away since 2016, so maybe this belief can sustain itself for quite some time.
So my second question is: does the expectation of achievements being so close lengthen the time to make such achievements?
I don't think it is insane to think it could. If you think it is really close you'd underestimate the size of certain problems. Claim people are making mountains out of molehills. So you put efforts elsewhere, only to find that those things weren't molehills after all.
Predictions are hard and I think a lot of people confuse critiques with lack of motivation. Some people do find flaws and use them as excuses to claim everything is fruitless. But I think most people that find flaws are doing so in an effort to actually push things forward. I mean isn't that the job of any engineer or scientist? You can't solve problems if you can't identify problems. Triaging and prioritizing problems is a whole other mess, but it is harder to do when you're working at the edge of known knowledge. Little details are often not so little.
It's going to persist until shareholders punish them for it. My guess is it's going to be some near-random-trigger, such as a little-known AI company declaring bankruptcy, but becoming widely reported. Suddenly, investing in AI with no roadmap to profitability will become unfashionable, budget cuts, down-rounds, bankruptcies and consolidation will follow. But there's no telling when this will be, as there's elite convergence to keep the hype going for now.
Telco capex was $100 billion at the peak of the IT bubble, give or take. There's going to be $400 billion investments in AI in 2025.
While there's a lot of money going towards research, there's less than there was years ago. There's been a shift towards engineering research and ML Engineer hiring. Fewer positions for lower level research than there were just a few years ago. I'm not saying don't do the higher level research, just that it seems weird to not do the lower level when the gap is so small.
I really suspect that the winner is going to be the one that isn't putting speed above all else. Like you said, first to market isn't everything. But if first to market is all that matters, then you're also more likely to just be responding to noise in the system: the noisy signal of figuring out what that market is in the first place. It's really easy to get off track with that and lose sight of the actual directions you need to pursue.
> given what even a resource-constrained DeepSeek did to them.
I think a lot of people have a grave misunderstanding of DeepSeek. The conversation is usually framed as a comparison to OpenAI. But this would be like comparing how much it cost to make the first iPhone (the literal first working one, not how much each Gen 1 iPhone cost to make) with the cost to make any smartphone a few years later. It's a lot easier and cheaper to make something when you have an example in hand, just like it is a lot easier to learn calculus than it is to invent calculus. That framing also weirdly undermines DeepSeek's own accomplishments. They did do some impressive stuff, but it's a much more technical and less exciting story (at least to the average person; it definitely is exciting to other AI researchers).
Remember capsule networks?
"Our key innovation is a new collection of datasets called PixMo that includes a novel highly-detailed image caption dataset collected entirely from human annotators using speech-based descriptions, and a diverse mixture of fine-tuning datasets that enable new capabilities. Notably, PixMo includes innovative 2D pointing data that enables Molmo to answer questions not just using natural language but also using non verbal cues. We believe this opens up important future directions for VLMs enabling agents to interact in virtual and physical worlds. The success of our approach relies on careful choices for the model architecture details, a well-tuned training pipeline, and most critically the quality of our newly collected datasets, all of which we have released."
This is a solid engineering project with a research component: they collected some data that ended up being quite useful when combined with pre-existing tech. But this is not rocket science, and it is not a unique insight. I don't want to devalue the importance of solid engineering work, but you normally don't get paid as much for non-unique engineering expertise, and nothing here sounds unique to me. This seems like a good senior-staff research eng project in a big tech company these days. You don't get paid $250M for that kind of work. I know very talented people who do this kind of work in big tech, and from what I can tell, many of them have much more fundamental insight and experience, and have led larger teams of engineers, and their comp does not surpass $1-2M tops (taking a very generous upper bound).
You bring up the only relevant data point at the end, as a throw in. Nobody outside of academia cares about your PhD and work history if you have a startup that is impressive to them. That's the only reason he's being paid.
Don’t get me wrong, they are smart people - but so are thousands of other researchers you find in academia etc.; the difference here is the scale of the operation.
Even if it’s 1% at the scale you’re talking about, that’s $1B to the company. So it's still worth it.
Wild.
For AI researchers pursuing AGI, this variance between distributions is arguably even worse than the variance between samples - there's no past data whatsoever to build estimates from; it's all vibes.
You can argue the distribution is hard to pin down (hence my note on risk), but let’s not pretend there’s zero precedent.
If it turns out to be another winter at least it will have been a fucking blizzard.
But the distribution for individual researcher salaries really is pure guesswork. How does the datapoint of "Attention Is All You Need" fit into this distribution? The authors had very comfortable Google salaries but certainly not 9-figure contracts. And OpenAI and Anthropic (along with NVIDIA's elevated valuation) are founded on their work.
I'd argue the top individual researchers figure into the overall AI spend. They are the people leading teams/labs and are a marketable asset in a number of ways. Extrapolate this further outward - why does Jony Ive deserve to be part of a $6B acquihire? Why does Mira Murati deserve to be leading a 5-month-old company valued at $12B with only 50 employees? Neither contributed fundamental research leading to where we are today.
How much revenue does Google make in a day? £700m+.
They're not high because of performance/results alone.
I can't help but think that the structure of this hints at a bit of a scam-y element, where a bunch of smart people are trying to pump as much money as possible out of some rich people, with questionable chances of making it back. Imagine that the people on The List already had all the keys needed to build AGI if they put their knowledge together; what action do you think they would take?
You can just doodle away with whatever research interests you the most, there's no need to deliver a god mode AI to the great leader even if you had the ability to.
I suggest we saw a clear demonstration of that with the Metaverse and the answer is no, but more intensely than two letters can communicate.
.. that had already leaked and would later plummet in value.
These AI researchers fundamentally need access to tons of compute, data and engineers in order to pursue their passion.
There is hope for humanity.
Jokes aside, how and why?
We are in a time where the impact can be measured more quickly, so good for the engineers taking advantage of this.
For example, Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor, but it's hard to see how such a thing could emerge from the current social media landscape.
Why would they need to fear a quasi-Facebook chatbot?
Anyhow, with the Metaverse as a flop, and apparently having self-assessed Meta's current LLM efforts as unsatisfactory, it seems Zuck may want to rescue his reputation by throwing money at it to try to make his next big gamble a winner. It seems a bit irrational given that other companies, and countries, have built SOTA LLMs without needing to throw NBA/NFL/rockstar money around.
This is the same thing. It is the new shiny tech demo that is really cool. And technically works really, really well and has some real uses, but that doesn’t make a multi billion dollar business.
He's not there yet, and he knows it. Jobs gave us GUIs and smartphones. Facebook is not even in the same universe, and Instagram is just something he bought. He went all in on the metaverse, but the technology still needs at least 10-15 years to fully bake. In the meantime, there's AGI/super-intelligence. He needs to beat Sam Altman.
The sad thing is, even if he does beat Sam to AGI, Sam will still probably get the credit as the visionary.
Steve Jobs neither gave/invented GUIs nor smartphones. :-D
I mean I'm with you, I think these things are pretty far away and are going to cost a lot of money to make and require a lot of failure in the meantime. But then again, it looks like they spent ~$18bn on Reality Labs last year. So if he was funding it all on his own dime, his current $260bn of wealth would give him a good 14 years of runway if we ignore interest, and it would stretch a lot further if he earns about 5% interest on that money.
I guess I'm just trying to say, it's hard to think about these things when we're talking about such scales of wealth. I mean at those scales, I'm pretty sure the money is meaningless, that money (and the ability to throw it around) is more a proxy for ego.
The only case where this may have made sense - but more for an individual rather than a team - is Google's acqui-rehire of Noam Shazeer for $1B. He was one of the original creators of the transformer architecture, had made a number of architectural improvements while at Character.ai, and thus had a track record of being able to wring performance out of it, which at Google scale may be worth that kind of money.
It is the same thing in sports as well. There will only ever be one Michael Jordan, one Lionel Messi, one Tiger Woods, one Magnus Carlsen. And they are paid a lot because they are worth it.
>> Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor
Meta moved on from Facebook a while back. It has been years since I last logged into Facebook, and hardly anybody I know actually posts anything there. It's a relic of the past.
It’s not just uncomfortable but might not be true at all. Sports is practically the opposite type of skills: easy to measure, known rules, enormous amount of repetition. Research is unknown. A researcher that guarantees result is not doing research. (Coincidentally, the increasing rewards in academia for incrementalist result driven work is a big factor in the declining overall quality, imo.)
I think what’s happening is kind of what happened on Wall Street. Those with a few documented successes got disproportionately more business, based in large part on initial conditions and timing.
Not to take away from AI researchers specifically, I’m sure they’re a smart bunch. But I see no reason to think they stand out against other academic fields.
Occam’s razor says it’s panic in the C-suites and they perceive it as an existential race. It’s not important whether it actually is one, but rather that that’s how they feel. And they have such an enormous amount of cash that they’re willing to place many risky bets at the same time, one of them being to hire/poach the hottest names.
It is not a question of exquisitely rare intellect, but rather the opportunity and funding/resources to prosper.
(And while there are certainly those who could have been the best who did not have the opportunity to succeed, or just didn't actually want to pursue it, I think usually this is way at the edges, i.e. removing the top would not make room for these people, because they're probably not even on anyone's radar at all, like the 'Einstein toiling in a field')
These AI researchers will probably have far more impact on society (good or bad, I don't know) than the athletes, and the people who pay them (i.e. Zuck et al.) certainly think it's worth paying them this much, because they provide value.
But I counsel a different perspective: it's quite remunerative to be selling tulips when there's a mania on!
I think negative feelings are coming from more of a “why are they getting paid so much to build a machine that’s going to wreck everything” sort of angle, which I find understandable.
I will never understand the logic: he is literally better than an average senior dev because he has been offered a $250M package?
In contrast, a skilled football player lands somewhere between neutral and positive, as at the very least they entertain millions of people. And I'm saying that as someone who finds football painfully dull.
When someone has a successful business model that offsets the incredible costs, let me know; until then it is all hypothetical.
but I don't see news articles about athletes framed with this kind of negativity, citing their young age, etc.
The money here (in the AI realm) is coming from a handful of oligarchs who are transparently trying to buy control of the future.
The difference between the two scenarios is... kinda obvious don't you think?
Are there 250 million AI specialists and the ones hired by Meta still come out on top?
Also, far more people are affected by whatever AI is being developed/deployed than there are worldwide football viewers.
The top 5 football leagues have about 1.5 billion monthly viewers. The top 5 AI companies (Google, OpenAI, Meta, etc.) have far more monthly active users.
It just seems very short-sighted right now.
Or should I and my friends all be targeting 7-8 figure jobs?
Meta can make 40 of these hires (over a number of years) and still be in a better place than feeling like they have to make a single $10B acquisition (if they could even make it at that point)
Microsoft Research had hundreds of big brains for decades that all worked independently and added little of value to the business.
I worry that those who became billionaires in the AI boom won't want the relative status of their wealth to become moot once AGI hits. Most likely this will come in the form of artificial barriers to using AI that, for ostensible safety reasons, makes it prohibitively difficult for all but the wealthiest or AGI-lab adjacent social circles to use.
This will cause a natural exacerbation of the existing wealth disparities, as if you have access to a smarter AI than everyone else, you can leverage your compute to be tactically superior in any domain with a reward.
All we can hope for is a general benevolence and popular consensus that avoids a runaway race to the bottom effect as a result of all this.
I suppose some are genuine materialists who think that ultimately that is all we are as humans, just a reconstitution of what has come before. I think we’re much more complicated than that.
LLMs are like the myth of Narcissus and hypnotically reflect our own humanity back at us.
https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
>One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.
>In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.
>In 2023, they reduced that to 2025, but AI could maybe already meet that condition in 2023 (and definitely by 2024).
>Most of their other estimates declined significantly between 2023 and 2022.
>The median estimate for achieving ‘high-level machine intelligence’ shortened by 13 years.
Basically every median timeline estimate has shrunk like clockwork every year. Back in 2021 people thought it wouldn't be until 2040 or so that AI models could look at a photo and give a human-level textual description of its contents. I think it is reasonable to expect that the pace of "prediction error" won't change significantly, since it's been on a straight downward trend over the past 4 years, and if it continues as such, AGI around 2028-2030 is a median estimate.
Claim doesn't check out; here's a YouTube video from Apple uploaded in 2021, explaining how to enable and use the iPhone feature to speak a high level human description of what the camera is pointed at: https://www.youtube.com/watch?v=UnoeaUpHKxY
> The thing is, AI researchers have continually underestimated the pace of AI progress
What's your argument? That because experts aren't good at making predictions, non-experts must be BETTER at making predictions?
Let me ask you this: who do you think is going to make a less accurate prediction?
Assuming no one is accurate here, everybody is wrong. So the question is who is more or less accurate. Because there is such a thing as "more accurate", right?
>> In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.
Go look at the referenced paper [0]. It is on page 3, the last item in Figure 1, labeled "Simple Python code given spec and examples". That line starts just after 2023 and goes to just after 2028. There's a dot representing the median opinion that sits left of the vertical line halfway between 2023 and 2028, i.e. before ~2025.5. Last I checked, 2025 < 2027. And just look at the line that follows:
> In 2023, they reduced that to 2025, but AI could maybe already meet that condition in 2023
Something doesn't add up here... My guess, as someone who literally took that survey, is that what's being referred to as "a simple program" has a different threshold. Here's the actual question from the survey:
Write concise, efficient, human-readable Python code to implement simple algorithms like quicksort. That is, the system should write code that sorts a list, rather than just being able to sort lists.
Suppose the system is given only:
A specification of what counts as a sorted list
Several examples of lists undergoing sorting by quicksort
Is the answer to this question clear? Place your bets now! Here, I asked ChatGPT the question [1], and it got it wrong. Yeah, I know it isn't very wrong, but it is still wrong. Here's an example of a correct solution [2], which shows the (at least) two missing lines. Can we get there with another iteration? Sure! But that's not what the question was asking.
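For concreteness, here's roughly the shape of an answer that I'd read as satisfying the spec, handling the empty and single-element cases (my own sketch, not the survey's grading rubric or the linked solution):

    # My sketch of "concise, efficient, human-readable Python" quicksort
    # that returns a sorted list (not just "being able to sort lists").
    def quicksort(lst):
        # Empty and single-element lists are already sorted.
        if len(lst) <= 1:
            return list(lst)
        pivot, rest = lst[0], lst[1:]
        smaller = [x for x in rest if x <= pivot]
        larger = [x for x in rest if x > pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    # Examples of lists undergoing sorting, as in the spec.
    assert quicksort([3, 1, 2]) == [1, 2, 3]
    assert quicksort([7]) == [7]
    assert quicksort([]) == []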
I'm sure some people will say that GPT gave the right solution. So what that it ignored the case of a singular array and assumed all inputs are arrays. I didn't give it an example of a singular array or non-array inputs, but it did just assume. I mean leetcode questions pull out way more edge cases than I'm griping on here.
So maybe you're just cherry-picking. Maybe the author is just cherry-picking. Because their assertion that "AI could maybe already meet that condition in 2023" is not objectively true. It's not even clear that it is true in 2025!
[0] https://arxiv.org/abs/2401.02843
[1] https://chatgpt.com/share/688ea18e-d51c-8013-afb5-fbc85db0da...
[2] https://www.geeksforgeeks.org/python/python-program-for-inse...
The graph you're looking at is of the 2023 survey, not the 2022 one
As for your question, I don't see what it proves. You described the desired conditions for a sorting algorithm and ChatGPT implemented a sorting algorithm. In the case of an array with one element, it bypasses the for loop automatically and just returns the array. It is reasonable for it to assume all inputs are arrays, because your question told it that its requirements were to create a program that "turn any list of numbers into a foobar."
Of course I'm not any one of the researchers asked for their predictions in the survey, but if you told them "a SOTA AI in 2025 produced working, human-readable code based on a list of specifications, and is only incorrect by a broad characterization of what counts as an edge case that would trip up a reasonable human coder on the first try", I'm sure the 2022 or 2023 respondents would say that it meets their threshold.
> As for your question, I don't see what it proves.
The author made a claim. I showed the claim was false.
The author bases his argument on this and similar claims. Showing that his claim is false means his argument doesn't hold.
> and is only incorrect by a broad characterization
I don't know if I'd really call a single item an "edge case" so much as generalization. But I do know I'd answer that question differently given your reframing.
No amount of describing pictures in natural language is AGI.
If you think an incremental improvement in transformers are what's needed for AGI, I see your angle. However, IMO, transformers haven't shown any evidence of that capability. I see no reason to believe that they'd develop that with a bit more compute or a bit more data.
So honestly, it doesn't seem like many of the predictions are that far off with this in context. That things sped up as funding did too? That was part of the prediction! The other big player here was falling cost of compute. There was pretty strong agreement that if compute was 50% more expensive that this would result in a decrease in progress by >50%.
I think uncontextualized, the predictions don't seem that inaccurate. They're reasonably close. Contextualized, they seem pretty accurate.
If you're going to offer an opinion contrary to the majority, you should at least have a convincing argument why.
To me it's obvious that these extremes create perverse incentives, so the people who take those jobs won't amount to much. I'm willing to bet that Meta's AI efforts are doomed from now on.
They are IC roles for the most part
I suppose those $100M are spread across years and potentially contingent upon achieving certain milestones.
With $250M they can easily buy their own competitive AI compute rig ...
Bear case: No, there's nothing you can do. These are exceptionally rare hires driven by FOMO at the peak of AI froth. If any of these engineers are successful at creating AGI/superintelligence within five years, then the market for human AI engineers will essentially vanish overnight. If they are NOT successful at creating AGI within five years, the ultra high-end market for human AI engineers will also vanish, because companies will no longer trust that talent is the key.
Bull case: Yes, you should go all in and rebrand as a self-proclaimed AI genius. Don't focus on commanding $250M in compensation (although, at 24, Matt Deitke has been doing AI/ML since high school). Instead, focus on optimizing or changing any random part of the transformer architecture and publishing an absolutely inscrutable paper about the results. Make a glossy startup page that makes some bold claims about how you'll use your research to change the game. If you're quick, you can ride the wave of FOMO and start leveling up. Although AGI will never happen, the opportunities will remain as we head into the "plateau of productivity."
These types of comp packages also seem designed to create a kind of indentured servitude for the researchers. Instead of forming their own rival companies that might actually compete with facebook, facebook is trying to foreclose that possibility. The researchers get the money, but they are also giving up autonomy. Personally, no amount of money would induce me to work for Zuckerberg.
Just a thought:
Assuming that Meta's AI is actually good, could it rather be that having access to a massive amount of data does not bring that much business value (in this case particularly for training AIs)?
Evidence for my hypothesis: if you want to gain deep knowledge about some complicated, specific scientific topic, you typically don't want to read a lot of shallow texts tangentially related to it, but the few breakthrough papers and books of the smartest minds who moved the state of the art in the respective area, or some of the few survey monographs by the highly smart people who work in that area and have a vast overview of how these deep research breakthroughs fit into the grander scheme of things.
You can get that technical or scientific context for a lot less than $250 million per head.
Assume a lab has 20 PhDs/postdocs and a few professors, call it 25 people per lab, and that you're compute/equipment heavy, getting you to an average of $1M per person per year in total fully loaded costs (including facilities overhead, GPUs, conferences and whatnot); then that money buys you roughly 200 PhD researchers. Assuming each PhD makes one contribution per 4 years, that's 50 advances in the field per year from your labs. If only 10% are notable, that's 5 things people in the field are going to get excited about. You need 2% of these contributions to be groundbreaking to get a single major contribution per year.
So 250M for a single person is a lot, but if that person is really really good, then that may be only expensive and not insane.
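Put as a quick back-of-envelope script (same rough assumptions as above; none of these numbers are precise):

    # Back-of-envelope, using the rough numbers from the comment above.
    budget = 250e6
    cost_per_person_year = 1e6      # fully loaded: salary, overhead, GPUs, conferences

    person_years = budget / cost_per_person_year        # ~250 person-years of research
    researchers = 200                                    # call it ~200 once profs/overhead take a cut
    contributions_per_year = researchers / 4             # one contribution per 4-year cycle each
    notable_per_year = 0.10 * contributions_per_year     # ~5 results people get excited about
    groundbreaking_per_year = 0.02 * contributions_per_year  # ~1 major advance per year

    print(person_years, contributions_per_year, notable_per_year, groundbreaking_per_year)
    # -> 250.0 50.0 5.0 1.0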
Most would say that, vibe-wise, Llama 4 fell flat in the face of Qwen & friends.
Media has this strange need for fully-grown responsible adults to be thought of as children. Not only for the amazing stories of "this (mid-30s career professional) kid did something", but also helpful to try and shirk responsibility.
Thinking about attempts to frame SBF as a wee smol bean kid in over his head while actively committing fraud.
You can always go back and finish your PhD later.
https://marvelcinematicuniverse.fandom.com/wiki/William_Gint...
For me the Meta storm of billions in hiring was enough to start selling any tech giant related stock.
It is about to crash, harder than ever.
The issue with high salaries is that there is a latent assumption that these people provide the multiples in additional value. That they are so smarter than everyone else.
This is simply not true, and will lead to a competitive disadvantage.
But feel free to prove me wrong - I am amenable.
But I would expect them to be smart and have relevant experience that everyone else doesn’t have, and I expect the companies offering these salaries aren’t doing it for fun but because they believe their IP or ability to generate IP is very hard to come by, and it’s better to monopolize that talent than let competitors do so. If they could hire 10 people equally as good for 1/10 the price then they would do so. But I’m sure there’s also a large dose of gambling too; even in sports highly anticipated freshman drafts can turn into duds.
When OpenAI was making waves the first time and Google launched their neutered, incapable competitor, I thought it was “over” for Google, because why would anyone use search anymore (apart from the 1% of use cases where it gives better results faster), and clearly they were incapable of building good new products anymore…
and now they are there with the best LLMs and they are at the top of the pack again.
Billions of dollars in the bank, great developers, good connections to politicians and institutions mean that you are hard to replace even if you fumble it a couple of times.
It is indeed; those people hired at those salaries are not going to "produce" more than the people hired at normal salaries.
Because what we have now is a "good enough" so getting a 10x better LLM isn't going to produce a 10x increase in revenue (nevermind profit).
The problem is not "We need a better LLM" or "We need cheaper/faster generation". It's "We don't know how to make money of this".
That doesn't require engineers who can create the next-generation SOTA in AI; it requires business people who can spot solutions which simply need tokens.
We're sailing uncharted waters, all bets are off.
EUR:USD has been rising for a reason.
and then immediately bounce back to higher than it was before
I think the biggest confuser here is that there are really two games being played, the money game and the technology game. Investments in AI are going to be largely driven by speculation on their monetary outcome, not technological outcome. Whether or not the technology survives the Venture Capital Gauntlet, the investment bubble could still pop, and only the businesses that have real business models survive. Heaps of people lose their shirt to the tune of billions, yet we still have an AI powered future of some kind.
All this to say, you can both be certain AI is a valuable technology and also believe the economics around it right now are not founded in a clear reality. These are all bets on a future none of us can be sure of.
But thinking Tech Giants are going to crash is woefully ignorant of how the market works and indicates a clear wearing of blinders. And it's a common one among coders who feel the noose tightening and who are the types of people led by their own fear. And i find that when you mix that with arrogance, these three traits often correlate with older generations of software engineers who are poor at adapting to the new technology. The ones who constantly harp on how AI is full of mistakes and disregard that humans are as well. The ones who insist on writing even more than 70% of their own code rather than learning to guide new tools granularly. It's a take that nobody should entertain or respect.
As for your point on 'future none of us can be sure of.' I'll push back on that: It is not clear how AGI or ASI will come about, ie. what architecture will underpin it. However - it is absolutely clear that AI powered coding will continue to improve, and that algorithmic progress can and will be driven by AI coders, and that that will lead to ASI.
The only way to not believe that is to think there is a special sauce behind consciousness. And I tend to believe in scientific theory, not magic.
That is why there is so much VC. That is why tech giants are all racing. It isn't a bet. It is a race to a visible, clear goal of ASI that again, it takes blinders to not see.
So while AI is absolutely a bubble, this bubble will mark the transition to an entirely new economic system, society, world, etc. (and flip a coin on whether any of us survive it lol, but that's a whole separate conversation)
Based on what precedent?
Maybe I need to get one of these recruitment agents.
Mr. Deitke, who recently dropped out of a computer science Ph.D. program at the University of Washington, had moonlighted at a Seattle A.I. lab, the Allen Institute for Artificial Intelligence. There, he led the development of a project called Molmo, an A.I. chatbot that juggles images, sounds and text — the kind of system that Meta is trying to build.
Probably Zuck is trying to prop up his failed Metaverse with "AI". $250 million is nothing compared to what has already been sunk into that Spruce Goose.
Paying $250M to a genius to more deeply entrap user time and attention is going to look diabolical unless there are measurable improvements in users' lives. If Meta's output is more slop addiction, that $250M is a diabolical contract.
https://nypost.com/2025/08/01/business/meta-pays-250m-to-lur...
After that, it's manual labor like the plebs or having enough savings to ~~last them the rest of their lives~~ invest and "earn" passive income by taking a portion of the value produced by people who still do actual work.
Some people are rightly pointing out that for quite a lot of things right now we probably already have AGI to a certain extent. Your average AI is way better than the average schmuck on the street in basically anything you can think of - maths, programming, writing poetry, world languages, music theory. Sure there are outliers where AI is not as good as a skilled practitioner in foo, but I think the AGI bar is about being "about as good as the average human" and not showing complete supremacy in every niche. So far the world has been disrupted sure, but not ended.
ASI of course is the next thing, but that's different.
I've gotten some great results out of LLMs, but that's often because the prompt was well crafted and numerous iterations were performed based on my expertise.
You couldn't get that out of the LLM without that person most of the time.
To highlight the inverse: if someone truly has an "AGI" system (the acronym the goalposts have been moved to), then it wouldn't matter who was wrangling it.
These models don't understand anything similar to reality and they can be confused by all sorts of things.
This can obviously be managed and people have achieved great things with them, including this IMO stuff, but the models are despite their capability very, very far from AGI. They've also got atrocious performance on things like IQ tests.
Yeah, that framing for LLMs is one of my pet-causes: It's document generation, some documents resemble stories with characters, and everything else (e.g. "chatting" with an LLM) is an illusion, albeit an impressive and sometimes-useful one.
Being able to generate a document where humans perceive plausible statements from Santa Claus does not mean Santa Claus now lives inside the electronic box, that flying sleighs are real, etc. The principle still holds even if the character is described as "an intelligent AI assistant named [Product Name]".
I think a possible scenario is that we see huge open source advances in training and inference efficiency that ends up making some of the mega-investments in AI infrastructure look silly.
What will probably ‘save’ the mega-spending is (unfortunately!) the application of AI to the Forever Wars for profit.
Whenever and however it comes, it’s going to be a bloodbath because we haven’t had a proper burst since 2008. I don’t count 2020.
AI is great and it's the future, and a bunch of people will probably eventually turn it into very powerful systems able to solve industrially important maths and software development problems, but that doesn't mean they'll make huge money from that.
Chances are good that while they’re competitive for sure, what they really have that landed them these positions is connections and the ability to market themselves well.
and what exactly did this "whiz" kid do that you and I didn’t
I assume you are going for “there are no more useful resources to acquire so those with all the resources overpay just to feel like they own those last few they don’t yet own”.
Seems like governments will have something to say about who is able to run that AGI or not.
GPUs run in datacenters, which exist in countries.
Tokyo Professor and former Beijing Billionaire CEO Jack Ma, may disagree.
Granted, capitalism needs maintenance.
Externalities need to be consistently reflected, so capitalism can optimize real value creation, instead of profitable value destruction. It is a tool that can be very good at either.
Capitalism also needs to be protected from corrupted government by, ironically, shoring up the decentralization of power so critical for democracy, including protecting democracy from capitalism's big money.
(Democracy and capitalism complement each other, in good ways when both operating independently in terms of power, and supportively in terms of different roles. And, ironically, also complement each other when they each corrupt the other.)
The money and resources they have available is astronomical.
Instead they spend it on future proofing their profits.
What a sad world we have built.
But the promises turned into stock boosting lies; the environmental good into vote buying for climate change deniers, and space exploration into low earth cell-towers.
Those years were a long time ago for me. I’ve been arguing musk is a snake oil salesman since at least 2014. I lost friends over it at the time, people who were very heavily invested into musk, both financially and for some reason, emotionally.
Electric cars? That would be Martin Eberhard and Marc Tarpenning, Tesla’s actual founders. They created the Roadster and brought the vision. Musk came in with money, staged a hostile takeover, and then rewrote the company’s history to fit his inflated ego, like the sad little man he is. It's honestly cringe.
> cheap orbit rockets and with starlink internet almost everywhere possible on earth
Amazing what billions in government contracts and management smart enough to keep Elon out of the way can accomplish. SpaceX deserves praise; spinning it into an "Elon is a genius" narrative? Not so much.
As for the snake oil, just a few of Elon's greatest hits:
1. Hyperloop. Old ideas wrapped in new buzzwords. Never viable. He didn’t invent it, but he sure wants you to think he did, just like with Tesla.
2. FSD “next year” since forever. Still not here. Still being marketed like it's solved. And still charging like a wounded bull for it.
3. Robotaxis and appreciation hype. Musk literally claimed Teslas would go up in value and earn passive income as robotaxis. It doesn't get much more snake oil than this.
"We’re confident the cars will be worth more than what you pay for them today." – July 2019
"It’s financially insane to buy anything other than a Tesla." – April 2019
Absolutely laughable. Show me one consumer owned Tesla that’s worth more today than it was in 2019. I’ll wait. If you can't, we'll mark it down as snake oil bullshit.
4. Optimus. Elon hyped this like Tesla had cracked general purpose humanoid robotics out of nowhere, leapfrogging companies that have been grinding on this for decades. The first reveal? A guy in a suit dancing. The follow ups? Stiff prototypes doing slow, assisted movements and following that, remotely controlled animatronics and so on. Meanwhile, Musk is on stage talking about replacing human labor, reshaping the economy, and bots becoming more valuable than cars. None of it is remotely close. But it worked, stock popped, headlines flooded in, and the fantasy sold.
5. SolarCity. An overhyped, underdelivered money pit that Tesla had to bail out. Just another Elon tyre fire.
6. "Funding secured." Flat out lied about taking Tesla private at $420. SEC slapped him, but the stock soared. Mission accomplished.
And that’s just scratching the surface of his bullshit. It ignores all the other missed deadlines, quality issues, inflated delivery claims, etc etc etc. Here is some more of his bullshit, also I am sure not exhaustive:
Yes, he’s had wins. But wins don’t erase the mountain of bullshit. Elon’s biggest output isn’t cars or rockets. It’s hype. His true skill is selling fantasy to retail investors and tech-worshipping middle-aged white dudes who still think he’s some genius messiah. Strip the PR away, and you’ve got a guy who overpromises, underdelivers, and never stops running his mouth.
I feel sorry for you. I guess we can share our feeling sorry for each other in common.
> You just write long BS for you bias.
Careful now, your bias is showing.
> Have a good day
Every day is a great day with the money i've made off TSLA lately :) Thanks Elon!
We should not listen to people who promise to make Mars safe for human habitation, until we have seen them make Oakland safe for human habitation. We should be skeptical of promises to revolutionize transportation from people who can't fix BART, or have never taken BART."
- https://idlewords.com/talks/sase_panel.htm
"Living standards in Poland in 2010 had more than doubled from 1990. In the same time period, in the United States, I’ve seen a whole lot of nothing. Despite fabulous technical progress, practically all of it pioneered in our country, there’s been a singular failure to connect our fabulous prosperity with the average person.
A study just out shows that for the median male worker in the United States, the highest lifetime wages came if you entered the workforce in 1967. That is astonishing. People born in 1942 had better lifetime earnings prospects than people entering the workforce today.
You can see this failure to connect with your own eyes even in a rich place like Silicon Valley. There are homeless encampments across the street from Facebook headquarters. California has a larger GDP than France, and at the same time has the highest poverty rate in America, adjusted for cost of living. Not only did the tech sector fail to build up the communities around it, but it’s left people worse off than before, by pricing them out of the places they grew up."
Very aptly, the Manhattan Project or Space Race weren't aimed at the improvement of mankind per se. Motivation was a lot more specific and down to earth.
Well, no, the way forward is to just take away all that money and just spread it around.
https://soundcloud.com/adventurecapitalists/moving-mt-fuji
lyrics: https://genius.com/Adventure-capitalists-moving-mt-fuji-lyri...
but it's not the same reading the lyrics, you really need to hear his voice
Any left wing / socialist person on HN should be ecstatic - literally applauding with a grin on their face - that workers are extracting such sums from the capitalist class. The hate for these salaries is mind-boggling to me, and it shows that a lot of the opposition to labor being paid what it is due is more about envy than class consciousness.
I don't feel strongly about these salaries beyond them being an indication of deep dysfunction in the system. This is not healthy, for a market or for a society. No-one should be paid these amounts but I don't care about these developers because they don't run the system.
I've benefited from devs being paid well. Not that well. But same thing in concept.
I'm guessing not, but both the AI expert and the CEO are agents for the owner class: it is owners like Elon and Sam Altman that are deciding to pay these huge salaries and they are doing it for the same reason that corporate boards of directors pay CEOs huge salaries: namely, to help the owners accumulate more capital.
This gentleman now has an entirely different set of problems to everyone else. Do you think he will now go on to advocate for wealth equality, housing affordability, healthcare etc, or do you think he'll go buy some place nice away from his former problems and enjoy his (earned) compensation in peace?
A $1B anonymous software engineer is likely generating 5000x more revenue than a talented $200k AI engineer.
Personal anecdote time. One of the people named in the press as having turned down one of these hyper-offers used to work in an adjacent team, same "pod" maybe, whatever adjacent. That person is crazy smart, stands out even among elite glory days FAANG types. Anyways they left and when back on the market I was part of the lobby to get them back at any price, had to run it fairly high up the flagpole (might have been Sheryl who had to sign off, maybe it was Mark).
Went on to make it back for the company a hundred fold the first year. Clearly a good choice to "pay over market".
Now it's a little comical for it to be a billion or whatever, that person was part of a clique of people at that level and there's a lot of "brand" going into a price tag like that: the people out of our little set who did compilers or whatever instead of FAIR are just as good and what is called "AI" now is frankly not that differentiated (the person in question maintained as much back in the day).
But a luck and ruthlessness hire like Zuckerberg on bended knee to a legitimate monster hacker and still getting dissed? Applause. I had Claude write a greentext for the amusement of my chums. I recommend it kek.
Because if it's not funding the revolution (peaceful or otherwise) why exactly would a leftist applaud these salaries?
Marx hated the bourgeoisie (business owners, including petite-bourgeoisie AKA small business owners) and loved the proletariat - including the extremely skilled or well paid proletarians.
Marx also hated the lumpen-proletariet - AKA prostitutes, homeless, etc.
What I did or didn't read is alas occluded from you. The Masereel illustrated woodcuts on a recent edition of the manifesto are wonderful.
One would think that a talented academic/researcher getting a $1B salary would impress the socialist crowd, but it doesn't, because it was never about that. It was about bringing rich people down and not much else.
Edit: oops, my knowledge was outdated; it's about 270,000.
Right now capital expenses are responsible for most of AI's economic impacts, as seen by the infrastructure spend contributing more to GDP than consumer spending this year.