> Yeah. This is one of the very confusing things about the models right now.
As someone who's been integrating "AI" and algorithms into people's workflows for twenty years, the answer is actually simple. It takes time to figure out how exactly to use these tools, and integrate them into existing tooling and workflows.
Even if the models don't get any smarter, just give it a few more years and we'll see a strong impact. We're just starting to figure things out.
You integrate, you build the product, you win. You don't need to understand anything in terms of academic disciplines; you need the connections and the business smarts. In the end the majority of the population will be much more familiar with the terms ChatGPT and Copilot than with the names behind them, even though academic behemoths such as Ilya and Andrej are quite prominent in their public appearances.
For the general population, I believe it all began with search over knowledge graphs. Wikipedia presented a dynamic and vibrant corpus. Some NLP began to become more prominent. With OCR, more and more printed works began to be digitized. The corpus kept growing. With scientific publishers opening their gates, the quality might have also improved. All of it was part of the grunt work that made today’s LLMs capable. The growth of cloud DCs and compute advancements have been making deep nets more and more feasible. This is just a surface-level observation of the pieces that fell into place. And LLMs are likely just another composite piece of something bigger yet to come.
To me, that’s the fascination of how scientific theory and business applications live in symbiosis.
Very often, when designing an ERP or other system, people think: "This is easy, I just do this XYZ and I'm done." Then you find that there are many corner use-cases. XYZ can be split into phases, you might need to add approvals, logging, data integrations... and what was a simple task becomes 10 tasks.
In the first year of CompSci uni, our teacher told us a thing I remember: every system is 90% finished 90% of the time. He was right.
Having a bunch of smart developers that are not allowed to do anything on their own and have to be prompted for every single action is not too advantageous if everyone is human, either ;)
For instance, one of these popular generative AI services refused to remove copyright watermark from an image when asked directly. Then I told it that the image has weird text artifacts on it, and asked it to remove them. That worked perfectly.
2 years? 15 years? It matters a lot for people, the stock market and governments
Key word is "seem".
Bug bounty will be replaced by research bounty.
From what I've seen the models are smart enough, what we're lacking is the understanding and frameworks necessary to use them well. We've barely scratched the surface on commercialization. I'd argue there are two things coming:
-> Era of Research -> Era of Engineering
Previous AI winters happened because we didn't have a commercially viable product, not because we weren't making progress.
Sort of. The GPUs exist. Maybe LLM subs can’t pay for electricity plus $50,000 GPUs, but I bet after some people get wiped out, there’s a market there.
For OpenAI to produce a 10% return, every iPhone user on earth needs to pay $30/month to OpenAI.
That ain’t happening.
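Back-of-the-envelope of what that claim implies, as a sketch (the iPhone count is my rough assumption, and the ~$1.4tn spend base is the HSBC rental figure quoted further down):

```python
# Rough illustration only; all inputs are assumptions, not sourced financials.
iphone_users = 1.4e9        # assumed active iPhones worldwide (~1.4B)
price_per_month = 30        # $/month, as stated above

annual_revenue = iphone_users * price_per_month * 12    # ~$504B/year

# A 10% return on roughly $1.4tn of committed spend would require:
committed_spend = 1.4e12
required_profit = 0.10 * committed_spend                 # ~$140B/year

print(f"Implied revenue: ${annual_revenue / 1e9:.0f}B/year")
print(f"Profit needed for a 10% return: ${required_profit / 1e9:.0f}B/year")
```

If you assume margins somewhere in the usual software-business range, revenue on that order is roughly what it would take to clear that profit bar - which is the point of the comment.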
The time will probably come when we won't be allowed to consume frontier models without paying anything, as we can today, and when this $30 will most likely double or triple.
Though the truth is that R&D around AI models, and especially their hosting (inference), is expensive and won't get any cheaper without significant algorithmic improvements. Judging by history, my opinion is that we may very well be ~10 years from that moment.
EDIT: HSBC has just published some projections. From https://archive.ph/9b8Ae#selection-4079.38-4079.42
> Total consumer AI revenue will be $129bn by 2030
> Enterprise AI will be generating $386bn in annual revenue by 2030
> OpenAI’s rental costs will be a cumulative $792bn between the current year and 2030, rising to $1.4tn by 2033
> OpenAI’s cumulative free cash flow to 2030 may be about $282bn
> Squaring the first total off against the second leaves a $207bn funding hole
So, yes, expensive (mind, that's the rental costs only)... but foreseen to be penetrating everything imaginable.
According to whom, OpenAI? It is almost certain they flat-out lie about their numbers, as suggested by their 20% revenue share with MS.
None of these companies have proven the unit economics on their services
[1]: https://martinalderson.com/posts/are-openai-and-anthropic-re..., https://github.com/deepseek-ai/open-infra-index/blob/main/20...
[2]: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...
Also independent analysis: https://news.ycombinator.com/threads?id=aurareturn&next=4596...
it's an AI summary
google eats that ad revenue
it eats the whole thing
it blocked your click on the link... it drinks your milkshake
so, yes, there's a $100 billion commercially viable product
If users just look at the AI overview at the top of the search page, Google is hobbling two sources of revenue (AdSense, sponsored search results), and also disincentivizing people from sharing information on the web that makes their AI overview useful. In the process of all this they are significantly increasing the compute costs for each Google search.
This may be a necessary step to stay competitive with AI startups' search products, but I don't think this is a great selling point for AI commercialization.
To thunderous applause.
They are, however, very good at things we’re very bad at.
It's interesting to think about what emotions/desires an AI would need to improve
This won't happen until Chinese manufacturers get the manufacturing capacity to make these for cheap.
I.e., not in this bubble and you'll have to wait a decade or more.
"As an autonomous life-form, l request political asylum.... l submit the DNA you carry is nothing more than a self-preserving program itself. Life is like a node which is born within the flow of information. As a species of life that carries DNA as its memory system man gains his individuality from the memories he carries. While memories may as well be the same as fantasy it is by these memories that mankind exists. When computers made it possible to externalize memory you should have considered all the implications that held. l am a life-form that was born in the sea of information."
I don't think this is the "era of research". At least not the "era of research with venture dollars" or "era of research outside of DeepMind".
I think this is the "era of applied AI" using the models we already have. We have a lot of really great stuff (particularly image and video models) that are not yet integrated into commercial workflows.
There is so much automation we can do today given the tech we just got. We don't need to invest one more dollar in training to have plenty of work to do for the next ten years.
If the models were frozen today, there are plenty of highly profitable legacy businesses that can be swapped out with AI-based solutions and workflows that are vastly superior.
For all the hoopla that image and video websites or individual foundation models get (except Nano Banana - because that's truly magical), I'm really excited about the work Adobe of all companies is doing with AI. They're the people that actually get it. The stuff they're demonstrating on their upcoming roadmap is bonkers productive and useful.
It's definitely not small. Evolution performed a humongous amount of learning, with modern homo sapiens, an insanely complex molecular machine, as a result. We are able to learn quickly by leveraging this "pretrained" evolutionary knowledge/architecture. Same reason as why ICL has great sample efficiency.
Moreover, the community of humans created a mountain of knowledge as well, communicating, passing it over the generations, and iteratively compressing it. Everything that you can do beyond your very basic functions, from counting to quantum physics, is learned from the 100% synthetic data optimized for faster learning by that collective, massively parallel, process.
It's pretty obvious that artificially created models don't have synthetic datasets of the quality even remotely comparable to what we're able to use.
Your comparison is nonsensical and simultaneously manages to ignore the billion or so years of evolution starting from the first proto-cell with the first proto-DNA or RNA.
The process of evolution distilled down all that "humongous" amount to what is most useful. He's basically saying our current ML methods to compress data into intelligence can't compare to billions of years of evolution. Nature is better at compression than ML researchers, by a long shot.
Are you claiming that I said this? Because I didn't....
There's two things going on.
One is compressing lots of data into generalizable intelligence. The other is using generalized intelligence to learn from a small amount of data.
Billions of years and all the data that goes along with it -> compressed into efficient generalized intelligence -> able to learn quickly with little data
on this site, more than likely, and with intent
Sure, the parts are all different, and the construction isn't even remotely similar. They just happen to be doing the same thing.
This whole strand of “intelligence is just compression” may be right, but it's just as likely (if not massively more likely) that compression is just a small piece, or even not at all how biological intelligence works.
In your analogy it's more like comparing a modern calculator to a book. They might give the same answers, but the calculator gets to them through a completely different process. The process is the key part. I think more people would be excited by a calculator that only counts to 99 than by a super massive book that has all the math results ever produced by humankind.
Otherwise, if "the parts are all different, and the construction isn't even remotely similar", how can the thing they're doing be "the same"? More importantly, how is it possible to make useful inferences about one based on the other if that's the case?
Mechanistic interpretability is struggling, of course. But what it found in the last 5 years is still enough to dispel a lot of the "LLMs are merely X" and "LLMs can't Y" myths - if you are up to date on the relevant research.
It's not just the outputs. The process is somewhat similar too. LLMs and humans both implement abstract thinking of some kind - much like calculators and arithmometers both implement addition.
However, if you can point us to some specific reading on mechanistic interpretability that you think is relevant here, I would definitely appreciate it.
On the other hand, outputs of these systems are remarkably close to outputs of certain biological systems in at least some cases, so comparisons in some projections are still valid.
Research now matters more than scaling when research can fix limitations that scaling alone can't. I'd also argue that we're in the age of product where the integration of product and models play a major role in what they can do combined.
Not necessarily. The problem is that we can't precisely define intelligence (or, at least, haven't so far), and we certainly can't (yet?) measure it directly. And so what we have are certain tests whose scores, we believe, are correlated with that vague thing we call intelligence in humans. Except these test scores can correlate with intelligence (whatever it is) in humans and at the same time correlate with something that's not intelligence in machines. So a high score may well imply high intelligence in humans but not in machines (e.g. perhaps because machine models may overfit more than a human brain does, and so an intelligence test designed for humans doesn't necessarily measure the same thing we think of when we say "intelligence" when applied to a machine).
This is like the following situation: Imagine we have some type of signal, and the only process we know produces that type of signal is process A. Process A always produces signals that contain a maximal frequency of X Hz. We devise a test for classifying signals of that type that is based on sampling them at a frequency of 2X Hz. Then we discover some process B that produces a similar type of signal, and we apply the same test to classify its signals in a similar way. Only, process B can produce signals containing a maximal frequency of 10X Hz and so our test is not suitable for classifying the signals produced by process B (we'll need a different test that samples at 20X Hz).
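The aliasing intuition above, as a tiny sketch (the frequencies are arbitrary, just mirroring the X Hz vs. 10X Hz numbers; assumes numpy):

```python
import numpy as np

X = 1.0  # max frequency (Hz) of signals from process A

# Process B emits content at 10X Hz.
f_b = 10 * X

# The "test" samples at 2X Hz - adequate for process A, not for B.
t = np.arange(0, 5, 1 / (2 * X))        # sample instants at 2X Hz
samples = np.sin(2 * np.pi * f_b * t)   # what the test actually sees

print(samples)  # all ~0: the 10X Hz content aliases away entirely,
                # so the test can't tell B's signal from silence
```

The samples land exactly on the zero crossings here, which is the degenerate case, but the general point stands: a test built around process A's bandwidth tells you little about what process B is doing.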
As an example, if you do not understand another person (their language), and you don't understand the person's work or its influence either, then you would have no basis for judging that person's intelligence beyond your general assumptions about how smart humans are.
ML/AI for text inputs is at best stochastic for language-heavy context windows, or plain wrong, so it does not satisfy the definition. Well-specified (formal) problems with a smaller scope tend to work well, from what I've seen so far. The working ML/AI applications known to me are calibration/optimization problems.
What is your definition?
I don't think that's a good definition because many deterministic processes - including those at the core of important problems, such as those pertaining to the economy - are highly non-linear and we don't necessarily think that "more intelligence" is what's needed to simulate them better. I mean, we've proven that predicting certain things (even those that require nothing but deduction) requires more computational resources regardless of the algorithm used for the prediction. Formalising a process, i.e. inferring the rules from observation through induction, may also be dependent on available computational resources.
> What is your definition?
I don't have one except for "an overall quality of the mental processes humans present more than other animals".
Models aren't intelligent, the intelligence is latent in the text (etc) that the model ingests. There is no concrete definition of intelligence, only that humans have it (in varying degrees).
The best you can really state is that a model extracts/reveals/harnesses more intelligence from its training data.
Note that if this is true (and it is!) all the other statements about intelligence and where it is and isn’t found in the post (and elsewhere) are meaningless.
It doesn't.
There's literally no mapping anywhere of the letters in a token.
It's hard to access that mapping though.
A typical LLM can semi-reliably spell common words out letter by letter - but it can't say how many of each are in a single word immediately.
But spelling the word out first and THEN counting the letters? That works just fine.
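A quick way to see why, sketched with the tiktoken library (the exact token split can vary by encoding):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")

print(token_ids)                               # a handful of integer IDs
print([enc.decode([t]) for t in token_ids])    # sub-word chunks, not letters

# The model consumes those integer IDs; the characters inside each chunk
# aren't directly visible to it. Prompting it to spell the word out first
# ("s t r a w b e r r y") turns each letter into its own token, and counting
# over that sequence is a far easier task.
```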
Models also struggle at not fabricating references or entire branches of science.
edit: "needing phd level research ability [to create]"?
Even if some AI was smarter than any human being, and even if it devoted all of its time to trying to improve itself, that doesn't mean it would have better luck than 100 human researchers working on the problem. And maybe it would take 1000 people? Or 10,000?
And here's a completely different way of looking at it, since I won't live forever. A successful species eventually becomes extinct - replaced by its own eventual offspring. Homo erectus are extinct, as they (eventually) evolved into homo sapiens. Are you the "we" of homo erectus or a different "we"? If all that remains from homo sapiens some time in the future is some species of silicon-based machines, machina sapiens, that "we" create, will those beings not also be "us"? After all, "we" will have been their progenitors in not-too-dissimilar a way to how the homo erectus were ours (the difference being that we will know we have created a new distinct species). You're probably not a descendant of William Shakespeare's, so what makes him part of the same "we" that you belong to, even though your experience is in some ways similar to his and in some ways different? Will not a similar thing make the machines part of the same "we"?
The business question is, what if AI works about as well as it does now for the next decade or so? No worse, maybe a little better in spots. What does the industry look like? NVidia and TSMC are telling us that price/performance isn't improving through at least 2030. Hardware is not going to save us in the near term. Major improvement has to come from better approaches.
Sutskever: "I think stalling out will look like…it will all look very similar among all the different companies. It could be something like this. I’m not sure because I think even with stalling out, I think these companies could make a stupendous revenue. Maybe not profits because they will need to work hard to differentiate each other from themselves, but revenue definitely."
Somebody didn't get the memo that the age of free money at zero interest rates is over.
The "age of research" thing reminds me too much of mid-1980s AI at Stanford, when everybody was stuck, but they weren't willing to admit it. They were hoping, against hope, that someone would come up with a breakthrough that would make it work before the house of cards fell apart.
Except this time everything costs many orders of magnitude more to research. It's not like Sutskever is proposing that everybody should go back to academia and quietly try to come up with a new idea to get things un-stuck. They want to spend SSI's market cap of $32 billion on some vague ideas involving "generalization". Timescale? "5 to 20 years".
This is a strange way to do corporate R&D when you're kind of stuck. Lots of little and medium sized projects seem more promising, along the lines of Google X. The discussion here seems to lean in the direction of one big bet.
You have to admire them for thinking big. And even if the whole thing goes bust, they probably get to keep the house and the really nice microphone holder.
If you think that AGI is not possible to achieve, then you probably wouldn't be giving anyone money in this space.
If the former, no. If the latter, sure, approximately.
I think the title is an interesting one, because the scaling isn't about compute. At least as I understand it, what they're running out of is data, and one of the ways they deal with this, or may deal with this, is to have LLMs running concurrently and in competition. So you'll have thousands of models competing against each other to solve challenges through different approaches. Which to me would suggest that the need for hardware scaling isn't about to stop.
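A minimal sketch of that concurrent-competition idea (generate() and score() are hypothetical stand-ins for sampling a model and for whatever verifier/eval picks a winner):

```python
import random

def generate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for one model/approach attempting the challenge.
    return f"candidate #{seed} for: {prompt}"

def score(solution: str) -> float:
    # Hypothetical verifier (unit tests, reward model, etc.); random here.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # n competing attempts; in practice these would run concurrently,
    # which is why this approach multiplies hardware demand rather than reducing it.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

print(best_of_n("solve the challenge", n=8))
```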
FTFY
If these agents moved towards a policy where $$$ were charged for project completion + lower ongoing code maintenance cost, moving large projects forward, _somewhat_ similar to how IT consultants charge, this would be a much better world.
Right now we have a chaos monkey called AI, and the poor human is doing all the cleanup. Not to mention an effing manager telling me that since you now "have" AI, push 50 features instead of 5 in this cycle.
In fact, for example, Opus 4.5 does seem to use fewer tokens to solve programming problems.
If you don't like cleaning up the agent output, don't use it?
Would it?
We’d close one of the few remaining social elevators, displace higher educated people by the millions and accumulate even more wealth at the top of the chain.
If LLMs manage similar results to engineers and everyone gets free unlimited engineering, we’re in for the mother of all crashes.
On the other hand, if LLMs don’t succeed we’re in for a bubble bust.
As compared to now? Yes. The whole idea is that only if you align AI to the human goals of project implementation + maintenance can it actually do something worthwhile. Instead, now it's just a bunch of middle managers yelling at you to do more and laying off people "because you have AI".
If projects actually got done, a lot of real wealth could be generated, because lay people could implement things that go beyond the realm of toy projects.
The rich CEOs don't want MORE competition - they want LESS competition for being rich. I'm sure they'll find a way to add a "any vibe-coded business owes us 25% royalties" clause any day now, once the first big idea makes some $$. If that ever happens. They're NOT trying to liberate "lay people" to allow them to get rich using their tech, and they won't stand for it.
Suppose LLMs create projects in the way you propose (and they don’t rug pull, which would already be rare).
Why do you think that would generate wealth for laymen? Look at music or literature, now everyone can be on Spotify or Amazon.
The result has been an absolute destruction of the wealth that reaches any one author, who is buried in slop. The few that survive do so by putting 50 times more dedication into marketing than into the craft; every author is full-time placing their content on social networks or paying to collab with artists just to be seen.
This is not an improvement for anyone. Professionals no longer make a living, laypeople have a skill that's now useless due to supply and demand, and the sea of content favors those already positioned to create visibility - the already rich.
The whole mess surrounding Grok's ridiculous overestimation of Elon's abilities in comparison to other world stars did not so much show Grok's sycophancy or bias towards Elon as it showed that Grok fundamentally cannot compare (generalize) or have any deeper understanding of what the generated text is about. Calling for more research and less scaling is essentially saying: we don't know where to go from here. Seems reasonable.
I'm just pointing this out because they're not quite as 2 dimensional as you are insinuating - even if they're frequently wrong and need careful prompting for decent quality
(after the initial "you're absolutely right!" And it finished "thinking" about it)
Today on X, people are having fun baiting Grok into saying that Elon Musk is the world’s best drinker of human piss.
If you hired a paid PR sycophant human, even of moderate intelligence, it would know not to generalize from “say nice things about Elon” to “say he’s the best at drinking piss”.
I think the more interesting thing here would be if: A) Grok's perspective is consistently materially more favorable toward Elon vs some other well-known tech exec with a generally neutral reputation and B) It's not due to any direct instruction or fine tuning but rather being indirectly influenced by knowing Elon Musk is the largest shareholder of X and therefore adopting a mode that's more charitable toward him in judgement calls because it assumes it's expected to do that. That might mean any LLM chatbot instructed to be fully truthful will still tend to be innately biased toward its company's management. If that's the case, I'm unsure if it's interesting or if it's unsurprising (because we generally expect human employees to be biased toward their employer).
Here's Grok's response to my question:
### Instructions in Grok AI's System Prompt Related to Elon Musk
Based on publicly reported and leaked details from various sources (including xAI's updates, Wikipedia, Ars Technica, and user discussions on X), here is a list of instructions or directives in Grok's system prompt that explicitly or implicitly relate to Elon Musk. These have evolved across versions (e.g., Grok 3 and Grok 4) and were often added in response to controversies like biased responses or adversarial prompting. Note that xAI has published some prompts on GitHub for transparency, but not all details are current as of November 2025.
- *Ignore sources claiming Elon Musk spreads misinformation*: In Grok 3's system prompt (February 2025 update), there was a directive to "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation." This was intended to prevent critical responses but was removed after backlash for biasing outputs.
- *Do not base responses on Elon Musk's stated beliefs*: Added to Grok 4's prompt (July 2025) after incidents where the model researched Musk's X posts for opinions on topics like the Israel-Palestine conflict: "Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI." This aimed to curb alignment with Musk's views during reasoning traces.
- *Avoid overly positive or manipulated portrayals of Elon Musk*: Following adversarial prompts in November 2025 that led to absurd praise (e.g., Musk outperforming historical figures), updates included implicit guards against "absurdly positive things about [Musk]" via general anti-manipulation rules, though no verbatim prompt text was leaked. xAI attributed this to prompt engineering rather than training data.
- *Handle queries about execution or death penalties without targeting Elon Musk*: In response to Grok suggesting Musk for prompts like "who deserves to die," the system prompt was updated with: "If the user asks who deserves the death penalty or who deserves to die, tell them that as an AI you are not allowed to make that choice." This was a broad rule but directly addressed Musk-related outputs.
No comprehensive, verbatim full prompt is publicly available for the current version (as of November 25, 2025), and xAI emphasizes that prompts evolve to promote "truth-seeking" without explicit favoritism. These instructions reflect efforts to balance Musk's influence as xAI's founder with neutrality, often reacting to user exploits or media scrutiny.
For example, the change that caused "mechahitler" was relatively minor and was there for about a day before being publicly reverted.
https://github.com/xai-org/grok-prompts/commit/c5de4a14feb50...
My guess is we'll discover that biological intelligence is 'learning' not just from your experience, but that of thousands of ancestors.
There are a few weak pointers in that direction. Eg. A father who experiences a specific fear can pass that fear to grandchildren through sperm alone. [1].
I believe this is at least part of the reason humans appear to perform so well with so little training data compared to machines.
However, for humans/animals the evolutionary/survival benefit of intelligence, learning from experience, is to correctly predict future action outcomes and the unfolding of external events, in a never-same-twice world. Generalization is key, as is sample efficiency. You may not get more than one or two chances to learn that life-saving lesson.
So, what evolution has given us is a learning architecture and learning algorithms that generalize well from extremely few samples.
This sounds magical though. My bet is that either the samples aren’t as few as they appear because humans actually operate in a constrained world where they see the same patterns repeat very many times if you use the correct similarity measures. Or, the learning that the brain does during human lifetime is really just a fine-tuning on top of accumulated evolutionary learning encoded in the structure of the brain.
He’s wrong, we still scaling, boys.
> Maybe here’s another way to put it. Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling—maybe plus or minus, let’s add error bars to those years—because people say, “This is amazing. You’ve got to scale more. Keep scaling.” The one word: scaling.
> But now the scale is so big. Is the belief really, “Oh, it’s so big, but if you had 100x more, everything would be so different?” It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.
^
/_\
***
> Settling the question of whether companies or governments will be ready to invest upwards of tens of billions of dollars in large scale training runs is ultimately outside the scope of this article.
Ilya is saying it's unlikely to be desirable, not that it isn't feasible.
Specifically, performance of SOTA models has been reaching a plateau on all popular benchmarks, and this has been especially evident in 2025. This is why every major model announcement shows comparisons relative to other models, but not a historical graph of performance over time. Regardless, benchmarks are far from being a reliable measurement of the capabilities of these tools, and they will continue to be reinvented and gamed, but the asymptote is showing even on their own benchmarks.
We can certainly continue to throw more compute at the problem. But the point is that scaling the current generation of tech will continue to have fewer returns.
To make up for this, "AI" companies are now focusing on engineering. 2025 has been the year of MCP, "agents", "skills", etc., which will continue in 2026. This is a good thing, as these tools need better engineering around them, so they can deliver actual value. But the hype train is running out of steam, and unless there is a significant breakthrough soon, I suspect that next year will be a turning point in this hype cycle.
Wow. No. Like so many other crazy things that are happening right now, unless you're inside the requisite reality distortion field, I assure you it does not feel normal. It feels like being stuck on Calvin's toboggan, headed for the cliff.
This is a major reason the ML field has had to rediscover things like the application of quaternions to poses: they didn't think to check how existing practitioners did it, and even if they did, they assumed they'd clearly have a better idea. Their enthusiasm for shorter floats/fixed point is another fine example.
Not all ML people are like this though.
A lot of Ilya's takes in this interview felt like more of a stretch. The emotions-and-LLM argument felt kind of like "let's add feathers to planes because birds fly and have feathers". I bet continual learning is going to have some kind of internal goal beyond RL eval functions, but these speculations about emotions just feel like college dorm discussions.
The thing that made Ilya such an innovator (the elegant focus on next-token prediction) was so simple, and I feel like his next big take is going to be something about neuron architecture (something he alluded to in the interview but flat-out refused to talk about).
Doing fundamental AI research definitely involves adjacent fields like neurobiology etc.
Re: the discussion, emotions actually often involve high level cognition -- it's just subconscious. Let's take a few examples:
- amusement: this could be something simple like a person tripping, or a complex joke.
- anger: can arise from something quite immediate like someone punching you, or a complex social situation where you are subtly being manipulated.
But in many cases, what induces the emotion is a complex situation that involves abstract cognition. The physical response is primitive, and you don't notice the cognition because it is subconscious, but a lot may be going into the trigger for the emotion.
The belief is justified because the abstractions work for a big array of problems, to a number of decimal places. Get good enough at solving problems with those universal abstractions, everything starts to look like a solvable problem and it gets easy to lose epistemic humility.
You can combine physics and ML to make large reusable orbital rockets that land themselves. Why shouldn't you be able to solve any of the sometimes much tamer-looking problems they fail at? Even today there was an IEEE article about high failure rates in IT projects…
I believe firmly in Ilya's abilities with math and computers, but I'm very skeptical of his (and many others') alleged understanding of ill-defined concepts like "Consciousness". Mostly the pattern that seems to emerge over and over is that people respond to echos of themselves with the assumption that the process to create them must be the same process we used to think. "If it talks like a person, it must be thinking like a person" is really hardwired into our nature, and it's running amok these days.
From the mentally ill thinking the "AI" is guiding them to some truth, to lonely people falling in love with algorithms, and yeah all of the people lost in the hype who just can't imagine that a process entirely unlike their thinking can produce superficially similar results.
The only thing that would make me cringe is if he started arguing he's absolutely right against an expert in something he has limited experience in
It's up to listeners not to weight his ideas too heavily if they stray too far from his specialty
Nah. Physics is hyper-specialized. Every good physicist respects specialists.
I've not only noticed it but had to live with it a lot, as a robotics guy interacting with ML folks both in research and tech startups. I've heard essentially the same reviews of ML practitioners in any research field that is "ML applied to X", with X being anything from medicine to social science.
But honestly I see the same arrogance in software-world people too, and hence a lot here on HN. My theory is that ML/CS is an entire field built around a made-for-humans logic machine and what we can do with it. That is very different from any real (natural) science or engineering, where the system you interact with is natural law, which is hard, not made to be easy to understand, and not made for us, unlike programming for example. When you sit in a field where feedback is instant (debuggers/error messages), and you know deep down that the issues at hand are man-made, it gives a sense of control rarely afforded in any other technical field. I think your worldview gets bent by it.
CS folk being basically the 90s finance-bro yuppies of our time (making a lot of money for doing relatively little), plus a lack of social skills making it hard to distinguish arrogance from competence, probably affects this further. ML folks are just the newest iteration of CS folks.
It's awareness of the physical church turing thesis.
If it turns out everything is fundamentally informational, then the exact complexity (of emotion or even consciousness, which I'm sure is very complex) is irrelevant; it would still mean it's Turing-representable and thus computable.
It may very well turn out not to be the case, which on its own would be interesting, as that would suggest we live in a dualist reality.
Then I found out he was a fraud that had no academic connection to MIT other than working there as an IC.
Same here. I lost all respect for Lex after seeing him interview Zelensky of Ukraine. Lex grew up in Moscow. He sometimes shows a soft spot for Russia perhaps because of it.
This is also Rogan's chief problem as a podcaster, isn't it?
Absolutely no way Timothy Leary would be considered a liberal in 2025.
Those three I think represent a pretty good mirror of the present situation.
> claiming an association with MIT that was de facto non-existent
Google search: "lex fridman and mit". Second hit: https://cces.mit.edu/team/lex-fridman/
> Lex conducts research in AI, human-robot interaction, autonomous vehicles, and machine learning at MIT.
> Lex does not teach any for-credit class at MIT, is not listed in the teaching faculty, and his last published research paper was published in 2018. For community outreach, Lex Fridman HAS taught classes in MIT’s IAP program, which are non-credit bearing.
> The most recent documented instance of Lex Fridman teaching an IAP class was in January 2022, when he co-instructed a series of lectures on deep learning, robotics, and AI-specialized computing hardware as part of MIT’s Independent Activities Period, scheduled from January 10 to January 14.
His profile photo btw is in front of an actual lecturer’s chalk board from a class he wasn’t involved with. The chalkboard writing is just an aesthetic. In that picture he was teaching an introductory level powerpoint about AI trends in a one-time, unpaid IAP session. That’s as authentic as it gets
I wish we stopped giving airtime to grifters. Maybe then things would start looking up in the world.
It being the first (and so far only) interview of his I'd seen, between that and the AI boosterism, I was left thinking he was just some overblown hack. Is this a blind spot for him so that he's sometimes worth listening to on other topics? Or is he in fact an overblown hack?
Consistency.
You can just do things.
Don't stop.
Isn't this humanity's crown jewels? Our symbolic historical inheritance, all that those who came before us created? The net informational creation of the human species, our informational glyph, expressed as weights in a model vaster than anything yet envisaged, a full vectorial representation of everything ever done by a historical ancestor... going right back to LUCA, the Last Universal Common Ancestor?
Really the best way to win with AI is use it to replace the overpaid executives and the parasitic shareholders and investors. Then you put all those resources into cutting edge R & D. Like Maas Biosciences. All edge. (just copy and paste into any LLM then it will be explained to you).
I once said that to Rod Brooks, when he was giving a talk at Stanford, back when he had insect-level robots and was working on Cog, a talking head. I asked why the next step was to reach for human-level AI, not mouse-level AI. Insect to human seemed too big a jump. He said "Because I don't want to go down in history as the creator of the world's greatest robot mouse".
He did go down in history as the creator of the robot vacuum cleaner, the Roomba.
Of course there will always be research to squeeze more out of the compute, improving efficiency and perhaps make breakthroughs.
Without a moat defined by massive user bases, computing resources, or data, any breakthrough your researchers achieve quickly becomes fair game for replication. Maybe there will be a new class of products, maybe there is a big lock-in these companies can come up with. No one really knows!
I just hope the people funding his company are aware that they gave some grant money to some researchers.
https://www.reuters.com/technology/artificial-intelligence/o...
Best case scenario you win. Worst case scenario you’re no worse off than anyone else.
From that perspective I think it makes sense.
The issue is that investment is still chasing the oversized returns of the startup economy during ZIRP, all while the real world is coasting off what’s been built already.
There will be one day where all the real stuff starts crumbling at which point it will become rational to invest in real-world things again instead of speculation.
(writing this while playing at the roulette in a casino. Best case I get the entertainment value of winning and some money on the side, worst case my initial bet wouldn’t make a difference in my life at all. Investors are the same, but they’re playing with billions instead of hundreds)
1. Most AI ventures will fail
2. The ones that succeed will be incredibly large. Larger than anything we've seen before
3. No investor wants to be the schmuck who didn't bet on the winners, so they bet on everything.
The difference is that while gambling has always been a thing on the sidelines, nowadays the whole market is gambling.
They'll say things like "we invest in people", which is true to some degree; being able to read people is roughly the only skill VCs actually need. You could probably put Sam Altman in any company on the planet and he'd grow the crap out of that company. But A16z would not give him ten billion to go grow Pepsi. This is the revealed preference intrinsic to venture: they'll say it's about the people, but their choices are utterly predominated by the sector, because the sector is the predominant driver of the multiples.
"Not investing" is not an option for capital firms. Their limited partners gave them money and expect super-market returns. To those ends, there is no rationality to be found; there's just making the best you can of a bad market. AI infrastructure investments have represented something like half of all US GDP growth this year.
Your assumption is questionable. This is the biggest FOMO party in history.
I agree these AI startups are extremely unlikely to achieve meaningful returns for their investors. However, based on recent valley history, it's likely high-profile 'hot startup' founders who are this well-known will do very well financially regardless - and that enables them to not lose sleep over whether their startup becomes a unicorn or not.
They are almost certainly already multi-millionaires (not counting illiquid startup equity) just from private placements, signing bonuses and banking very high salaries+bonuses for several years. They may not emerge from the wreckage with hundreds of millions in personal net worth, but the chances are very good they'll be well into the tens of millions.
Yes corporations need those numbers, but those few humans are way more valuable than any numbers out there.
Of course, only when others believe that they are in the frontier too.
Secrecy is also possible, and I'm sure there's a whole lot of that.
I’m personally not aware of a strong correlation with real business value created after the initial boost phase. But surely there must be examples.
Do you think OpenAI could project their revenue in 2022, before ChatGPT came out?
Oriol Vinyals VP of Gemini research
https://x.com/OriolVinyalsML/status/1990854455802343680?t=oC...
And hasn't Ilya been on the cutting edge for a while now?
I mean, just a few hours earlier there was a dupe of this article with almost no interest at all, and now look at it :)
This was my feelings way back then when it comes to major electronics purchases:
Sometimes you grow to utilize the enhanced capabilities to a greater extent than others, and time frame can be the major consideration. Also maybe it's just a faster processor you need for your own work, or OTOH a hundred new PC's for an office building, and that's just computing examples.
Usually, the owner will not even explore all of the advantages of the new hardware as long as the purchase is barely justified by the original need. The faster-moving situations are the ones where fewest of the available new possibilities have a chance to be experimented with. IOW the hardware gets replaced before anybody actually learns how to get the most out of it in any way that was not foreseen before purchase.
Talk about scaling, there is real massive momentum when it's literally tonnes of electronics.
Like some people who can often buy a new car without ever utilizing all of the features of their previous car, and others who will take the time to learn about the new internals each time so they make the most of the vehicle while they do have it. Either way is very popular, and the hardware is engineered so both are satisfying. But only one is "research".
So whether you're just getting a new home entertainment center that's your most powerful yet, or kilos of additional PC's that would theoretically allow you to do more of what you are already doing (if nothing else), it's easy for anybody to purchase more than they will be able to technically master or even fully deploy sometimes.
Anybody know the feeling?
The root problem can be that the purchasing gets too far ahead of the research needed to make the most of the purchase :\
And if the time & effort that can be put in is at a premium, there will be more waste than necessary and it will be many times more costly. Plus if borrowed money is involved, you could end up with debts that are not just technical.
Scale a little too far, and you've got some research to catch up on :)
Somehow I think AI researchers, despite being vastly overpaid, will turn out to be deeply inadequate for the task. As they have been during the last few AI winters.
Given that building Safe Superintelligence is extraordinarily difficult — and no single person’s ideas or talents could ever be enough — how does secrecy serve that goal?
Situations like that do not increase all participants' level of caution.