Of course, each relevant newspaper in those areas highlights that it's coming to their place, but it really seems to be distributed.
Might be to be close to some of Yann's collaborators, like Xavier Bresson at NUS.
Almost certainly the IP will be held in Singapore for tax reasons.
Europe in general has been tightening up its rules, taxes, and laws around startups and companies, especially tech and remote ones.
It's been less friendly these days.
> He is the Jacob T. Schwartz Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. He served as Chief AI Scientist at Meta Platforms before leaving to work on his own startup company.
That entire sentence before the remark about his service at Meta could have been axed; it's weird to me when people compare themselves to someone else who is well known. It's the most Kanye West thing you can do. Mind you, the more I read about him, the more I discovered he is in fact egotistical. Good luck having a serious engineering team with someone who is egotistical.
This is just the official name of a chair at NYU. I'm not even sure Jacob T. Schwartz is better known than Yann LeCun.
Either you have not read enough Wikipedia pages, or you have too much to complain about. (Or both.)
There are a lot more degrees of freedom in world models.
LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions. A well-funded and well-run startup building physical world models (grounded in spatiotemporal understanding, not just language patterns) would be attacking what I see as the actual bottleneck to AGI. Even if they succeed only partially, they may unlock the kind of generalization and creative spark that current LLMs structurally can't reach.
Even with continuous backpropagation and "learning" that enriches the training data, so-called online learning, the limitations will not disappear. The LLMs will not be able to conclude things about the world based on fact and deduction. They only consider what is likely given their training data. They will not foresee or anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing that, while LLMs structurally are not.
The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever reach it. So far they can often merely "fake it" when things are statistically common in their training data.
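To make the structural point concrete, here's a toy caricature (a bigram counter standing in for the model; real LLMs smooth and generalize far better, but the shape of the limitation is the same): anything with zero support in the training data gets zero probability, however inevitable it is in the real world.

    from collections import Counter, defaultdict

    # Toy stand-in for "predict what is likely given the training data".
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def p_next(prev, word):
        total = sum(counts[prev].values())
        return counts[prev][word] / total if total else 0.0

    print(p_next("the", "cat"))   # 0.25 -- seen in training, so predictable
    print(p_next("the", "moon"))  # 0.0  -- never observed, so never predicted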
That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no architecture learns in such an unsupervised way.
https://www.sciencedirect.com/science/article/pii/S089662732...
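For anyone who hasn't read it: the result is usually framed in free-energy terms, roughly "act so that your sensory inputs stay predictable". A toy caricature of that idea (my sketch, not the paper's method):

    import random

    random.seed(0)
    prediction = 0.5              # the agent's fixed expectation of its input
    pred_err = [1.0, 1.0]         # running surprise estimate per action

    def stimulus(action):
        # action 0 -> predictable input, action 1 -> noisy input
        return 0.5 if action == 0 else random.random()

    for step in range(2000):
        # prefer whichever action is currently expected to be least surprising
        action = 0 if pred_err[0] <= pred_err[1] else 1
        if random.random() < 0.1:     # occasional exploration
            action = 1 - action
        s = stimulus(action)
        pred_err[action] = 0.9 * pred_err[action] + 0.1 * abs(s - prediction)

    print(pred_err)  # the agent settles on action 0, the predictable world

No labels and no external reward; the only "objective" is keeping inputs predictable. Whether that still counts as a loss function is, I suppose, the crux of the disagreement.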
Can you be a bit more specific about those bounds? Maybe via an example?
So my question is: when is there enough training data that you can handle 99.99% of the world?
While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than a fundamental thing: the models can be continuously updated already, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation from everything else.
* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.
As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.
I think this is true to some extent: we like our tools to be predictable. But we’ve already made one jump by going from deterministic programs to stochastic models. I am sure that the moment a self-evolving AI shows up that clears the "useful enough" threshold, we’ll make that jump as well.
I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.
It also calls this Einstein an AGI rather than a disabled human???
But one might say that the brain is not lossless ... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective.
Using the term autoregressive models instead might help.
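For what it's worth, "autoregressive" just names the factorization these models sample from:

    p(x_1, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1})

Each token is predicted only from the tokens before it; the term describes the sampling procedure without smuggling in any claim about understanding.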
I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something.
Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts.
You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible.
The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries.
You can learn a lot by playing with Suno.
Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!)
A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions.
You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense...
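For concreteness, here's a minimal sketch of the latent-prediction flavour of world model LeCun has been advocating publicly (my paraphrase of the JEPA idea, not his code; shapes and names are invented):

    import torch
    import torch.nn as nn

    # Encode two consecutive frames and learn to predict the *embedding*
    # of the next frame, so the loss lives in representation space rather
    # than in pixel or token space.
    enc = nn.Sequential(nn.Flatten(), nn.Linear(64, 32))   # shared encoder
    predictor = nn.Linear(32, 32)                          # latent predictor
    opt = torch.optim.Adam([*enc.parameters(), *predictor.parameters()], lr=1e-3)

    frames = torch.randn(16, 2, 8, 8)   # dummy (batch, time, H, W) clips
    z_now = enc(frames[:, 0])           # embedding of frame t
    with torch.no_grad():               # stop-gradient on the target side
        z_next = enc(frames[:, 1])      # embedding of frame t+1
    loss = ((predictor(z_now) - z_next) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

Real systems add masking, EMA target encoders, and so on to avoid representation collapse, but the core bet is that predicting in latent space is what forces causal/spatial/temporal structure in, rather than surface statistics.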
So when we think about capturing any underlying structure of reality itself, we are constrained by the tools at hand.
The capability of the tool forms the description which grants the level of understanding.
> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”
The biggest thing that's missing is actual feedback on their decisions. They have no "idea" of that, because transformers and embeddings don't model it yet. And language descriptions and image representations of feedback aren't enough; they are too disjointed. It needs more.
World models and vision seem like a great use case for robotics, which I can imagine being the main driver of AMI.
There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources at Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely hope we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposed would work.
That's true for 99% of scientists, but dismissing their opinion because they haven't done world-shattering, groundbreaking research is probably not the way to go.
> I sincerely hope we will see more competition
I really hope we don't; science isn't markets.
> Understanding the world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
LeCun introduced backprop for deep learning back in 1989.
Hinton published contrastive divergence in 2002.
AlexNet was 2012.
Word2vec was 2013.
Seq2seq was 2014.
AiAYN ("Attention Is All You Need") was 2017.
UnicornAI was 2019.
InstructGPT was 2022.
This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. The sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
That article is from June 2025 so may be out of date, and the definition of "seed round" is a bit fuzzy.
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
Recently all papers are about LLMs; it brings on fatigue.
As GPT is almost reaching its limit, a new architecture could bring new discoveries.
Europe again missing out, until AMI reaches a much higher valuation with an obvious use case in robotics.
Either AMI reaches a $100B+ valuation (likely), or it becomes another Thinking Machines Lab, with investors questioning its valuation (very unlikely, since world models have a use case in vision and robotics).
I can't read the article, but with American investors investing in European companies, isn't the US the one missing out here? Or does "Europe" "win" when European investors invest in US companies? How does that work in your head?
Academics don’t always make great entrepreneurs.
As the other commenter pointed out, this is a $1B seed.