Interview with Yann LeCun on AI
105 points by singingwolfboy 5 days ago | 14 comments | wsj.com
benlivengood
5 days ago
[-]
I think I would put more credence in Yann LeCun's predictions if he had predicted the emergence of few-shot learning and chain-of-thought reasoning as a function of model size/data before they happened.

In part, he's arguing that LLMs are not the most efficient path toward intelligence and that some other design will do better, which is probably true, but no one has pushed model size for the (somewhat ironically named) Gato-style multimodal embodied transformers that I think would result in something closer to cat intelligence.

I am reasonably certain that further step changes will happen as LLM model and/or data sizes increase. Right now we're achieving a lot of SOTA performance with somewhat smaller models and multimodal pretraining, but not putting those same methods to work in training even larger models on more data.

reply
twobitshifter
5 days ago
[-]
The recent advance of reasoning in o1-preview seems not to have been widely understood by the media, or even by LeCun in this case. o1-preview represents a new training paradigm: reinforcement learning on the reasoning steps that lead to the solution. This allows reasoning to be developed, just like AlphaZero is able to ‘reason’ and come up with unique solutions. The reinforcement learning in o1-preview means that the ‘repeating facts learned’ arguments no longer apply. Instead, the AI is free to come up with its own reasoning steps that lead to correct answers, and these reasoning steps are refined over time. It can continue to train and get better by repeatedly answering the same questions, the same way AlphaZero can play the same game multiple times.
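
To make the idea concrete, here is a toy sketch (Python, and emphatically not OpenAI's actual training code) of outcome-rewarded reinforcement over reasoning choices: sample a way of solving the problem, score only whether the final answer is correct, and reinforce what worked. The two canned "strategies" and the toy questions are made up for illustration.

    import random

    # Toy questions: single-digit additions with known answers.
    QUESTIONS = [(a, b, a + b) for a in range(1, 10) for b in range(1, 10)]

    # Two candidate "reasoning" procedures; one is systematically wrong.
    STRATEGIES = {
        "add_digits": lambda a, b: a + b,
        "concat_digits": lambda a, b: int(f"{a}{b}"),
    }

    weights = {name: 1.0 for name in STRATEGIES}  # crude policy over strategies

    for _ in range(2000):
        a, b, target = random.choice(QUESTIONS)
        # Sample a strategy in proportion to its current weight.
        name = random.choices(list(weights), weights=list(weights.values()))[0]
        answer = STRATEGIES[name](a, b)
        reward = 1.0 if answer == target else 0.0  # only the outcome is scored
        weights[name] += 0.1 * reward              # reinforce successful reasoning

    print(weights)  # "add_digits" ends up dominating

The same questions can be replayed indefinitely, and the policy keeps improving for as long as correct answers can be checked, which is the property being pointed at here.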
reply
Reubend
5 days ago
[-]
I don't think that's correct. As I understand it, o1 can use reinforcement learning to optimize its chain of thought, but each individual "thought" is still subject to the performance of the LLM.

Therefore, while it can generate new strategies to approach a problem, the implementation of each step within the strategy is limited by a probabilistic approach.

Contrast that with AlphaZero, which can come up with strategies that are 100% unique (in theory), since it isn't constrained by any human training.

I think o1 is a step forward, but not a massive leap in technology.

reply
namaria
5 days ago
[-]
I recently wrote a small essay on my field of expertise and asked ChatGPT o1 about it. It consistently misunderstood my arguments and constructed correct-sounding but subtly wrong interpretations and explanations.

I wouldn't trust it on any subject I haven't mastered, so it feels kind of pointless to use it at all.

reply
david-gpu
5 days ago
[-]
I don't think it is comparable to AlphaZero, because in the latter case there is an easily verifiable endpoint: whether it won the match or not. With o1 they still rely on human-curated "correct" answers; the system is not able to decide by itself what a correct answer is, the way AlphaZero does.

Thus, while it leads to improved reasoning vs prior art, it is not yet bootstrapping. It may be useful in constrained fields like coming up with proofs of theorems, though.

reply
maskil
5 days ago
[-]
You will still need clear benchmarks as the reward for RL. With chess the rules are simple, but you may not have a clear loss function for a complicated architectural challenge.
reply
razodactyl
5 days ago
[-]
He is highly knowledgeable in his field.

He's very abrasive in his conduct but don't mistake it for incompetence.

Even the "AI can't do video" thing was blown out and misquoted because discrediting people and causing controversy fuels more engagement.

He actually said something along the lines of it "not being able to do it properly" / everything he argues is valid from a scientific perspective.

The joint embeddings work he keeps professing has merit.

---

I think the real problem is one of perspective: from a consumer's perspective, if the model can answer all their questions it must be intelligent; from a scientist's perspective, it can't do that for the set of all consumers, so it's not intelligent.

So we end up with a dual perspective where both are correct due to technical miscommunication and misunderstanding.

reply
kergonath
5 days ago
[-]
> He is highly knowledgeable in his field

Indeed. It seems to me that he has a type of personality common in skilled engineers. He is competent and makes informed decisions, but does not necessarily explain them well (or at all if they feel trivial enough), is certain of his logic (which often sounds like arrogance), and does not seem to have much patience.

He is great at what he does but he really is not a spokesman. His technical insights are often very interesting, though.

reply
kordlessagain
4 days ago
[-]
Technical gatekeeping is a thing, and definitely a problem at times.

This is particularly problematic when dealing with concepts that aren't fully understood, even by their originators, as these might contain valuable insights despite their apparent strangeness. The risk of losing out on potentially groundbreaking perspectives must be weighed against the importance of scientific integrity. To address this, the technical community could benefit from fostering more open dialogue, encouraging interdisciplinary collaboration, and creating spaces where speculative ideas can be explored without immediate judgment.

Usually doesn't happen here.

reply
ks2048
5 days ago
[-]
It seems the problem with this "debate" is that intelligence is very clearly not a scalar value - where you can look at a number line and put a dot where cats are, where humans are, where AIs are, etc.

So, people just talk past each other, with everyone using a different method for collapsing a complex trait like "intelligence" down to a scalar for easy comparison.

reply
echelon
5 days ago
[-]
We don't understand intelligence.

But we do understand vision and hearing. We have for over 50 years. We've implemented them classically and described them with physics. Game engines, graphics pipelines, synthesizers, codecs, compression, digital image processing, ... the field is vast and productive.

Our mastery over signals is why I'm so bullish on diffusion and AI for images, video, and audio, regardless of whatever happens with LLMs.

And if this tech cycle only improves our audio-visual experience and makes games and film more accessible, it'll still be revolutionary and a step function improvement over what came before.

reply
deepfriedchokes
5 days ago
[-]
Judging fish by their ability to climb trees.
reply
OKRainbowKid
5 days ago
[-]
That's a good analogy for when people use the task of counting "r"s in "strawberry" to make a point that LLMs are useless/dumb.
reply
yunohn
4 days ago
[-]
I mean, OpenAI made this the defining purpose of their o1 “reasoning” model, and teased it for weeks with a strawberry across Twitter.
reply
tikkun
5 days ago
[-]
It's frustrating how many disagreements come down to framings rather than actual substance.

His framing of intelligence is one thing. The people who disagree with him are framing intelligence a different way.

End of story.

I wish that all the energy went towards substantive disagreements rather than disagreements that are mostly (not entirely) rooted in semantics and definitions.

reply
jcranmer
5 days ago
[-]
That's not what he's saying at all, though.

What he's saying is that he thinks the current techniques for AI (e.g., LLMs) are near the limits of what you can achieve with such techniques and are thus a dead-end for future research; also consequently, hyperventilation about AI superintelligence and the like is extremely irresponsible. It's actually a substantial critique of AI today in its actual details, albeit one modulated by popular press reporting that's dumbing it down for popular consumption.

reply
aithrowawaycomm
5 days ago
[-]
His point is that lots of AI folks are framing intelligence incorrectly by overvaluing surface knowledge or ability to be trained to solve constrained problems, when cats have deeper cognitive abilities like planning and rich spatial reasoning which are far beyond the reach of any AI in 2024.

ANNs are extremely useful tools because they can process all sorts of information humans find useful: unlike animals or humans, ANNs don't have their own will, don't get bored or frustrated, and can focus on whatever you point them at. But in terms of core cognitive abilities - not surface knowledge, not impressive tricks, and certainly not LLM benchmarks - it is hard to say ANNs are smarter than a spider. (In fact they seem dumber than jumping spiders, which are able to form novel navigational plans in completely unfamiliar manmade environments. Even web-spinning spiders have no trouble spinning their webs in cluttered garages or pantries; would a transformer ANN be able to do that if it was trained on bushes and trees?)

reply
Yacovlewis
5 days ago
[-]
From my own experience trying to build an intelligent digital-twin startup based on the breakthrough in LLMs, I agree with LeCun that LLMs are actually quite far from demonstrating the intelligence of house cats, and I myself likely jumped the gun by trying to emulate intelligent humans with the current stage of AI.

His AI predictions remind me of Prof. Rodney Brooks (MIT, Roomba) and his similarly cautious timelines for AI development. Brooks has a very strong track record over decades of being pretty accurate with his timelines.

reply
steveBK123
5 days ago
[-]
I would suspect any possible future AGI-like progress would be some sort of ensemble. LLMs may be a piece of the puzzle, but they aren't a single-model path to AGI.
reply
blackeyeblitzar
5 days ago
[-]
I feel like LeCun does the same interview and the same presentation over and over. He's obsessed with the cat analogy and the notion that JEPA will succeed the transformer-based ‘traditional’ LLM architecture. Maybe that is true, but I feel like he has too absolute a view on these topics. Sometimes I feel like I am listening to a politician or marketer rather than someone making forward progress in the field.
reply
mrandish
5 days ago
[-]
While I'm no expert on AI, everything I've read from LeCun on AI risk so far strikes me as directionally correct. I keep revisiting the best examples I can find of the 'Foom' hypothesis and it just doesn't seem likely. Not to say that AI won't be both very useful and disruptive, just that existential fears like Skynet scenarios don't strike me as plausible.
reply
habitue
5 days ago
[-]
Statements like "it doesn't seem plausible" or "it doesn't seem likely" aren't the strongest arguments. How things seem to us is based on what we've seen happen before. None of us has witnessed humanity replace itself with something that we don't understand before.

Our intuition isn't a good guide here. Intuitions are honed through repeated exposure and feedback, and we clearly don't have that in this domain.

Even though it doesn't feel dangerous, we can navigate this by reasoning through it. We understand that intelligence trumps brawn (e.g. humans don't out-claw a tiger; we master it with intelligence). We understand that advances in AI have been very rapid, and that even though current AI doesn't feel dangerous, current AI turns into much more advanced future AI very quickly. And we understand that we don't really understand how these things work. We "control them safely" through mechanisms similar to how evolution controls us: through the objective function. That shouldn't fill us with confidence, because we find loopholes in evolution's objective function left and right: contraception, hyper-palatable foods, TikTok, etc.

All these lines of evidence converge on the conclusion that what we're building is dangerous to us.

reply
dragonwriter
5 days ago
[-]
> Statements like "it doesn't seem plausible" or "it doesn't seem likely" aren't the strongest arguments.

They are the strongest statements that anyone can justifiably make about technology aiming to produce intelligence, since it is speculation about how what does not yet exist will do at achieving something that is ill-defined, and where what is clearly within that fuzzy definition is not well understood.

And it is a fortiori the strongest that can be said of things downstream of that, like dangers that are at least in part contingent on the degree of success in achieving "intelligence".

reply
mrandish
5 days ago
[-]
> ... "it doesn't seem likely" aren't the strongest arguments.

Since we're talking about the future, it would be incorrect to talk in absolutes so speaking in probabilities and priors is appropriate.

> Our intuition isn't a good guide here.

I'm not just using intuition. I've done as extensive an evaluation of the technology, trends, predictions and, most importantly, history as I'm personally willing to do on this topic. Your post is an excellent summary of basically the precautionary principle approach but, as I'm sure you know, the precautionary principle can be over-applied to justify almost any level of response to almost any conceivable risk. If the argument construes the risk as probably existential, then almost any degree of draconian response could be justified. Hence my caution when the precautionary principle is invoked to argue for disruptive levels of response (and to be clear, you didn't).

So the question really comes down to which scenarios at which level of probability and then what levels of response those bell-curve probabilities justify. Since I put 'foom-like' scenarios at low probability (sub-5%) and truly existential risk at sub-1%, I don't find extreme prevention measures justified due to their significant costs, burdens and disruptions.

At the same time, I'm not arguing we shouldn't pay close attention as the technology develops while expending some reasonable level of resources on researching ways to detect, manage and mitigate possible serious AI risks, if and when they materialize. In particular, I find the current proposed legislative responses to regulate a still-nascent emerging technology to be ill-advised. It's still far too early and at this point I find such proposals by (mostly) grandstanding politicians and bureaucrats more akin to crafting potions to ward off an unseen bogeyman. They're as likely to hurt as to help while imposing substantial costs and burdens either way. I see the current AI giants embracing such proposals as simply them seeing these laws as an opportunity to raise the drawbridge behind themselves since they have the size and funds to comply while new startups don't - and those startups may be the most likely source of whatever 'solutions' we actually need to the problems which have yet to make themselves evident.

reply
nradov
5 days ago
[-]
There is no such evidence. You're just making things up. No one has described a scientifically plausible scenario for actual danger.
reply
dsubburam
5 days ago
[-]
How would you respond to the argument that the burden of proof lies with the other side, i.e, the side that says there is too little AI doom risk to worry about?
reply
emptysongglass
5 days ago
[-]
Anyone can make claims about anything at all in progressively shriller octaves, be it that aliens are among us or that we are ruled by a world government of lizard people. We, the opposition, don't actually need to address those claims, no matter how dire, if they are meritless.

A collective hallucination of what intelligence is is not a good basis for an argument about doom probabilities. We don't have a clue, yet loud people are pretending we do.

It is utterly horrifying that a contingent of Yudkowskyites essentially hijacked reasonable discourse around this subject. I grew up interacting with the LessWrong people: many of them have other problems interfacing with society that make it obvious to them they know what being "less wrong" looks like. The problem is we don't actually know any way to separate the human experience from "pure logic", whatever that actually means.

reply
nradov
5 days ago
[-]
How would you respond to the argument that the burden of proof lies with the other side, i.e, the side that says there is too little alien invasion doom risk to worry about? The whole thing is a farce, not worthy of a serious answer.
reply
slibhb
5 days ago
[-]
Actually, statements like "it does/n't seem plausible" are the only things we can say about AI risk.

People are deluding themselves when they claim they "reason through this" (i.e. objectively). In other words: no one knows what's going to happen; people are just saying what they think.

reply
llamaimperative
5 days ago
[-]
> just that existential fears like Skynet scenarios don't strike me as plausible.

What's the most plausible (even if you find it implausible) disaster scenario you came across in your research? It's a little surprising to see someone who has seriously looked into these ideas describe the bundle of them as "like Skynet."

reply
trescenzi
5 days ago
[-]
I think the risk is much higher with regard to how people use it, and much less that it becomes some sudden super-intelligent monster. AI doesn't have to be rational or intelligent to cause massive amounts of damage; it just has to be put in charge of dangerous enough systems. Or, more perniciously, you give it the power to make health care or employment decisions.

It seems silly to me that the idea of risk is all concentrated around the runaway-intelligence scenario. While that might be possible, there is real risk today in how we use these systems.

reply
mrandish
5 days ago
[-]
I agree with what you've said. Personally, I have no doubt that, like any powerful new technology, AI will be used for all kinds of negative and annoying things as well as beneficial things. This is what I meant by "disruptive" in my GP. However, I also think that society will adapt to address these disruptions just like we have in the past.
reply
nradov
5 days ago
[-]
We have had software making healthcare decisions for decades in areas like ECG pattern analysis, clinical decision support, medication administration, insurance claims processing, etc. Occasionally software defects or usability flaws lead to patient harm but mostly they work pretty well. There's no evidence that using AI to supplement existing deterministic algorithms will make things worse, it's just a lot of uninformed and unscientific fearmongering.
reply
trescenzi
5 days ago
[-]
I’m not trying to say it is a problem, more that it could be a problem. The fear mongering around an AI apocalypse is silly. Concern about bugs in real software making minor but real decisions, however, is warranted. Just as we try to reduce human error, so should we try to reduce algorithmic error.
reply
Elucalidavah
5 days ago
[-]
> it just doesn't seem likely

It is likely conditional on the price of compute dropping the way it has been.

If you can basically simulate a human brain on a $1000 machine, you don't really need to employ any AI researchers.

Of course, there has been some fear that the current models are a year away from FOOMing, but that does seem to be just the hype talking.

reply
mrandish
5 days ago
[-]
> If you can basically simulate a human brain

Based on the evidence I've seen to date, doing this part at the scale of human intelligence (regardless of cost) is highly unlikely to be possible for at least decades.

(A note to clarify: the goal "simulate a human brain" is substantially harder than other goals usually discussed around AI, like "exceed domain-expert human ability on tests measuring problem solving in certain domain(s)".)

reply
threeseed
5 days ago
[-]
If you could simulate a human brain and it required a $100B machine, you would still get funding in a weekend.

Because you could easily find ways to print money e.g. curing types of cancers or inventing a better Ozempic.

But the fact is that there is no path to simulating a human brain.

reply
llamaimperative
5 days ago
[-]
There is no path to it? That's a bold claim. Are brains imbued with special brain-magic that makes them more than, at rock bottom, a bunch of bog-standard chemical and electrical and thermal reactions?

It seems very obviously fundamentally solvable, though I agree it is nowhere in the near future.

reply
aithrowawaycomm
5 days ago
[-]
This seems like a misreading - there's also no real path to resolving P vs. NP or to disentangling the true chemical origins of life. OP didn't say it was impossible. The problem is we don't know very much about intelligence in animals generally, and even less about intelligence in humans. In particular, we know far less about intelligence than we do about computational complexity or early forms of life.
reply
CooCooCaCha
5 days ago
[-]
Those seem like silly analogies. There are billions of brains on the planet, humans can grow them inside themselves (pregnancy). Don’t get me wrong, it’s a hard problem, they just seem like different classes of problems.

I could see P=NP being impossible to prove but I find it hard to believe intelligence is impossible to figure out. Heck if you said it’d take us 100 years I would still think that’s a bit much.

reply
RandomLensman
5 days ago
[-]
We have not even figured out single cell organisms, let alone slightly more complex organisms - why would intelligence be such an easy target?
reply
CooCooCaCha
5 days ago
[-]
I didn’t say easy.
reply
RandomLensman
5 days ago
[-]
Yes, sorry, but it still could be impossibly hard for a long time. A lot of things happen in nature all the time we cannot do and have no (practical) path to doing them.
reply
aithrowawaycomm
5 days ago
[-]
I think it'll take much longer than 100 years. The "limiting factor" here is cognitive science experiments on smart animals like rats and pigeons, and less smart animals like spiders and lampreys, all of which will help us understand what intelligence truly is. These experiments take time and resources.
reply
threeseed
5 days ago
[-]
> Don’t get me wrong, it’s a hard problem, they just seem like different classes of problems

Time travel. Teleportation through quantum entanglement. Intergalactic travel through wormholes.

And don't get me wrong, they are hard. But just another class of problems. Right?

reply
CooCooCaCha
5 days ago
[-]
Yes absolutely. I have a (supposedly) working brain in my head right now. But so far there are no working examples of the things you listed.
reply
bob1029
5 days ago
[-]
> Are brains imbued with special brain-magic that makes them more than, at rock bottom, a bunch of bog-standard chemical and electrical and thermal reactions?

Some have made this argument (quantum effects, external fields, etc.).

If any of these are proven to be true then we are looking at a completely different roadmap.

reply
llamaimperative
5 days ago
[-]
Uh yeah, but we have no evidence for any of them (aside from quantum effects, which are "engineerable" to the extent they exist in brains anyway).
reply
threeseed
5 days ago
[-]
> "engineerable" to the extent they exist in brains anyway

Can you please enlighten us then since you clearly know to what extent quantum effects exist in the brain.

reply
llamaimperative
5 days ago
[-]
I’m saying to whatever extent they occur, they are just quantum interactions. There’s a path to reproducing them with engineering.

It’s odd to say “reproduce quantum interactions” but remember to the extent they exist in the brain, they also behave as finicky/noisy quantum interactions. They’re not special brain quantum things.

reply
Yoric
5 days ago
[-]
On the other hand, we can wipe out our civilization (with or without AI) without needing anything as sophisticated as Skynet.
reply
qarl
5 days ago
[-]
Well... except... cats can't talk.
reply
Barrin92
5 days ago
[-]
And as Marvin Minsky taught us, which is probably one of the most profound insights in the entire field, talking seems like an accomplishment to us because it's the least developed part of our capacities. It's so conscious a task not because it's a sign of intellect but because it's the least developed and most novel thing our brains do, which is why it's also likely the fastest to learn for a machine.

Moving as smoothly as a cat and navigating the world is the part that actually took our brains millions of years to learn, and movement is effortless not because it's easy but because it took so long to master, so it's also going to be the most difficult thing to teach a machine.

The cognitive stuff is the dumb part, and that's why we have chess engines, pocket calculators and chatbots before we have emotional machines, artificial plumbers and robots that move like spiders.

reply
qarl
5 days ago
[-]
I'm not sure that's the right way to look at it.

Ten years ago, it was common to hear the argument: "Are cats intelligent? No, they can't speak." Language was seen as the pinnacle of the mind. Lately that's been flipped on its head, but only because machines have gotten so good at it.

I think the real reason we don't have robots that move like spiders is that robots don't have muscles, and motors are a very poor approximation.

reply
Barrin92
5 days ago
[-]
>I think the real reason we don't have robots that move like spiders is that robots don't have muscles

If that were the real reason, we'd have self-driving cars deserving of that label before we had ChatGPT. In the world of atoms we still struggle with machines that are orders of magnitude less sophisticated than even a worm. No, it's because being embedded in the world is much more like being a cat and much less like being an LLM or a chess computer.

Making something that's like a curious five year old who can't do a lot of impressive things and has no market value but who is probably closer to genuine intelligence is going to be much harder than making a search engine for the internet's latent space.

reply
qarl
5 days ago
[-]
Cats and spiders are good at navigating physical space - but aren't really known for intelligence. And they're terrible drivers.

I'll grant you that LLMs are terrible at what I'd call "animal intelligence" - but I'm not so sure that animal intelligence is what is needed to, say, discover the laws of the universe. Solving mathematical problems is much more like playing chess than driving a car.

reply
aithrowawaycomm
5 days ago
[-]
I believe my cats sometimes get frustrated with the limitations of their own vocalizations and try to work around them when communicating with me. If, say, they want a treat, they are only able to meow and perform "whiny" body motions, and maybe I'll give them pets or throw a ball instead. So they have adapted a bit:

- both of them will spit regular kibble out in front of me when they want a fancier treat (cats are hilarious)

- the boy cat has developed very specific "sweet meows" (soft, high-pitched) for affection and "needy meows" (loud, full-chested) for toys or food; for the first few years he would simply amp up the volume and then give a frustrated growl when I did the wrong thing

- the lady cat (who only has two volumes, "yell" and "scream"), instead stands near what she wants before meowing; bedroom for cuddles, water bowl for treats, hallway or office for toys

- the lady cat was sick a while back and had painful poops; for weeks afterwards if she wanted attention and I was busy she would pretend to poop and pretend to be in pain, manipulating me into dropping my work and checking on her

It goes both ways, I've developed ways of communicating with them over the years:

- the lady is skittish but loves laying in bed with me, so I sing "gotta get up, little pup" in a particular way; she will then get up and give me space to leave the bed, without me scaring her with a sudden movement

- I don't lose my patience with them often, but they understand my anxious/exasperated tone of voice and don't push their luck too much (note that some of this is probably shared mammalian instinct)

- the boy sometimes bullies the lady, and I'll raise my voice at him; despite being otherwise skittish and scared of loud noises, the lady seems to understand that I am mad at the boy because of his actions and there's nothing to be alarmed by

Sometimes I think the focus on "context-free" (or at least context-lite) symbolic language, essentially unique to humans, makes us lose focus on the fact that communication is far older than the dinosaurs, and that maybe further progress on language AI should focus on communication itself, rather than symbol processing with communication as a side effect.

reply
CamperBob2
5 days ago
[-]
Or write code. Or write songs. Or create paintings. Or write essays.

The whole comparison is stupid, and inexplicable at LeCun's level of play. AI is not a model of a human brain, or a cat brain, or a parrot brain, or any other kind of brain. It's something else, something that did not exist in any form just a few years ago.

reply
tkgally
5 days ago
[-]
What is increasingly making sense to me is the description of current AI as an alien intelligence—something potentially powerful but fundamentally different from ours. Viewed that way, LeCun's use of humans—or cats—as the baseline seems misguided. Yes, there are things biological intelligence can do that artificial intelligence cannot, but there are also things AI can do better than us. And the danger might be that, because of their speed, replicability, networkability, and other capabilities that exceed those of biology, AI systems can be intelligent in ways that we have trouble imagining.
reply
jononor
5 days ago
[-]
The flip side is that AI systems can also be stupid in ways we have trouble imagining. Which we are seeing play out with LLMs: while able to make plausible and persuasive sentences, they also confidently make many mistakes that humans are very unlikely to make.

We must at the very least resist trying to compare human with artificial intelligence on general, dimensional measures. It does not make any sense, because the nature of the two are more different than alike.

reply
CamperBob2
5 days ago
[-]
Valid point there, for sure. People are so busy arguing over when and whether we'll build something with human-level intelligence, they aren't stopping to ask if something even better is possible.
reply
hyperG
5 days ago
[-]
Animals fly by flapping their wings, hence an airplane is not really flying. It can't even land safely in a tree!
reply
CamperBob2
5 days ago
[-]
Exactly. It's pointless to argue over whether an aircraft is really capable of flight, when small drones are already capable of outmaneuvering most birds on the birds' own turf.
reply
Scrapemist
5 days ago
[-]
Maybe AI can translate
reply
qarl
5 days ago
[-]
HEH... reminds me of an argument Searle once made: with the right "translation" you can make a wall intelligent.
reply
arisAlexis
5 days ago
[-]
Are cats still smarter than o1, or did he do a somersault?
reply
jimjimjim
5 days ago
[-]
LLMs are great! They are just not what I would call Intelligence
reply
miohtama
5 days ago
[-]
If you don't call it intelligence, you miss the enormous political and social opportunity to go down in history as the pioneer of AI regulation (:
reply
mmoustafa
5 days ago
[-]
It’s really hard for me to believe Yann is engaging sincerely, he is downplaying LLM abilities on purpose.

He leads AI at Meta, a company whose competitive strategy is to commoditize AI via Open Source models. Their biggest hindrance would be regulation putting a stop to the proliferation of capabilities, so they have to understate the power of the models. This is the only way Meta can continue sucking steam out of the leading labs.

reply
lyu07282
5 days ago
[-]
You don't have to assume malice; he is a strong believer in liberalism, so naturally he would argue for whatever leads to less regulation. Even if he thought AI was dangerous, he would still believe that corporations are better suited to combat that threat than any government.

It's similar to how the WSJ journalist would never ask him what he thinks about the larger effects of the "deindustrialization" of knowledge-based jobs caused by AI. Not because the journalist is malicious; it's just the shared, subconscious ideology.

People don't need a reason to protect capital interests, even poor people on the very bottom will protect it.

reply
mmoustafa
5 days ago
[-]
Yes, you can be a liberal and anti-regulation while also believing the capabilities are ... what they are. I believe many of Yann's statements are straw-manning or, sometimes, simply incorrect downplaying of capabilities.
reply
muglug
5 days ago
[-]
You’re free to concoct a conspiracy that he’s just a puppet for Meta’s supposed business interests*, but that doesn’t change the validity of his claims.

* pretty sure any revenue from commercial Llama licenses is a rounding error at best

reply
threeseed
5 days ago
[-]
> commoditize AI via Open Source models

Sounds like we should be fully supporting them then.

reply
mmoustafa
5 days ago
[-]
Yes!
reply
krig
5 days ago
[-]
(reacting to the title alone since the article is paywalled)

AI can’t push a houseplant off a shelf, so there’s that.

Talking about intelligence as a completely disembodied concept seems meaningless. What does "cat" even mean when comparing to something that doesn't have a physical, corporeal presence in time and space? To compare like this seems to me like making a fundamental category error.

edit: Quoting, “You’re going to have to pardon my French, but that’s complete B.S.”

I guess I’m just agreeing with LeCun here.

reply
jcranmer
5 days ago
[-]
It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.
reply
qarl
5 days ago
[-]
> It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.

I don't understand this criticism at all. If I go over to ChatGPT and say "From the perspective of a cat, create a multistage plan to push a houseplant off a shelf" it will satisfy my request perfectly.

reply
staticman2
3 days ago
[-]
ChatGPT only decides what to write one token at a time.
reply
qarl
3 days ago
[-]
Hmmm... You didn't really explain yourself - I'm not sure I understand your point.

But guessing at what you mean - when I evaluate ChatGPT, I include all the trivial add-ons. For example, AutoGPT will create a plan like this and then execute the plan one step at a time.

I think it would be silly to evaluate ChatGPT solely as a single execution endpoint.

reply
staticman2
3 days ago
[-]
As I understand it, a model simply predicts the next word, one word at a time. (The next token, actually, but for discussion's sake we might pretend a token is identical to a word.)

The model does not "plan" anything, it has no idea how a sentence will end when it starts it as it only considers what word comes next- then what word after that- then what word after that. It discovers the sentence is over when the next token turns out to be a period. It discovers it's finished it's assignment when the next token turns out to be a stop token.

So one could say the model provides the illusion of planning, but is never really planning anything other than what the next word to write is.
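
For what it's worth, the loop being described is roughly the following - a deliberately tiny sketch with a made-up bigram table standing in for the LLM, not anything resembling a real model:

    import random

    # Toy "model": a bigram table standing in for an LLM. Given only the current
    # token, it picks the next one; nothing beyond that is ever planned.
    BIGRAMS = {
        "<s>": ["the"],
        "the": ["cat", "plan"],
        "cat": ["sat"],
        "plan": ["<stop>"],
        "sat": ["<stop>"],
    }

    def generate(max_len=10):
        tokens = ["<s>"]
        while len(tokens) < max_len:
            next_token = random.choice(BIGRAMS[tokens[-1]])  # one token at a time
            if next_token == "<stop>":  # generation "discovers" it is finished
                break
            tokens.append(next_token)
        return tokens[1:]

    print(" ".join(generate()))

Real LLMs condition on the whole context and are vastly more capable, but the control flow is the same: the only decision ever made is which token comes next.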

reply
qarl
2 days ago
[-]
HEH.

Ok. Suppose I create the illusion of a calculator. I type in 5, then plus, then 5. And it gives me the illusional answer of 10.

What's the difference?

reply
staticman2
2 days ago
[-]
To use your metaphor the difference is we are discussing whether someone can invent a calculator that does new and exciting things using current calculator technology.

You can't say "who cares if it's an illusion it works for me" when the topic is whether an attempt to build a better one will work for the stated goal.

reply
qarl
2 days ago
[-]
Way back at the top I explained that ChatGPT can indeed create a multistage plan. I encourage you to try it for yourself.

Otherwise, I think we should go our separate ways. You take care now.

reply
staticman2
2 days ago
[-]
I think I've explained several times that it can't plan, as it only does one word at a time. Your so-called multistage plan document is not planned out in advance; it is generated one word at a time, with no plan.

If you don't care about the technical aspects, why ask in the first place what Yann LeCun meant?

reply
qarl
2 days ago
[-]
Friend, it gave the answer 10, just like the illusionary calculator. I'm sorry you don't like how it got the answer.

You take care now.

reply
staticman2
1 day ago
[-]
"Now I know you said it can't plan, but what if we all agree to call what it does planning? That would be very exciting for me. I can produce a parable about a calculator if that would help. LeCun says it has limitations, but what if all agree to call it omniscient and omnipotent? That would also be very exciting for me."
reply
qarl
1 day ago
[-]
Look man, 5 + 5 = 10, even if it's implemented by monkeys on typewriters.

This argument we're having is a version of the Chinese Room. I've never found Searle's argument persuasive, and I truly have no interest arguing it with you.

This is the last time I will respond to you. I hope you have a nice day.

reply
staticman2
1 day ago
[-]
I don't think we're having an argument about the Chinese room, because as far as I know LeCun does not argue AI can't have "a mind, understanding or consciousness". Nor have I; I simply talked about how LLMs work, as I understand them.

There's a lot of confusion about these technologies, because tech enthusiasts like to exaggerate the state of the art's capabilities. You seem to be arguing "we must turn to philosophy to show ChatGPT is smarter than it would seem", which is not terribly convincing.

reply
qarl
1 day ago
[-]
No.

Take care now.

reply
krig
5 days ago
[-]
Thanks, that makes more sense than the title. :)
reply
dang
5 days ago
[-]
We replaced the baity title with something suitably bland.

If there's a representative phrase from the article itself that's neutral enough, we could use that instead.

reply
sebastiennight
5 days ago
[-]
Out of curiosity, would you say a person with locked-in syndrome[0] is no longer intelligent?

[0]: https://en.wikipedia.org/wiki/Locked-in_syndrome

reply
krig
5 days ago
[-]
I don’t think ”intelligent” is a particularly meaningful concept, and just leads to such confusion as your comment hints at. Do I think a person with locked-in syndrome is still a human being with thoughts, desires and needs? Yes. Do I think we can rank intelligences along an axis where a locked-in person somehow rates lower than a healthy person but higher than a cat? I don’t think so. A cat is very good at being a cat, much better than any human is.
reply
krig
5 days ago
[-]
I would also point out that a person with locked-in syndrome still has ”a physical corporeal presence in time and space”, they have carers, relationships, families, histories and lives beyond themselves that are inextricably tied to them as an intelligent being.
reply