A proper world model like JEPA should be predicting in latent space, where the representation of what is going on is highly abstract.
Video generation models, by definition, predict in noise or pixel space (or in latent noise, if the diffuser is diffusing in a variational autoencoder's latent space).
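To make the contrast concrete, here's a toy PyTorch sketch of the two objectives; every module and name here is illustrative, not any lab's actual code:

    import torch
    import torch.nn as nn

    # Toy stand-ins for real networks; shapes chosen only for illustration.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256))
    target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256))
    predictor = nn.Linear(256, 256)
    pixel_decoder = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Unflatten(1, (3, 64, 64)))

    frame_t = torch.randn(8, 3, 64, 64)   # current frame (batch of 8)
    frame_t1 = torch.randn(8, 3, 64, 64)  # next frame (prediction target)

    # Generative / pixel-space objective: reconstruct every pixel of the next frame.
    pixel_loss = nn.functional.mse_loss(pixel_decoder(encoder(frame_t)), frame_t1)

    # JEPA-style objective: predict only the *embedding* of the next frame,
    # so irrelevant pixel-level detail never has to be modeled at all.
    with torch.no_grad():  # the target branch isn't backpropagated through
        target = target_encoder(frame_t1)
    latent_loss = nn.functional.mse_loss(predictor(encoder(frame_t)), target)

The JEPA branch pays no penalty for ignoring pixel-level texture, which is the whole point of predicting in an abstract space.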
It seems like what this lab is doing is quite vanilla, and I'm wondering if they are doing any research in less demo-sexy joint-embedding predictive spaces.
There was a recent paper, LeJEPA, from LeCun and a postdoc, that actually fixes many of the distribution-collapse issues with the JEPA embedding models I just mentioned.
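LeJEPA's actual mechanism (SIGReg) is its own thing, but to show the shape of the collapse problem: without some regularizer, the trivial solution is for every input to map to the same embedding, which makes the prediction loss zero. A VICReg-style variance/covariance penalty is one classic way to rule that out (a toy sketch, assumptions mine, not the paper's method):

    import torch

    def anti_collapse_penalty(z: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
        """Keep per-dimension variance up and cross-dimension covariance down,
        so embeddings can't collapse to a point or a low-dimensional subspace."""
        z = z - z.mean(dim=0)
        std = torch.sqrt(z.var(dim=0) + eps)
        variance_term = torch.relu(1.0 - std).mean()  # hinge: each dim should keep std >= 1
        cov = (z.T @ z) / (z.shape[0] - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        covariance_term = (off_diag ** 2).sum() / z.shape[1]
        return variance_term + covariance_term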
I'm waiting on the startup or research group that gives us an unsexy world model: one that, instead of giving us 1080p video of supermodels camping, gives us a slideshow of something a six-year-old would draw. That would be a more convincing demonstration of an effective world model.
I don't see that this follows "by definition" at all.
Just because your output is pixel values doesn't mean your internal world model is in pixel space.
Visually, they are stunning. But it's nowhere near physical. I mean, look at that video with the girl and the lion. The tail teleports between the legs and then becomes attached to the girl instead of the lion.
Just because the visuals are high quality doesn't mean it's a world model or has learned physics. I feel like we're conflating these things. I'm much happier to call something a world model if its visual quality is dogshit but it is consistent with its world. And I say /its/ world because it doesn't need to be consistent with ours.
So, you need to say more. Or at least give me some reason to believe you rather than state something as an objective truth and "just trust me". In the long response to a sibling I state more precisely why I have never bought this common conjecture. Because that's what it is, conjecture.
So give me at least some reason to believe you, because you have neither logos nor ethos. Your answer is in the form of ethos, but without the requisite credibility.
The input images are stunning; the model's result is another disappointing trip to the uncanny valley. But we feel OK as long as the sequence doesn't horribly contradict the original image or sound. That is the world model.
> But we feel OK as long as the sequence doesn't horribly contradict the original image or sound.

Is the error I pointed out not "horribly contradicting"?

> That is the world model.
I would say that if it is non-physical[0] then it's hard to call it a /world/ model. A world is consistent and has a set of rules that must be followed. I've yet to see a claimed world model that actually captures this behavior. Yet it's something every game engine[1] gets right. We'd call it a bad physics engine if it made the same mistakes we see even the most advanced "world models" make.
This is part of why I'm trying to explain that visual quality is actually orthogonal. Even old Atari games have consistent world models despite being pixelated. Or think about Mario on the original NES: even the physics breaks in that game are edge cases, not the norm. But here, things like the lion's tail are not consistent even within a 2D world. I've never bought the explanation that teleporting in front of and behind the leg is an artifact of embedding 3D into 2D[2], because the issue is actually the model not understanding collision and occlusion (a toy version of which is sketched after the footnotes). It does not understand how the sections of the image relate to one another.
The major problem with these systems is that they just hope the physics will be recovered from enough example videos. If you've studied physics (beyond the basic college courses) you'll understand the naïveté of that: it took humans a long time to develop physics precisely because observation alone underdetermines the rules. These models don't even have the advantage of being able to interact with the environment. They have no mechanism to form beliefs and certainly no means to test them. It's essentially impossible to develop physics through observation alone.
[0] With respect to the physics of the world being simulated. I want you to distinguish real-world physics from /a physics/.
[1] A game physics engine is a world model, which, as I'm stressing in [0], does not necessarily need to follow real-world physics. Mistakes happen, of course, but things are generally consistent.
[2] No video and almost no game is purely 2D; they tend to have backgrounds, which introduces some layering. But we'll say 2D for convenience, since we have a shared understanding.
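To make [1] concrete: even a toy engine enforces its rules as hard constraints every single frame, which is exactly what the lion-tail clip lacks. A minimal sketch (all names mine, not any real engine's API):

    from dataclasses import dataclass

    @dataclass
    class Box:
        """Axis-aligned box with a draw layer."""
        x: float
        y: float
        w: float
        h: float
        layer: int  # higher layer always draws in front

    def overlaps(a: Box, b: Box) -> bool:
        # Collision rule: boxes may never interpenetrate.
        return a.x < b.x + b.w and b.x < a.x + a.w and a.y < b.y + b.h and b.y < a.y + a.h

    def step(moving: Box, dx: float, solid: Box) -> Box:
        # Enforced every frame: a move that would cause overlap is simply rejected.
        candidate = Box(moving.x + dx, moving.y, moving.w, moving.h, moving.layer)
        return moving if overlaps(candidate, solid) else candidate

    def draw_order(boxes: list) -> list:
        # Occlusion rule: an object can't be both in front of and behind another in one frame.
        return sorted(boxes, key=lambda b: b.layer)

A learned video model has no such invariant; nothing structurally prevents the tail from being drawn in front of the leg in one frame and behind it in the next.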
Large language models are mostly consistent, but they make mistakes, even in grammar, from time to time, and that's usually called a "hallucination". Can't we say physics errors are a kind of "hallucination" too, in a world model? I guess the question is what hallucination rate we're willing to tolerate.
It's called "world models" because it's a grift. An out-in-the-open, shameless grift. Investors, pile on.
Edit: I said a bit more in the reply to the sibling comment. But we're probably on a similar page.
I was expecting them to test a simple hypothesis and compare the model's results to a real-world test.
Just because there are errors in this doesn't mean it isn't significant. If a machine learning model understands how physical objects interact with each other, that is very useful.
> what they display represents a "world" instead of a video frame or image.
Do they? I'm unconvinced. The tiger-and-girl video is the clearest example: nothing about it seems world-representing.
No it doesn't. It merely needs to mimic.
I'm not saying it couldn't be locally violated, but it seems straightforward philosophically that each nesting doll of simulated reality must be imperfect by being less complicated.
Is this more than recursive video? If so, how?
Yes, it should be called an AI Metaverse.
It does do a nice job of short-term prediction. That's useful as a component of common sense.