Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos
(arxiv.org)
ttul
1 hour ago
This is a cool result. Deep learning image models are trained on enormous amounts of data and the information recorded in their weights continues to astonish me. Over in the Stable Diffusion space, hobbyists (as opposed to professional researchers) are continuing to find new ways to squeeze intelligence out of models that were trained in 2022 and are considerably out of date compared with the latest “flow matching” models like Qwen Image and Flux.

Makes you wonder what intelligence is lurking in a 10T parameter model like Gemini 3 that we may not discover for some years yet…

smerrill25
48 minutes ago
Hey, how did you find out about this? I'd be super curious to keep track of the current ad-hoc ways people are pushing older models to do cooler things. LMK
onesandofgrain
6 hours ago
Can someone smarter than me explain what this is about?
magicalhippo
5 hours ago
Skimming through the paper, here's my take.

Someone previously found that the cross-attention layers in text-to-image diffusion models capture the correlation between the input text tokens and the corresponding image regions, so you can use this to segment the image, e.g. the pixels containing "cat". However, this segmentation was rather coarse. The authors of this paper found that also using the self-attention layers leads to a much more detailed segmentation.
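
To make that concrete, here's a rough sketch of how I picture the attention maps being combined. This is my own guess in plain PyTorch, not the paper's code; the tensor shapes and the walk_steps knob are assumptions on my part:

    import torch

    def segment_from_attention(cross_attn, self_attn, walk_steps=2):
        # cross_attn: [H*W, n_text_tokens] pixel-to-token affinity (assumed shape)
        # self_attn:  [H*W, H*W] pixel-to-pixel affinity (assumed shape)
        # Normalize self-attention rows so they act like a transition matrix.
        P = self_attn / self_attn.sum(dim=-1, keepdim=True)
        refined = cross_attn
        # Propagate token affinity along pixel-pixel similarity a few times;
        # this sharpens the coarse cross-attention maps.
        for _ in range(walk_steps):
            refined = P @ refined
        # Assign each pixel to the text token it attends to most, e.g. "cat".
        return refined.argmax(dim=-1)  # [H*W] label map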

They then extend this to video by using the self-attention between two consecutive frames to determine how the segmentation changes from one frame to the next.
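
Here's the same kind of back-of-the-envelope sketch for the frame-to-frame step; "affinity" stands in for whatever cross-frame self-attention the paper actually extracts, so treat the shapes as assumptions:

    import torch
    import torch.nn.functional as F

    def propagate_mask(affinity, prev_mask, n_labels):
        # affinity:  [N_next, N_prev] attention from frame t+1 pixels
        #            to frame t pixels (assumed shape)
        # prev_mask: [N_prev] integer labels for frame t
        prev_onehot = F.one_hot(prev_mask, n_labels).float()
        # Each next-frame pixel pulls label scores from the previous frame,
        # weighted by how strongly it attends to each previous-frame pixel.
        A = affinity / affinity.sum(dim=-1, keepdim=True)
        scores = A @ prev_onehot
        return scores.argmax(dim=-1)  # labels for frame t+1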

Now, text-to-image diffusion models require a text input to generate the image in the first place. From what I can gather, they limit themselves to semi-supervised video segmentation, where the first frame has already been segmented by, say, a human or some other process.

They then run an "inversion" procedure which tries to generate text that causes the text-to-image diffusion model to segment the first frame as closely as possible to the provided segmentation.
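
I imagine that step looks something like this toy optimization loop. Here "get_cross_attn" is a placeholder for running the frozen diffusion model and reading out its cross-attention maps, and the loss choice is my guess, not the paper's:

    import torch
    import torch.nn.functional as F

    def invert_prompt(get_cross_attn, first_frame, gt_mask,
                      n_tokens=4, dim=768, steps=200, lr=1e-2):
        # Learn a soft text embedding so that the frozen model's
        # cross-attention segments the first frame like the given mask.
        text_emb = torch.randn(n_tokens, dim, requires_grad=True)
        opt = torch.optim.Adam([text_emb], lr=lr)
        for _ in range(steps):
            attn_logits = get_cross_attn(first_frame, text_emb)  # [H*W, n_tokens]
            # Push each pixel's attention distribution toward its given label.
            loss = F.cross_entropy(attn_logits, gt_mask)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return text_emb.detach()  # reused to track the object in later frames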

With the text in hand, they can then run the earlier segmentation propagation steps to track the segmented object throughout the video.

The key here is that the text-to-image diffusion model is pretrained, and not fine-tuned for this task.

That said, I'm no expert.

jacquesm
4 hours ago
For a 'not an expert' explanation you did a better job than the original paper.
nicolailolansen
2 hours ago
Bravo!
Kalabint
5 hours ago
> Can someone smarter than me explain what this is about?

I think you can find the answer under point 3:

> In this work, our primary goal is to show that pretrained text-to-image diffusion models can be repurposed as object trackers without task-specific finetuning.

Meaning that you can track objects in videos without using specialised ML models for video object tracking.

echelon
5 hours ago
All of these emergent properties of image and video models lead me to believe that the evolution of animal intelligence around motility and visually understanding the physical environment might be "easy" relative to other "hard steps".

The more complex an eye gets, the more the brain evolves not just around the physics and chemistry of optics, but also rich feature sets for predator/prey labels, tracking, movement, self-localization, distance, etc.

These might not be separate things. These things might just come "for free".

jacquesm
4 hours ago
There is a massive amount of pre-processing already done in the retina itself and in the LGN:

https://en.wikipedia.org/wiki/Lateral_geniculate_nucleus

So the brain does not necessarily receive 'raw' images to process to begin with; a lot of high-level data, such as optical flow for detecting moving objects, has already been extracted at that point.

DrierCycle
2 hours ago
And the occipital cortex is built around extraordinary levels of image separation: the input is broken down into tiny areas, scattered and woven back together for details of motion, gradient, contrast, etc.
Mkengin
3 hours ago
Interesting. So similar to the vision encoder + projector in VLMs?
fxtentacle
4 hours ago
I wouldn't call these properties "emergent".

If you train a system to memorize A-B pairs and then normally use it to find B when given A, it's not surprising that finding A when given B also works: you trained it in an almost symmetrical fashion on A-B pairs, which are, obviously, also B-A pairs.
