Glyph: Scaling Context Windows via Visual-Text Compression
24 points | 6 days ago | 3 comments | github.com
phildougherty
3 days ago
Can someone compare/contrast with deepseek-ocr?
kburman
3 days ago
This looks very promising. Are there any downsides or potential gotchas?
ghoul2
3 days ago
I asked this question on another post and was downvoted, so I'm trying again: don't we lose the "contextualization" that LLM embeddings give us? The embedding of token X carries information not just about X but also about all the tokens that came before it in the context, which is why "flies" gets a different embedding in "time flies like an arrow" than in "fruit flies like a banana".
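To make the effect concrete, here's a minimal sketch of what I mean by contextualization (assuming the Hugging Face transformers and torch packages, with bert-base-uncased as a stand-in contextual encoder; nothing here is specific to Glyph):

    # Same surface word, different vectors depending on context.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def embedding_of(sentence: str, word: str) -> torch.Tensor:
        # Hidden state of the (first) subword token matching `word`.
        enc = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
        tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
        return hidden[tokens.index(word)]

    a = embedding_of("time flies like an arrow", "flies")
    b = embedding_of("fruit flies like a banana", "flies")
    # Cosine similarity is noticeably below 1.0: the encoder has mixed in
    # information from the surrounding tokens.
    print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())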

The image embeddings, as I currently understand them, are just the pixel values of a block of pixels.

What am I missing?
