Show HN: I modeled the Voynich Manuscript with SBERT to test for structure
260 points | 7 hours ago | 19 comments | github.com
I built this project as a way to learn more about NLP by applying it to something weird and unsolved.

The Voynich Manuscript is a 15th-century book written in an unknown script. No one’s been able to translate it, and many think it’s a hoax, a cipher, or a constructed language. I wasn’t trying to decode it — I just wanted to see: does it behave like a structured language?

I stripped a handful of common suffix-like endings (aiin, dy, etc.) to isolate what looked like root forms. I know that’s a strong assumption — I call it out directly in the repo — but it helped clarify the clustering. From there, I used SBERT embeddings and KMeans to group similar roots, inferred POS-like roles based on position and frequency, and built a Markov transition matrix to visualize cluster-to-cluster flow.
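For the curious, the core of the pipeline is small. Here's a simplified sketch (the suffix list, sample words, and parameters are illustrative; the repo has the exact values):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Illustrative subset of the suffix-like endings I stripped
SUFFIXES = ["aiin", "ain", "dy", "chy", "hy"]

def strip_suffix(word):
    """Remove the longest matching ending, keeping at least 2 chars of root."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= 2:
            return word[: -len(suf)]
    return word

# Toy sample; the real run uses the full EVA vocabulary
words = ["okeeodair", "chedy", "qokaiin", "shedy", "okaiin", "qokeedy"]
roots = sorted({strip_suffix(w) for w in words})

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(roots)

# The real run used more clusters over thousands of roots
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(embeddings)
```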

It’s not translation. It’s not decryption. It’s structural modeling — and it revealed some surprisingly consistent syntax across the manuscript, especially when broken out by section (Botanical, Biological, etc.).
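The transition matrix is equally simple at its core: count which cluster follows which, then row-normalize. Rough sketch (variable names in the comment are placeholders):

```python
import numpy as np

def transition_matrix(cluster_seq, n_clusters):
    """Row-normalized counts of cluster i -> cluster j transitions
    between adjacent words."""
    counts = np.zeros((n_clusters, n_clusters))
    for a, b in zip(cluster_seq, cluster_seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# One matrix per section to compare, e.g.:
# m_botanical = transition_matrix(botanical_seq, n_clusters=10)
```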

GitHub repo: https://github.com/brianmg/voynich-nlp-analysis

Write-up: https://brig90.substack.com/p/modeling-the-voynich-manuscrip...

I’m new to the NLP space, so I’m sure there are things I got wrong — but I’d love feedback from people who’ve worked with structured language modeling or weird edge cases like this.

patcon
4 hours ago
[-]
I see that you're looking for clusters within PCA projections -- you should look for deeper structure with hot new dimensionality reduction algorithms like PaCMAP or LocalMAP!
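If you want to try it on your embeddings, the usage is tiny. Something like this (from memory, so double-check against the pacmap docs):

```python
# pip install pacmap
import numpy as np
import pacmap

X = np.random.rand(1000, 384)  # stand-in for your SBERT embeddings

reducer = pacmap.PaCMAP(n_components=2, n_neighbors=10)
X_2d = reducer.fit_transform(X, init="pca")  # shape (1000, 2)
```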

I've been working on a project related to a sensemaking tool called Pol.is [1], reprojecting its wiki survey data with these new algorithms instead of PCA, and it's amazing what new insight they uncover!

https://patcon.github.io/polislike-opinion-map-painting/

Painted groups: https://t.co/734qNlMdeh

(Sorry, only really works on desktop)

[1]: https://www.technologyreview.com/2025/04/15/1115125/a-small-...

reply
brig90
4 hours ago
[-]
Thanks for pointing those out — I hadn’t seen PaCMAP or LocalMAP before, but that definitely looks like the kind of structure-preserving approach that would fit this data better than PCA. Appreciate the nudge — going to dig into those a bit more.
reply
staticautomatic
3 hours ago
[-]
I’ve had much better luck with umap than PCA and t-sne for reducing embeddings.
reply
patcon
46 minutes ago
[-]
PaCMAP (and its descendant LocalMAP) is comparable to t-SNE at preserving both local and global structure (but without much fiddling with finicky hyperparameters).

https://youtu.be/sD-uDZ8zXkc

reply
minimaxir
6 hours ago
[-]
A point of note is that the text embedding model used here is paraphrase-multilingual-MiniLM-L12-v2 (https://huggingface.co/sentence-transformers/paraphrase-mult...), which is about 4 years old. In the NLP world that's effectively ancient, particularly as even small embedding models have improved dramatically thanks to global LLM progress, both in information representation and in distinctiveness within the embedding space. Even modern text embedding models not explicitly trained for multilingual support still do extremely well on that type of data, so they may work better for the Voynich Manuscript, which is a relatively unknown language.

The traditional NLP techniques of stripping suffixes and POS identification may actually harm embedding quality rather than improve it, since they remove relevant contextual data from the global embedding.
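Since sentence-transformers is already in the stack, swapping models is a one-line change; e.g. one of the multilingual E5 models (just an example of a newer option):

```python
from sentence_transformers import SentenceTransformer

# Any newer embedding model is a drop-in replacement, e.g.:
model = SentenceTransformer("intfloat/multilingual-e5-base")
embeddings = model.encode(["okeeodair", "chedy", "qokaiin"])
```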

reply
brig90
6 hours ago
[-]
Totally fair — I defaulted to paraphrase-multilingual-MiniLM-L12-v2 mostly for speed and wide compatibility, but you’re right that it’s long in the tooth by today’s standards. I’d be really curious to see how something like all-mpnet-base-v2 or even text-embedding-ada-002 would behave, especially if we keep the suffixes in and lean into full contextual embeddings rather than reducing to root forms.

Appreciate you calling that out — that’s a great push toward iteration.

reply
thih9
4 hours ago
[-]
(I know nothing about NLP)

Does it make sense to check the process with a control group?

E.g. if we ask a human to write something that resembles a language but isn’t, then conduct this process (remove suffixes, attempt grouping, etc), are we likely to get similar results?

reply
awinter-py
14 minutes ago
[-]
yes exactly, why did we not simply ask 100 people to write voynich manuscripts and then train on that dataset
reply
tetris11
6 hours ago
[-]
UMAP or t-SNE would be nice, even if PCA already shows nice separation.
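Concretely, umap-learn is nearly a drop-in next to PCA; a minimal sketch:

```python
import numpy as np
import umap  # pip install umap-learn

X = np.random.rand(500, 384)  # stand-in for the SBERT embeddings

reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1)
X_2d = reducer.fit_transform(X)  # plot this next to the PCA projection
```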

Reference mapping each cluster to all the others would be a nice way to indicate that there's no variability left in your analysis.

reply
brig90
6 hours ago
[-]
Great points — thank you. PCA gave me surprisingly clean separation early on, so I stuck with it for the initial run. But you’re right — throwing UMAP or t-SNE at it would definitely give a nonlinear perspective that could catch subtler patterns (or failure cases).

And yes to the cross-cluster reference idea — I didn’t build a similarity matrix between clusters, but now that you’ve said it, it feels like an obvious next step to test how much signal is really being captured.

Might spin those up as a follow-up. Appreciate the thoughtful nudge.

reply
lukeinator42
5 hours ago
[-]
Do you have examples of how this reference mapping is performed? I'm interested in this for embeddings in a different modality, but don't have as much experience on the NLP side of things.
reply
tetris11
4 hours ago
[-]
Nothing concrete, but you essentially perform shared nearest neighbours using anchor points for each cluster you wish to map to. These form correction vectors you can then use to project from one dataset onto another.
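Very roughly, a toy version of the idea using mutual nearest neighbours and a single averaged correction vector (a real implementation, like the MNN correction from single-cell work, would smooth per-anchor vectors instead):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mnn_correct(A, B, k=5):
    """Shift dataset B towards dataset A using mutual nearest
    neighbours as anchor pairs (toy version: one global shift)."""
    nn_a = NearestNeighbors(n_neighbors=k).fit(A)
    nn_b = NearestNeighbors(n_neighbors=k).fit(B)
    a_of_b = nn_a.kneighbors(B, return_distance=False)  # A-neighbours of each B point
    b_of_a = nn_b.kneighbors(A, return_distance=False)  # B-neighbours of each A point

    pairs = [(i, j) for j in range(len(B)) for i in a_of_b[j] if j in b_of_a[i]]
    if not pairs:
        return B  # no mutual anchors found
    shift = np.mean([A[i] - B[j] for i, j in pairs], axis=0)
    return B + shift
```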
reply
jszymborski
6 hours ago
[-]
When I get nice separation with PCA, I personally tend to eschew UMAP, since the relative distance of all the points to one another is easier to interpret. I avoid t-SNE at all costs, because distances in those plots are pretty much meaningless.

(Before I get yelled at, this isn't prescriptive; it's a personal preference.)

reply
minimaxir
3 hours ago
[-]
PCA having nice separation is extremely uncommon unless your data is unusually clean or has obvious patterns. Even for the comically-easy MNIST dataset, the PCA representation doesn't separate nicely: https://github.com/lmcinnes/umap_paper_notebooks/blob/master...
reply
jszymborski
2 hours ago
[-]
"extremely uncommon" is very much not my experience when dealing with well-trained embeddings.

I'd add that just because you can achieve separability with a method, the resulting visualization may not be super informative. The distances between clusters in t-SNE-projected space often have nothing to do with their distances in latent space, for example. So while you get nice separate clusters, it comes at the cost of the projected space greatly distorting/hiding the relationships between points across clusters.

reply
tomrod
5 hours ago
[-]
We are of a like mind.
reply
us-merul
6 hours ago
[-]
I’ve found this to be one of the most interesting hypotheses: http://voynichproject.org/

The author made an assumption that Voynichese is a Germanic language, and it looks like he was able to make some progress with it.

I’ve also come across accounts that it might be an Uralic or Finno-Ugric language. I think your approach is great, and I wonder if tweaking it for specific language families could go even further.

reply
veqq
5 hours ago
[-]
This thread discusses the many purported "solutions": https://www.voynich.ninja/thread-4341.html While Bernholz' site is nice, Child's work doesn't shed much light on actually deciphering the MS.
reply
us-merul
5 hours ago
[-]
Thanks for this! I had come across Child’s hypothesis after doing a search related to Old Prussian and Slavic languages, so I don’t have much context for this solution, and this is helpful to see.
reply
Avicebron
6 hours ago
[-]
Maybe I missed it in the README, but how did you do the initial encoding for the "words"? So for example, if you have "okeeodair" as a word, where do you map that back to the original symbols?
reply
brig90
6 hours ago
[-]
Yep, that’s exactly right — the words like "okeeodair" come directly from the EVA transliteration files, which map the original Voynich glyphs to ASCII approximations. So I’m not working with the glyphs themselves, but rather the standardized transliterated words based on the EVA (European Voynich Alphabet) system. The transliterations I used can be found here: https://www.voynich.nu/

I didn’t re-map anything back to glyphs in this project — everything’s built off those EVA transliterations as a starting point. So if "okeeodair" exists in the dataset, that’s because someone much smarter than me saw a sequence of glyphs and agreed to call it that.
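If anyone wants to poke at those files themselves, extracting the words is only a few lines. Simplified sketch (the real IVTFF format has more annotations than this handles):

```python
import re

def eva_words(path):
    """Yield EVA words from a transliteration file, skipping comment
    lines and inline <...> locus/annotation tags."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#"):
                continue
            text = re.sub(r"<[^>]*>", "", line)  # drop tags like <f1r.P.1;H>
            text = re.sub(r"[!%*]", "", text)    # drop uncertainty/filler marks
            for word in re.split(r"[.\s,-]+", text):
                if word:
                    yield word
```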

reply
codesnik
3 hours ago
[-]
What I'd expect from a handwritten book like that, if it's just gibberish and not a cipher of any sort: the style, the calligraphy, the words used, even the letters themselves should evolve from page 1 to the last page. Pages could be reordered, of course, but it should still be noticeable.

Unless the author had written tens of books exactly like it before, which didn't survive, of course.

I don't think it's a very novel idea, but I wonder if there's any analysis of patterns like that. I haven't seen mention of page-to-page consistency anywhere.

reply
veqq
1 hour ago
[-]
> I haven't seen mentions of page to page consistency anywhere.

A lot of work's been done here. There are believed to have been 2 scribes (see Prescott Currier), although Lisa Fagin Davis posits 5. Here's a discussion of an experiment working off of Fagin Davis' position: https://www.voynich.ninja/thread-3783.html

reply
cookiengineer
1 hour ago
[-]
Sorry to burst your bubble:

It's not a cipher; it was written by an Egyptian Hebrew-speaking traveller, and Rainer Hannig and his wife were able to build up a fairly good grammar before he died two years ago. [1] The general issue with the manuscript itself is that its grammar and etymological use of words evolve, as the traveller picked up various words and transferred meanings along the way.

But given that your attempt tries to find similarities between proto-languages that were mixed together, this could be a great way to study/analyze the evolution of languages over time, provided you're able to preserve Bayesian inference on top.

[1] https://www.rainer-hannig.com/voynich/

reply
brig90
1 hour ago
[-]
This doesn’t burst my bubble at all — if anything, it’s great to hear that others have been able to make meaningful progress using different methods. I wasn’t trying to crack the manuscript or stake a claim on the origin; this project was more about exploring how modern tools like NLP and clustering could model structure in unknown languages.

My main goal was to learn and see if the manuscript behaved like a real language, not necessarily to translate it. Appreciate the link — I’ll check it out (once I get my German up to speed!).

reply
marcodiego
3 hours ago
[-]
How expensive is a "brute force" approach to decoding it? I mean, how about mapping each unknown word to a known word in a known language and improving this mapping until a 'high score' is reached?
reply
munchler
1 hour ago
[-]
This seems to assume that a 1:1 mapping between words exists, but I don't think that's true for languages in general. Compound words, for example, won't map cleanly that way. Not to mention deeper semantic differences between languages due to differences in culture.
reply
brig90
3 hours ago
[-]
That’s a really interesting question — and one I’ve been circling in the back of my head, honestly. I’m not a cryptographer, so I can’t speak to how feasible a brute-force approach is at scale, but the idea of mapping each Voynich “word” to a real word in another language and optimizing for coherence definitely lines up with some of the more experimental approaches people have tried.

The challenge (as I understand it) is that the vocabulary size is pretty massive — thousands of unique words — and the structure might not be 1:1 with how real language maps. Like, is a “word” in Voynich really a word? Or is it a chunk, or a stem with affixes, or something else entirely? That makes brute-forcing a direct mapping tricky.

That said… using cluster IDs instead of individual words (tokens) and scoring the outputs with something like a language model seems like a pretty compelling idea. I hadn’t thought of doing it that way. Definitely some room there for optimization or even evolutionary techniques. If nothing else, it could tell us something about how “language-like” the structure really is.

Might be worth exploring — thanks for tossing that out; hopefully someone with more awareness or knowledge in the space sees it!
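Just to make that concrete, even a dumb hill-climb would be easy to prototype. Toy sketch (the scorer is a stand-in; a real attempt would score candidate decodings with an actual language model):

```python
import random

CLUSTERS = list(range(10))
CANDIDATE_WORDS = ["water", "plant", "root", "star", "take",
                   "mix", "the", "of", "and", "in"]

def score(decoded):
    """Stand-in scorer that rewards a few common English bigrams."""
    common = {("of", "the"), ("in", "the"), ("take", "the"), ("and", "the")}
    return sum((a, b) in common for a, b in zip(decoded, decoded[1:]))

def hill_climb(cluster_seq, steps=10_000):
    mapping = {c: random.choice(CANDIDATE_WORDS) for c in CLUSTERS}
    best = score([mapping[c] for c in cluster_seq])
    for _ in range(steps):
        c = random.choice(CLUSTERS)
        old = mapping[c]
        mapping[c] = random.choice(CANDIDATE_WORDS)
        new = score([mapping[c] for c in cluster_seq])
        if new >= best:
            best = new
        else:
            mapping[c] = old  # revert worse moves
    return mapping, best
```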

reply
marcodiego
2 hours ago
[-]
It might be a good idea for a SETI@home like project.
reply
quantadev
2 hours ago
[-]
Like I said in another post (sorry for repeating): since this was during the 1400s, the main thing people would've been encrypting back then was biblical text (or the texts of some other religion).

Maybe a version of scripture that had been "rejected" by some king and was illegal to reproduce? Take the best radiocarbon dating, figure out who was king back then and whether they 'sanctioned' any biblical translations, then go to the version of the Bible before that translation; that would perhaps be what was illegal and needed to be encrypted. That's just one plausible story. Who knows, we might find out the phrase "young girl" was simplified to "virgin", and that would potentially be a big secret.

reply
user32489318
4 hours ago
[-]
Would analysis of a similar body of text in a known language yield similar patterns? Put another way, could using this type of analysis on different types of text help us understand what this script describes?
reply
ablanton
5 hours ago
[-]
reply
Reubend
4 hours ago
[-]
Most agree that this is not a real solution. Many of the pages translate to nonsense using that scheme, and some of the figures included in the paper don't actually come from the Voynich manuscript in the first place.

For more info, see https://www.voynich.ninja/thread-3940-post-53738.html#pid537...

reply
krick
2 hours ago
[-]
I'm not really following the research, so this is rather a lazy question (assuming you do): does any of it follow the path Derek Vogt was suggesting in his (kinda famous) videos (which he deleted for some reason)? I remember when I was watching them, it felt so convincing I thought, "Alright, it looks like there must be a short leap to the actual solution now."

Yet 10 years later I still hear that the consensus is that there's no agreed-upon translation. So, what, all that Mandaic-gypsies stuff was nothing? And all the coincidences were… coincidences?

reply
cookiengineer
1 hour ago
[-]
Check out Rainer Hannig's instructions:

https://www.rainer-hannig.com/voynich/

reply
GTP
3 hours ago
[-]
The link to the write-up seems broken, can you write the correct one?
reply
brig90
3 hours ago
[-]
Apologies, but it's not letting me edit the post any longer (I'm new to HN). Here's the link though: https://brig90.substack.com/p/modeling-the-voynich-manuscrip...
reply
rossant
3 hours ago
[-]
TIL about the Voynich manuscript. Fascinating. Thank you.
reply
adzm
2 hours ago
[-]
It is a great coffee table book!
reply
quantadev
2 hours ago
[-]
The manuscript being from the 15th century, the obvious reason to encrypt text was to avoid religious persecution during "The Inquisition" (and the other religion-motivated violence of that time). So it would be interesting to run the same NLP against the Gospels and look for correlations. You'd want to first do a 'word'-based comparison, and then a 'character'-based comparison. I mean, compare the graphs from the Bible to the graphs from Voynich.

Also, there might be some characters that are in there just to confuse. For example, that bizarre capital "P"-like thing with multiple variations sometimes seems to appear far too often to represent real language, so it might be just an obfuscator that's removed prior to decryption. There may be other characters that are abnormally frequent and are maybe also unused dummy characters. But the "too many Ps" problem is consistent with pure fiction too, I realize.
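By "compare the graphs" I mean something like frequency-rank curves. Toy sketch (the variable names in the comment are placeholders):

```python
from collections import Counter

def rank_freq(tokens):
    """Frequencies sorted by rank, for a Zipf-style log-log comparison."""
    return [count for _, count in Counter(tokens).most_common()]

# Plot rank_freq(voynich_words) against rank_freq(gospel_words) on a
# log-log scale; similar slopes would hint at similar word statistics.
```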

reply
glimshe
6 hours ago
[-]
I strongly believe the manuscript is undecipherable in the sense that it's all gibberish. I can't prove it, but at this point I think it's more likely than not to be a hoax.
reply
lolinder
6 hours ago
[-]
Statistical analyses such as this one consistently find patterns consistent with a real language, patterns that would be unlikely to have emerged from someone just putting gibberish on the page. To get the kinds of patterns these analyses turn up, someone would have had to go a large part of the way towards building a full constructed language, which is interesting in its own right.
reply
ahmedfromtunis
5 hours ago
[-]
Personally, I have no preference for any theory about the book; whichever it turns out to be, I'll take it as is.

That said, I just watched a video about the practice of "speaking in tongues" that some Christian congregations engage in. From what I understand, it's a practice where believers speak in gibberish during certain rituals.

Studying these "speeches", researchers found patterns and rhythms that the speakers followed without even being aware they exist.

I'm not saying that's what's happening here, but if this was a hoax (or a prank), maybe these patterns emerged just because the text was inscribed by a human brain? At best, these patterns can be thought of as shadows of the patterns found in the writer's mother tongue?

reply
InsideOutSanta
5 hours ago
[-]
> would be unlikely to have emerged from someone who was just putting gibberish on the page

People often assert this, but I'm not aware of any evidence. If I wrote a manuscript in a pretend language, I would expect it to end up with language-like patterns, some emerging automatically and some intentionally.

Humans aren't random number generators, and they aren't stupid. Therefore, the implicit claim that a human could not create a manuscript containing gibberish that exhibits many language-like patterns seems unlikely to be true.

So we have two options:

1. This is either a real language or an encoded real language that we've never seen before and can't decrypt, even after many years of attempts

2. Or it is gibberish that exhibits features of a real language

I can't help but feel that option 2 is now the more likely choice.

reply
neom
5 hours ago
[-]
reply
tonymillion
36 minutes ago
[-]
And let’s not forget “Ken Lee”

https://youtu.be/vUAaHkGpJy8

reply
CamperBob2
5 hours ago
[-]
Or Dead Can Dance, e.g. https://www.youtube.com/watch?v=VEVPYVpzMRA .

It's harder to generate good gibberish than it appears at first.

reply
cubefox
5 hours ago
[-]
Creating gibberish with the statistical properties of a natural language is a very hard task if you do this hundreds of years before the discovery of said statistical properties.
reply
vehemenz
3 hours ago
[-]
I'm not sure where this claim keeps coming from. Voynichese doesn't exhibit the statistical qualities of any known natural language. In a very limited sense, yes, but on balance, no. There is too much repetition for that.
reply
InsideOutSanta
4 hours ago
[-]
Why?
reply
veqq
5 hours ago
[-]
> consistent with a proper language

There's certainly a system to the madness, but it exhibits rather different statistical properties from "proper" languages. Look at section 2.4: https://www.voynich.nu/a2_char.html At the moment, any apparently linguistic patterns are happenstance; the cypher fundamentally obscures the actual distribution (if it is a "proper" language).

reply
Loughla
2 hours ago
[-]
If you're going to make a hoax for fun or for profit, wouldn't it be the best first step to make it seem legitimate, by coming up with a fake language? Klingon is fake, but has standard conventions. This isn't really a difficult proposition compared to all of the illustrations and what-not, I would think.
reply
int_19h
2 hours ago
[-]
If you come up with a fake language, then by definition the text has some meaning in said language.
reply
andoando
5 hours ago
[-]
Could still be gibberish.

Shud less kee chicken souls do be gooby good? Mus hess to my rooby roo!

reply
vehemenz
3 hours ago
[-]
Even before we consider the cipher, there's a huge difference between a constructed language and a stochastic process to generate language-like text.
reply
lolinder
2 hours ago
[-]
A stochastic pattern to generate language-like text in the early 1400s is a lot more interesting than gibberish.
reply
himinlomax
4 hours ago
[-]
There are many aspects that point to the text not being completely random or clumsily invented. In particular, it doesn't exhibit many of the faults you'd expect from a non-expert trying to come up with a fake text.

The age of the document can be estimated through various methods, which all point to it being ~500 years old. The vellum parchment, the ink, and the pictures (particularly the clothes and architecture) are perfectly congruent with that.

The weirdest part is that the script has a very low number of distinct signs, fewer than any known language. That's about the only clue that could point to a hoax, afaik.

reply
andyjohnson0
6 hours ago
[-]
This looks very interesting - nice work!

I have no background in NLP or linguistics, but I do have a question about this:

> I stripped a set of recurring suffix-like endings from each word — things like aiin, dy, chy, and similar variants

This seems to imply stripping the right-hand edges of words, with the assumption that the text was written left to right? Or did you try both possibilities?

Once again, nice work.

reply
brig90
6 hours ago
[-]
Great question — and you’re right to catch the assumption there. I did assume left-to-right when stripping suffixes, mostly because that’s how the transliteration files were structured and how most Voynich analyses approach it. I didn’t test the reverse — though flipping the structure and checking clustering/syntax behavior would be a super interesting follow-up. Appreciate you calling it out!
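(For what it's worth, the check itself would be cheap. Something like this, comparing ending frequencies on the words as-is versus reversed:)

```python
from collections import Counter

def frequent_endings(words, min_len=2, max_len=4, top=10):
    """Most common word endings; run on the words as-is and on their
    reversals to compare left-to-right vs right-to-left readings."""
    counts = Counter(
        w[-n:] for w in words for n in range(min_len, max_len + 1) if len(w) > n
    )
    return counts.most_common(top)

# frequent_endings(words) vs frequent_endings([w[::-1] for w in words])
```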
reply
ck2
4 hours ago
[-]
> "New multispectral analysis of Voynich manuscript reveals hidden details"

https://arstechnica.com/science/2024/09/new-multispectral-an...

but imagine if it was just a (wealthy) child's coloring book or practice book for learning to write lol

reply
Avicebron
4 hours ago
[-]
> but imagine if it was just a (wealthy) child's coloring book or practice book for learning to write lol

Even if it was "just" an (extraordinarily wealthy and precocious) child with a fondness for plants, cosmology, and female bodies carefully inscribing nonsense by repeatedly doodling the same few characters in blocks that look like the illuminated manuscripts this child would also need access to, that's still impressive and interesting.

reply
veqq
6 hours ago
[-]
The best work on Voynich has been done by Emma Smith, Coons, and Patrick Feaster, on loops and QOKEDAR and CHOLDAIIN cycles. Here's a good presentation: https://www.youtube.com/watch?v=SCWJzTX6y9M Zattera and Roe have also done good work on the "slot alphabet". That so many are making progress in the same direction is quite encouraging!

https://www.voynich.ninja/thread-4327-post-60796.html#pid607... is the main forum discussing precisely this. I quite liked this explanation of the apparent structure: https://www.voynich.ninja/thread-4286.html

> RU SSUK UKIA UK SSIAKRAINE IARAIN RA AINE RUK UKRU KRIA UKUSSIA IARUK RUSSUK RUSSAINE RUAINERU RUKIA

That is, there may be 2 "word types" with different statistical properties (as Feaster's video above describes), perhaps e.g. 2 different cyphers used "randomly" next to each other. Figuring out how to imitate the MS's statistical properties would let us determine the cypher system and take steps towards determining its language etc., so most credible work has gone in this direction over the last 10+ years.

This site is a great introduction/deep dive: https://www.voynich.nu/

reply
brig90
6 hours ago
[-]
I’m definitely not a Voynich expert or linguist — I stumbled into this more or less by accident and thought it would make for a fun NLP learning project. Really appreciate you pointing to those names and that forum — I wasn’t aware of the deeper work on QOKEDAR/CHOLDAIIN cycles or the slot alphabet stuff. It’s encouraging to hear that the kind of structure I modeled seems to resonate with where serious research is heading.
reply
akomtu
5 hours ago
[-]
Ock ohem octei wies barsoom?
reply
nine_k
6 hours ago
[-]
In short, the manuscript looks like a genuine text, not like a random bunch of characters pretending to be a text.

<quote>

Key Findings

* Cluster 8 exhibits high frequency, low diversity, and frequent line-starts — likely a function word group

* Cluster 3 has high diversity and flexible positioning — likely a root content class

* Transition matrix shows strong internal structure, far from random

* Cluster usage and POS patterns differ by manuscript section (e.g., Biological vs Botanical)

Hypothesis

The manuscript encodes a structured constructed or mnemonic language using syllabic padding and positional repetition. It exhibits syntax, function/content separation, and section-aware linguistic shifts — even in the absence of direct translation.

</quote>

reply
brig90
6 hours ago
[-]
Yep, that was my takeaway too — the structure feels too consistent to be random, and it echoes known linguistic patterns.
reply
gchamonlive
6 hours ago
[-]
I'd be surprised if it was indeed random, but the consistency is really surprising. I say this because I imagine that anyone able to produce such a text was a master scribe who had put countless hours into writing other works, and so would have been very familiar with such structure; even if he was going for randomness, I doubt he would have achieved it.
reply
InsideOutSanta
5 hours ago
[-]
> the structure feels too consistent to be random

I don't see how it could be random, regardless of whether it is an actual language. Humans are famously terrible at generating randomness.

reply
nine_k
4 hours ago
[-]
The kind of "randomness" hardly compatible with language-like structure could arise from choosing the glyphs according to purely graphical concerns, "what would look nice here", lines being too long or too short, avoiding repeating sequences or, to the contrary, achieving interesting 2D structures in the text, etc. It's not cryptography-class randomness, but it would be enough to ruin the rather well-expressed structures in the text (see e.g. the transition matrix).
reply
InsideOutSanta
4 hours ago
[-]
>choosing the glyphs according to purely graphical concerns, "what would look nice here", lines being too long or too short, avoiding repeating sequences or, to the contrary, achieving interesting 2D structures in the text

I wouldn't assume that the writer made decisions based on these goals, but rather that the writer attempted to create a simulacrum of a real language. However, even if they did not, I would expect an attempt at generating a "random" language to ultimately mirror many of the properties of the person's native language.

The arguments that this book is written in a real language rest on the assumption that a human being making up gibberish would not produce something that exhibits many of the properties of a real language; however, I don't see anyone offering any evidence to support this claim.

reply