Always Bet on Text
123 points
5 hours ago
| 29 comments
| graydon2.dreamwidth.org
| HN
smj-edison
3 hours ago
[-]
I have mixed feelings about this. On the one hand, I agree: text is infinitely versatile, indexable, durable, etc. But, after discovering Bret Victor's work[1], and thinking about how I learned piano, I've also started to see a lot of the limitations of text. When I learned piano, I always had a live feedback loop: play a note, and hear how it sounds, and every week I had a teacher coach me. This is a completely different way to learn a skill, and something that doesn't work well with text.

Bret Victor's point is: why is this not also the approach we use for other topics, like engineering? Many people do not have a strong symbolic intuition, so being able to tap into their (and our) other intuitions is a very powerful tool for increasing the efficiency of communication. More and more, I have found myself drawn to this alternate philosophy of education and knowledge transmission. There are certainly limits, and text isn't going anywhere, but I think there's still a lot more to discover and try.

[1] https://dynamicland.org/2014/The_Humane_Representation_of_Th...

reply
dkarl
2 hours ago
[-]
I think the downside, at least near-term (or maybe "challenge" would be the better word), is that anything richer than text requires a lot more engineering to make it useful. B♭ is text. Most of the applications on your computer, including but not limited to your browser, know how to render B♭ and C♯, and your brain does the rest.

Bret Victor's work involves a ton of really challenging heavy lifting. You walk away from a Bret Victor presentation inspired, but also intimidated by the work put in, and the work required to do anything similar. When you separate his ideas from the work he puts in to perfect the implementation and presentation, the ideas by themselves don't seem to do much.

Which doesn't mean they're bad ideas, but it might mean that anybody hoping to get the most out of them should understand the investment that is required to bring them to fruition, and people with less to invest should stick with other approaches.

reply
tomjakubowski
1 hour ago
[-]
> B♭ is text.

Yes, but musical notation is far superior to text for conveying the information needed to play a song.

reply
satvikpendem
52 minutes ago
[-]
I don't understand: musical notation is text, so how can it be superior to itself?
reply
BrenBarn
32 minutes ago
[-]
I think they mean staff notation, not a textual notation like "B♭".
reply
dhosek
12 minutes ago
[-]
Although one could make the argument that staff notation is itself a form of text, albeit one with a different notation than a single stream of Unicode symbols. Certainly, without musical notation, a lot of music would be lost (although one can argue that musical notation is not able to adequately preserve some aspects of musical performance, which is part of why, when European composers tried to adopt jazz idioms into their compositions in the early twentieth century working from sheet music, they missed the whole concept of swing, which is essential to jazz).
reply
codebaobab
54 minutes ago
[-]
Yes. And I create and manage the musical notation for over 100 songs in text, specifically LilyPond.
reply
tomjakubowski
1 hour ago
[-]
Take a problem like untangling a pile of cords. Writing out how to do that in text would be a drag, and reading those directions probably wouldn't be helpful either. But a kid can learn how to untangle just by observation.

Physical intuition is an enormous part of our intelligence, and is hard to convey in text: you could read millions of words about how to ride a bike, and you would learn nothing compared to spending a few hours trying it out and falling over until it clicks.

reply
fercircularbuf
20 minutes ago
[-]
Thank you so much for introducing me to this talk. Changed my way of thinking.
reply
safety1st
1 hour ago
[-]
I mean, this very discussion is a case study in the supremacy of text. I skimmed the OP's blog post in thirty seconds and absorbed his key ideas. Your link is to a 54 minute video on an interesting topic which I unfortunately don't have time to watch. While I have no doubt that there are interesting ideas in it, video's inferior to text for communicating ideas efficiently, so most people reading this thread will never learn those ideas.

Text is certainly not the best at all things and I especially get the idea that in pedagogy you might want other things in a feedback loop. The strength of text however is its versatility, especially in an age where text transformers are going through a renaissance. I think 90%+ of the time you want to default to text, use text as your source of truth, and then other mediums can be brought into play (perhaps as things you transform your text into) as the circumstances warrant.

reply
socketcluster
3 hours ago
[-]
I've also become something of a text maximalist. It is the natural meeting point in human-machine communication. The optimal balance of efficiency, flexibility and transparency.

You can store everything as a string; base64 for binary, JSON for data, HTML for layout, CSS for styling, SQL for queries... Nothing gets closer to the mythical silver-bullet that developers have been chasing since the birth of the industry.

The holy grail of programming has been staring us in the face for decades, and yet we still keep inventing new data structures and complex tools to transfer data... All to save like 30% bandwidth; an advantage which is almost fully cancelled out once you GZIP the base64 string, which most HTTP servers do automatically anyway.
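As a rough sketch of that claim in Python (the 10 KB of random bytes is an arbitrary stand-in for real binary data, and random data is a worst case for compression):

```python
import base64
import gzip
import json
import os

blob = os.urandom(10_000)  # stand-in for arbitrary binary data

# everything as a string: base64 for the binary, JSON as the envelope
doc = json.dumps({"data": base64.b64encode(blob).decode("ascii")})

raw_size = len(blob)                               # 10000 bytes
b64_size = len(doc)                                # ~33% bigger, plus JSON overhead
gz_size = len(gzip.compress(doc.encode("ascii")))  # GZIP claws most of it back
print(raw_size, b64_size, gz_size)
```

The exact numbers depend on the payload, but the base64 inflation mostly disappears under GZIP.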

Same story with ProtoBuf. All this complexity is added to make everything binary. To what end? Did anyone ever ask this question? To save 20% bandwidth, which, again, is an advantage lost after GZIP... And to avoid the negligible CPU cost of text deserialization, you completely lose human readability.

In this industry, there are tools and abstractions which are not given the respect they deserve and the humble string is definitely one of them.

reply
yegle
2 hours ago
[-]
As someone whose daily job is to move protobuf messages around, I don't think protobuf is a good example to support your point :-)

AFAICT, the binary format of a protobuf message exists strictly to provide a strong forward/backward compatibility guarantee. If not for that, the text proto format and even the JSON format are both versatile, and commonly used as configuration languages (i.e. when humans need to interact with the file).

reply
beej71
1 hour ago
[-]
I've moved away from DOCish or PDF for storage to text (usually markdown) with Makefiles to build with Typst or whatever. Grep works, git likes it, and I can easily extract it to other formats.

My old 1995 MS thesis was written in Lotus Word Pro and the last I looked, there was nothing to read it. (I could try Wine, perhaps. Or I could quickly OCR it from paper.) Anyway, I wish it were plain text!

reply
handfuloflight
2 hours ago
[-]
I marvel at the constraint and freedom of the string.
reply
whatevermom5
3 hours ago
[-]
Base64 and JSON take a lot of CPU to decode; this is where Protobuf shines, for example. Bandwidth is one thing, but the most expensive resources are RAM and CPU, and it makes sense to optimize for them by using "binary" protocols.

For example, when you gzip a Base64-encoded picture, you end up 1. encoding it in base64 (which takes a *lot* of CPU) and then 2. compressing it (again! JPEG is already compressed).

I think what it boils down to is scale; if you are running a small shop and performance is not critical, sure, do everything in HTTP/1.1 if that makes you more productive. But when numbers start mattering, designing binary protocols from scratch can save a lot of $ in my experience.

reply
the8472
3 hours ago
[-]
Shipping base64 in JSON instead of a multipart POST is very bad for stream-processing. In theory one could stream-process JSON and base64... but only the JSON keys that come before the blob would be available at the point where you need to decide what to do with the data.
reply
socketcluster
3 hours ago
[-]
Still, at least it's an option to put base64 inline inside the JSON. With binary, this is not an option; you must send it separately in all cases, even for small payloads...

You can still stream the base64 separately and reference it inside the JSON somehow like an attachment. The base64 string is much more versatile.

reply
zzo38computer
2 hours ago
[-]
Even with binary, you can store one binary inline inside another if the outer one is a structured format with a "raw binary data" type, such as DER. (In my opinion, DER is better in other ways too, and (with my nonstandard key/value list type added) it is a superset of the data model of JSON.)

Using base64 means that you must encode and decode it, but storing binary data directly makes that unnecessary. (This is true whether or not it is compressed and/or encrypted; if it is compressed then you must decompress it, but that is independent of whether you must decode base64.)

reply
simonw
9 minutes ago
[-]
Much as I love text for communication, it's worth knowing that "28% of US adults scored at or below Level 1, 29% at Level 2, and 44% at Level 3 or above" - Literacy in the United States: https://en.wikipedia.org/wiki/Literacy_in_the_United_States

Anything below 3 is considered "partially illiterate".

I've been thinking about this a lot recently, as someone who cares about technical communication and making technical topics accessible to more people.

Maybe wannabe educators like myself should spend more time making content for TikTok or YouTube!

reply
Ferret7446
2 hours ago
[-]
Text is just bytes, and bytes are just text. I assume this is talking about human readable ASCII specifically.

I think the obsession with text comes down to two factors: conflating binary data with closed standards, and poor tooling support. Text implies a baseline level of acceptable mediocrity for both. Consider a CSV file with millions of base64-encoded columns and no column labels. That would really not be any friendlier than a binary file with an openly documented format and a suitable editing tool, e.g. SQLite.

Maybe a lack of fundamental technical skills is another culprit, but binary files really aren't that scary.
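For what it's worth, the SQLite side of that comparison is small; a minimal Python sketch (the table and column names are made up for illustration):

```python
import sqlite3

# SQLite: a binary but openly documented format with good tooling
con = sqlite3.connect(":memory:")  # use a file path for a real database
con.execute("CREATE TABLE blobs (label TEXT, data BLOB)")
con.execute("INSERT INTO blobs VALUES (?, ?)", ("logo", b"\x89PNG\r\n"))
con.commit()

# columns are labeled, and the binary data needs no base64 wrapping
label, size = con.execute("SELECT label, length(data) FROM blobs").fetchone()
print(label, size)
```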

reply
bigstrat2003
1 hour ago
[-]
> Text is just bytes, and bytes are just text. I assume this is talking about human readable ASCII specifically.

Text is human readable writing (not necessarily ASCII). It is most certainly not just any old bytes the way you are saying.

reply
jackschultz
2 hours ago
[-]
Just reread Story of Your Life, and all it made me want to do is learn Heptapod B and its semagram style of written communication.

I'm also reading “Mathematica: A Secret World of Intuition and Curiosity”, and a part stuck out in a section called The Language Trap. The example the author gives is a recipe for banana bread: if you're familiar with bananas, it's obvious that you need to peel them before mashing. But if you've never seen a banana, you'd have no clue what to do. Should a recipe say to peel the banana, or can that go unsaid? Questions like these are clearly coming up more with AI and context, but it's the same for humans. He ends that section saying most people prefer a video for cooking rather than a recipe.

Other quote from him:

“The language trap is the belief that naming things is enough to make them exist, and we can dispense with the effort of really imagining them.”

reply
mcswell
1 hour ago
[-]
gnabgib points out that this same article has been posted for comment here three other times since it was written. That said, afaict no one has commented any of these times on what I'm about to say, so hopefully this will be new.

I'm a linguist, and I've worked in endangered languages and in minority languages (many of which will some day become endangered, in the sense of not having native speakers). The advantage of plain text (Unicode) formats for documenting such languages (as opposed to binary formats like Word used to be, or databases, or even PDFs) is that text formats are the only thing that will stand the test of time. The article by Steven Bird and Gary Simons "Seven Dimensions of Portability for Language Documentation and Description" was the seminal paper on this topic, published in 2002. I've given later conference talks on the topic, pointing out that we can still read grammars of Greek and Latin (and Sanskrit) written thousands of years ago. And while the group I led published our grammars in paper form via PDF, we wrote and archived them as XML documents, which (along with JSON) are probably as reproducible a structured format as you can get. I'm hoping that 2000 years from now, someone will find these documents both readable and valuable.

There is of course no replacement for some binary format when it comes to audio.

(By "binary" format I mean file formats that are not sequential and readily interpretable, whereas text files are interpretable once you know the encoding.)

reply
scosman
2 hours ago
[-]
This also leads to the unreasonable effectiveness of LLMs. The models are good because they have thousands of years of humans trying to capture every idea as text. Engineering, math, news, literature, and even art/craftsmanship. You name it, we wrote it down.

Our image models got good when we started making shared image and text embedding spaces. A picture is worth 1000 words, but 1000 words about millions of images are what allowed us to teach computers to see.

reply
Dwedit
1 hour ago
[-]
Saying that a 20x20 image of a Twitter logo is 4000 bytes is just so wrong.

The image is of a monochrome logo with anti-aliased edges. Due to being a simple filled geometric shape, it could compress well with RLE, ZIP compression, or even predictors. It could even be represented as vector drawing commands (LineTo, CurveTo, etc...).

In a 1-bit-per-pixel format, a 20x20 image ends up as 400 bits (50 bytes).
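Spelling out the back-of-envelope arithmetic (Python just for illustration):

```python
width, height = 20, 20
bits = width * height   # one bit per pixel for a monochrome mask
size = bits // 8
print(size)             # 50 bytes
```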

reply
seveibar
2 hours ago
[-]
This is sort of the premise of all of us electronics-as-code startups. We think that a text-based medium for the representation of circuits is a necessity for AI to be able to create electronics. You can't skip this step and generate schematic images or something; you have to have a human-readable (which also means AI-compatible) text medium. Another confusion: KiCad files are represented in text, so shouldn't AI be able to generate them? No: AI has similar levels of spatial understanding to a human reading those text files. You can't have a ton of XY coordinates or other non-human-friendly components in the text files. Everything will be text-based and human-readable, at least at the first layer of AI generation for serious applications.
reply
gnabgib
3 hours ago
[-]
(2014) Popular in:

2021 (570 points, 339 comments) https://news.ycombinator.com/item?id=26164001

2015 (156 points, 69 comments) https://news.ycombinator.com/item?id=10284202

2014 (355 points, 196 comments) https://news.ycombinator.com/item?id=8451271

reply
socketcluster
3 hours ago
[-]
With LLMs, the text format should be more popular than ever, yet we still see people pushing binary protocols like ProtoBuf for a measly 20% bandwidth advantage, which is lost after GZIPing the equivalent JSON... Or a 30% CPU advantage on serialization, which becomes more like a 1% advantage once you consider it in the context of everything else going on in the system, which uses far more CPU.

It's almost like some people think human-readability, transparency and maintainability are negatives!
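An illustrative measurement, using raw struct packing as a stand-in for a binary protocol (not actual ProtoBuf, and the exact percentages vary a lot with the data):

```python
import gzip
import json
import struct

# 1000 (id, value) records as a stand-in payload
records = [(i, i * 0.5) for i in range(1000)]

as_json = json.dumps([{"id": i, "v": v} for i, v in records]).encode()
as_binary = b"".join(struct.pack("<qd", i, v) for i, v in records)
json_gz = gzip.compress(as_json)

# binary beats raw JSON on size, but gzipped JSON narrows the gap
print(len(as_json), len(as_binary), len(json_gz))
```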

reply
handfuloflight
2 hours ago
[-]
What are your thoughts on https://github.com/fastserial/lite3?
reply
mapontosevenths
42 minutes ago
[-]
I disagree. If your goal involves the cooperation of others, you should always bet on lazy.

Text will win, unless there is a lower effort option. The lower effort option does not need to be better, just easier.

reply
Lucent
2 hours ago
[-]
It's easy to be a text maximalist now that we're in the LLM era, but I disagree that ideas are a separate, nonphysical realm that cannot otherwise be described. https://lucent.substack.com/p/one-map-hypothesis
reply
sweetsocks21
2 hours ago
[-]
For a computer, text is a binary format like any other. We have decades of tooling built on handling linear streams of text, in which we sometimes encode higher-dimensional structures.

But I can't help feel that we try to jam everything into that format because that's what's already ubiquitous. Reminds me of how every hobby OS is a copy of some Unix/Posix system.

If we had a more general structured format would we say the opposite?

reply
zephen
3 hours ago
[-]
I agree 99%.

The 1% where something else is better?

Youtube videos that show you how to access hidden fasteners on things you want to take apart.

Not that I can't get absolutely anything open, but sometimes it's nice to be able to do so with minimal damage.

reply
ilaksh
3 hours ago
[-]
I wonder if some day there will be a video codec that is essentially a standard distribution of a very precise and extremely fast text-to-video model (like SmartTurboDiffusion-2027 or something). Because surely there are limits to text, but even the example you gave does not seem to me to be beyond the reach of a text description, given a certain level of precision and capability in the model. And we now have faster than realtime text to video.
reply
egypturnash
3 hours ago
[-]
This sounds incredibly precarious and prone to breaking when you update to a new model.
reply
ilaksh
2 hours ago
[-]
It would be impossible to change the model. It would be like a codec, like H.264 but with 1-2GB of fixed data attached to that code name. Changing the model is like going to H.265. Different codec.
reply
didip
2 hours ago
[-]
I agree. As a simple exercise, look at all the software tools that are GUI-only. They become large walled gardens that LLMs can't penetrate.

Tools that are mostly text or have text interfaces? Greatly improved by LLMs.

So all of those rich multimedia and their players/editors really need to add text representations.

reply
tombert
2 hours ago
[-]
People make fun of it, but I think the fact that Unixey stuff can use tools that have existed since the 70's [1] can be attributed to the fact that they're text based. Every OS has its own philosophy on how to do GUI stuff and as such GUI programs have to do a lot of bullshit to migrate, but every OS can handle text in one form or another.

When I first started using Linux I used to make fun of people who were stuck on the command line, but now pretty much everything I do is a command line program (using NeoVim and tmux).

[1] Yes, obviously with updates but the point more or less still stands.

reply
jesseduffield
5 hours ago
[-]
Post from the creator of Rust, 11 years ago. Highly relevant to today.
reply
stephc_int13
1 hour ago
[-]
Text can be surprisingly immersive and rich, often surpassing the most complex VR experiences.

It is amazing what we can do with a few strings of symbols, thanks to the fact that we all learn to decode them almost for free.

The oldest and most important technology indeed.

reply
vacuity
2 hours ago
[-]
I was going to disagree, along the lines of the people bringing up Bret Victor or other modes of communication and learning, but I have long accepted that the written word has been one of the largest boons for learning in human history, so I guess I agree. Still, it'll be an interesting and worthwhile challenge to make a better medium with modern technology.
reply
calebm
2 hours ago
[-]
I recently made the intentional decision to keep the equation input in FuzzyGraph (https://fuzzygraph.com) plain text (instead of something like the stylized LaTeX that Desmos has) in order to make it easy to copy and paste equations.
reply
ANarrativeApe
2 hours ago
[-]
This is one of those irritating articles where one agrees with the gist, but there are serious flaws in the support. There are societies, even now, that don't have text. Yes, they represent a tiny fraction of 1% of the global population, but they do exist. And the beauty of text is that this level of nuance can be conveyed; a simplistic, inaccurate, broad-brush approach is not needed.

Nor is text the oldest form of communication. Having recently started exploring the cave art record, the text informs me that cave art is at least an upper-middle single-digit multiple of the age of text. Yes, a picture paints a thousand words, which can then be interpreted a thousand ways. Text has the ability to convey precise, accurate, objective information; it does not, as this article demonstrates, necessarily do so.
reply
textnotalwabest
3 hours ago
[-]
Text is not the best medium for the following situations:

- I want to learn how to climb rock walls

- I want to learn how to throw a baseball

- I want to learn how to do public speaking

- I want to learn how to play piano

- I want to make a fire in the woods

- I want to understand the emotional impact of war

- I want to be involved in my child's life

reply
malloryerik
2 hours ago
[-]
I agree with all of these except the emotional impact of war, where, though slower, a novel or memoir might work best. Think "All Quiet on the Western Front." At the same time, we do want images of the war and the time for grounding.
reply
awesome_dude
2 hours ago
[-]
Why did you create an account just to post that?

In text format no less

reply
skydhash
3 hours ago
[-]
This is one of the core reasons I've been focused on building small tools for myself using Emacs and the shell (currently ksh on OpenBSD). HTML and the Web are good, but only in their basic form. A lot of sites fancy themselves applications and magazines, and they are very much unusable.
reply
sixtyj
3 hours ago
[-]
The older I get, the more I appreciate texts (any).

Videos, podcasts... I have them transcribed, because even though I like listening, podcasts are best consumed as text for speed of comprehension... (at least for me, I don't know about others).

reply
awesome_dude
3 hours ago
[-]
Audio is horrible (for me) for information transfer - reading (90% of the time) is where it's at

Not sure why that is either - because I look at people extolling the virtues of podcasts, saying that they are able to multitask (e.g. driving, walking, eating dinner) and still hear the message - which leaves me aghast

reply
mr_toad
2 hours ago
[-]
Podcasts are fine for entertainment, great for tuning out people or the traffic. I don’t expect to absorb information quickly, but try reading anything serious on the train when some guy is non-stop on his phone using his outside voice.
reply
awesome_dude
2 hours ago
[-]
Ha! I used to

I had a 53 minute (each way) commute on the train, and I found it perfect for reading papers or learning skills - I was always amazed that the background noise would disappear and I could get lost in the text

Best study time ever.

reply
ivanjermakov
1 hour ago
[-]
Another fascinating property of text (as compared to video) is that it's less temporally sensitive. That means it's much easier to skim through and skip sections, kind of like teleporting through the time it took to write the text.
reply
znort_
2 hours ago
[-]
> But text wins by a mile.

white on dark grey with phosphor green around? not really.

reply
NooneAtAll3
1 hour ago
[-]
I agree about text being absolute

I TOTALLY disagree on terminal being the best way

Even the text tablet shown is using the 2D surface to its full ability - we need to strive to bring that as well

reply
citbl
3 hours ago
[-]
The last 2 paragraphs were quite poetic.

PS: 2014

reply
benatkin
2 hours ago
[-]
I was surprised to see that something was text today, until I remembered having known it at some point: the .har format. Looking at simonw's Claude-generated script [1] to investigate AI-agent-sent emails [2] by extracting .har archives, I saw that it uses base64 for binary and JSON strings for text.

It might be a good bet to bet on text, but it feels inefficient a lot of the time, especially in cases like this where all sorts of files are stored in JSON documents.

1: https://gist.github.com/simonw/007c628ceb84d0da0795b57af7b74...

2: https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
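A sketch of the structure involved, with a hand-made minimal entry (the field names follow the HAR spec; a real archive would come from a browser's network panel):

```python
import base64

# minimal HAR-shaped document: binary response bodies are stored
# as base64 text, flagged with an "encoding" field
har = {"log": {"entries": [{
    "response": {"content": {
        "text": base64.b64encode(b"\x89PNG\r\n").decode("ascii"),
        "encoding": "base64",
    }},
}]}}

for entry in har["log"]["entries"]:
    content = entry["response"]["content"]
    body = content.get("text", "")
    if content.get("encoding") == "base64":
        body = base64.b64decode(body)  # back to bytes
```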

reply