I have mixed feelings about this. On the one hand, I agree: text is infinitely versatile, indexable, durable, etc. But, after discovering Bret Victor's work[1], and thinking about how I learned piano, I've also started to see a lot of the limitations of text. When I learned piano, I always had a live feedback loop: play a note, and hear how it sounds, and every week I had a teacher coach me. This is a completely different way to learn a skill, and something that doesn't work well with text.
Bret Victor's point is: why isn't this also the approach we use for other topics, like engineering? There are many people who do not have a strong symbolic intuition, and being able to tap into their (and our) other intuitions is a very powerful way to increase the efficiency of communication. More and more, I have found myself in this alternate philosophy of education and knowledge transmission. There are certainly limits—and text isn't going anywhere, but I think there's still a lot more to discover and try.
[1] https://dynamicland.org/2014/The_Humane_Representation_of_Th...
I think the downside, at least near-term, or maybe challenge would be the better word, is that anything richer than text requires a lot more engineering to make it useful. B♭ is text. Most of the applications on your computer, including but not limited to your browser, know how to render B♭ and C♯, and your brain does the rest.
Bret Victor's work involves a ton of really challenging heavy lifting. You walk away from a Bret Victor presentation inspired, but also intimidated by the work put in, and the work required to do anything similar. When you separate his ideas from the work he puts in to perfect the implementation and presentation, the ideas by themselves don't seem to do much.
Which doesn't mean they're bad ideas, but it might mean that anybody hoping to get the most out of them should understand the investment that is required to bring them to fruition, and people with less to invest should stick with other approaches.
> You walk away from a Bret Victor presentation inspired, but also intimidated by the work put in, and the work required to do anything similar. When you separate his ideas from the work he puts in to perfect the implementation and presentation, the ideas by themselves don't seem to do much.
Amen to that. Even Dynamicland has some major issues with GC pauses and performance.
I do try to put my money where my mouth is, so I've been contributing a lot to folk computer[1], but yeah, there's still a ton of open questions, and it's not as easy as he sometimes makes it look.
[1] https://folk.computer/
Although, one could make the argument that staff notation is itself a form of text, albeit one with a different notation than a single stream of Unicode symbols. Certainly, without musical notation, a lot of music is lost (although one can argue that musical notation is not able to adequately preserve some aspects of musical performance, which is part of why, when European composers tried to adopt jazz idioms into their compositions in the early twentieth century working from sheet music, they missed the whole concept of swing, which is essential to jazz).
Yes, but musical notation is far superior to text for conveying the information needed to play a song.
It renders the term "text" effectively meaningless.
Take a problem like untangling a pile of cords. Writing out how to do that in text would be a drag, and reading those directions probably wouldn't be helpful either. But a kid can learn how to untangle just by observation.
Physical intuition is an enormous part of our intelligence, and is hard to convey in text: you could read millions of words about how to ride a bike, and you would learn nothing compared to spending a few hours trying it out and falling over until it clicks.
I mean, this very discussion is a case study in the supremacy of text. I skimmed the OP's blog post in thirty seconds and absorbed his key ideas. Your link is to a 54 minute video on an interesting topic which I unfortunately don't have time to watch. While I have no doubt that there are interesting ideas in it, video's inferior to text for communicating ideas efficiently, so most people reading this thread will never learn those ideas.
Text is certainly not the best at all things and I especially get the idea that in pedagogy you might want other things in a feedback loop. The strength of text however is its versatility, especially in an age where text transformers are going through a renaissance. I think 90%+ of the time you want to default to text, use text as your source of truth, and then other mediums can be brought into play (perhaps as things you transform your text into) as the circumstances warrant.
Actually, you might want to check the video again, it has sections and a full transcript on the right side, precisely to make skimming easy!
> video's inferior to text for communicating ideas efficiently
Depends on the topic tbh. For example, YouTube has had an absolute explosion of car repair videos, precisely because video format works so well for visual operations. But yes, text is currently the best way to skim/revisit material. That's one reason I find Bret's website so intriguing, since he tries to introduce those navigation affordances into a video medium.
> The strength of text however is its versatility, especially in an age where text transformers are going through a renaissance. I think 90%+ of the time you want to default to text, use text as your source of truth, and then other mediums can be brought into play (perhaps as things you transform your text into) as the circumstances warrant.
Agree, though not because of text's intrinsic ability, but because its ecosystem stretches thousands of years. It's certainly the most pragmatic choice of 2025. But, I want to see just how far other mediums can go, and I think there's a lot of untapped potential!
The fidelity and encoding strength of the "idea" you got the gist of from skimming might be less than the "idea" you receive when you spend the time to watch the 54-minute video.
I've also become something of a text maximalist. It is the natural meeting point in human-machine communication. The optimal balance of efficiency, flexibility and transparency.
You can store everything as a string; base64 for binary, JSON for data, HTML for layout, CSS for styling, SQL for queries... Nothing gets closer to the mythical silver-bullet that developers have been chasing since the birth of the industry.
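In Python, for instance, the whole pattern is a few lines. A minimal sketch; the file name is just a stand-in:

```python
import base64
import json

# Any binary payload can ride inside plain JSON as a base64 string.
payload = open("logo.png", "rb").read()  # "logo.png" is a stand-in
doc = json.dumps({"name": "logo", "data": base64.b64encode(payload).decode("ascii")})

# The round trip recovers the exact original bytes.
restored = base64.b64decode(json.loads(doc)["data"])
assert restored == payload
```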
The holy grail of programming has been staring us in the face for decades, and yet we still keep inventing new data structures and complex tools to transfer data... All to save like 30% bandwidth; an advantage which is almost fully cancelled out after you GZIP the base64 string, which most HTTP servers do automatically anyway.
Same story with ProtoBuf. All this complexity is added to make everything binary. For what goal? Did anyone ever ask this question? To save 20% bandwidth, which, again, is an advantage lost after GZIP... And to avoid the negligible CPU cost of deserialization, you completely lose human readability.
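If you want to sanity-check the GZIP claim on your own data, here is a quick sketch ("payload.bin" is a stand-in; how much of the base64 overhead gzip claws back varies a lot with the data):

```python
import base64
import gzip

# Compare the gzipped size of a raw payload against its base64 form.
raw = open("payload.bin", "rb").read()  # stand-in for your real data
b64 = base64.b64encode(raw)

print("raw, gzipped:   ", len(gzip.compress(raw)))
print("base64, gzipped:", len(gzip.compress(b64)))
```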
In this industry, there are tools and abstractions which are not given the respect they deserve and the humble string is definitely one of them.
As someone whose daily job is to move protobuf messages around, I don't think protobuf is a good example to support your point :-)
AFAICT, the binary format of a protobuf message exists strictly to provide a strong forward/backward compatibility guarantee. Beyond that, the textproto format and even the JSON format are both versatile, and commonly used as configuration languages (i.e. when humans need to interact with the file).
I've moved away from DOCish or PDF for storage to text (usually markdown) with Makefiles to build with Typst or whatever. Grep works, git likes it, and I can easily extract it to other formats.
My old 1995 MS thesis was written in Lotus Word Pro and the last I looked, there was nothing to read it. (I could try Wine, perhaps. Or I could quickly OCR it from paper.) Anyway, I wish it were plain text!
Base64 and JSON take a lot of CPU to decode; this is where Protobuf shines (for example). Bandwidth is one thing, but the most expensive resources are RAM and CPU, and it makes sense to optimize for them by using "binary" protocols.
For example, when you gzip a Base64-encoded picture, you end up 1. encoding it in base64 (which takes a *lot* of CPU) and then 2. compressing it (again! JPEG is already compressed).
I think what it boils down to is scale; if you are running a small shop and performance is not critical, sure, do everything in HTTP/1.1 if that makes you more productive.
But when numbers start mattering, designing binary protocols from scratch can save a lot of $ in my experience.
Shipping base64 in JSON instead of a multipart POST is very bad for stream processing. In theory one could stream-process JSON and base64... but only the JSON keys that come before the blob would be available at the point where you need to make decisions about what to do with the data.
Still, at least it's an option to put base64 inline inside the JSON. With binary, this is not an option and you must send the data separately in all cases, even small binaries...
You can still stream the base64 separately and reference it inside the JSON somehow like an attachment. The base64 string is much more versatile.
Even with binary, you can store a binary inline inside of another one if it is a structured format with a "raw binary data" type, such as DER. (In my opinion, DER is better in other ways too, and (with my nonstandard key/value list type added) it is a superset of the data model of JSON.)
Using base64 means that you must encode and decode it, but binary data directly means that is unnecessary. (This is true whether or not it is compressed (and/or encrypted); if it is compressed then you must decompress it, but that is independent of whether or not you must decode base64.)
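For reference, that kind of nesting is easy to see in DER itself; a tiny sketch of an OCTET STRING wrapper (short-form lengths only, i.e. payloads under 128 bytes):

```python
# DER OCTET STRING: tag 0x04, then a length byte, then the raw bytes.
def der_octet_string(data: bytes) -> bytes:
    assert len(data) < 128  # short-form length only, for this sketch
    return bytes([0x04, len(data)]) + data

inner = der_octet_string(b"\x01\x02\x03")
outer = der_octet_string(inner)  # binary nested inside binary, no base64
print(outer.hex())  # 04050403010203
```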
Text is just bytes, and bytes are just text. I assume this is talking about human readable ASCII specifically.
Text is human readable writing (not necessarily ASCII). It is most certainly not just any old bytes the way you are saying.
I think the obsession with text comes down to two factors: conflating binary data with closed standards, and poor tooling support. Text implies a baseline level of acceptable mediocrity for both. Consider a CSV file with millions of base64-encoded columns and no column labels. That would really not be any friendlier than a binary file with an openly documented format and a suitable editing tool, e.g. SQLite.
Maybe a lack of fundamental technical skills is another culprit, but binary files really aren't that scary.
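To make the comparison concrete, here is a toy sketch of the "openly documented binary format with suitable tooling" side, using Python's built-in SQLite bindings (the table and values are invented for illustration):

```python
import sqlite3

# An openly documented binary container with real tooling: labeled
# columns, typed values, and raw BLOBs with no base64 detour.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE assets (name TEXT PRIMARY KEY, data BLOB)")
con.execute("INSERT INTO assets VALUES (?, ?)", ("logo", b"\x89PNG\r\n\x1a\n"))
(blob,) = con.execute("SELECT data FROM assets WHERE name = ?", ("logo",)).fetchone()
assert blob.startswith(b"\x89PNG")
```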
Text is bytes accompanied by a major constraint on which sequences of bytes are permitted (a useful compression into principal axes that emerged over thousands of years of language evolution), along with a natural connection to human semantics that comes from universal adoption of the standard (allowing correlations to be modelled).
Text is like a complexity funnel (analogous to a tokenizer) that everyone shares. Its utility is derived from its compression and its standardization.
If everyone used binary data with their own custom interpretation schema, it might work better for that narrow vertical, but it would not have the same utility for LLMs.
This also leads to the unreasonable effectiveness of LLMs. The models are good because they have thousands of years of humans trying to capture every idea as text. Engineering, math, news, literature, and even art/craftsmanship. You name it, we wrote it down.
Our image models got good when we started making shared image and text embedding spaces. A picture is worth 1000 words, but 1000 words about millions of images are what allowed us to teach computers to see.
Reread Story of Your Life just now, and all it made me want to do was learn Heptapod B and their semagram style of written communication.
Reading “Mathematica - A secret world of intuition and curiosity” as well, and a part stuck out in a section called The Language Trap. The example the author gives is a recipe for making banana bread: if you're familiar with bananas, it's obvious that you need to peel them before mashing. But if you haven't seen a banana, you'd have no clue what to do. Should a recipe say to peel the banana, or can that go unsaid? Questions like these are clearly coming up more with AI and context, but it's the same for humans. He ends that section saying most people prefer a video for cooking rather than a recipe.
Other quote from him:
“The language trap is the belief that naming things is enough to make them exist, and we can dispense with the effort of really imagining them.”
Much as I love text for communication, it's worth knowing that "28% of US adults scored at or below Level 1, 29% at Level 2, and 44% at Level 3 or above" - Literacy in the United States: https://en.wikipedia.org/wiki/Literacy_in_the_United_States
Anything below 3 is considered "partially illiterate".
I've been thinking about this a lot recently, as someone who cares about technical communication and making technical topics accessible to more people.
Maybe wannabe educators like myself should spend more time making content for TikTok or YouTube!
The inverse of this is the wisdom that pearls should not be cast before swine. If you want to increase literacy rates, it's unclear to me how engaging people on an illiterate medium will improve things.
Technical topics demand a technical treatment, not 30-second junk food bites of video infotainment that then imbue the ignorant audiences with the semblance or false feeling of understanding, when they actually possess none. This is why we have so many fucking idiots dilating everywhere on topics they haven't a clue on - they probably saw a fucking YouTube video and now consider themselves in possession of a graduate degree in the subject.
Rather than try to widely distribute and disseminate knowledge, it would be far more prescient to capitalize on what will soon be a massive information asymmetry and widening intellectual inequality between the can-reads and the can't-reads, accelerated by the production of machine generated, misinformative slop at scale.
Saying that a 20x20 image of a Twitter logo is 4000 bytes is just so wrong.
The image is of a monochrome logo with anti-aliased edges. Due to being a simple filled geometric shape, it could compress well with RLE, ZIP compression, or even predictors. It could even be represented as vector drawing commands (LineTo, CurveTo, etc...).
In a 1-bit-per-pixel format, a 20x20 image ends up as 400 bits (50 bytes).
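The arithmetic, as a quick pure-Python sketch:

```python
# 20x20 pixels at 1 bit per pixel: 400 bits, i.e. 50 bytes.
width, height = 20, 20
print(width * height // 8)  # 50

# Packing one 20-pixel row of 1-bit values into bytes, for example:
row = [1] * 20
packed = int("".join(map(str, row)), 2).to_bytes((len(row) + 7) // 8, "big")
print(packed.hex())  # 0fffff
```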
2021 (570 points, 339 comments) https://news.ycombinator.com/item?id=26164001
2015 (156 points, 69 comments) https://news.ycombinator.com/item?id=10284202
2014 (355 points, 196 comments) https://news.ycombinator.com/item?id=8451271
gnabgib points out that this same article has been posted for comment here three other times since it was written. That said, afaict no one has commented any of these times on what I'm about to say, so hopefully this will be new.
I'm a linguist, and I've worked in endangered languages and in minority languages (many of which will some day become endangered, in the sense of not having native speakers). The advantage of plain text (Unicode) formats for documenting such languages (as opposed to binary formats like Word used to be, or databases, or even PDFs) is that text formats are the only thing that will stand the test of time. The article by Steven Bird and Gary Simons, "Seven Dimensions of Portability for Language Documentation and Description", was the seminal paper on this topic, published in 2002. I've given later conference talks on the topic, pointing out that we can still read grammars of Greek and Latin (and Sanskrit) written thousands of years ago. And while the group I led published our grammars in paper form via PDF, we wrote and archived them as XML documents, which (along with JSON) are probably as reproducible a structured format as you can get. I'm hoping that 2000 years from now, someone will find these documents both readable and valuable.
There is of course no replacement for some binary format when it comes to audio.
(By "binary" format I mean file formats that are not sequential and readily interpretable, whereas text files are interpretable once you know the encoding.)
This is sort of the premise for all of us electronics-as-code startups. We think that a text-based medium for the representation of circuits is a necessity for AI to be able to create electronics. You can't skip this step and generate schematic images or something. You have to have a human-readable (which also means AI-compatible) text medium. Another confusion: KiCad files are represented in text, so shouldn't AI be able to generate them? No: AI has similar levels of spatial understanding to a human reading these text files. You can't have a ton of XY coordinates or other non-human-friendly components in the text files. Everything will be text-based and human-readable, at least at the first layer of AI generation for serious applications.
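A sketch of the idea, with a netlist format invented purely for illustration (not any real tool's): parts connect to named nets, and there isn't an XY coordinate in sight.

```python
# A hypothetical human-readable circuit description: each line is a
# reference, a part kind, a value, then the named nets it connects to.
netlist_text = """
R1 resistor  10k  VCC OUT
C1 capacitor 100n OUT GND
"""

# Parse into {ref: (kind, value, [nets...])} for downstream tools.
circuit = {}
for line in netlist_text.strip().splitlines():
    ref, kind, value, *nets = line.split()
    circuit[ref] = (kind, value, nets)

print(circuit["R1"])  # ('resistor', '10k', ['VCC', 'OUT'])
```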
With LLMs, the text format should be more popular than ever, yet we still see people pushing binary protocols like ProtoBuf for a measly 20% bandwidth advantage, which is lost after GZIPing the equivalent JSON... Or a 30% CPU advantage on serialization, which becomes more like a 1% advantage once you consider deserialization in the context of everything else going on in the system, which uses far more CPU.
It's almost like some people think human-readability, transparency and maintainability are negatives!
For a computer, text is a binary format like anything else. We have decades of tooling built on handling linear streams of text where we sometimes encode higher dimensional structures in it.
But I can't help feeling that we try to jam everything into that format because that's what's already ubiquitous. Reminds me of how every hobby OS is a copy of some Unix/Posix system.
If we had a more general structured format would we say the opposite?
I wonder if some day there will be a video codec that is essentially a standard distribution of a very precise and extremely fast text-to-video model (like SmartTurboDiffusion-2027 or something). Because surely there are limits to text, but even the example you gave does not seem to me to be beyond the reach of a text description, given a certain level of precision and capability in the model. And we now have faster than realtime text to video.
It would be impossible to change the model. It would be like a codec, like H.264 but with 1-2GB of fixed data attached to that code name. Changing the model is like going to H.265. Different codec.
It's easy to be a text maximalist now we're in the LLM era, but I disagree that ideas are a separate, nonphysical realm that cannot otherwise be described.
https://lucent.substack.com/p/one-map-hypothesis
People make fun of it, but I think that Unixey stuff can still use tools that have existed since the 70's [1] largely because they're text based. Every OS has its own philosophy on how to do GUI stuff and as such GUI programs have to do a lot of bullshit to migrate, but every OS can handle text in one form or another.
When I first started using Linux I used to make fun of people who were stuck on the command line, but now pretty much everything I do is a command line program (using NeoVim and tmux).
[1] Yes, obviously with updates but the point more or less still stands.
I was going to disagree, along the lines of the people bringing up Bret Victor or other modes of communication and learning, but I have long accepted that the written word has been one of the largest boons for learning in human history, so I guess I agree. Still, it'll be an interesting and worthwhile challenge to make a better medium with modern technology.
I just recently made the intentional decision to keep the equation input in FuzzyGraph (https://fuzzygraph.com) plain text (instead of something like the stylized LaTeX that Desmos has) in order to make it easy to copy and paste equations.
This is one of those irritating articles where one agrees with the gist, but there are serious flaws in the support.
There are societies, even now, that don't have text. Yes, they represent a tiny fraction of 1% of the global population, but they do exist. And the beauty of text is that this level of nuance can be conveyed, a simplistic, inaccurate, broad brush approach is not needed.
Nor is text the oldest form of communication. Having recently started exploring the cave art record, I find the text informs me that cave art is at least an upper-middle single digit multiple of the age of text.
Yes, a picture paints a thousand words, which can then be interpreted a thousand ways.
Text has the ability to convey precise, accurate, objective information; it does not, as this article demonstrates, necessarily do so.
This is one of the core reasons I've been focused on building small tools for myself using Emacs and the shell (currently ksh on OpenBSD). HTML and the Web are good, but only in their basic form. A lot of stuff fancies itself applications and magazines, and it is very much unusable.
I agree with all of these except the emotional impact of war, where, though slower, a novel or memoir might work best. Think "All Quiet on the Western Front." At the same time we do want images of the war and the time for grounding.
The older I get, the more I appreciate texts (any).
Videos, podcasts... I have them transcribed, because even though I like listening to music, podcasts are best consumed as text for speed of comprehension... (at least for me, I don't know about others).
Audio is horrible (for me) for information transfer - reading (90% of the time) is where it's at
Not sure why that is either - because I look at people extolling the virtues of podcasts, saying that they are able to multitask (e.g. driving, walking, eating dinner) and still hear the message - which leaves me aghast.
Podcasts are fine for entertainment, great for tuning out people or the traffic. I don’t expect to absorb information quickly, but try reading anything serious on the train when some guy is non-stop on his phone using his outside voice.
I had a 53 minute (each way) commute on the train, and I found it perfect for reading papers or learning skills - I was always amazed that the background noise would disappear and I could get lost in the text.
Best study time ever.
Another fascinating property of text (as compared to video): it's less temporally sensitive. It's much easier to skim through and skip sections, kind of like teleporting through the time it took to write the text.
I was surprised to see something in text today, until I remembered knowing it at some point - the .har format. Looking at simonw's Claude-generated script [1] for investigating emails sent by an AI agent [2] by extracting .har archives, I saw that it uses base64 for binary and JSON strings for text.
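For the curious, HAR files are plain enough that extraction fits in a few lines; a minimal sketch (the file name is a stand-in; the fields are from the HAR spec):

```python
import base64
import json

# HAR archives are JSON; response bodies are stored as text, or as
# base64 strings flagged by an "encoding" field.
with open("example.har") as f:  # "example.har" is a stand-in
    har = json.load(f)

for entry in har["log"]["entries"]:
    content = entry["response"]["content"]
    body = content.get("text", "")
    if content.get("encoding") == "base64":
        body = base64.b64decode(body)
    print(entry["request"]["url"], len(body))
```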
It might be a good bet to bet on text, but it feels inefficient a lot of the time, especially in cases like this where all sorts of files are stored in JSON documents.
1: https://gist.github.com/simonw/007c628ceb84d0da0795b57af7b74...
2: https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
The 1% where something else is better?
YouTube videos that show you how to access hidden fasteners on things you want to take apart.
Not that I can't get absolutely anything open, but sometimes it's nice to be able to do so with minimal damage.
Tools that are mostly text or have text interfaces? Greatly improved by LLM.
So all of those rich multimedia and their players/editors really need to add text representations.
Text will win, unless there is a lower effort option. The lower effort option does not need to be better, just easier.
It is amazing what we can do with a few strings of symbols, thanks to the fact that we all learn to decode them almost for free.
The oldest and most important technology indeed.
- I want to learn how to climb rock walls
- I want to learn how to throw a baseball
- I want to learn how to do public speaking
- I want to learn how to play piano
- I want to make a fire in the woods
- I want to understand the emotional impact of war
- I want to be involved in my child's life
In text format, no less.
I TOTALLY disagree on terminal being the best way.
White on dark grey with phosphor green around? Not really.
Even the text tablet shown is using the 2D surface to its full ability - we need to strive to bring that as well.
PS: 2014