I think it is underestimated how difficult this truly is.
And this will always remain uniquely human, because only the human truly knows their intent (sometimes).
I’ve had the AIs (ala the google), after I say “make me a script that does XYZ”, say “here you go”; if I ask whether it works, it tests it out and says “yep it does”, but only I will know if it is actually doing what I intended. I often have to clarify my intent because I didn’t communicate it well the first time. As we’ve all seen, even between humans, intent is not always well expressed.
There will always be a judgement made by a human: yes, that is my intent, or no, it is not.
But even in the old days of writing the “code” itself, most bugs were you not precisely saying what you wanted the program to do.
I think it’s correct to think of LLMs as compiling English to code, like C++ getting compiled to assembly.
English has 1-2 million words.
But this is drifting. The point of it all is that the ambiguity of programming languages is already high despite all our best efforts.
The ambiguity of natural language is many orders of magnitude worse.
Which is great for poetry but we’re not writing poetry.
The year I learned about Y2K, there was such a thing (more or less) from Apple. It implied knowing a strict subset of English and the correct words and constructs… it was a pain to program in that (at least for me).
More or less at that time, I started understanding that programming languages’ limitations, although at the beginning a necessity, were a feature. Indeed it was already a very small subset of English, with a very specific, succinct, small grammar that was easy to learn (well, C++ stopped being learnable some years ago… but you get the point).
The idea of LLMs eliminating well-designed languages is hard for me to believe, just as stated in the article.
Once we have quantum LLMs, the need for intermediate abstraction layers might change, but that's very [insert magic here].
And so, SQL was born, and is now used all across the globe to manage critical systems. Plain English that even a business person could understand.
Really it's just the B-derived languages (C, C++, C#, Objective-C, D, Rust, Java, Go, etc) that aren't executable pseudo-code. And they weren't even mainstream originally, with Pascal and its ilk being the preferred syntax until the mid-to-late 80s.
I really do think computing took a huge step backwards when C became the default on home computers.
Back in the 80s and 90s the bigger concern was portability and cost of the compiler license, both of which C did better.
C was closer to the metal than vanilla Pascal too. But commercial Pascal dialects supported pointers, inlined assembly, and everything else you needed to write low-level systems code. Which again comes back to C offering better portability and cheaper licenses.
For that, I’m a big fan of flow-based programming as the agnostic part. For the implementation, I’m thinking of Node-Red, which is a visual implementation of flow-based programming.
To become programming language agnostic, I’ve started on the Erlang-Red project which takes the frontend of Node Red and bolts it onto an Erlang backend.
Eventually the visual flow code will be programming-language independent.
I have dreamed about a programming language which would be basically text, but the editor would present it as a kind of flow chart. Maybe it can be done with any existing programming language? But I found some trouble with language extensions… maybe someday someone much smarter than me can implement that in a meaningful way.
That's one of the reasons people love Emacs. Once you've loaded the software, you can rebuild it from the inside out without ever shutting it down. You think of a feature, you build it directly, piece by piece, instead of creating a new project, etc.
- subflows for grouping common code that can be reused
- link nodes for defining "gotos" visually and also making code reusable
- flow tabs that group a set of nodes in a single tab and where link nodes can be used to jump into these tabs
- node packages that define new nodes but also encapsulate NodeJS code into a single visual element, i.e. a node.
Having said that, many textual tools and ideas are still missing their visual equivalents:
- how to do version control of visual code
- how to compare visual code between versions
- how to work collaboratively on the same code base
- what is refactoring in a visual program? moving and aligning is a form of refactoring that can lead to better understanding
- how to label visual elements in a consistent manner, i.e., coding conventions - what is the equivalent to the 4 versus 2 spaces/tabs debate in visual programming?
But just as many questions remain unanswered in the AI/vibe coding scene, so that doesn't mean visual programming isn't to be taken seriously; it just means it's not trivial.
I think visual programming should be taken more seriously and thought through. I like to say that we went from punchcards to keyboards and somehow we stopped there - when it comes to programming. At the same time we went from phones with operators to dial phones to push button phones to smart phones with touch screens. Why not the same for programming?
What makes programming so inherently keyboard bound?
Programming is done with programming languages, which are fundamentally non-ambiguous (that's a necessity).
Now, software in general is unfortunately pretty bad and full of bugs, so one could argue that LLMs may get to a point where they are not worse than bad software. But for anything important, we will always need a non-ambiguous language.
> if LLMs can translate programming languages seamlessly and accurately, then
If you want to accurately translate programming languages, you need to look into compiler technology. LLMs aren't that.
This is not something AI will ever be good at, simply because it is also hard for humans to do.
Translating between programming languages is a very hard problem, because someone needs to fully understand both languages. Both humans and AI have trouble with it for the same reason and only monumental AI progress, which would have other implications, could change this.
Something as basic as addition varies wildly between languages, if you look at the details. And when it comes to understanding, the details are exactly what matters.
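For example (a minimal Python sketch of my own for illustration; the 64-bit wrap-around is simulated by hand to stand in for C-style fixed-width arithmetic):

```python
# Python integers are arbitrary precision, so addition never overflows.
# A C int64_t, by contrast, wraps on most implementations (and signed overflow
# is formally undefined behaviour). Simulated here with a mask.
def add_int64(x, y):
    r = (x + y) & 0xFFFFFFFFFFFFFFFF               # keep the low 64 bits
    return r - (1 << 64) if r >= (1 << 63) else r  # reinterpret as two's complement

a = 2**63 - 1                                      # INT64_MAX
print(a + 1)                                       # Python:  9223372036854775808
print(add_int64(a, 1))                             # C-style: -9223372036854775808
```

Same "+", two very different answers, and that's before you even get to floats, strings, or operator overloading.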
Even humans can't use natural language to give succinct commands, hence the use of prescribed verbiage in air traffic control communication.
Maybe there'd be some opportunity at the small scale. If you're only a Python person and need to make a small change to a Rust codebase, it might be able to give you some Python pseudocode, and you could make some changes to that and it'd apply them in translation. But you're still going to be on the hook for reviewing the Rust translation; it's not something that can be fully trusted and automated, and it's hard to see how this would scale beyond small, easily translatable code snippets.
Notably, a modern LLM wouldn't make this mistake.
It's not at all clear to me that LLMs are or will become better at translating Python → C than English → C. It makes sense in theory, because programming languages are precise and English is not. In practice, however, LLMs don't seem to have any problem interpreting natural language instructions. When LLMs make mistakes, they're usually logic errors, not the result of ambiguities in the English language.
(I am not talking about the case where you give the LLM a one-sentence description of an app and it fails to lay out every feature as you'd imagined it. Obviously, the LLM can't read your mind! However, writing detailed English is still easier than writing Python, and I don't really have issues with LLMs interpreting my instructions via Genie Logic.)
I would have found this post more convincing if the author could point to examples of an LLM misinterpreting imprecise English.
P.S. I broadly agree with the author that the claim "English will be the only programming language you’ll ever need" is probably wrong.
I can think of a couple of reasons this may be the case.
1. There is a subset of English that you use unknowingly that has a socially accepted formal definition and so can be used as a substitute for a programming language. LLMs have learned this definition. Straying from this subset or expecting a different formal definition will result in errors.
2. The level of detail in your English description is such that ambiguity genuinely does not arise. Unlikely, because you would not consider that "natural language".
3. English is not ambiguous when describing program features, and formal definitions can be skipped. Unlikely, because the entire product owner role is built on the frequently exclaimed "that's not what I meant!".
I think it's #1, and I think that makes the most sense: through massive statistical data LLMs have learned which natural language instructions cause which modifications in codebases, for a giant number of generic problems that they have training data on.
The moment you do something new though, all bets are off.
And domains with less training data openly available are areas where innovation and differentiation and business moats live.
Oftentimes, only programming languages are precise enough to specify this type of knowledge.
English is often hopelessly vague. See how many definitions the word break has: https://www.merriam-webster.com/dictionary/break
And Solomonoff/Kolmogorov theories of knowledge say that programming languages are the ultimate way to specify knowledge.
> if AI can translate our English descriptions into working code, do we still need programming languages at all?
I think some people equate “source code” with “compiled code” or “AST” (abstract syntax tree). The former contains so many features that are still part of the English language, such as function / variable / type names, source file and folder organization, comments, assets, the git repo with its log and history, etc. And the AI probably wouldn’t be as efficient if all those elements were not part of the training data. To get rid of such programming languages and have a pure AI programming language would require tons of training data that humans will never produce (chicken-and-egg paradox).

As far as I know, and my experience confirms it (maybe biased?), the whole chain of SW engineering is there precisely because English is not always optimal.
In fact, in a project I directed, the whole requirement management was basically a loop:
Repeat{talk to customer; write formal language; simulate; validate}until no change;
It was called a “runnable specification”, not my idea. It worked absolutely incredibly well.
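A rough sketch of that loop in Python (every name and stub here is illustrative, not from the actual project):

```python
# Illustrative stubs standing in for the real activities in the loop above.
def talk_to_customer(current_spec):
    return "requirements v2" if current_spec else "requirements v1"

def write_formal_spec(wishes):
    return f"SPEC[{wishes}]"                # capture the wishes in the formal language

def simulate(spec):
    return f"behaviour of {spec}"           # run the executable specification

def validate(behaviour, wishes):
    return wishes == "requirements v2"      # pretend the second round finally matches intent

spec = None
while True:                                 # repeat { ... } until no change
    wishes = talk_to_customer(spec)
    spec = write_formal_spec(wishes)
    behaviour = simulate(spec)
    if validate(behaviour, wishes):
        break
print(spec)                                 # the approved, runnable specification
```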
That's basically the agile manifesto, but involving flexibility by having humans instead of policies defining the flow and shorter iterations.
First, it was HW and SW, and the HW was simulated. Many go on to build something, then test it.
Second, requirements were written in a formal language. I’ve never seen formal languages in DOORS (or similar) before.
Third, we did not do the whole thing, just the bits and pieces in the requirements. After the requirements were done and approved, we moved on to development, where the “normal” iterations happen.
Subtle, but not the same
The LLM only needs to convert the human language into some strict machine readable structure (like an AST), which the human doesn't need to bother with. This is a pure conversion task. From there, something else (not a human) can convert that to running machine code.
Effectively, this would obsolete the human-facing programming language.
Sure, but if you remove human facing programming languages from the equation - which are diverse and messy due to human understanding and varied preferences - then the intermediate language that the LLM generates would be much smaller and simpler to "understand" and produce.
Public code could be converted into this intermediate language to train the LLM so it has examples of human intent -> AST.
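As a toy illustration of what that strict intermediate structure could look like, here I'm just using Python's own `ast` module as a stand-in, and the "generated" source is made up:

```python
import ast

# Pretend an LLM turned "define a function that adds two numbers" into source code;
# the unambiguous, machine-readable structure underneath it is the AST.
generated_src = "def add(a, b):\n    return a + b"
tree = ast.parse(generated_src)

print(ast.dump(tree, indent=2))          # the strict structure a backend could work from
code = compile(tree, "<llm>", "exec")    # ...and something else turns it into bytecode
```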
I don't think this could eliminate programming languages, but my position is that a large percentage of development work today is not very unique at all: it's the same patterns being done over and over by different people in different (or in many cases, the same) way. We could solve this problem without LLMs, but we never will because the will is not there.
To put it another way by mutating a well-known phrase, you might go from "there's obviously a bug" to "no obvious bugs".
It's like trying to find the flaw in a mathematical proof where you personally are lacking in a concept to have the clarity.
So why shouldn't your editor/IDE be aware of your mental model, and present the world to you in a language tailored specifically to your level of abstraction (at that moment)? A pseudocode idiolang that might be a blend of concepts from Python, Go, Rust and Typescript as you need them.
And when you hit your limit in debugging a problem because it is too diffuse, you could ask the IDE to teach you the new concepts you need to view the code at a higher level of abstraction. You could imagine the UI presenting the same file side by side, with metaclassing on the one side and the alternative on the other, so you can drill into where the bugs might be hiding.
So something like Python is a fairly specialized language. Most of its concepts are not that easy to translate to another language, which may involve another set of specialized paradigms.
You will need to revert to a common base, which basically means unravel what gives Python its identity, then rebuild according to the other programming language identity. And there's a lot of human choices in there which will be the most difficult to replicate. The idiomatic way of programming is a subset of what is possible in the language just to enable faster reading between human developers.
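A small, made-up example of the kind of idiomatic Python a faithful translation would have to unravel:

```python
# Duck typing plus a generator expression: no interface or trait is declared
# anywhere, which is exactly what a translation to, say, Rust or Java would
# have to invent before it could even type-check.
def total_length(items):
    return sum(len(x) for x in items)

print(total_length(["abc", (1, 2), {"k": 1}]))   # 6
```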
So there's no language-agnostic programming, as there are no agnostic computation models. It's kinda like how there's no agnostic hardware architecture. There's a lot of fairly involved work to have cross-platform programs, but that can work because the common platform is very low-level itself (JVM and other runtimes).
Yes, everything is Turing complete and a translation can exist, but how would you make any sense of it as a reader?
But in daily life, people are not accustomed to formalize their thought at that extent as there's a collective substrate (known as culture and jargon) that does the job for natural languages.
But the wish described in TFA comes from a very naive place. Even natural languages can't be reduced to a single set.
I keep coming back to System F or similar.
A few years ago in India, I saw a presentation where people were attempting to write programs in their mother tongue.
One such effort I found on GitHub is https://github.com/betacraft/rubyvernac-marathi (for Marathi, an Indian language).
I'm not sure. Imagine that each CPU instruction or group of instructions is mapped to a MIDI sound and that you slow down the stream of beeps enough that you can hear the "song" of the program. I wonder if you wouldn't be able to start hearing error states and distinguishing when they happened.
Meaning that I think we do need some way to debug, but I'm not sure it has to be text / programming languages; and if it's an AI doing it, text also doesn't seem like the most efficient way to do it, information-density-wise.
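A toy sketch of the mapping (the pitch table and the trace are made up purely for illustration):

```python
# Map instruction categories to MIDI note numbers so a slowed-down trace becomes
# a melody; an unusual error path would then literally sound "off".
PITCH = {"load": 60, "store": 62, "add": 64, "cmp": 65, "branch": 67, "call": 72}

trace = ["load", "add", "store", "load", "add", "store", "cmp", "branch", "call"]
melody = [PITCH[op] for op in trace]
print(melody)   # [60, 64, 62, 60, 64, 62, 65, 67, 72]
```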
- Understanding system architecture constraints
- Handling edge cases AI doesn't know exist
- Debugging when AI-generated code breaks in production
- Knowing when AI's "solution" creates more problems than it solves
That last 30% is what separates engineers from prompt writers. And it's not getting smaller—if anything, it's growing as systems get more complex.
Every new product, integration, or business line introduces new edge cases, dependencies, and coordination paths. What starts as a clean architecture turns into a network of overlapping constraints - legacy data formats, different latency expectations, regulatory quirks, “temporary” patches that become permanent.
You can manage complexity for a while, but you can’t eliminate it. Every layer that simplifies work for one team usually adds hidden coupling for another. Over time, the system stops being a single design and becomes an ecosystem of compromises.
I don't think so
“Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duas ejusdem nominis fas est dividere: cujus rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.”
(“It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general any power higher than the second into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.”)
Code and math notations help you think. Notations aren't just for the computer.
Debugging a program will become like debugging your relationships - you argue until one side gives up or both are exhausted!
If that's true, what's your value? You don't understand client needs better than a product manager. You don't have an exceptional product vision. You're essentially making yourself obsolete.
Your expertise currently lies in building systems, handling edge cases, optimizing performance, and avoiding technical debt. If that can be expressed in English prompts, anyone can do your job—PMs, analysts, business people.
A programmer who can't write code is just someone with ideas. There are millions of those, and they're worth $0. Programmers who cheerlead the idea that "90% of code will be AI-written" are digging their own graves. In 5 years, they won't be replaced by AI—they'll be replaced by people who can both code AND use AI effectively.
I think we need languages optimized for isolation (no global anything, uncompilable without safety checks) and for readability. We need LLM-oriented languages, meant to be read and not written. Like the author, I think they'll look a lot more like Rust than anything else.
We should be programming them in structured natural language that expresses architecture, rather than details. Instead of application code, we also should be generating absurdly detailed and comprehensible test suites with that language, and ignoring the final implementation completely. The detailed architecture document, consisting of heavy commentary generated by the user (but organized and edited for consistency by the LLM in dialog with the user), and the test suite, should be the final product. Dropping it into any LLM should generate an almost identical implementation. That way, the language(s) can develop freely, and in a way oriented towards model usage, rather than having to follow humans who have to be retrained and reoriented after every change.
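A minimal sketch of what "the tests are the product" could look like (everything here, including the function and the business rule it encodes, is invented for illustration; the implementation is the disposable part an LLM would regenerate):

```python
def apply_discount(price, rate):
    # Disposable implementation: any regenerated version that passes the tests is fine.
    rate = max(0.0, min(rate, 0.5))        # invented rule: discounts are clamped to 0..50%
    return price * (1 - rate)

def test_discount_is_capped_at_50_percent():
    assert apply_discount(100, 0.80) == 50

def test_discount_is_never_negative():
    assert apply_discount(100, -0.10) == 100

test_discount_is_capped_at_50_percent()
test_discount_is_never_negative()
print("spec holds")
```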
So maybe LLM-agnostic programming is what I'm asking for? I want LLM interactions to focus on making my intentions obvious, and clarifying things to whatever degree is necessary so it never has to really think about anything when generating the final product. I want the LLMs to use me as a context-builder. They can do the programming. Incidentally, this will obviously still take programmers because we know what is possible and what is not; like a driver feels their car as an extension of their body, although they're communicating with it through a wheel, three pedals, and a stick.*
Right now, LLMs are asking me what I want them to do too much. I want to tell them what I want them to do, and to have them probe the details of that until there's no place for them to make a mistake. A "programmer" will be the one who sets the program.
[*] Imagine the alternative (it's easy) of an autonomous car that says "Do you want to go to the grocery store? Or maybe visit your mother?" Stay out of my business, car. I have an organizer for that. I'll tell you where I want to go.