I expect that over time we will adopt a perspective on programming in which the code itself doesn't matter at all: each engineer can bring whatever syntax or language they prefer, and it will be freely translated into isomorphic code in other languages as needed to run in whatever setting is required. We will probably settle on an interchange format that is as close as possible to the "intent" of the code, with all the language-specific concepts stripped away, and then all a language will be is a toolbox an engineer carries with them of their favorite ways to express concepts.
In particular, an error on one line may force you to change a large part of your code. As a beginner, this can be intimidating ("do I really need to change everything that uses this struct to use a borrow instead of ownership? will that cause errors elsewhere?"), and I found it induced analysis paralysis in me. Talking to an LLM about my options gave me the confidence to make a big change.
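Concretely, the kind of change I mean (a toy sketch with made-up names): switching a struct from owning a value to borrowing it forces a lifetime annotation onto every holder, and every caller now has to keep the original value alive.

```rust
// Owning version: Config owns its String outright.
struct Config {
    name: String,
}

// Borrowing version: every holder needs a lifetime parameter,
// and every use site has to be revisited.
struct ConfigRef<'a> {
    name: &'a str,
}

fn main() {
    let name = String::from("app");
    let owned = Config { name: name.clone() };
    let borrowed = ConfigRef { name: &name }; // `name` must outlive `borrowed`
    println!("{} {}", owned.name, borrowed.name);
}
```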
I've noticed the same pattern learning other things. Having an on-demand tutor that can see your exact code changes the learning curve. You still have to do the work, but you get unstuck faster.
EDIT: typo fixed, thx
I.e., the parent is speaking in the context of learning, not in the context of producing something that merely appears to work.
> I don't see why it *shouldn't* be even more automated
In my particular case, I'm learning, so having an LLM write the whole thing for me defeats the point. The LLM is a very patient (and sometimes unreliable) mentor.
Why not Go? Slower than Rust, and the type system is lacking.
Why not C? Not memory safe.
Why not C++? Also not memory safe.
Why not Zig? Not memory safe.
Why Rust? Fast, memory safe, type safe. The compiler pushes back on LLM hallucinations and errors (toy example below).
Over time, security-critical and performance-critical projects will autonomously be rewritten in Rust.
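A toy example of what I mean by pushback (my own made-up snippet): a plausible-looking hallucination fails at compile time instead of at runtime.

```rust
fn main() {
    let nums = vec![1, 2, 3];

    // A hallucinated line like the following won't compile:
    // `String` doesn't implement `Sum<&i32>`, so the types don't line up.
    // let total: String = nums.iter().sum();

    let total: i32 = nums.iter().sum(); // the checked version
    println!("{total}");
}
```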
That’s not a bad thing though. It’s okay to aim for your thing and be good at just that. No need to try to please everyone.
Re: Go, I don't want to use a language that is slower than C, LLM help or not.
Zig is the real next JavaScript, not Rust or Go. It's as fast as or faster than C, it compiles very fast, and it has fast, safe release modes. It has incredible metaprogramming, even easier to use than Lisp's.
Kudos to that guy for solving the puzzle, but I really don't want to need a special trick to get the compiler to let me reuse a buffer in a for loop.
[1]: https://davidlattimore.github.io/posts/2025/09/02/rustforge-...
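I haven't reproduced the exact puzzle from the talk, but the classic shape of the problem looks something like this sketch, which the compiler deliberately rejects: a borrow that escapes on only one path is treated as live on every path.

```rust
// Returns a parsed frame if the buffer holds enough bytes.
fn try_parse(buf: &mut Vec<u8>) -> Option<&[u8]> {
    if buf.len() >= 4 { Some(&buf[..4]) } else { None }
}

fn read_frame(buf: &mut Vec<u8>) -> &[u8] {
    loop {
        if let Some(frame) = try_parse(buf) {
            return frame; // the borrow only escapes here...
        }
        // ...but the checker treats `buf` as still borrowed on this path,
        // so refilling the buffer is rejected even though it is safe:
        buf.extend_from_slice(b"more"); // ERROR: cannot borrow `*buf` as mutable more than once
    }
}
```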
Rust's borrow checker is only able to prove at compile time that a subset of correct programs are correct. There are many correct programs that the BC is unable to prove correct, and it therefore rejects them.
I’m a big fan of Rust and the BC. But let’s not twist reality here.
There are programs that "work" but the reason they "work" is complicated enough that the BC is unable to understand it. But such programs tend to be difficult for human readers to understand too, and usually unnecessarily so.
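The canonical illustration is "NLL problem case 3" (the one Polonius is meant to fix): the program below is sound, because the first borrow is dead on the `None` path, yet the checker still rejects it.

```rust
use std::collections::HashMap;

fn get_or_insert(map: &mut HashMap<u32, String>, key: u32) -> &String {
    match map.get(&key) {
        Some(v) => v,
        None => {
            // The borrow from `get` is dead here, but it is still counted as live:
            map.insert(key, String::new()); // ERROR: cannot borrow `*map` as mutable
            map.get(&key).unwrap()
        }
    }
}
```

In this particular case the idiomatic workaround is the entry API: `map.entry(key).or_default()`.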
What did it for me was thinking through how mutable == exclusive and non-mutable == shared, and getting my head around Send and Sync (not quite there yet).
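The whole model in a few lines, if it helps anyone else (toy snippet):

```rust
fn main() {
    let mut x = 0;
    let r1 = &x; // shared: any number of readers may coexist
    let r2 = &x; // fine
    // let w = &mut x; // ERROR here: &mut means *exclusive*, no aliasing
    println!("{r1} {r2}");
    let w = &mut x; // fine once r1 and r2 are no longer used
    *w += 1;
}
```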
AI helps me with boilerplate, but not with understanding, and if I don't understand, I cannot validate what the AI produces.
Rust is worth the effort.
I'm starting my rusty journey and I'm only a few months in. With the rise of autogenerated code, it's paradoxically much harder to go slow and understand the fundamentals.
Your comment is reassuring to read.
I certainly am no fan of C, but from a certain point of view it's much easier to understand what's going on in C.
The 'magic' in Python means that skilled developers can write libraries that work at the appropriate level of abstraction, so they are a joy to use.
Conversely, it also means that a junior dev, or an LLM pretending to be a junior dev, can write insane things that are nearly impossible to use correctly.
Oh cool, someone has imported a library that does a shedload of really complicated magic that nobody in the shop understands. That's going to go well.
We (the software engineering community as a whole) are also seeing something similar with AI-generated code: screeds of code going into codebases that nobody fully understands (give a reviewer a 5-line PR and they will find 14 things to change; give them a 500-line PR and "LGTM" is all you will see).
Readability gets destroyed when a function can accept 3 different types, all named the same thing, with magic strings acting as enums, and you just have to hope all the cases are well documented.
And the other problem with functions accepting dynamic types is that even if your function in reality only handles one type, it still has to defensively handle someone passing it things that will cause an error.
All the dynamic typing really did was move the cognitive load from the caller to the callee.
Better structure and clear typing make the review much easier.
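To make the contrast concrete, a toy sketch (in Rust, since that's the thread's subject) of what replaces the magic strings: an enum spells out the accepted cases, and the compiler checks them exhaustively.

```rust
// Instead of render(data, "csv") with a magic string, the accepted
// cases live in the type:
enum Output {
    Json,
    Csv { delimiter: char },
}

fn render(data: &[u32], out: Output) -> String {
    match out {
        Output::Json => format!("{data:?}"),
        Output::Csv { delimiter } => {
            let sep = delimiter.to_string();
            data.iter()
                .map(|n| n.to_string())
                .collect::<Vec<_>>()
                .join(sep.as_str())
        }
    }
}

fn main() {
    println!("{}", render(&[1, 2, 3], Output::Csv { delimiter: ',' }));
}
```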
Python was supposed to embrace the idea that "there's only one way to do it", which appeals after Perl's "there's more than one way to do it", but the reality is there are 100 ways to do it, and they're all shocking.
You have to rewrite it for every new processor? Big deal. LLM magic means that cost isn't an issue, and a rewrite is just changing a single variable in the docs.
But, if you read the article, the reasons given for rust in particular are reasonable, and not matched by assembly or machine code.
That is literally not true - is the author speaking about what he personally sees at his specific workplace(s)?
If 90% of the code at any given company is LLM-generated, that is either a doomed company or a company that doesn't write any relevant code to begin with.
I literally cannot imagine a serious company in which that is a viable scenario.
But claiming to have 20 copies of Claude Code running simultaneously, and that the code works so well you don't need testers == high on your own supply.
Reminds me of a bar owner who died of liver failure. People said he was his own best customer.
I've recently joined a startup whose stack is Ruby on Rails + PostgreSQL. Whilst I've used PostgreSQL, and am extremely familiar with relational databases (especially SQL Server), I've never been a Rubyist - never written a line of Ruby until very recently, in fact - and I certainly don't know Rails, although the MVC architecture and the way projects are structured feel very comfortable.
We have what I'll describe as a prototype that I am in the process of reworking into a production app by fixing bugs, and making some pretty substantial functional improvements.
I would say, out of the gate, 90%+ of the code I'm merging is initially written by an LLM for which I'm writing prompts... because I don't know Ruby or Rails (although I'm picking them up fast), and rather than scratch my head and spend a lot of time going down a Google and Stack Overflow black hole, it's just easier to tell the LLM what I want. But, of course, I tell it what I want like the software engineer I am, so I keep it on a short leash where everything is quite tightly specified, including what I'm looking for in terms of structure and architectural concerns.
Then the code is fettled by me to a greater or lesser extent. Then I push and PR, and let Copilot review the code. Any good suggestions it makes I usually allow it to either commit directly or raise a PR for. I will often ask it to write automated tests for me. Once it's PRed everything, I then both review and test its code and, if it's good, merge into my PR, before running through our pipeline and merging everything.
Is this quicker?
Hmm.
It might not be quicker than an experienced Rails developer would make progress, but it's certainly a lot quicker than I - a very inexperienced Rails developer - would make progress unaided, and that's quite an important value-add in itself.
But yeah, if you look at it from a certain perspective, an LLM writes 90% of my code, but the reality is rather more nuanced, and so it's probably more like 50-70% that remains that way after I've got my grubby mitts on it.
I’m a bit concerned we might be losing something without the Google and Stack Overflow rabbit holes, and that’s the context surrounding the answer. Without digging through docs you don’t see what else is there. Without the comments on the SO answer you might miss some caveats.
So while I’m faster than I would have been, I can’t help but wonder if I’m actually stunting my learning curve and might end up slower in the long term.
Did I get that right or did I miss anything?
> We are entering an era where the Brain of the application (the orchestration of models, the decision-making) might remain in Python due to its rich AI ecosystem, but the Muscle, the API servers, the data ingestion pipelines, the sidecars, will inevitably move to Go and Rust. The friction of adopting these languages has collapsed, while the cost of not adopting them (in AWS bills and carbon footprint) is rising.
This is the most Silicon Valley-brained thing I've seen in a while.
We're entering an era where I continue to write applications in C++ like I've always done, because it's the right choice for the job, except I might evaluate AI as an autocomplete assistant at some point. Code quality and my understanding of that code remain high, which lets me deliver at a much faster pace than someone spamming LLM agent orchestration, and debuggability remains excellent.
90% of code written by devs is not written by AI. If this is true for you, try a job where you produce something of value instead of working at some random Silicon Valley startup.
My take: Any gains from an "LLM-oriented language" will be swamped by the massive training set advantage held by existing mainstream languages. In order to compete, you would need to very rapidly build up a massive corpus of code examples in your new language, and the only way to do that is with... LLMs. Maybe it's feasible, but I suspect that it simply won't be worth the effort; existing languages are already good enough for LLMs to recursively self-improve.
I'm always surprised when agents aren't working directly with the AST.
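A minimal sketch of what I mean, assuming the `syn` crate (with its "full" feature) as the parser: the agent walks structured items instead of grepping a wall of text.

```rust
// Assumed dependency: syn = { version = "2", features = ["full"] }
use syn::{File, Item};

fn main() {
    let src = "fn add(a: i32, b: i32) -> i32 { a + b }";
    let ast: File = syn::parse_file(src).expect("failed to parse");

    // Walk top-level items structurally rather than by string matching.
    for item in &ast.items {
        if let Item::Fn(func) = item {
            println!("found fn `{}`", func.sig.ident);
        }
    }
}
```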
The real issue with doing this is that there is no body of code available to train your models on. As a result, the first few such languages look like opinionated Python.
Python too -- hear me out. With spec-driven development to anchor things, coupled with property-based tests (PBT) using Hypothesis, it's great for prototyping problems.
You wouldn't write mission-critical stuff with it, but it has two advantages over so-called "better designed languages": a massive ecosystem and massive training data.
If your problem involves manipulating dataframes (polars, pandas), plotting (seaborn), and machine learning, Python just can't be beat. You can try using an LLM to generate Rust code for this -- go ahead and try it -- and you'll see how bad it can be.
Better ecosystems and better training can beat better languages in many problem domains.
People do; they also write mission-critical stuff in Lua, Tcl, Perl, and plenty of other languages. What they generally won't do is write performance-critical stuff in those languages. But there is definitely some critical communication infrastructure out there running on interpreted languages like these.
Go uses GC and therefore can't be used for hard real-time applications. That's disqualifying, as I understand it.
C, C++, Rust, Ada, and Mojo are true systems languages IMO. It is true that GC-enabled languages can be used as long as you can pre-allocate your data structures and disable GC at runtime. However, many of them rely on GC in their standard libraries.
Anything that calls itself a “systems language” should support performance engineering to the limits of the compiler and hardware. The issue with a GC is that it renders entire classes of optimization impossible even in theory.
Their definition was not the one most people would have used (leading to C, C++, Rust, Ada, etc., as you listed) but systems as in server systems, distributed services, and so on. That is, it's a networked-systems language, not a low-level systems language.
Also, despite the GC, there’s a sizeable amount of systems programming already done in Go and proven in production.
Given how much importance is being deservedly given to memory safety, Go should be a top candidate as a memory safe language that is also easier to be productive with.
There's a joke whose name I forget; it goes something like:
- a high-performance language, but everything is hard-coded
- XML/YAML configs
- dynamic configs and codegen
- Lua or Python
- let's statically type the Python and add a compiler
- a high-performance language, but everything is hard-coded
The solution is to use a higher-level, safer, strict language (e.g. Java) that would be easy for us to debug and is deeply familiar to all LLMs. Yes, we will generate more code, but if you spend the LLM's time nitpicking performance rather than on productivity, you end up with the same problem you have with humans. LLMs have capacity limits, and the engineers who operate them have capacity limits; neither is going away.
Maybe someone should use AI to write the code for that...
Using an LLM is overkill, especially when correctness can never be guaranteed by systems that must sample from a probability distribution.