Any application that can be written in a system language, eventually will be
78 points
3 days ago
| 20 comments
| avraam.dev
| HN
ajkjk
1 hour ago
[-]
I kinda like viewing this as similar to coordinate-invariance in physics / geometry. A programming language is effectively a function from textual programs to behaviors; this serves the same role as a coordinate system on a space, which is a function from coordinates to points. Naturally many different programs and programming languages can describe the same behavior, especially if you forget about implementation-specific details like memory layout or class structures. LLM code generation is just another piece of the same model: turns out that to some degree you can use English+LLMs as coordinate systems for the textual programs.

I expect that over time we will adopt a perspective on programming that the code doesn't matter at all; each engineer can bring whatever syntax or language they prefer, and it will be freely translated into isomorphic code in other languages as necessary to run in whatever setting necessary. Probably we will settle on an interchange format which is somehow as close as possible to the "intent" of the code, with all of the language-specific concepts stripped away, and then all a language will be is a toolbox that an engineer carries with them of their favorite ways to express concepts.

reply
lacunary
1 hour ago
[-]
but the interesting thing about coordinate systems is that they do matter, a lot! many problems are much much easier to solve in one coordinate system than another.
reply
ajkjk
1 hour ago
[-]
no doubt, but... also physics advanced in leaps and bounds every time someone figured out ways of abstracting away the coordinate system.
reply
n_u
3 hours ago
[-]
This is my second attempt learning Rust and I have found that LLMs are a game-changer. They are really good at proposing ways to deal with borrow-checker problems that are very difficult to diagnose as a Rust beginner.

In particular, an error on one line may force you to change a large part of your code. As a beginner this can be intimidating ("do I really need to change everything that uses this struct to use a borrow instead of ownership? will that cause errors elsewhere?") and I found that induced analysis paralysis in me. Talking to an LLM about my options gave me the confidence to do a big change.
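A minimal sketch of the kind of ownership-to-borrow change described above (the `Config` struct and function names are invented for illustration): switching a parameter from owned to borrowed lets callers keep using the value, but it ripples through every signature that forwards it.

```rust
struct Config {
    name: String,
}

// Taking ownership: the caller gives up (or must clone) its value.
fn describe_owned(cfg: Config) -> String {
    format!("config: {}", cfg.name)
}

// Borrowing instead: the caller keeps `cfg` usable afterwards, but every
// call site and forwarding signature has to change to pass `&cfg`.
fn describe_borrowed(cfg: &Config) -> String {
    format!("config: {}", cfg.name)
}

fn main() {
    let cfg = Config { name: "prod".to_string() };
    let first = describe_borrowed(&cfg);
    let second = describe_borrowed(&cfg); // cfg still usable: only borrowed
    assert_eq!(first, second);
    let last = describe_owned(cfg); // moves cfg; no further use possible
    assert_eq!(last, first);
    println!("{last}");
}
```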

reply
augusteo
2 hours ago
[-]
n_u's point about LLMs as mentors for Rust's borrow checker matches my experience. The error messages are famously helpful, but sometimes you need someone to explain the why.

I've noticed the same pattern learning other things. Having an on-demand tutor that can see your exact code changes the learning curve. You still have to do the work, but you get unstuck faster.

reply
pfdietz
2 hours ago
[-]
I don't see why it shouldn't be even more automated than that, with LLM ideas tested automatically by differential testing of components against the previous implementation.

EDIT: typo fixed, thx

reply
happytoexplain
2 hours ago
[-]
Defining tests that test for the right things requires an understanding of the problem space, just as writing the code yourself in the first place does. It's a catch-22. Using LLMs in that context would be pointless (unless you're writing short-lived one-off garbage on purpose).

I.e. the parent is speaking in the context of learning, not in the context of producing something that appears to work.

reply
pfdietz
2 hours ago
[-]
I'm not sure that's true. Bombarding code with huge numbers of randomly generated tests can be highly effective, especially if the tests are curated by examining coverage (and perhaps mutation kills) in the original code.
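The differential-testing idea can be sketched like this (the toy `sum` pair and all names are invented; a real harness would use a fuzzer or property-testing crate and curate cases by coverage, as described above): a rewrite is bombarded with random inputs and checked against the previous implementation.

```rust
// Trusted previous implementation, used as the oracle.
fn reference_sum(xs: &[u32]) -> u64 {
    xs.iter().map(|&x| x as u64).sum()
}

// Hypothetical rewrite under test.
fn new_sum(xs: &[u32]) -> u64 {
    let mut acc = 0u64;
    for &x in xs {
        acc += x as u64;
    }
    acc
}

// Tiny deterministic PRNG (xorshift64) so the sketch has no dependencies.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let mut seed = 0x2545F4914F6CDD1Du64;
    for _ in 0..1000 {
        let len = (xorshift(&mut seed) % 32) as usize;
        let xs: Vec<u32> = (0..len).map(|_| xorshift(&mut seed) as u32).collect();
        // Any divergence between old and new implementations fails loudly.
        assert_eq!(new_sum(&xs), reference_sum(&xs), "divergence on {:?}", xs);
    }
    println!("1000 random cases agree");
}
```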
reply
n_u
2 hours ago
[-]
I'm assuming you meant to type

> I don't see why it *shouldn't be even more automated

In my particular case, I'm learning so having an LLM write the whole thing for me defeats the point. The LLM is a very patient (and sometimes unreliable) mentor.

reply
monero-xmr
2 hours ago
[-]
I am old but C is similarly improved by LLM. Build system, boilerplate, syscalls, potential memory leaks. It will be OK when the Linux graybeards die because new people can come up to speed much more quickly
reply
lmm
2 hours ago
[-]
The thing is LLM-assisted C is still memory unsafe and almost certainly has undefined behaviour; the LLM might catch some low hanging fruit memory problems but you can never be confident that it's caught them all. So it doesn't really leave you any better off in the ways that matter.
reply
monero-xmr
1 hour ago
[-]
I don’t code C much; it's my passion side language. LLMs improve my ability to be productive quickly. Not a silver bullet, but an assist
reply
yert3
22 minutes ago
[-]
Why not assembly? Not enough compile-time checks, only runtime errors.

Why not Go? Slower than Rust. Type system is lacking.

Why not C? Not memory safe.

Why not C++? Also not memory safe.

Why not Zig? Not memory safe.

Why Rust? Fast, memory safe, type safe. Compiler pushes back on LLM hallucinations and errors.

Over time, security-critical and performance-critical projects will autonomously be rewritten in Rust.

reply
gritspants
4 minutes ago
[-]
So, I am then interested in non-trivial software written in Rust with LLM assistance. Off the top of my head I'm aware of Turso (supposed SQLite successor); marketing approach aside, I find it interesting to see how that works out. Any others?
reply
jimbob45
9 minutes ago
[-]
Rust isn’t versatile enough to be used in high-level contexts. You simply have to jump through too many hoops. It’s great for low-level apps but if I just want a simple UI and CRUD functionality, I’m reaching for C#/Kotlin.

That’s not a bad thing though. It’s okay to aim for your thing and be good at just that. No need to try to please everyone.

reply
esafak
7 minutes ago
[-]
And Rust compilation is slow.
reply
globalnode
5 minutes ago
[-]
assembly is fun, python is fun, both get the job done.
reply
srcreigh
2 hours ago
[-]
I don't really want to learn how to use the borrow checker, LLM help or not, and I don't really want to use a language that doesn't have a reputation for very fast compile/dev workflow, LLM help or not.

Re: Go, I don't want to use a language that is slower than C, LLM help or not.

Zig is the real next Javascript, not Rust or Go. It's as fast or faster than C, it compiles very fast, it has fast safe release modes. It has incredible meta programming, easier to use even than Lisp.

reply
ycombinatrix
2 hours ago
[-]
Writing code without the borrow checker is the same as writing code with the borrow checker. If it wouldn't pass the borrow checker, you're doing something wrong.
reply
srcreigh
1 hour ago
[-]
Idk. Did you see the "Buffer reuse" section of this blog post? [1]

Kudos to that guy for solving the puzzle, but I really don't want to use a special trick to get the compiler to let me reuse a buffer in a for loop.

[1]: https://davidlattimore.github.io/posts/2025/09/02/rustforge-...
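For context, reusing an owned buffer across iterations is already straightforward safe Rust: `clear()` drops the contents but keeps the allocation. The trick in the linked post is needed for the harder case where the buffer's elements borrow from data created inside the loop (e.g. a `Vec<&str>` pointing into a per-iteration `String`). A sketch of the easy case (function name invented):

```rust
// Count words across lines while reusing one buffer allocation.
fn count_words(lines: &[&str]) -> usize {
    let mut buf: Vec<String> = Vec::new();
    let mut total = 0;
    for line in lines {
        buf.clear(); // drops last iteration's contents, keeps the capacity
        buf.extend(line.split_whitespace().map(String::from));
        total += buf.len();
    }
    total
}

fn main() {
    assert_eq!(count_words(&["a b", "c d e", "f"]), 6);
    println!("ok");
}
```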

reply
forrestthewoods
1 hour ago
[-]
This is an objectively false statement.

Rust's borrow checker is only able to prove at compile time that a subset of correct programs are correct. There are many correct programs that the BC is unable to prove to be correct, and it therefore rejects them.

I’m a big fan of Rust and the BC. But let’s not twist reality here.

reply
lmm
37 minutes ago
[-]
> There are many correct programs that the BC is unable to prove to be correct and therefore rejects them.

There are programs that "work" but the reason they "work" is complicated enough that the BC is unable to understand it. But such programs tend to be difficult for human readers to understand too, and usually unnecessarily so.

reply
drivebyhooting
2 hours ago
[-]
Come on, that's not true. How would you write an LRU cache in Rust? It's not possible in idiomatic Rust. You either need to use unsafe or use integer indices as a poor man's pointer.
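A safe-Rust sketch of the index-based approach the thread is debating (all names invented; assumes capacity >= 1): nodes live in a `Vec` and link to each other by index instead of by pointer, so no `unsafe` is needed, and lookups stay O(1) via a `HashMap` from key to slot.

```rust
use std::collections::HashMap;

struct Node<K, V> {
    key: K,
    val: V,
    prev: Option<usize>,
    next: Option<usize>,
}

struct Lru<K, V> {
    nodes: Vec<Node<K, V>>,     // slot storage; indices act as pointers
    map: HashMap<K, usize>,     // key -> slot index
    head: Option<usize>,        // most recently used
    tail: Option<usize>,        // least recently used
    cap: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> Lru<K, V> {
    fn new(cap: usize) -> Self {
        Lru { nodes: Vec::new(), map: HashMap::new(), head: None, tail: None, cap }
    }

    // Unlink slot `i` from the recency list.
    fn detach(&mut self, i: usize) {
        let (prev, next) = (self.nodes[i].prev, self.nodes[i].next);
        match prev { Some(p) => self.nodes[p].next = next, None => self.head = next }
        match next { Some(n) => self.nodes[n].prev = prev, None => self.tail = prev }
    }

    // Link slot `i` at the front (most recently used).
    fn attach_front(&mut self, i: usize) {
        self.nodes[i].prev = None;
        self.nodes[i].next = self.head;
        if let Some(h) = self.head { self.nodes[h].prev = Some(i); }
        self.head = Some(i);
        if self.tail.is_none() { self.tail = Some(i); }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        let i = *self.map.get(key)?;
        self.detach(i);
        self.attach_front(i);
        Some(&self.nodes[i].val)
    }

    fn put(&mut self, key: K, val: V) {
        if let Some(&i) = self.map.get(&key) {
            self.nodes[i].val = val;
            self.detach(i);
            self.attach_front(i);
            return;
        }
        let i = if self.nodes.len() == self.cap {
            // Evict the least recently used entry and reuse its slot.
            let t = self.tail.expect("non-empty at capacity");
            self.detach(t);
            self.map.remove(&self.nodes[t].key);
            self.nodes[t] = Node { key: key.clone(), val, prev: None, next: None };
            t
        } else {
            self.nodes.push(Node { key: key.clone(), val, prev: None, next: None });
            self.nodes.len() - 1
        };
        self.map.insert(key, i);
        self.attach_front(i);
    }
}

fn main() {
    let mut lru = Lru::new(2);
    lru.put("a", 1);
    lru.put("b", 2);
    assert_eq!(lru.get(&"a"), Some(&1)); // touch "a": now "b" is LRU
    lru.put("c", 3);                     // evicts "b"
    assert_eq!(lru.get(&"b"), None);
    assert_eq!(lru.get(&"c"), Some(&3));
    println!("ok");
}
```

The indices-as-pointers layout trades pointer chasing for bounds-checked slot access, which is exactly the "poor man's pointer" pattern under discussion.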
reply
mattgreenrocks
1 hour ago
[-]
Indices are fine. Fixating on the “right” shape of the solution is your hang-up here. Different languages want different things. Fighting them never ends well.
reply
ycombinatrix
2 hours ago
[-]
What's wrong with integer indices? They have bounds checking. You definitely do not need unsafe to do LRU.
reply
felixfbecker
51 minutes ago
[-]
The author makes the argument that in the age of LLMs, more type-safe languages will be more successful than less type-safe ones. But how does that support the claim that Go is more suitable than JavaScript? TypeScript is more type safe than Go: Go doesn't validate nil pointers, it doesn't enforce fields to be set when initializing structs, and it has no support for union types. All those things can cause runtime errors that are caught at compile time in TypeScript.
reply
giancarlostoro
42 minutes ago
[-]
Not sure, but I gave it a shot weeks ago and finally started building something in Rust for a project I've wanted to build for years. In maybe 12 hours of total effort I've probably done several months' worth of engineering (considering I only touch this project in my spare time). Every time I pick up Rust I fight it for hours because I don't do any Rust in my day job, but the LLM picks up the slack on Rust nuance wherever I fall short, and I can focus on the key architectural details I've been obsessing over for years.
reply
chb
26 minutes ago
[-]
A gross mischaracterization of the author's point (the word "type" doesn't even appear in the article). The author focuses on the cost of interpreted languages, which he describes as "memory hungry" and computationally expensive.
reply
nxobject
1 hour ago
[-]
I hope this law applies to all of the Electron applications out there...
reply
leonidasv
1 hour ago
[-]
I use Claude Code daily to work on a large Python codebase and I have yet to see it hallucinate a variable or method (I always ask it to write and run unit tests, so that may be helping). Anyway, I don't think that's a problem at all; most problems I face with AI-generated code are not solved by a borrow checker or a compiler: bad architecture, lack of forward thinking, hallucinations in the contracts of external API calls, etc.
reply
brown
46 minutes ago
[-]
If LLMs live up to their potential, then they should be able to rewrite language runtimes to eventually be as fast or faster than systems languages. "Sufficiently intelligent compiler" and whatnot
reply
wmf
25 minutes ago
[-]
No, because some language features like monkey patching have inherent runtime cost that cannot be eliminated. And if we reach superintelligence you can just let the AI invent its own language.
reply
PeterWhittaker
1 hour ago
[-]
While I still struggle to think in Rust after years of thinking in C, it is NEVER the borrow checker or lifetimes that trip me up; it's the level of abstraction, in that C forced me to work low level, building my own abstractions, while Rust allows me to think in abstractions first and muse over how to implement them idiomatically.

What did it for me was thinking through how mutable==exclusive, non-mutable==shared, and getting my head around Send and Sync (not quite there yet).
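The mutable==exclusive, non-mutable==shared rule can be seen in a few lines (a toy example, not from the comment): any number of shared borrows may coexist, but a mutable borrow must be the only live borrow.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Shared (&) borrows: any number may coexist, but none may mutate.
    let a = &v;
    let b = &v;
    assert_eq!(a.len() + b.len(), 6);

    // Mutable (&mut) borrow: exclusive. While `m` is live, no other
    // borrow of `v` (shared or mutable) is allowed to exist.
    let m = &mut v;
    m.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```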

AI helps me with boiler plate, but not with understanding, and if I don't understand I cannot validate what AI produces.

Rust is worth the effort.

reply
justarandomname
51 minutes ago
[-]
THIS. I can barely remember a time when lifetimes or the borrow checker caused me undue suffering, but I can recall countless times that abstractions (often in the async world) did, and sometimes still do.
reply
nvader
1 hour ago
[-]
Thank you for this comment.

I'm starting my rusty journey and I'm only a few months in. With the rise of autogenerated code, it's paradoxically much harder to go slow and understand the fundamentals.

Your comment is reassuring to read.

reply
s1mplicissimus
1 hour ago
[-]
Just wanted to add another bit of reassurance. At some point during my career people started "stack overflow coding". But ultimately someone has to fix the difficult issues and when you have the skills, you are coming out on top where others can just shrug and say "well there's no solution on stack overflow".
reply
felipeccastro
1 hour ago
[-]
It might be the opposite. Python apps still get written despite the performance hit, because understandability matters more than raw performance in many cases. Now that we’re all code reviewers, that quality should matter more, not less. Programmer time is still more expensive than machine time in many cases.
reply
jact
1 hour ago
[-]
Are Python apps really so easy to understand? I seriously disagree with this idea given how much magic goes behind nearly every line of Python. Especially if you veer off the happy path.

I certainly am no fan of C but from a certain point of view it’s much easier to understand what’s going on in C.

reply
fwip
58 minutes ago
[-]
Well-written Python apps are very easy to understand, especially if they use well-designed libraries.

The 'magic' in Python means that skilled developers can write libraries that work at the appropriate level of abstraction, so they are a joy to use.

Conversely, it also means that a junior dev, or an LLM pretending to be a junior dev, can write insane things that are nearly impossible to use correctly.

reply
awesome_dude
1 hour ago
[-]
One of the (many) reasons that I moved away from Python was the whole "we can do it in 3 lines"

Oh cool someone has imported a library that does a shedload of really complicated magic that nobody in the shop understands - that's going to go well.

We (the software engineering community as a whole) are also seeing something similar with AI-generated code: screeds of code going into codebases that nobody understands in full (give a reviewer a 5-line PR and they will find 14 things to change; give them a 500-line PR and LGTM is all you will see).

reply
Larrikin
1 hour ago
[-]
I've cooled significantly on Python now that there are a number of strongly typed languages out there that have also gotten rid of the boilerplate of languages Python used to compete with.

Readability gets destroyed when a function can accept 3 different types, all named the same thing, with magic strings acting as enums, and you just have to hope all the cases are well documented.
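For contrast, this is what the "magic strings acting as enums" pattern looks like when lifted into a real enum, sketched in Rust (all names invented): the compiler rejects unknown variants and unhandled cases at compile time instead of at runtime.

```rust
#[derive(Debug, PartialEq)]
enum Mode {
    Fast,
    Safe,
    Debug,
}

// The match must cover every variant, or the code does not compile.
fn describe(mode: Mode) -> &'static str {
    match mode {
        Mode::Fast => "optimized, fewer checks",
        Mode::Safe => "bounds-checked",
        Mode::Debug => "assertions on",
    }
}

fn main() {
    assert_eq!(describe(Mode::Safe), "bounds-checked");
    assert_eq!(describe(Mode::Debug), "assertions on");
    // describe(Mode::Turbo) would be a compile error, not a runtime surprise.
    println!("{}", describe(Mode::Fast));
}
```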

reply
awesome_dude
56 minutes ago
[-]
Type systems document data movement throughout applications :-)

And the other problem with functions accepting dynamic types is that your function might only in reality handle one type, it still has to defensively handle when someone passes it things that will cause an error.

All dynamic typing really did was move the cognitive load from the caller to the callee.

reply
viraptor
1 hour ago
[-]
I'd much prefer to review something written in Rust or Go, even if I'd much rather write it in Python if I had to do it manually.

The better structure and clear typing makes the review much easier.

reply
awesome_dude
59 minutes ago
[-]
My biggest reason for liking Go, over Python can be summed up in one word: Discipline.

Python was supposed to be embracing the idea of "there's only one way to do it", which appeals after Perl's "There's many ways to do it", but the reality is, there's 100 ways to do it, and they're all shocking.

reply
noosphr
1 hour ago
[-]
Why stop at a systems language? Why not assembly? Hell why not raw machine code?

You have to rewrite it for every new processor? Big deal. LLM magic means that cost isn't an issue and a rewrite is just changing a single variable in the docs.

reply
rankdiff
55 minutes ago
[-]
why even write the code?? just tell AI to give output directly!
reply
zephen
59 minutes ago
[-]
You might wonder that if you just read the headline.

But, if you read the article, the reasons given for rust in particular are reasonable, and not matched by assembly or machine code.

reply
noosphr
51 minutes ago
[-]
If you read the article you'd know they were talking about Go/Rust. Fair enough, in Rust we can sprinkle bugs with the borrow checker to burn them; after all, everyone knows the only bugs that happen in systems languages are memory errors. But what's the holy water we can use to banish bugs in Go?
reply
fuddle
1 hour ago
[-]
My take: "Any application that was written in Javascript, eventually will be written in a system language"
reply
captain_coffee
2 hours ago
[-]
> in 2026 devs write 90% of the code using natural language, through an LLM.

That is literally not true - is the author speaking about what he personally sees at his specific workplace(s)?

If 90% of the code at any given company is LLM-generated, that is either a doomed company or a company that doesn't write any relevant code to begin with.

I literally cannot imagine a serious company in which that is a viable scenario.

reply
PaulHoule
2 hours ago
[-]
I can believe code being LLM-generated after it's cut up into small slices that are carefully reviewed.

But to have 20 copies of Claude Code running simultaneously and the code works so well you don't need testers == high on your own supply.

reply
justarandomname
49 minutes ago
[-]
Sadly, I'm seeing a LOT of this kind of usage. So much so that I know a couple of people who brag about how many they have running at the same time, pretty much all the time.
reply
s1mplicissimus
1 hour ago
[-]
> high on your own supply.

reminds me of a bar owner who died of liver failure. people said he himself was his best customer

reply
ravenstine
1 hour ago
[-]
That depends on how you define "doomed". Most screwed up companies don't go belly up overnight. They get sold as fixer-uppers and passed between bigger firms and given different names until, finally, it is sold for parts. The way this works is that all parties behave as if the company is the opposite of doomed. It's in a sense correct. The situation hardly seems doomed if everyone has enough time to make their money and split before the company's final death twitches cannot be denied, in which case the company accomplished its mission. That of course doesn't mean everything from its codebase to its leadership didn't lack excellence the whole time.
reply
bartread
1 hour ago
[-]
Yeah, I would say it's pretty variable, and it depends on what you mean by the word write.

I've recently joined a startup whose stack is Ruby on Rails + PostgreSQL. Whilst I've used PostgreSQL, and am extremely familiar with relational databases (especially SQL Server), I've never been a Rubyist - never written a line of Ruby until very recently in fact - and certainly don't know Rails, although the MVC architecture and the way projects are structured feels very comfortable.

We have what I'll describe as a prototype that I am in the process of reworking into a production app by fixing bugs, and making some pretty substantial functional improvements.

I would say, out of the gate, 90%+ of the code I'm merging is initially written by an LLM for which I'm writing prompts... because I don't know Ruby or Rails (although I'm picking them up fast), and rather than scratch my head and spend a lot of time going down a Google and Stackoverflow black hole, it's just easier to tell the LLM what I want. But, of course, I tell it what I want like the software engineer I am, so I keep it on a short leash where everything is quite tightly specified, including what I'm looking for in terms of structure and architectural concerns.

Then the code is fettled by me to a greater or lesser extent. Then I push and PR, and let Copilot review the code. Any good suggestions it makes I usually allow it to either commit directly or raise a PR for. I will often ask it to write automated tests for me. Once it's PRed everything, I then both review and test its code and, if it's good, merge into my PR, before running through our pipeline and merging everything.

Is this quicker?

Hmm.

It might not be quicker than an experienced Rails developer would make progress, but it's certainly a lot quicker than I - a very inexperienced Rails developer - would make progress unaided, and that's quite an important value-add in itself.

But yeah, if you look at it from a certain perspective, an LLM writes 90% of my code, but the reality is rather more nuanced, and so it's probably more like 50 - 70% that remains that way after I've got my grubby mitts on it.

reply
WD-42
30 minutes ago
[-]
This is exactly how I use AI as well in codebases and languages I’m not familiar with.

I’m a bit concerned we might be losing something without the google and stack overflow rabbit holes, and that’s the context surrounding the answer. Without digging through docs you don’t see what else is there. Without the comments on the SO answer you might miss some caveats.

So while I’m faster than I would have been, I can’t help but wonder if I’m actually stunting my learning curve and might end up slower in the long term.

reply
captain_coffee
57 minutes ago
[-]
So let me get this straight - you vibe code, make what you consider as necessary changes to the LLM-generated code, create PRs that get to be reviewed by another AI tool (Copilot), potentially make changes based on Copilot's suggestions and at the end, when you are satisfied with that particular PR you merge it yourself without having any other human reviewing it and then continue to the next PR.

Did I get that right or did I miss anything?

reply
AstroBen
1 hour ago
[-]
This seems like a really short-sighted view. 6 months from now you'll be much more inexperienced than if you just went through the initial struggle (with an LLM's help!)
reply
WD-42
28 minutes ago
[-]
What does the struggle with an LLMs help look like?
reply
AstroBen
21 minutes ago
[-]
"Explain this to me" until you're able to complete the task, instead of "do this for me"
reply
20k
1 hour ago
[-]
It's insane to me seeing this kind of thing. I write 100% of my code by hand. The developers I know write >95% of their code by hand.

>We are entering an era where the Brain of the application (the orchestration of models, the decision-making) might remain in Python due to its rich AI ecosystem, but the Muscle, the API servers, the data ingestion pipelines, the sidecars, will inevitably move to Go and Rust. The friction of adopting these languages has collapsed, while the cost of not adopting them (in AWS bills and carbon footprint) is rising.

This is the most silicon valley brain thing I've seen for a while

We're entering an era where I continue to write applications in C++ like I've always done because its the right choice for the job, except I might evaluate AI as an autocomplete assistant at some point. Code quality and my understanding of that code remains high, which lets me deliver at a much faster pace than someone spamming llm agent orchestration, and debuggability remains excellent

90% of code written by devs is not written by AI. If this is true for you, try a job where you produce something of value instead of working at some random Silicon Valley startup.

reply
ekidd
1 hour ago
[-]
If there's a human in the loop, actually reading the plans and generated code, then it's possible to have 90% of the code generated by an LLM and maintain reasonable quality.
reply
neya
1 hour ago
[-]
In my experience, most NodeJS shops do this, because on the surface LLMs seem good at giving you a quick solution for JS code. Whether it's a real solution or patchwork is up for debate, but for most mid-level to junior devs it's good enough to get the job done. Now multiply this workflow 10x for 10 employees. That's how you end up with a complete rewrite and hiring a senior consultant.
reply
happytoexplain
2 hours ago
[-]
It seems like it may be true, but pointlessly true. I.e. yes, 90% of code is probably written by LLMs now - but that high number is because there is such a gigantic volume of garbage being generated.
reply
monero-xmr
1 hour ago
[-]
The problem is not coding (for me). The problem is thinking for a long time about what to code; the execution is merely the side effect of my thinking. The LLM has helped me execute faster. It's not a silver bullet, and I do review the outputs carefully. But I won't pretend it hasn't made me much more productive.
reply
liveoneggs
22 minutes ago
[-]
they just have to keep repeating it
reply
spicyusername
2 hours ago
[-]
I wonder when we'll start to see languages designed exclusively to be easy for coding agents to write.
reply
gwern
1 hour ago
[-]
Not going far enough - why would applications be written in either 'systems languages' or 'agent languages' if you have superintelligence too cheap to meter and you will amortize the costs over more than, say, a few days? Just write in raw assembler from a domain-specific design hyperoptimized for solely the task, the way Donald Knuth on steroids would.
reply
nemo1618
2 hours ago
[-]
Here's one attempt: https://x.com/sigilante/status/2013743578950873105

My take: Any gains from an "LLM-oriented language" will be swamped by the massive training set advantage held by existing mainstream languages. In order to compete, you would need to very rapidly build up a massive corpus of code examples in your new language, and the only way to do that is with... LLMs. Maybe it's feasible, but I suspect that it simply won't be worth the effort; existing languages are already good enough for LLMs to recursively self-improve.

reply
Grosvenor
2 hours ago
[-]
Lisp?

I'm always surprised when agents aren't working directly with the AST.

reply
Rustwerks
2 hours ago
[-]
There has been at least one posted here on Hacker News: Mojo. Google shows some other similar attempts.

The real issue with doing this is that there is no body of code available to train your models on. As a result the first few look like opinionated Python.

reply
krackers
2 hours ago
[-]
It seems the language would need to be strongly typed, have good error reporting and testing infrastructure, have a good standard library and high-level abstractions, and be "close enough" to existing languages. Go would seem to already fit that bill, any bespoke language you come up with is going to have less exposure in the training set than Go. Maybe Rust as a second, but Go's memory management might be easier for the LLM (and humans) than Rust's.
reply
wenc
1 hour ago
[-]
Rust, Go and TypeScript are good bets.

Python too -- hear me out. With spec-driven development to anchor things, coupled with property-based tests (PBT) using Hypothesis, it's great for prototyping problems.

You wouldn't write mission critical stuff with it, but it has two advantages over so-called "better designed languages": massive ecosystem and massive training.

If your problem involves manipulating dataframes (polars, pandas), plotting (seaborn), and machine learning, Python just can't be beat. You can try using an LLM to generate Rust code for this -- go ahead and try it -- and you'll see how bad it can be.

Better ecosystems and better training can beat better languages in many problem domains.

reply
Jtsummers
1 hour ago
[-]
> You wouldn't write mission critical stuff with it

People do; they also write mission critical stuff in Lua, TCL, Perl, and plenty of other languages. What they generally won't do is write performance critical stuff in those languages. But there is definitely critical communication infrastructure out there running on interpreted languages like these.

reply
calvinmorrison
2 hours ago
[-]
or rather, maybe we stop seeing new features that are mostly there for developers and find some older languages are quite good and capable, maybe even easier since there's less to reason about
reply
cratermoon
59 minutes ago
[-]
I was expecting an insightful analysis of systems languages versus dynamically typed interpreted languages, but instead I got more sloperator hype.
reply
ElectronCharge
1 hour ago
[-]
I'm surprised the author of this article thinks Go is a "system language".

Go uses GC, and therefore can't be used for hard real time applications. That's disqualifying as I understand it.

C, C++, Rust, Ada, and Mojo are true system languages IMO. It is true that as long as you can pre-allocate your data structures, and disable GC at runtime, that GC-enabled languages can be used. However, many of them rely on GC in their standard libraries.

reply
jandrewrogers
25 minutes ago
[-]
I think the broad consensus (and I agree with it) is that a systems language cannot have a mandatory GC. The issue with GCs isn’t just latency-optimized applications like hard real-time. GCs also reduce performance in throughput-optimized applications that are latency insensitive, albeit for different reasons.

Anything that calls itself a “systems language” should support performance engineering to the limits of the compiler and hardware. The issue with a GC is that it renders entire classes of optimization impossible even in theory.

reply
Jtsummers
47 minutes ago
[-]
The Go creators declared it a systems language and it's stuck around for some reason.

Their definition was not the one most people would have used (leading to C, C++, Rust, Ada, etc. as you listed) but systems as in server systems, distributed services, etc. That is, it's a networked systems language not a low-level systems language.

reply
code_martial
50 minutes ago
[-]
You can preallocate your data structures and control memory layout in Go.

Also, despite GC there’s a sizeable amount of systems programming already done in Go and proven in production.

Given how much importance is being deservedly given to memory safety, Go should be a top candidate as a memory safe language that is also easier to be productive with.

reply
est
1 hour ago
[-]
oh no, not this again.

There's a joke whose name I forget; it goes something like:

- high performance language but hard-coded

- xml/yaml configs

- dynamic configs and codegen

- lua or python

- let's static type python and using a compiler

- high performance language but hard-coded

reply
invalidname
2 hours ago
[-]
Predicting the future is futile, but I would guess exactly the opposite. LLMs make it remarkably easy to generate a lot of code, so they can easily generate a lot of Rust code that looks good. It probably wouldn't be, and it would be unreadable to us when something goes wrong. We would end up in LLM debugging hell.

The solution is to use a higher-level, safe, strict language (e.g. Java) that would be easy for us to debug and is deeply familiar to all LLMs. Yes, we will generate more code, but if you spend the LLM time nitpicking performance rather than on productivity, you end up with the same problem you have with humans. LLMs have capacity limits and the engineers that operate them have capacity limits; neither one is going away.

reply
imperio59
2 hours ago
[-]
I've been thinking about this and using Rust for my next backend. I think we still lack a true "all in one" web "batteries included" framework like Django or RoR for Rust.

Maybe someone should use AI to write the code for that...

reply
anon291
40 minutes ago
[-]
I mean, the simple answer here is to just develop proper frameworks for Futamura projections. There's an exact one-to-one algorithmic correspondence between an interpreted program and its compiled version. GraalVM and PyPy are good options here.

Using an LLM is overkill especially when correctness can never be guaranteed by systems who must sample from a probability distribution.

reply