LLMs as Language Compilers: Lessons from Fortran for the Future of Coding
39 points | 1 day ago | 5 comments | cyber-omelette.com
conartist6
1 day ago
It's funny, but I think the accidental complexity is through the roof. It's skyrocketing.

Nothing about cajoling a model into writing what you want is essential complexity in software dev.

In addition, when you do a lot of building with no theory, you tend to make lots and lots of new non-essential complexity.

Devtools are no exception. There was already lots of nonessential complexity in them, and in the model era, is that gone? ...no, don't worry, it's all still there. We built all the shiny new layers right on top of all the old decaying layers, like putting lipstick on a pig.

chrisjj
1 day ago
> LLMs ... completing tasks at the scale of full engineering teams.

Ah, a work of fiction.

bitwize
2 hours ago
StrongDM is doing it. In fact, their Attractor agentic loop, which generates, tests, and deploys code from specs, has been released as a spec, not as code. Their installation instructions are pretty much "feed this into your LLM". They are building out not only complete applications, but also test harnesses for those applications that clone popular web apps like Slack and JIRA, with no humans in the loop beyond writing the initial spec and giving final approval to deploy.

We're witnessing a "horses to automobiles" moment in software development. Programming, as a professional discipline, is going to be over in a year or two at the outside. We're getting the "end of software engineering in six months" before we're getting a real "year of the Linux desktop". Or GTA VI.

lunar_mycroft
50 minutes ago
StrongDM is attempting that. The code they have produced does not inspire confidence even at a relatively small scale [0], and based on what I saw in a cursory inspection, I strongly suspect I would find much deeper issues if I took the time to really dig into their code.

Don't get me wrong, "sort of works if you squint at it" is downright miraculous by the standards of five years ago, but current models and harnesses are not sufficient to replace developers at this scale.

[0] https://news.ycombinator.com/item?id=46927737

slopusila
1 day ago
> My concerns about obsolescence have shifted toward curiosity about what remains to be built. The accidental complexity of coding is plummeting, but the essential complexity remains. The abstraction is rising again, to tame problems we haven't yet named.

What if AI is better at tackling essential complexity too?

marcus_holmes
1 hour ago
The essential complexity isn't solvable by computer systems. That was the point Fred Brooks was making.

You can reduce it by process re-engineering, by changing the requirements, by managing expectations. But not by programming.

If we got an LLM to manage the rest of the organisation, then conceivably we could get it to reduce the essential complexity of the programming task. But that's putting the cart before the horse - getting an LLM to rearrange the organisation's processes so that it has less complexity to deal with when coding seems like a bad deal.

And complexity is one of the things we're still not seeing much improvement in from LLMs. The common experience among people using LLM coding agents is that simple systems become easy, but complex systems still cause problems. LLMs are not coping well with complexity. That may change, of course, but that's the situation now.

jmclnx
1 day ago
This is well worth a read!
measurablefunc
3 hours ago
Why? What is compelling about it?
rvz
1 day ago
> With the price of computation so high, that inefficiency was like lighting money on fire. The small group of contributors capable of producing efficient and correct code considered themselves exceedingly clever, and scoffed at the idea that they could be replaced.

There will always be someone ready to drive the price of computation low enough that it becomes democratized for all. Some may disagree, but that will eventually be local inference, as computer hardware gets better alongside clever software algorithms.

In this AI story, you can take a guess at who "The Priesthood" of the 2020s are.

> You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.

One can say that the number of AI agents will explode and surpass humans on the internet in the next few years, and that reading AI-generated code and understanding what it does will become even more important than writing it.

That way you do not get horrific issues like this [0], where comments in the code are consumed by the LLM. Given their inherently probabilistic and unpredictable nature, different LLMs produce different code, and nothing short of a team of expert humans can guarantee that it is correct.

We'll see if you're ready to read (and fix) an abundance of AI slop and messy architectures built by vibe-coders as maintenance costs and security risks skyrocket.

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
