All abstraction layers are predictable and repeatable when used correctly.
You specify something in the language of the abstraction, and get a result that is precisely determined by the rules and requirements of the abstraction.
Only those who programmed by trial and error before AI do not see a difference. That's because they treated their compilers as mysterious AI and had to massage their programs into working. In other words, they were already accustomed to a kind of prompt engineering.
So we generate one or many changesets (in series or in parallel), then iterate on one. We force the “chosen one” to be the one true codification of the spec + the other stuff we didn’t write down anywhere. Call it luck-driven development.
But there’s another way.
If we keep starting fresh from the spec, but keep adding detail after detail, regenerating from scratch each time... and the LLM has enough room in context to handle a detailed spec AND produce output, and the result is reasonably close to deterministic because the LLM makes “reasonable choices” for everything underspecified... that’s a paradigm shift.
How is that different from how it worked without LLMs? The only difference is that we can now get a failing product faster and iterate.
> If we keep starting fresh from the spec, but keep adding detail after detail, regenerating from scratch each time...
This sounds like the worst way to use AI. LLMs can work on existing code, whether it was generated by an LLM or written by a human. They can even work on code that has been edited by a human. There is no good reason not to be iterative when using an LLM to develop code, and plenty of good reasons to be iterative.
The difference is that there is an engineer in the middle who can judge whether the important information has been provided as input or not.
1. for an LLM, "the button must be blue" has the same level of importance as "the formula to calculate X is..."
2. failing faster and iterating is a good thing if the parameters of failure are clear, which is not always the case with vibecoding, especially when done by people with no prior development experience. Plenty of POCs built with vibecoding have been presented with no apparent failure in their happy path but with disastrous results in edge cases, or with disastrous security, etc.
3. where previously, familiarity with the codebase and especially the "history of changes" gave you context about why some workarounds were put into place, these are things that are lost to an LLM. Vibecoding a change to an existing system risks removing those "special workarounds", which depend on much more context than the current specifications or prompt provide.
You can divide those into two prompts though; there is no point in having the LLM work on both features at the same time. This is why being iterative is so useful (oh, the button should be blue, ... and later, the formula should be X).
> 2. failing faster and iterating is a good thing if the parameters of failure are clear, which is not always the case with vibecoding, especially when done by people with no prior development experience. Plenty of POCs built with vibecoding have been presented with no apparent failure in their happy path but with disastrous results in edge cases, or with disastrous security, etc.
This isn't about vibecoding. If you are vibecoding, then you aren't developing software, you are just wishing for good code from vague descriptions that you don't plan to iterate on.
> 3. where previously, familiarity with the codebase and especially the "history of changes" gave you context about why some workarounds were put into place, these are things that are lost to an LLM. Vibecoding a change to an existing system risks removing those "special workarounds", which depend on much more context than the current specifications or prompt provide.
LLMs can read and write change logs just as well as humans can (LLMs need change logs to do updates; you can't just give them a changed dependency and expect them to pick up on the change, an LLM isn't a code generator). Actually, this is my current project, since a Dev AI pipeline needs to read and write change logs to be effective (when something changes, you can't just transmit the changed artifact, you need to transmit a summary of the change as well). And again, this is serious software engineering, not vibecoding. If you are vibecoding, I have no advice to give you.
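To make the change-log point concrete, here is a minimal sketch of the idea, not the actual pipeline described above; the field names and prompt wording are assumptions.

    # Illustrative only: when an artifact changes, hand the LLM both the new
    # version and a short summary of the change, not the changed file alone.
    # Field names and prompt wording are assumptions, not a real pipeline.
    from dataclasses import dataclass

    @dataclass
    class ChangeLogEntry:
        artifact_path: str   # e.g. a hypothetical "billing/invoice.py"
        new_content: str     # the updated artifact itself
        summary: str         # what changed and why, readable by humans and LLMs

    def build_update_prompt(entry: ChangeLogEntry) -> str:
        """Assemble the context an LLM needs to propagate a change downstream."""
        return (
            f"File {entry.artifact_path} changed.\n"
            f"Summary of change: {entry.summary}\n"
            f"New content:\n{entry.new_content}\n"
            "Update any code that depends on this artifact accordingly."
        )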
I won't lie and say "That's a great idea" when it isn't.
It's like the Nix philosophy.
When changes are needed, improve the spec and you can nuke the entire thing and start over.
Something like immutable code development.
One major problem is: how do you not break existing data in the database when the code changes?
Maybe include current database structure in the spec.
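As a rough sketch of that idea, assuming SQLite and hypothetical file names: dump the live schema and append it to the spec before each regeneration, so whatever gets regenerated has to respect the data that already exists.

    # Hedged sketch: pin the current database structure into the spec before
    # regenerating the code from it. "app.db" and "spec.md" are hypothetical.
    import sqlite3

    def dump_schema(db_path: str) -> str:
        """Return the CREATE TABLE statements for every table in the database."""
        conn = sqlite3.connect(db_path)
        try:
            rows = conn.execute(
                "SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL"
            ).fetchall()
        finally:
            conn.close()
        return "\n\n".join(sql for (sql,) in rows)

    def append_schema_to_spec(db_path: str, spec_path: str) -> None:
        """Append the live schema so regenerated code must not break existing data."""
        with open(spec_path, "a", encoding="utf-8") as spec:
            spec.write("\n\nCurrent database schema (do not change without a migration):\n\n")
            spec.write(dump_schema(db_path) + "\n")

    if __name__ == "__main__":
        append_schema_to_spec("app.db", "spec.md")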
When I think of English specifications that (generally) aim to be very precise, I think of laws. Laws do not read like plain, common language, because plain common language is bad at being specific. Interpreting and creating laws requires an education on par with that required of an engineer, often greater.
And to create software specifications with language, the same thing will need to happen. You’ll need shared terminology and context that the LLM will correctly and consistently interpret, and that other engineers will understand. This means that very specific meanings become attached to certain words and phrases. Without this, you aren’t making precise specifications. To create and interpret these specifications will require learning the language of the specs. It may well still be easier than code - but then it would also be less precise.
That sounds awfully similar to... software development.
Programming languages usually aim to make editing as easy as possible, but also to make it easy to understand what the program does and to reason about performance, with different languages putting different emphasis on those aspects.
But without the need to “program”, you can focus on the end user and better understand their needs - which is super exciting.
Curious whether the author is envisioning changing configuration of running code on the fly (which shouldn’t require an interpreted language)? Or whether they are referring to changing behavior on the fly?
Assuming the latter, and maybe setting the LLM aspect aside: is there any standard safe programming paradigm that would enable this? I’m aware of Erlang (message passing) and actor pattern systems, but interpreted languages like Python don’t seem to be ideal for these sorts of systems. I could be totally wrong here, just trying to imagine what the author is envisioning.
"Configuration" implies a preset, limited number of choices; dynamic languages allow you to rewrite the entire application in real time.
Of course, this requires a huge architecture change from the OS level up.
Interpretation isn’t strictly required, but I think runtimes that support hot-swap / reloadable boundaries (often via interpretation or JIT) make this much easier in practice.
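As a toy illustration of such a reloadable boundary in Python (the module name "handlers" and its contents are made up for the example): the long-running process stays up while the behaviour behind the boundary is rewritten and reloaded.

    # Self-contained sketch: write a tiny module to disk, import it, edit it,
    # then hot-swap the new behaviour into the running process with
    # importlib.reload. "handlers" is a hypothetical module name.
    import importlib
    import pathlib
    import sys

    sys.path.insert(0, ".")          # make the freshly written module importable
    module_file = pathlib.Path("handlers.py")

    module_file.write_text("def handle(event):\n    return 'v1 saw ' + event\n")
    import handlers
    print(handlers.handle("ping"))   # -> v1 saw ping

    # Simulate editing the running system's behaviour on the fly...
    module_file.write_text("def handle(event):\n    return 'v2 handled ' + event\n")
    importlib.reload(handlers)       # swap in the new code; the process never restarts
    print(handlers.handle("ping"))   # -> v2 handled ping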
You are not changing the abstraction, you are generating it in a different way. That is a hugely different idea.
Until the ONLY thing you look at for a long-lived product is the "English spec", the analogy is incredibly wrong.
Every single piece of documentation out there for new libs is AI generated, and that is fed back into LLMs with MCP/Skills servers. The age of the RTFM gang is over, sigh.
Funny enough, that wasn't the case for me recently. I was working with an old database with no FKs and, naturally, rows that pointed nowhere. I was letting search.brave.com tell me what delete statement I needed to clean up the data, given an alter table statement to create an FK.
It was just magically giving me the correct delete statements, but then I had a few hundred to do. So I asked it to give me a small program that could do the same thing. It could do the job for me, but it could not write the program to do the job. After about 30 minutes of futzing with prompts, it was clearly stuck trying to create the proper regex and I just went back to pasting alter tables and getting deletes back until the job was done.
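(For context, the kind of one-off helper being asked for might look roughly like the sketch below; the regex and the table/column names in the usage example are illustrative, not the actual schema, and real-world DDL is messy enough that this is exactly where the model kept getting stuck.)

    # Illustrative sketch: turn an ALTER TABLE ... ADD FOREIGN KEY statement
    # into the DELETE that removes orphaned rows first. The regex and the
    # example table/column names are assumptions, not the real schema.
    import re

    FK_PATTERN = re.compile(
        r"ALTER\s+TABLE\s+(?P<child>\w+)\s+ADD\s+(?:CONSTRAINT\s+\w+\s+)?"
        r"FOREIGN\s+KEY\s*\(\s*(?P<col>\w+)\s*\)\s*"
        r"REFERENCES\s+(?P<parent>\w+)\s*\(\s*(?P<pcol>\w+)\s*\)",
        re.IGNORECASE,
    )

    def cleanup_delete(alter_stmt: str) -> str:
        """Build the DELETE that removes rows the new FK would reject."""
        m = FK_PATTERN.search(alter_stmt)
        if not m:
            raise ValueError("unrecognised ALTER TABLE statement")
        child, col, parent, pcol = m.group("child", "col", "parent", "pcol")
        return (
            f"DELETE FROM {child} WHERE {col} IS NOT NULL "
            f"AND {col} NOT IN (SELECT {pcol} FROM {parent});"
        )

    print(cleanup_delete(
        "ALTER TABLE orders ADD CONSTRAINT fk_orders_customer "
        "FOREIGN KEY (customer_id) REFERENCES customers (id);"
    ))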
There was no intermediate product. The LLM was the product there.
If you're copying and pasting SQL statements, then SQL statements are the intermediate product. The fact that you didn't carefully review them and just ran them immediately is no different from an LLM producing Java source code that you shipped to the user without reviewing because it worked correctly in your limited testing. There's still an intermediate product that should have gone through the same software development robustness process that all source code should go through; you just didn't care to do it (and maybe rightly so if it's not super important).
Where can I see examples of this?
https://news.ycombinator.com/item?id=46439753
https://news.ycombinator.com/item?id=46369114
https://news.ycombinator.com/item?id=46366864
Just the first three I found via hn.algolia.com.
I can only assume people saying that don't even know what assembly is. Actually, as I typed that out, I remembered seeing one comment where someone said "hexcode" instead of assembly (lol).