AutoLISP definitely sent me down the left-paren path.
But the fact is, whatever productivity gains I may have realized in Lisp are absolutely dwarfed by those from using an LLM. I have literally watched an LLM get pointed at a problem and solve it almost instantly. And LLMs do better the more popular the language you're working in is. So what's the point of choosing Lisp? Oh, your feeble human brain can understand the problem and craft a solution much more quickly, in a flow state, without being bogged down by tedium? That's nice. Claude Code can understand the problem and craft a solution without you even being in the room. It's a cheat code. It's iddqd. It's "pay to win" for what used to be the challenging, demanding, and fun game of programming.
And Lisp went from being "still kinda the best programming language ever" to a retrocomputing curiosity almost overnight. There is no practical reason to start a new project in Lisp in 2026.
But the force-multiplier effects of LLMs are not to be denied, even if you are that kind of hacker. Eric S. Raymond doesn't even write code by hand anymore; he has ChatGPT do everything. And he's produced more correct code faster with LLMs than he ever did by hand, so now he's one of those people saying "you're not a real software engineer if you don't use these tools". With the latest frontier models, he's probably right. With your puny human brain, you're not going to be able to keep pace with developers who are using LLMs, which is going to make contributing to open source projects more difficult unless you too are using LLMs. And open source projects which forbid LLM use are going to get lapped by those which allow it. This will probably be the next development in Linux after Rust. The remaining C code base may well be lifted into Rust by ChatGPT, after which contributing kernel code in C will be forbidden throughout the entire project. Won't that be a better world!
Before the incessant AI hype it was crypto, and before that it was JavaScript frameworks and before that it was ...
I'm comfortable declaring that the most powerful thing about Lisp isn't macros, but the concept of an environment. Even in 2026, many languages implement some notion of evaluating code and making it immediately available, but nothing is quite like Lisp.
Lower-level programming languages today still all require a compile step. Lisp is one of the few I've found where you can eval code and it's immediately usable, and probably the only one that truly leans on REPL-driven development.
Env + REPL, imo, is the true power, still far ahead of other languages. I can explore the memory of my program while it's running, change the code, and see the changes in real time.
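To make that concrete, here's a minimal sketch in Common Lisp (assuming the bordeaux-threads library is loaded; the function and thread names are made up for illustration): a background loop keeps running while you redefine the function it calls at the REPL, and the new definition takes effect on the very next call, with no rebuild or restart.

    ;; Start a background worker that calls GREET once per second.
    (defun greet () (format t "hello~%"))

    (defparameter *worker*
      (bt:make-thread (lambda () (loop (greet) (sleep 1)))
                      :name "greeter"))

    ;; Later, at the REPL, while the thread is still running:
    (defun greet () (format t "bonjour~%"))
    ;; The next loop iteration picks up the new definition immediately.

    ;; And you can poke at the live program's state as it runs:
    (bt:thread-alive-p *worker*)   ; => T

The same session lets you inspect globals, redefine functions and classes, and keep going, which is what I mean by exploring the program while it runs.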
The issue is that CL is old, and Clojure would be so close to perfect if it weren't for Java. Clojure replaces Java, not CL, and that is both its strength and its weakness.
I mean, you can theoretically do it in Python etc., but nobody does.
Emacs (+Guix) is a glimpse into how I wish computing had developed: being able to jump into any program (running or not), at any time, read the code, and debug + modify + continue.
We are very far from that ideal, but Emacs is the closest we've got.
Also, your Lisp will always behave exactly as you intended instead of hallucinating its way to weird destinations.
What they can certainly do is iterate with a listener, with you acting as a crude cut-and-paste proxy. An LLM will happily give you forms to shove into a REPL and process the results of them. I've done it, in CL. I've seen it work. It made some very interesting requests.
I’ve seen the LLM iterate, for example, with source code by running it, adding logging, running it again, processing the new log messages, and cycling through that, unassisted, until it found its own “aha” and fixed a problem.
What difference does it make whether it's talking to a shell or a CL listener? It's not like it cares. Again, I don't know the mechanics of hooking an LLM up to a listener directly; I haven't dabbled enough in that space to matter. But that's a me problem, not an LLM problem.
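For what it's worth, the crude-proxy loop itself is barely any code. Here is a purely illustrative sketch in Common Lisp (not a description of how anyone actually wired this up): a bare read-eval-print loop over standard input that an external process, such as an LLM harness, could drive by writing forms and reading back the printed results.

    ;; A bare-bones listener an external process could drive over stdio.
    ;; CAUTION: it evals whatever it is handed; only feed it input you trust.
    (defun proxy-listener (&optional (in *standard-input*) (out *standard-output*))
      (loop for form = (read in nil :eof)
            until (eq form :eof)
            do (handler-case
                   (format out "~S~%" (eval form))
                 (error (e) (format out ";; error: ~A~%" e)))
               (finish-output out)))

Point its streams at a pipe or a socket instead of stdio and the human cut-and-paste step disappears.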
As for hallucinations, I believe those are like version 0 of the thing we call lateral thinking and creativity when humans manifest it. Hallucinations can be controlled and corrected for. And again—you really need to spend some time with the paid version of a frontier model because it is fundamentally different from what you've been conditioned to expect from generative AI. It is now analyzing and reasoning about code and coming back with good solutions to the problems you pose it.
FWIW, I also think performant languages like Rust will gain way more prominence. Their main downside is that they're more "involved" to write. But they're fast and have good type systems. If humans aren't writing code directly anymore, does it ultimately matter whether a language is simpler or cleverer to read and write? Why would you ask a model to write your project in Python, for instance? If only a model will ever interact with the code, the choice of language becomes purely functional. I know we're not fully there yet, but the latest models like Opus 4.6 are extremely good at reasoning and often one-shot solutions.
Going back to lower level languages isn’t completely out of the picture, but models have to get way better and require way less intervention for that to happen.
I used to appreciate Lisp for the enhanced effectiveness it granted to the unaided human programmer. That was one of the main reasons I used the language.
But a programmer+LLM is going to be far more effective in any language than an unaided programmer is in Lisp—and a programmer+LLM is going to be more effective in a popular language with a large training set, such as Java, TypeScript, Kotlin, or Rust, than in Lisp. So in a world with LLMs, the main practical reason to choose Lisp disappears.
And no, LLMs are doing more than just generating text, spewing nonsense into the void. They are solving problems. Try spending some time with Claude Opus 4.6 or ChatGPT 5.3. Give it a real problem to chew on. Watch it explain what's going on and spit out the answer.