The point is both to have fun with this kind of simulation, and also to explore the "functional core / imperative shell" approach to architecture. I also developed a tile and tile-effect definition DSL, which makes this even easier to extend. From this point of view it's a success: easy testing, easy extension.
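For a rough idea of what such a DSL can look like, here is a minimal sketch; `define-tile` and the slot names are invented for illustration, not the project's actual syntax:

    ;; Hypothetical sketch of a tile-definition DSL.  Each tile is plain
    ;; data plus a pure effect function: (world x y) -> new world.
    (defvar my-tile-table (make-hash-table :test #'eq)
      "Registry of tile definitions, keyed by tile symbol.")

    (defmacro define-tile (name &rest props)
      "Register NAME as a tile whose properties are the plist PROPS."
      `(puthash ',name (list ,@props) my-tile-table))

    (define-tile residential
      :char ?R
      :cost 100
      ;; The effect never touches buffers or I/O; it just returns an
      ;; updated copy of the (plist-shaped) world value.
      :effect (lambda (world _x _y)
                (plist-put (copy-sequence world)
                           :population (1+ (plist-get world :population)))))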
Gameplay-wise the simulation is too simplistic, and needs input from people interested in this kind of toy. The original Micropolis/SimCity was the last time I built a virtual city.
I never intended this project to be some kind of 3dfx wonder. In fact, the hope was to discuss implementation details.
Either way, for those interested: screenshots of both the terminal and graphical versions are now included in the README.org.
I’ve been doing my own exploration of terminal ASCII games via Dwarf Fortress instead of SimCity. I’ve learned that letting a coding agent play is an interesting way to get feedback as well :)
but then we'd have to write an interface package to run it from emacs
You might point out that there are things like elisp.lisp that purport to run Emacs Lisp in Common Lisp, but I'm not sure that's viable for anything but trivial programs. There's also something for Guile, but I remain unconvinced.
(I’m just trying to defend GP’s point – I’m not a heavy lisp user myself, tbh.)
Here, the point was to have everything completely in Emacs, and also to see whether the architectural constraints make sense for elisp (they do).
And have some fun, of course.
Finally RMS can play SimCity.
(zone 'zone-pgm-dissolve)

https://medium.com/ssense-tech/a-look-at-the-functional-core...
The idea being that business logic gets written in synchronous blocking functional logic equivalent to Lisp, which is conceptually no different than a spreadsheet. Then real-world side effects get handled by imperative code similar to Smalltalk, which is conceptually similar to a batch file or macro. A bit like pure functional executables that only have access to STDIN/STDOUT (and optionally STDERR and/or network/file streams) being run by a shell.
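To make that concrete in Emacs Lisp terms (a toy sketch with invented names): the core is a pure step function, and the shell is the only place that owns mutable state and talks to the display:

    ;; Functional core: pure, input world -> output world, no I/O.
    (defun sim-step (world)
      "Return a copy of WORLD advanced by one tick."
      (plist-put (copy-sequence world)
                 :tick (1+ (plist-get world :tick))))

    ;; Imperative shell: owns the mutable variable, the timer, the display.
    (defvar sim--world '(:tick 0))

    (defun sim-tick ()
      "Advance the simulation and redraw; all side effects live here."
      (setq sim--world (sim-step sim--world))
      (message "tick %d" (plist-get sim--world :tick)))

    ;; (run-at-time 1 1 #'sim-tick)   ; the shell drives the core

The core can be reasoned about as a plain data transformation; only `sim-tick` needs a running Emacs around it.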
I think of these like backend vs frontend, or nouns vs verbs, or massless waves like photons vs massive particles like nucleons. Basically that there is no notion of time in functional programming, just state transitions where input is transformed into output (the code can be understood as a static graph). While imperative programming deals with state transformation where statically analyzing code is as expensive as just running it (the code must be traced to be understood as a graph). In other words, functional code can be easily optimized and parallelized, while imperative code generally can't be.
So in model-view-controller (MVC) programming, the model and view could/should be functional, while the controller (event handler) could/should be imperative. I believe that there may be no way to make functional code handle side effects via patterns like monads without forcing us to reason about it imperatively. Which means that functional languages like Haskell and Scala probably don't offer a free lunch, but are still worth learning.
Why this matters is that we've collectively decided to use imperative code for almost everything, relegating functional code to the road not taken. Which has bloated nearly all software by perhaps 10-100 times in terms of lines of code, conceptual complexity and even execution speed, making perhaps 90-99% of the work we do a waste of time or at least custodial.
It's also colored our perception of what programming is. "Real work" deals with values, while premature optimization deals with references and pointers. PHP (which was inspired by the shell) originally had value-passing semantics for arrays (and even subprocess fork/join orchestration) via copy-on-write, which freed developers from having to worry about efficiency or side effects. Unfortunately it was corrupted through design by committee when PHP 5 decided to bolt on classes as references rather than unifying arrays and objects by making the "[]" and "." operators largely equivalent like JavaScript did. Alternative implementations like Hack could have fixed the fundamentals, but ended up offering little more than syntactic sugar and the mental load of having to consider an additional standard.
To my knowledge there has never been a mainstream FCIS language. ClojureScript is maybe the closest IMHO, or F#. Because of that, I mostly use declarative programming in my own work (where the spec is effectively the behavior) so that the internals can be treated as merely implementation details. Unfortunately that introduces some overhead because technical debt usually must be paid as I go, rather than left for future me. Meaning that it really only works well for waterfall, not agile.
I had always hoped to win the internet lottery so that I could build and test some of these alternative languages/frameworks/runtimes and other roads not taken by tech. The industry's failure to do that has left us with effectively single-threaded computers that run around 100,000 times slower today (at 100 times the cores per decade) than they would have if we hadn't abandoned true multicore superscalar processing and very large scale integration (VLSI) in the early 2000s, when most R&D was outsourced or cancelled after the Dot Bomb and the mobile/embedded space began prioritizing lower cost and power usage.
GPUs kept going though, which is great for SIMD, but doesn't help us as far as getting real work done. AI is here and can recruit them, which is great too, but I fear that they'll make all code look like it's been pair-programmed and over-engineered, where the cognitive load grows beyond the ability of mere humans to understand it. They may paint over the rot without renovating it, basically.
I hope that there's still time to emulate a true multiple instruction multiple data (MIMD) runtime on SIMD hardware to run fully-parallelized FCIS code potentially millions of times faster than anything we have now for the same price. I have various approaches in mind for that, but making rent always comes first, especially in inflationary times.
It took me over 30 years to really understand this stuff at a level where I could distill it down to these (inadequate) metaphors. So maybe this is TMI, but I'll leave it here nonetheless in the hopes that it helps someone manifest the dream of personal supercomputing someday.
For the purposes of this game, splitting things into core/shell makes certain things super easy: saving and restoring state, undo, debugging, testing, etc.
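For illustration, a rough sketch (invented names, not the project's code) of why undo and save/restore come almost for free once the whole world is one plain value and the rules are pure functions:

    (defvar city--state '(:tick 0)
      "Current world value: plain data, so `prin1' saves it and `read' restores it.")

    (defvar city--undo-stack nil
      "Previous world values; undo is just popping this list.")

    (defun city-do (pure-fn &rest args)
      "Shell helper: apply PURE-FN to the current world, remembering the old one."
      (push city--state city--undo-stack)
      (setq city--state (apply pure-fn city--state args)))

    (defun city-undo ()
      "Discard the newest world value and go back one step."
      (when city--undo-stack
        (setq city--state (pop city--undo-stack))))

Testing gets the same benefit: the pure rules can be exercised with plain data in ert, no buffers or timers involved.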
And one more bit, relevant to this new reality we find ourselves in. Having a bunch of pure functions merged into a very focused DSL makes it easy to extend the system through LLMs: a description of well-understood inputs and outputs fits into limited context windows.
By the way.
It is true that dedicated languages never arrived, but FCIS is not a language feature; it's more of an architectural paradigm.
So who cares?
Search for the section labeled: Visual Demo
Notice how it says "simplified snapshot" and "general layout". I don't think this is an actual representation of what the game looks like :)
Admittedly, while working on this, I did consult my LLM advisor through gptel (https://github.com/karthink/gptel) with a few custom tools set up, which I cannot recommend enough.
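For anyone curious what a custom tool looks like: roughly like the sketch below, written from memory (check gptel's README for the exact `gptel-make-tool` arguments; the buffer-reading tool itself is just an example, not one of the tools I actually used):

    ;; Give the model a way to read an Emacs buffer by name.
    (gptel-make-tool
     :name "read_buffer"
     :description "Return the full text of an Emacs buffer."
     :args (list '(:name "buffer"
                   :type string
                   :description "Name of the buffer to read."))
     :function (lambda (buffer)
                 (if (get-buffer buffer)
                     (with-current-buffer buffer
                       (buffer-substring-no-properties (point-min) (point-max)))
                   (format "No buffer named %s" buffer)))
     :category "emacs")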