This is the most important takeaway, imo, and a very valuable technique: Start with the obvious, stupid solution that definitely works. Then do the optimized version, while making sure it matches the naive implementation. In this case, the optimized version could even be generated from the naive one.
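For concreteness, here's a minimal sketch of that workflow (the sum-of-squares functions are made-up stand-ins, not anything from the article): keep the naive version around as the oracle and check the optimized one against it.

    // Naive version: obviously correct, written first.
    // (sum_of_squares is a made-up stand-in example.)
    fn sum_of_squares_naive(xs: &[i64]) -> i64 {
        let mut total = 0;
        for &x in xs {
            total += x * x;
        }
        total
    }

    // "Optimized" rewrite: only trusted as far as it agrees with the naive one.
    fn sum_of_squares_fast(xs: &[i64]) -> i64 {
        xs.iter().map(|&x| x * x).sum()
    }

    fn main() {
        let cases: Vec<Vec<i64>> = vec![vec![], vec![0], vec![1, 2, 3], vec![-5, 7, 11, -13]];
        for xs in &cases {
            assert_eq!(sum_of_squares_naive(xs), sum_of_squares_fast(xs));
        }
        println!("optimized version matches the naive oracle on every case");
    }

Property-testing crates like quickcheck or proptest automate the "throw lots of inputs at both versions" part.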
It's the KISS stack for me, personally (keep it stupid simple).
I would still consider technical debt to be different from other forms of debt, though. It feels much more like a tradeoff to me, but perhaps all debt can be classified that way. Either way, I think it makes for an interesting decision.
(Assuming for the sake of argument that you guided it to the SQL version first)
Almost everything needs to be contextualized before you can even begin to answer what the right way forward is; it depends so heavily on the situation you're in.
The K-shaped LLM scenario makes a lot of sense to me: educated and experienced devs get better output because they know what to ask.
Years ago, I entered a Scrabble programming contest and needed to compress a GADDAG dictionary to fit into my 6MB L3 cache. Without knowing the official name for it, I ended up using the exact same suffix-compression mechanism by moving characters to the edges instead of the nodes to merge overlapping paths.
Sharing my old write-up here in case you or other data-structure nerds find the overlap interesting! https://williame.github.io/post/87682811573.html
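For anyone curious what that merging looks like, here's a toy sketch of the build-a-trie-then-compact flavor (my own illustration, not the contest code): hash-cons the subtrees bottom-up so overlapping suffixes collapse into shared nodes.

    use std::collections::{BTreeMap, HashMap};

    #[derive(Default, Clone, PartialEq, Eq, Hash)]
    struct Node {
        terminal: bool,
        next: BTreeMap<char, usize>, // child ids in the arena
    }

    // Plain trie insertion: one node per character, no sharing yet.
    fn insert(arena: &mut Vec<Node>, word: &str) {
        let mut state = 0;
        for c in word.chars() {
            state = match arena[state].next.get(&c).copied() {
                Some(t) => t,
                None => {
                    arena.push(Node::default());
                    let t = arena.len() - 1;
                    arena[state].next.insert(c, t);
                    t
                }
            };
        }
        arena[state].terminal = true;
    }

    // Bottom-up hash-consing: subtrees that accept the same word set collapse
    // into a single node, which is what merges the overlapping suffix paths.
    fn merge(arena: &mut Vec<Node>, seen: &mut HashMap<Node, usize>, id: usize) -> usize {
        let kids: Vec<(char, usize)> = arena[id].next.iter().map(|(&c, &t)| (c, t)).collect();
        let mut canonical = Node { terminal: arena[id].terminal, next: BTreeMap::new() };
        for (c, t) in kids {
            canonical.next.insert(c, merge(arena, seen, t));
        }
        *seen.entry(canonical.clone()).or_insert_with(|| {
            arena.push(canonical);
            arena.len() - 1
        })
    }

    fn main() {
        let mut arena = vec![Node::default()];
        for w in ["tap", "taps", "top", "tops"] {
            insert(&mut arena, w);
        }
        let trie_nodes = arena.len();
        let mut seen = HashMap::new();
        let _root = merge(&mut arena, &mut seen, 0);
        // The "p"/"ps" tails (and the equivalent interior nodes) now share structure.
        println!("trie nodes: {}, merged states: {}", trie_nodes, seen.len());
    }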
Sure enough, the first paragraph on the Wikipedia entry for DAFSA is:
DAFSA is the rediscovery of a data structure called Directed Acyclic Word Graph (DAWG)
First, Blumer et al. (1983) came up with a "DAWG", but reading the abstract [1] I was left a little confused as to how exactly we get from 'here is how we store all substrings of a string in O(|string|) space, with "is this a substring? [y/n]" recognition in O(|substring|) time' to the modern DAFSA, as cool and useful as that is. Come to think of it, I bet I could use that in some LeetCode problems.
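If I'm reading the abstract right, their structure is essentially what competitive programmers now call a suffix automaton, which is probably the LeetCode angle too. A toy sketch of that substring-recognition idea, under that assumption (my code, not the paper's):

    use std::collections::HashMap;

    // One state per class of substring end positions; the whole thing stays
    // O(|string|) in states.
    struct State {
        len: usize,
        link: isize, // suffix link; -1 for the initial state
        next: HashMap<u8, usize>,
    }

    struct SuffixAutomaton {
        states: Vec<State>,
        last: usize,
    }

    impl SuffixAutomaton {
        fn new() -> Self {
            SuffixAutomaton {
                states: vec![State { len: 0, link: -1, next: HashMap::new() }],
                last: 0,
            }
        }

        // Standard online construction: append one character at a time.
        fn extend(&mut self, c: u8) {
            let cur = self.states.len();
            let new_len = self.states[self.last].len + 1;
            self.states.push(State { len: new_len, link: -1, next: HashMap::new() });
            let mut p = self.last as isize;
            while p >= 0 && !self.states[p as usize].next.contains_key(&c) {
                self.states[p as usize].next.insert(c, cur);
                p = self.states[p as usize].link;
            }
            if p < 0 {
                self.states[cur].link = 0;
            } else {
                let q = self.states[p as usize].next[&c];
                if self.states[p as usize].len + 1 == self.states[q].len {
                    self.states[cur].link = q as isize;
                } else {
                    // Split: clone q so the length bookkeeping stays consistent.
                    let clone = self.states.len();
                    let cloned = State {
                        len: self.states[p as usize].len + 1,
                        link: self.states[q].link,
                        next: self.states[q].next.clone(),
                    };
                    self.states.push(cloned);
                    while p >= 0 && self.states[p as usize].next.get(&c).copied() == Some(q) {
                        self.states[p as usize].next.insert(c, clone);
                        p = self.states[p as usize].link;
                    }
                    self.states[q].link = clone as isize;
                    self.states[cur].link = clone as isize;
                }
            }
            self.last = cur;
        }

        // "Is this a substring?" in O(|pattern|): walk transitions from the start state.
        fn contains(&self, pattern: &[u8]) -> bool {
            let mut s = 0usize;
            for &c in pattern {
                match self.states[s].next.get(&c) {
                    Some(&t) => s = t,
                    None => return false,
                }
            }
            true
        }
    }

    fn main() {
        let mut sa = SuffixAutomaton::new();
        for &b in b"mississippi" {
            sa.extend(b);
        }
        assert!(sa.contains(b"issip"));
        assert!(!sa.contains(b"ippis"));
        println!("states: {}", sa.states.len()); // linear in the text length
    }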
But the structure we actually think of as a DAWG or DAFSA (or FST, I guess, thanks to this Rust crate) comes from the paper "The World’s Fastest Scrabble Program". That worked, but you had to construct the whole trie first and then compact it down, so the build was a memory hog. Then Dr. Daciuk of 3city sharpened the blade in 2000, showing that this was about as good as it gets in the unsorted case, but that on a sorted set you could build the DAFSA incrementally, because an increasingly large part of the graph you were building was already minimized.
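Roughly, the sorted-input trick looks like this (a simplified sketch from my reading of Daciuk et al. 2000, not their pseudocode): because words arrive sorted, everything hanging off the previous word's path is already final, so it can be folded into a register of unique states immediately instead of sitting around in a full trie.

    use std::collections::{BTreeMap, HashMap};

    #[derive(Default)]
    struct Node {
        terminal: bool,
        next: BTreeMap<u8, usize>,
    }

    struct Dafsa {
        nodes: Vec<Node>,
        // Canonical states seen so far, keyed by (terminal?, outgoing transitions).
        register: HashMap<(bool, Vec<(u8, usize)>), usize>,
        previous: Vec<u8>,
    }

    impl Dafsa {
        fn new() -> Self {
            Dafsa { nodes: vec![Node::default()], register: HashMap::new(), previous: Vec::new() }
        }

        // Words must arrive in sorted order; that is what lets us minimize
        // everything "to the left" of the current word as we go.
        fn insert(&mut self, word: &[u8]) {
            assert!(word > &self.previous[..], "words must be sorted and unique");
            // Walk the prefix that already exists in the automaton.
            let (mut state, mut i) = (0, 0);
            while i < word.len() {
                match self.nodes[state].next.get(&word[i]).copied() {
                    Some(t) => { state = t; i += 1; }
                    None => break,
                }
            }
            // Whatever hangs off the end of that prefix belongs only to earlier
            // (smaller) words, so it is finished: fold it into the register.
            if !self.nodes[state].next.is_empty() {
                self.replace_or_register(state);
            }
            // Append the unshared tail of the new word as fresh states.
            for &c in &word[i..] {
                self.nodes.push(Node::default());
                let t = self.nodes.len() - 1;
                self.nodes[state].next.insert(c, t);
                state = t;
            }
            self.nodes[state].terminal = true;
            self.previous = word.to_vec();
        }

        // Bottom-up along the most recently added path: reuse an equivalent
        // registered state if one exists, otherwise register this one.
        fn replace_or_register(&mut self, state: usize) {
            let (&c, &child) = self.nodes[state].next.iter().next_back().unwrap();
            if !self.nodes[child].next.is_empty() {
                self.replace_or_register(child);
            }
            let key = (
                self.nodes[child].terminal,
                self.nodes[child].next.iter().map(|(&k, &v)| (k, v)).collect::<Vec<_>>(),
            );
            match self.register.get(&key).copied() {
                Some(existing) => { self.nodes[state].next.insert(c, existing); }
                None => { self.register.insert(key, child); }
            }
        }

        fn finish(&mut self) {
            if !self.nodes[0].next.is_empty() {
                self.replace_or_register(0);
            }
        }

        fn contains(&self, word: &[u8]) -> bool {
            let mut state = 0;
            for &c in word {
                match self.nodes[state].next.get(&c) {
                    Some(&t) => state = t,
                    None => return false,
                }
            }
            self.nodes[state].terminal
        }
    }

    fn main() {
        let mut d = Dafsa::new();
        for w in ["bat", "batch", "cat", "catch"] {
            d.insert(w.as_bytes());
        }
        d.finish();
        assert!(d.contains(b"catch") && !d.contains(b"ca"));
        // The "at"/"atch" tails of bat* and cat* end up as one shared sub-graph.
        println!("unique non-root states: {}", d.register.len());
    }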
And then from there BurntSushi got involved with the implementation and the rest is history.
[1]: https://www.sciencedirect.com/science/article/pii/0304397585...
(That's what I can glean from ~30 minutes of not particularly focused reading. Forgive me if I have made any mistakes.)
https://moodle2.units.it/pluginfile.php/718375/mod_resource/...