Anyway, boring is bad. Boring is what spends your attention on irrelevant things. Cobol's syntax is boring in a bad way. Go's error handling is boring in a bad way. Manually clicking through screens again and again because you failed to write UI tests is boring in a bad way.
What could be "boring in a good way" is something that gets things done and gets out of your way. Things like HTTPS, or S3, or your keyboard once you have leaned touch typing, are "boring in a good way". They have no concealed surprises, are well-tested in practice, and do what they say on the tin, every time.
New and shiny things can be "boring in the good way", e.g. uv [2]. Old and established things can be full of (nasty) surprises, and, in this regard, the opposite of boring, e.g. C++.
Boring is good. I don't want to be excited by technology; I want to be bored, in the sense that it's simple and out of my way.
Same for KISS. I tend to tell people to not only keep things simple, but even boring. Some new code I need to read and fix or extend? I want to be bored. Bored means it's obvious and simple.
The difference? There are many complex libraries. By definition they are not simple technology.
For example, a crypto library: probably one of the most complex tasks. I would consider it a good library if it's boring to use/extend/read.
Cobol was (and for some, still is) exciting at first, but _becomes_ boring once you master it and the ecosystem evolves to fix or work around its shortcomings. Believe it or not, even UX/UI testers can deal with, and find happiness in, clicking through UIs for the ten thousandth time (granted, the last time I saw such a tester was around 2010).
This doesn't mean the technology itself becomes bad or stays good. It just means the understanding (and usage patterns) solidifies, so it becomes less exciting, hence: "boring".
But you can't sell a book with the title "Choose well-established technology". Because people would be like, no sht, Sherlock, I don't need a book to know that.
All this was my understanding before, so I'm not sure why you think "boring" was meant to be equivalent to "old".
As much as I do get the idea, I can see how promoting the use of tedious-to-use and dull tools is something that really misses the mark.
Well-known and mature tools are still sharp, and lots of them are not tedious to use.
I do not want my browser to be exciting. I do not want it to change every week: say, moving buttons to different places, changing how the address bar operates, maybe trying new shortcut keys...
Same goes for most useful software. I actually do want it to be dull: to do its job, stay out of the way, and not make my day more interesting by forcing me to fight against it.
If an LLM can solve a complex problem 50% of the time, then that is still very valuable. But if you are writing a system of small LLMs doing small tasks, then even 1% error rates can compound into highly unreliable systems when stacked together.
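To make that compounding concrete, here's a quick back-of-the-envelope sketch (my numbers, not the commenter's, and it assumes each step fails independently, which is generous):

```python
# Hypothetical illustration: end-to-end reliability of a pipeline of
# chained LLM steps, assuming each step independently succeeds 99% of
# the time. Real failures are often correlated, so this is optimistic.
for steps in (1, 5, 10, 25, 50):
    reliability = 0.99 ** steps
    print(f"{steps:>2} steps -> {reliability:.1%} end-to-end success")
# 50 chained steps at 99% each already land around 60.5% overall.
```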
The cost of LLMs occasionally giving you wrong answers is worth it for harder tasks, in a way that it is not for smaller tasks. For those smaller tasks, you can usually get much closer to 100% reliability, and more importantly much greater predictability, with hand-engineered code. This makes it much harder to find areas where small LLMs can add value on small, boring tasks. Better auto-complete is the only real-world example I can think of.
I'd adjust that statement: if an LLM can solve a complex problem 50% of the time and I can evaluate the correctness of the output, then that is still very valuable. I've seen too many people blindly pass on LLM output. For a short while it was a trend in the scientific literature to have LLMs evaluate the output of other LLMs; who knows how correct that was. Luckily that has ended.
I misread this the first time and realised both interpretations are happening. I've seen people copy-paste out of ChatGPT without reading, and I've seen people "pass on" or reject content simply because it has been AI generated.
That said, I do think there are lots of problems where verification is easier than doing the task itself, especially in computer science. Actually, I think it is easier to list the tasks that aren't easier to verify than to do from scratch. Security is one major one.
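As a toy illustration of that asymmetry (my example, not something from the thread): checking a result is often cheaper than producing it.

```python
# Toy example of "verification is easier than generation":
# checking sortedness is a single O(n) pass; sorting is O(n log n).
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

answer = sorted([5, 3, 8, 1])  # doing the task
assert is_sorted(answer)       # verifying it with one linear pass
```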
The emperor's new clothes ...
If he means they will never outperform humans at cognitive or robotics tasks, that's a strong claim!
If he just means they aren't conscious... then let's not debate it any more here. :-)
I agree that we could be in a bubble at the moment though.
- LLMs are too limited in capabilities and make too many mistakes
- We're still in the DOS era of LLMs
I'm leaning more towards the 2nd, but in either case Pandora's box has been opened, and you can already see the effects of the direction our civilization is moving in with this technology.
Small models doing interesting (boring to the author) use-cases is a fine frontier!
I don't agree at all with this though:
> "LLMs are not intelligent and they never will be."
LLMs already write code better than most humans. The problem is we expect them to one-shot things that a human might spend many hours/days/weeks/months doing. We're lacking coordination for long-term LLM work. The models themselves are probably even more powerful than we realize; we just need to get them to "think" as long as a human would.
If you mean better than most humans considering the set of all humans, sure. But they write code worse than most humans who have learned how to write code. That's not very promising for them developing intelligence.
In human terms, we would call that knowing how to bullshit. But just like a college student hitting junior year, sooner or later you'll learn that bullshitting only gets you so far.
That's what we've really done. We've taught computers how to bullshit. We've also managed to finally invent something that lets us communicate relatively directly with a computer using human languages. The language processing capabilities of an LLM are an astonishing multi-generational leap. These types of models will absolutely be the foundation for computing interfaces in the future. But they're still language models.
To me it feels like we've invented a new keyboard, and people are fascinated by the stories the thing produces.
I have nothing against cloud or AI per se, but I still believe in the right tool for the right job and in not doing things just for the sake of it. While raising valuation is a good thing, raising costs, delaying more useful features and adding complexity should also be taken into account.
um, a dynamo is a generator: it takes mechanical energy and turns it into electricity.
That's just your experience, based on your geolocation and chain of events.
One can argue that every other field of engineering outside of Software Engineering specializes in making complex things into boring things.
We are the unique snowflakes who take business use cases and build castles in the clouds that may or may not actually solve the business problem at hand.
... and if it all falls down, don't blame us - you clicked the EULA /s