Forget about LLMs. That's for normies.
I totally understand the lack of motivation. Doing something just for the sake of doing it is pointless, so you have to find a project to do; the language then becomes merely a tool instead of the whole point. Figure out what you need or want to build, maybe a 3D engine, a mobile application, a database, a search engine, image recognition/OCR, maybe robotics or some Arduino automation or whatever, and just start working on it.
I remembered why I wanted to learn this stuff. It's not for money, or to look cool.
It's for the fascination I have for computing.
How do electrons flow through a wire? How do the chips within a computer direct that flow to produce an image on a screen? These questions are mind-blowing to me. I don't think LLMs can kill that fascination. For web programming, though, sure: I always hated front-end programming, and now I don't really have to do it (I don't have the same fascination with the why of that tech). So will I ever learn a new front-end framework? Most likely not.
What I've found is that LLMs allow me to use/learn new languages, frameworks, and other technologies while being significantly more productive than before. On the flip side, as others have mentioned, I shoot myself in the foot more often.
Basically, I output more, I see more pitfalls upfront, and I get proficient sooner. This can be the case for you too if you take an active development approach rather than a passive one where you just blindly accept whatever a model outputs.
Reactive.
Not just React itself, but the paradigm, so also SwiftUI.
I never had a problem with "massive ViewControllers." The magic that's supposed to glue it all together is just a little bit fragile and hard to debug when it does break, and the syntactic sugar (at least in SwiftUI) is just self-similar enough that I keep mixing it up.
But learning new languages? Nah, I'm currently learning/getting experience with JavaScript and Ruby by code-reviewing LLM output.
Interestingly, as AI models become "more competent," I'm finding more and more issues with AI-generated code in the project I work on...
Whenever AI is used by a more junior dev (or a senior dev who simply can't be assed), you always find strange patterns that a senior would never have produced...
Typically the code works, but there are often subtle security issues or just unusual coding patterns where it's clear an LLM has written slop. Instead of taking a step back and reconsidering its approach when errors crop up, an LLM tends to just add layers of complexity to patch over the slop.
These problems obviously compound if left unchecked.
I actually prefer how things were last year, when coding models were less competent, because at least if a problem was hard enough they'd get nowhere. Today they're good enough to keep hacking until the slop they write is just about working.
In regard to OP's question, though, I suspect there's less point in playing around with different technologies just to get a basic understanding of how they work (LLMs can do this now). But if you want to be able to guide LLMs towards good solutions and ensure the code being produced in the era of AI is good, then having engineers with a deep understanding of the technologies they're using is very important.
The same can be said about coding. Code to think and explore a problem. See how different languages help you approach a problem. Get a deeper understanding about a topic by coding.
> Enough
After talking for 4 hours over 3 cups of coffee, I had enough corner cases and the main case to understand what they wanted. A week later I had a list of criteria that could be programmed. Five years later, most of the unusual but annoying rough corners were fixed. We still had a button to manually approve the weird cases.
The current AI hype may have placed us in a filter bubble or echo chamber, shaping our conclusions. These highly specialized algorithms can nudge or reward us for thinking in specific ways.
Regarding programming languages, there is immense value in understanding their internal primitives.
As an example, consider concurrency primitives. Different languages provide different levels of abstraction: high-level library support in Python, the event-loop structure in JavaScript, compiler-level implementations in Rust and C++, runtime-intrinsic mechanisms in Go and Java, and virtual-machine intrinsics, as in Erlang.
By viewing languages through this lens, you recognize that each implements these primitives differently, allowing you to choose the most effective tool for the job.
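To make the contrast concrete, here is a minimal Go sketch (my own illustration, not from the thread) of what "runtime-intrinsic" means in practice: goroutines and channels are part of the language and scheduled by the Go runtime rather than provided by a library.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	results := make(chan int, 4) // channel: a built-in primitive for communicating between goroutines
	var wg sync.WaitGroup

	for i := 1; i <= 4; i++ {
		wg.Add(1)
		go func(n int) { // "go" starts a goroutine, multiplexed onto OS threads by the runtime
			defer wg.Done()
			results <- n * n
		}(i)
	}

	wg.Wait()       // wait for all goroutines to finish
	close(results)  // safe to close: no more sends will happen

	for r := range results {
		fmt.Println(r) // order is nondeterministic; the runtime schedules goroutines freely
	}
}
```

The same exercise in Python would typically lean on libraries like threading or asyncio, and in Rust on compiler-checked Send/Sync plus an async runtime crate, which is exactly the difference in abstraction level described above.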
If your goal is to assess the short-term economic value of a technology, your logic is understandable. However, learning new languages and tools remains worthwhile. When AI agents begin invoking these tools on the fly, you may not know whether a specific choice is the most effective one, and without that knowledge you will lack the grounding to challenge the AI's decisions.
In the long run, making the effort to master these concepts yields far greater value as a software engineer. It enables you to understand the rationale behind applying a precise tool to a precise task.
There are valid arguments supporting various perspectives on this. However, while any approach can be useful, this discussion highlights the need for wisdom: the awareness of one's own biases. As I noted earlier, filter bubbles can distort judgment. Continuously questioning your conclusions helps ensure you move toward the best outcomes. I hope you find this recommendation useful.