Anyone else feel this? And more specifically — what deliberate practices are you using to keep your reasoning sharp?
Seeing how astonishingly poor LLMs are at decision-making while writing code (even the best ones, like Claude Opus or GPT 5.4), I naturally stop trusting them enough in other areas of life to "just have a conversation with them and get all the answers".
It's all fun and games while the stakes are nonexistent, but if the question really matters, would you trust an LLM so completely that you stop exercising your own thinking at all?
The actual thinking task is still very much mine.
An LLM helps me solve the task faster, but even there, only with lots of corrections and a very solid supervision process.
So, I kind of doubt the sincerity of your post. In fact, I doubt it very much.
I like AI use cases where I can offload large tasks and then follow up to review the output critically. I think there is a ton of opportunity to leverage yourself with AI, as all of the hype posts say you can. But most tools are simply invasive and, to me, the equivalent of social media 'junk food for the mind': instead of doomscrolling, you turn your brain off and start clicking approve, approve, copy, paste, etc.
Or worse, you delegate an easy task to an LLM agent and start doomscrolling while you wait on it. No bueno.