Has anyone else felt the boost brings weakness at times when you need to show strength?
"Habitual use of GPS negatively impacts spatial memory during self-guided navigation": https://www.nature.com/articles/s41598-020-62877-0
I caught myself at some point feeling like that was happening when going over my own codebase. I stopped being aggressive on AI usage for important stuff, and relegated it to side stuff that I just want to get started on.
I think my spelling ability is mostly correlated to how much I have read. I know how to spell many words I've never used myself.
Typing in a hurry, or forgot to add one more 's' in the word 'necessary'? No worries! Keep typing incorrectly and the phone will take care of the proper spelling for you! And it will do so faster than you could figure out the right spelling yourself. Many of us have probably regressed in our spelling ability to some degree at this point.
(Though it could also just be that I've gotten significantly better at taking notes over the last 5 years)
For me the main difference is that I search and read much less when problem solving. In the past I'd often have to search docs and SO for ages looking for information about some bug or bit of functionality. Today, about 50% of the time I can just ask AI for what I need – although the other 50% of the time it will give me some BS hacky answer or just make something up, especially when I'm using less common libraries.
Where I have gotten much lazier is when it comes to writing BS code like regexes, etc. It's not that I can't write them; I just know it will take me longer to think through and create the regex than to explain to an LLM what I want to match on. But you still need to know regex to review it, so I don't feel like I'm losing any skills, just working faster in some cases – much like pulling an email regex from SO would have helped you code a solution faster a decade ago.
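For example, here's a minimal Python sketch of the kind of regex I'd have an LLM draft and then review myself (the pattern is deliberately simplified and illustrative, not a spec-complete email matcher):

```python
import re

# Deliberately simplified email pattern: local part, "@", domain, TLD.
# An LLM can draft this in seconds, but you still need to know regex
# well enough to spot what it misses (quoted local parts, IDNs, etc.).
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

for candidate in ["dev@example.com", "not-an-email", "a@b.co"]:
    print(candidate, "->", bool(EMAIL_RE.match(candidate)))
```

Reviewing a ten-second draft like that is much faster than writing it cold, and the review step is what keeps the skill alive.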
So I use it for brainstorming, and as a temporary crutch for working with unfamiliar languages/frameworks/tools, and wean myself off it as quickly as I can.
For my work, which is very research and CS theory-heavy, I'm faster without it.
I believe it's equally important to ask it to explain why it generated the code when the output is unexpected. More often than not, though, it's acting more like "fancy autocomplete".
- Unplug your laptop's charger
- Disconnect from the internet
- Work using only your battery charge and the documentation on your laptop
If you find that you need information about something that's in a doc you don't have, connect to the internet only to download it, then disconnect. Nothing more.
Batteries will only last for about two hours when they're not new, and you have the docs on your machine without online distractions. This forces you to work only on what you're working on, and to do it fast since the battery is discharging. You catch yourself "drifting" in your thoughts more often, and you'll get better at catching yourself and refocusing, because "Quick! It's discharging!"
I know it sounds silly, since you can just connect to the internet or plug in your charger, but it really works. Its real job is to remind you to focus and get things done.
Try it, and you'll find out soon enough just how much of your attention was scattered, and you'll start weaning yourself off just as quickly.
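To make the offline part concrete, one sketch of docs that need no network at all: Python's standard-library documentation ships with the interpreter itself via pydoc, so you can browse it from a dead-quiet machine.

```python
# pydoc is part of the standard library and needs no network:
#   python -m pydoc -b      # serve and browse the docs locally in a browser
#   python -m pydoc json    # print a module's docs to the terminal

import pydoc

# Programmatic equivalent: render a module's documentation as plain text.
print(pydoc.render_doc("json.decoder", renderer=pydoc.plaintext))
```

Most ecosystems have an equivalent you can download once before you disconnect.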
I use AI for coding, but basically as a shortcut to API documentation. It lets me stay focused on my task. It removes a lot of the tedium of coding, so that I can focus on the actual problems.
Granted, I refuse to take AI solutions and just plug them in blindly without looking at them. I never use “YOLO” mode. I’m always questioning and thinking about the code it outputs, the same way I’d critique a jr dev’s pull request.
This fails when people expect rote memorization, like in coding interviews, but it works well when I'm actually working through problems at a company: I have a set of references up and available, usually of the internal codebase. I primarily use AI as an advanced reference, so I haven't had the problem stated here yet!
I think a lot of folks are finding that, via AI, they're switching over to the way that I do code, without the internal thinking structure I've had to come up with over time to manage it. I try to think of whatever I'm doing as a sort of scaffold, and then shift my focus into the details. If you lose your ability to zoom in or out on layers of abstraction (because you don't understand what's happening in detail), that's when you need to spend time diving in.
You can't just sit back all the time like a farmer who's got a machine to plow and tend all the fields; you need to be doing QA and making sure it's doing what it says it's doing. If you're letting it handle what the scaffold is, you know even less.
You decide which mental models, if any, you outsource to AI. You decide what you manage, and what's important for you to manage. Don't let short-term productivity gains hobble your ability to zoom in!
And for good measure, those snippets will stop having value in the future, as they can be pulled from AI as easily as looking up how to convert a date format or center a div.
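The date-format one, for instance, is exactly the kind of throwaway snippet nobody needs to memorize; a minimal Python sketch:

```python
from datetime import datetime

# The classic "look it up every single time" snippet: reformat a date string.
parsed = datetime.strptime("2024-01-31", "%Y-%m-%d")
print(parsed.strftime("%d %b %Y"))  # -> 31 Jan 2024
```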
If not for it I'd be spending even longer scouring StackOverflow and documentation for ideas and example code, so beyond being a potential timesaver it's not really that different from what I was doing before.
Hopefully LLMs won't vanish away... I'd be at a net negative.
I know Primagen had a similar experience to yours: he found it useful to keep the AI chat out of his editor, and that's how I've been using it too.
I have integrated LLMs into my dev workflow as much as possible, and they are just not capable of doing all that much.
Of course, long-term misuse by outsourcing your cognitive exercise to a machine has a negative impact.
Just autocomplete your way to being completely dumb.
As a learning or exploratory tool it is the best thing ever. Just don't use it to tab-tab your way out of actual work.
It's helped me stretch & explore new things that are a touch beyond my technical competence.
...but it has also made me lazier in some ways, e.g. I paste errors I don't immediately recognise into an LLM. Pre-LLM, I would have looked at the error in more detail first, then tried a couple of things blindly, then googled it.