I have shipped a few projects; I always review AI-suggested code, do daily coding practice without AI, watch YouTube videos, etc., but I still don't know if I'm striking the right balance or whether I can really call myself a programmer.
I often see people say that the solution is to just fully learn to code without AI (i.e., go "cold turkey"), which may be best, but I wonder if the optimal path is somewhere in between, given that AI is clearly changing the game in terms of what it means to be a programmer.
I'm curious how you have all handled this balancing act in the past few years. More concretely, what strategies do you use to be efficient and able to ship and move quickly, while also taking the time to really process, understand, and learn what you are doing?
In my view AI tools are a sort of super-advanced interactive documentation. You can learn factual information (excluding hallucinations) by either asking or looking at the generated code and explanations of it. But just as documentation alone was not a sufficient learning tool before, AI alone is not one now.
What AI cannot give you, and what I suggest you learn through other resources:
- algorithmic proficiency, i.e. how to decompose your problems into smaller parts and compose a solution. You don't necessarily need a full algorithms course (even though you can find good ones online for free), but familiarising yourself with at least some classical non-trivial algorithms (e.g. sorting or graph-related ones) is mind-expanding.
- high-level design and architecture, i.e. how to design abstractions and use them to obtain a maintainable codebase when size grows. Here the best way is to look at the code of established codebases in your preferred programming language. A good writer is an avid reader. A good programmer reads a lot of other people’s code.
- how programming languages work, i.e. the different paradigms and ways of thinking about programming. This lets you avoid fixating on a single one and lets you pick the right tool for each task. I suggest learning both strongly-typed and dynamic languages, to get a feel for their pros and cons.
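To make the first point above concrete, here is a sketch of what "decompose your problem into smaller parts and compose a solution" looks like in practice, using merge sort, one of the classical sorting algorithms mentioned; the function names are just illustrative:

```python
def merge_sort(items):
    """Sort a list by decomposition: split, solve each half, combine."""
    if len(items) <= 1:              # base case: trivially sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # solve the smaller subproblems...
    right = merge_sort(items[mid:])
    return merge(left, right)        # ...then compose their solutions

def merge(left, right):
    """Combine two already-sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])          # one side may have leftovers
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

The whole algorithm is just two small, independently understandable pieces; that decomposition habit is the transferable skill, not the sorting itself.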
That’s an incomplete list off the top of my head.
You can still use AI as a tool in learning these things, but good old books and online resources (like Coursera) worked really well for decades and are not obsolete at all.
And the last thing is the most important: curiosity about how things work and about how to make them better!
But here is my advice. Learning by doing with AI seems akin to copying source code from another location (i.e. view source, Stack Overflow).
My tips:
- Understand all of the code in a commit before committing it (per feature/bug).
- Learn by asking AI for other ways or patterns to accomplish the same thing it suggests.
- Ask Claude Code to explain the code until you understand it.
- If code looks complex, ask if it can be simplified. Then ask why the simple solution is better.
- Tell AI that you’d like to use OOP, functional programming, etc.
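As a hypothetical illustration of the "ask if it can be simplified" tip, here is the kind of before/after you might see in Python; both functions and the sample data are made up for the example:

```python
# A hypothetical "first suggestion": correct, but more ceremony than needed.
def active_emails_verbose(users):
    result = []
    for user in users:
        if user.get("active"):
            email = user.get("email")
            if email is not None:
                result.append(email.lower())
    return result

# What you might get after asking "can this be simpler, and why is it better?":
# a single comprehension that states the filter and the transform in one place.
def active_emails(users):
    return [u["email"].lower() for u in users
            if u.get("active") and u.get("email") is not None]

users = [{"active": True, "email": "A@X.COM"},
         {"active": False, "email": "b@x.com"},
         {"active": True}]  # missing email: both versions must skip it
print(active_emails(users) == active_emails_verbose(users))  # → True
```

Asking the model *why* the simpler version is better (less nesting, filter and transform visible at a glance) is where the actual learning happens.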
One way to measure if you’re learning is to pay attention to how often you accept AI’s first suggestion versus how many times you steer it in a different direction.
It’s really endless if your mindset is to build AND learn. I don’t think you need to worry about it based on the fact you’re here asking this question.
Just repeat this until you understand a language's unique ways of implementing things, and understand why a language made those choices compared to others. I always pick one of these experiments to learn a new language, with or without LLM support:
1. Ray tracing
2. Game Boy emulator
3. Expression evaluation (JSONLogic or regex)
These are super easy to implement in hundreds of lines of code; however, if you want to optimize or perfect the implementation, it takes forever, and you need to know a language's nuances to do it well. Focus on performance tuning these implementations and see how far you can go.
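For a sense of scale, a minimal JSONLogic-style expression evaluator (the third experiment) really can fit in a few dozen lines of Python. This sketch supports only a handful of operators and is nowhere near the full JSONLogic spec:

```python
# A minimal JSONLogic-style evaluator: a rule is {"op": [args...]},
# anything that isn't a dict is a literal, and {"var": "name"} reads
# a value from the data dict.
def evaluate(rule, data):
    if not isinstance(rule, dict):
        return rule                              # literal value
    (op, args), = rule.items()                   # exactly one operator per rule
    if op == "var":
        return data.get(args)
    vals = [evaluate(a, data) for a in args]     # evaluate arguments recursively
    ops = {
        "+":   lambda xs: sum(xs),
        "==":  lambda xs: xs[0] == xs[1],
        ">":   lambda xs: xs[0] > xs[1],
        "and": lambda xs: all(xs),
    }
    return ops[op](vals)

rule = {"and": [{">": [{"var": "age"}, 18]},
                {"==": [{"var": "country"}, "FI"]}]}
print(evaluate(rule, {"age": 30, "country": "FI"}))  # → True
```

Getting this far is the easy hundred lines; making it fast, handling all the spec's operators and edge cases, and profiling it per language is where the real learning is.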
Versus someone or something giving you the correct answer, or even several correct answers, and you picking one. You are given what works. But you don't know why, and you don't know what doesn't work.
Learning from AI coding probably is somewhere between traditional coding and just reading about coding. I'm not sure which one it's closer to though.
However, it may not be necessary to learn to that depth now that AI coding is here. I don't really know.
I recognised that my weaknesses are more in understanding the mathematical fundamentals of computation, so now I’m mostly studying maths rather than coding, currently linear algebra and probability theory. Coding is the easy part I’d say. Hopefully I get to concentrate on the study of my sworn enemy, algorithms, at some point.
I’d like to be able to do low-level and graphics/sound programming someday. Or maybe use that knowledge for some other cool stuff, if we are all replaced by robots anyway.
That said, learning the fundamental topics can limit your thinking at first if they feel difficult, so it's an interesting question how to keep the naïve creativity of the beginner. That creativity can really help when building with AI, because your thinking is less constrained by how things used to be done.
And it can be pretty great for that. But I'm not sure if this works well for people who don't have experience reading API documentation or coding-support sites like Stack Overflow. Beginners with a problem most likely don't know any abstract term for the problem they want to solve, so they'll feed their scenario meticulously to the LLM, causing it to output an already-tailored solution which obfuscates the core logical solution.
It's like starting to learn how to code by reading advanced code of a sophisticated project.
Side note, I'm assuming you find joy in programming. If you don't, there's better ways to spend your time.
If you really can't drop the AI, ask it how to do stuff when you are really blocked, and ask it not to provide code (you need to write it yourself to understand and learn). But I suspect you'd be better served by a regular web search and by reading tutorials written by human beings who crafted and optimized the writing for pedagogy.
It will probably feel slow and tedious, but that's actually a good, more efficient use of your time.
At this point in your journey, where your goal is above all to learn, I doubt the AI works in your interest. It's already unclear whether it provides a long-term productivity boost to people who are a bit more experienced but still need to improve their craft.
You don't need to optimize the time it takes to build something. You are the one to "optimize".
I appreciate this will be a deeply controversial statement here. As someone who's been coding for 25+ years and has some part of my identity in my ability to code, this hurts and wounds me, but it is sadly true. The skills I've built and honed have little value in this new market. This must be how musicians felt when radio, records, etc. came about. My craft has been commoditized and it turns out nobody cared about the craft. They are happy listening to canned music in restaurants. Musicians are now like zoo animals where people pay an entry fee to see them for the novelty value. I exaggerate to illustrate the shift, but part of me fears this might be more analogous than I dare to understand.
Code is about providing value to a business, not about the lines of code themselves. Code is a means to an end.
If you want to understand coding for your own intellectual and hobbyist pursuit then please do. Generations of autistic-leaning people have found satisfaction doing so - but don't do it thinking it will remain a rewarding career.
The answer: because people find joy in doing it themselves.
The only exception really are greenfield apps like "create a toy todo app demo" or "scaffold this react project" but that's like 0.001% of real world engineering work.
I bust my ass getting software written by hand using The Book and the API reference. Then I paste it into an LLM and ask it to review it. I steal the bits I like. The struggle is where we learn, after all.
I also bounce ideas off LLMs. I tell it of a few approaches I was considering and ask it to compare and contrast.
And I ask it to teach me about concepts. I tell it what my conception is, and ask it to help me better understand it. I had a big back and forth about Rust's autoderef this morning. Very informative.
I very, very rarely ask it to code things outright, preferring to have it send me to the API docs. Then I ask it more questions if I'm confused.
When learning, I use LLMs a lot. I just try to do it to maximize my knowledge gain instead of maximizing output.
I'm of the belief that LLMs are multipliers of skill. If your base skill is zero, well, the product isn't great. But if you possess skill level 100, then you can really cook.
Put more bluntly, a person with excellent base coding skills and great LLM skills will always significantly outperform someone with low base coding skills and great LLM skills.
If I were writing code for a living, I'd have it generate code for me like crazy. But I'd direct it architecturally and I'd use my skills to verify correctness. But when learning something, I think it's better to use it differently.
IMHO. :)
Before the AI era, I didn’t know much bash, but I was a reasonably OK programmer otherwise, I think. I found that by getting AI to write me a lot of bash scripts, following along, and then making edits myself when I needed small things changed, I ended up able to write bash now, and I actually kind of appreciate it as a language, whereas before I thought it was confusing. YMMV
Like anything with enough dedication you can achieve what you want.
Probably, 10 years from now, it will be a flex if someone builds or does stuff without using AI, just like using a manual screwdriver instead of an impact driver now, or actually going to the library to research a topic instead of googling it.
I think this is the correct answer. Also, we technically never stop learning. There's always some new coding trick that eluded us until AI spits it out.
My 2 cents: Switch to chat mode from agent mode and have better chats about approaches to code. I'm constantly challenging AI to explain its code, including talking about the pros and cons of this or that method, and even the history of why certain new features were brought to JavaScript, for example. It's also fun to query the AI about performance optimisation, presuming we all want the fewest cycles used for a given procedure.
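One way to keep that kind of performance conversation honest is to measure before (and after) asking the model to optimise. A quick sketch using Python's standard-library `timeit` module; the two string-building approaches are just an illustrative example:

```python
import timeit

# Two ways to build a comma-separated string from 10,000 numbers.
def concat_loop():
    s = ""
    for i in range(10_000):
        s += str(i) + ","        # repeated concatenation in a loop
    return s

def join_comprehension():
    # build all pieces first, then join once
    return ",".join(str(i) for i in range(10_000)) + ","

assert concat_loop() == join_comprehension()   # same result either way

# Measuring settles "is it actually faster here?" before you ask the model why.
print("loop :", timeit.timeit(concat_loop, number=100))
print("join :", timeit.timeit(join_comprehension, number=100))
```

Having real numbers in hand turns "make it faster" into a much better chat: you can ask the model to explain the measured difference, or why a suggested optimisation didn't move the needle at all.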
I don't see why, even with AI, you won't need a solid understanding of the parts of computing that programming is built on top of.
Even if you're prompting, you need to know what to prompt for. How are you going to ask it to make something faster if you don't know it can be faster, or avoid wasting time trying to make something faster that can't be?
Go through something like cs classes from MIT and do the work.
A career in software development 30+ years later, and I'm back learning from day one again, because LLMs are profoundly changing how we do this.
Example: two years ago, I built a website as an MVP to test a hypothesis about our customers. It took me 6 weeks, didn't look good, but worked and we used it to discover stuff about our customers. This week I've vibe-coded a much better version of that MVP in an afternoon. This is revolutionary for the industry.
The state of the art on LLM coding is changing fast and by orders of magnitude. They still get things wrong and screw up, but a lot less than they did a year ago. I fully expect that in a couple of years [0] writing code by hand will be completely archaic.
So, what does this mean for people learning to code?
Firstly, that hand-rolling code will become artisanal, a hobby. Hand-coding a program will become like hand-carving a spoon; the point is not to produce the best spoon in the most efficient manner, but to create Art.
Secondly, that commercial coding as a career will revolve around collecting business requirements, translating them into prompts, and orchestrating LLM code engines. This will be a cross between "Product Manager", "Project Manager", and "Solution Architect" in current role definitions.
Thirdly, that at least for the next few years, understanding how code actually works and how to read it will be an advantage in that commercial career space. And then it'll be a disadvantage, unnecessary and potentially distracting. Soft social skills will be the primary factor in career success for this profession in the future.
The industry has been through similar changes before. Most obviously, the invention of compilers. Pre-compiler, programmers wrote machine code, and had to manage every single part of the operation of the computer themselves. Need a value from memory for an operation? You had to know where it was stored, clear a register to receive it, fetch it, and work out where to store the result. Post-compiler, the compiler managed all of that, and we were able to move to high-level languages where the actual operation of the computer was a couple of abstraction layers below where we're thinking. We no longer need to know the actual physical memory address of every value in our program. Or even manage memory allocation at all. The compiler does that.
And yes, there was a generation of programmers who hated this, and considered it to be "not real programming". They said the compilers would write worse, less efficient, programs. And for years they were right.
So, to answer your question:
> AI is clearly changing the game in terms of what it means to be a programmer.
> More concretely, what strategies do you use to be efficient and able to ship and move quickly, while also taking the time to really process, understand, and learn what you are doing?
Embrace the change. Learn to manage an LLM, and never touch the code. Just like you're not writing machine code - you're writing a high-level language and the compiler writes the machine code - the future is not going to be writing code yourself.
Good luck with it :)
[0] There are lots of questions around the finances and sustainability of the entire LLM industry. I'm assuming that nothing bad happens and the current momentum is maintained for those couple of years. That may not be the case.