That this kind of writing puts a great number of us off is not important to many who seek their fortune in this industry.
I hear the cry: "it's my own words, the LLM just assisted me". Yes, we have to write prompts.
I'll let an LLM update code documentation or even write a README for my project but I'll edit that to ensure it doesn't express opinions or say things like "This is designed to help make code easier to maintain" - because that's an expression of a rationale that the LLM just made up.
I use LLMs to proofread text I publish on my blog. I just shared my current prompt for that here: https://simonwillison.net/guides/agentic-engineering-pattern...
I'm not shy to admit that LLMs even from 2 years ago could communicate ideas much better than me, especially for a general audience.
It’s like everything else that AI can do - looks fine at a glance, or to the inexperienced, but collapses under scrutiny. (By your own admission you’re not a great communicator… how can you tell then?)
Thankfully we don't have to know how to write well to enjoy a well written book.
A lot of the time, the inability to express an idea clearly hints at some problem with the underlying idea, or in one's conceptualisation of that idea. Writing is a fantastic way to grapple with those issues, and iron out better and clearer iterations of ideas (or one's understanding thereof).
An LLM, on the other hand, will happily spit out a coherent piece of writing defending any nonsense idea you throw at it. Nothing is learnt, nothing is gained from such "writing" (for either the author or the audience).
It doesn't come naturally to the more introverted type of person who cares about the object level problem and not whatever anyone else may know or doubt, I'll admit this. But slapping LLMs on it is not a great solution.
We should probably normalize publishing things in our native languages, and expecting the audience to run it through a translator. (I have been toying with the idea of writing everything in Esperanto (not my native language, but a favorite) and just posting links to auto-translated English versions where the translation is good enough).
EDIT: as someone with friends and family from Eastern Europe, I can tell you that the prevailing attitude is: "everything is bullshit anyway" (which, to be fair, has a lot of truth to it), and so it is no surprise that people would enthusiastically embrace a pocket-sized bullshit factory, hook it up to a fire-hose, and start spraying. We saw it with spam, and we see it now with slop. It won't stop unless the system stops rewarding it.
I doubt it; share something you wrote prior to, say... 2024.
It seems to bother people, perhaps because it may have been low-effort. Does it not matter as long as the content is good? Otherwise, it seems no different from a standard low-quality post.
"Why is everyone railing against my spam? Does it not matter as long as the deal I am offering is good?"
When people don't want the spam, it is irrelevant whether the spammer is offering a good deal or not.
I don’t think there will be a point in coming to this site if it’s just going to be slop on the front page all the time.
Maybe mods should consider a tag or flag for AI generated content submissions?
Like look at this paragraph:
> Junior engineers have traditionally learned by doing the simpler, more task-oriented work. Fixing small bugs. Writing straightforward features. Implementing well-defined tickets. This hands-on work built the foundational understanding that eventually allowed them to take on more complex challenges.
The first sentence was enough to convey everything you needed to know, but it kept on adding words in that AI cadence. The entire post is filled with this style of writing, which, even if it is not AI, is extremely annoying to read.
Here's another example from the blog:
> Here is something that gets lost in all the excitement about AI productivity: most software engineers became engineers because they love writing code.
> Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
can just be:
> Most software engineers became engineers because they love writing code. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Clarity is taught in every writing class, but AI-generated text always seems to have this weird cadence: The sound is loud. Not a whimper, not a roar, a simple sound that is very loud. And that's why... blah blah blah.
You have to care about your readers if you're writing something serious. Throwing a bunch of text that all means the same thing into your writing is one of the bigger sins you can commit, and that's why most people hate reading AI writing.
The part you'd like to remove ("Not managing code...") may not be required to convey the objective meaning of the sentence, but humans have emotions, too. I could have written stuff like that, to build up a bigger emotional picture.
> The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended.
This sentence may not be relevant for whatever you experience to be the relevant message of the text. But it still says something the remaining paragraph does not. And also something I can relate to.
Also, as LLMs are statistical models, one has to assume that they write like this because their training data tells them to. Because humans write like this. Not when they do professional writing maybe, but when they just ramble. Not all blogs are written by professionals. I'd say most aren't. LLM training data consists mostly of humans rambling.
I also sometimes write long comments on the internet. And while I have no example to check, I feel like I do write such sentences, expanding on details to express more emotional context. Because I'm not a robot and I like writing a lot. I think it's a perfectly human thing to do. I find it sad that "writing more than absolutely needed" is now regarded as a sign of AI writing.
I keep seeing this assertion and I keep responding "Please, point to the volume of writing with this specific cadence that has a date prior to 2024" and I keep getting... crickets!
You're asserting that this is a common way for humans to write, correct? Should be pretty easy, then, to find a large volume of examples.
I would read the hell out of Joyce’s Perl 5 documentation, but only after six or seven beers.
5 sentence paragraph. First sentence is parataxis claim. Followed by 3 examples in sentence fragments, missing verbs, that familiar cadence. Then the final sentence, in this case also missing a verb.
Pure AI slop.
Reading AI code is very pleasant. It's well annotated and consistent - how I like to read code (although not how I write code LOL). Reading language/opinions is not meant to be this way. It becomes repetitive, boring, and feels super derivative. Why would you turn the main way we communicate with each other into a soulless, tedious, chore?
I think with coding it's because I care about what the robot is doing. But with communication, I care about what the person is thinking in their mind, not about the robot's interpretation of it. Even if the person's mind isn't as strong. At least then I can size the person up - which is the other reason understanding each other is important, and why it is ruined when you put a robot in between.
If you're talking to someone on the phone and halfway through they identify themselves as a bot, surprising you, there's a profound sense of something like betrayal. A moment ago you were having a human connection, and suddenly that vaporized. You were misled and were just talking to an unfeeling robot.
And heartfelt writing is similar. We imagine the human at the other side of the screen and we relate. And when we discover it was a bot, no matter how accurate the sentiment, that relationship vanishes.
But with math and software, it's already sterile from a human connection perspective. It's there for a different purpose. Yes, it can be beautiful, but when we read it we don't tend to build a human connection with the coder.
An interesting exception is comments. When we read the fast inverse square root code and see the "what the fuck..." comment, we instantly relate to the person writing the software. If we later learned that comment was generated by an LLM, we'd lose that connection, again.
IMHO. :)
Not so sure about the respect aspect: I have lots of self-respect, but I don't generally broadcast respect for random other people when I write my blogs - the most recent one even called readers stupid, IIRC!
I feel it's more a matter of expression of contempt: if you can't be bothered to write it, WTF are you expecting people to read it?
I hate it. I couldn't read much more after that.
I see the post is even flagged now.
Irrespective of who wrote it or how it was written, the essay is packed with wisdom.
I’ve been programming for 30+ years and leading teams for the last 20 - and I found the essay deeply insightful.
I realise I’m a sample size of 1, but just figured I’d comment here to advocate against this post being flagged. Surprised that it is.
Looks like something AI would say, regardless of how it really was written.
Admittedly it was so long and basic, I stopped halfway.
It probably was
A better question is "Why can't the devs producing code with AI spot the same poor patterns in the code they are generating?"
Maybe my point is that, to a poor speaker of English, the AI blog post looks good and reads well. In much the same way, to a poor programmer, AI-produced code looks good and reads well.
In a nutshell, if it generates poor English, WTF would anyone think it generates anything but poor code?
That's probably just default settings, though - I asked it to rewrite, and most of the tell-tale signs are gone as far as I can see (apart from the em-dash).
A surgeon (no coding experience) used Claude to write a web app to track certain things about procedures he had done. He deployed the app on a web hosting provider (PHP LAMP stack). He wanted to share it with other doctors, but wasn't sure if it was 'secure' or not. He asked me to read the code and visit the site and provide my opinion.
The code was pretty reasonable. The DB schema was good. And it worked as expected. However, he routinely zipped up the entire project and placed the zip files in the web root and he had no index file. So anyone who navigated to the website saw the backups named Jan-2026.backup, etc. and could download them.
The backups contained the entire DB, all the project secrets, DB connection strings, API credentials, AWS keys, etc.
He had no idea what an 'index' file was and why that was important. Last I heard he was going to ask Claude how to secure it.
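To show how little effort it takes to stumble onto a misconfiguration like that, here is a minimal Python sketch of a scanner for guessable backup files in a web root. The filename pattern is borrowed from the `Jan-2026.backup` example in the story; the example domain and everything else are assumptions, not details from the original anecdote.

```python
# Hypothetical sketch: probe a site for publicly downloadable backup
# archives named like "Jan-2026.backup". Only guesses obvious names;
# a real scanner would try far more patterns.
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def candidate_backup_urls(base_url, years=(2025, 2026)):
    """Generate guessable backup-file URLs under the web root."""
    for year in years:
        for month in MONTHS:
            yield urljoin(base_url, f"{month}-{year}.backup")

def find_exposed_backups(base_url, timeout=5):
    """Return the candidate URLs that answer a HEAD request with 200."""
    exposed = []
    for url in candidate_backup_urls(base_url):
        try:
            req = Request(url, method="HEAD")
            with urlopen(req, timeout=timeout) as resp:
                if resp.status == 200:
                    exposed.append(url)
        except (HTTPError, URLError):
            pass  # 404s and connection errors mean "not exposed"
    return exposed
```

The point is not the script itself but that anyone on the internet can run something like it; backups belong outside the web root, and a directory without an index file should have listing disabled.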
1) I guess I am not included in the set named "most software engineers."
2) If the title is "Software Engineer," I think I should be engineering, not coding.
This has probably been beaten to death, but I think the biggest discriminating question between "pro AI" and "against AI" in the software world is: "Do you do this because you like writing code, or because you like building things for the world?"
Of course I don't think it's a binary decision.
Although I am more motivated by building things, I do somewhat miss the programmer flow state I used to get more often.
One concrete example of this realization was when I was researching how to optimize my claude code environment with agents, skills, etc. I read a lot of technical documents on how these supplemental plugins work and how to create them. After an hour of reading through all this, I realized I could just ask Claude to optimize the environment for me given the project context. So I did, and it was able to point out plugins, skills, agents that I can install or create. I gave it permission to create them and it all worked out.
This was a case where I should not think more deeply at the technical level, but at a more "meta" level: define the project well enough for Claude to figure out how to optimize the environment itself. Whether that gave real gains is another question, of course. But I have anecdotally observed faster results and less token usage, due to context caching and slightly more tool-directed prompts.
Prompt: "The text of the blog article is copy-pasted below. Please summarize it in one paragraph."
Result: "The blog article explores the paradox that while AI has made coding easier by automating many tasks, it has also made the role of software engineers significantly harder. AI tools have raised expectations for productivity, leading to increased workloads and burnout without corresponding support or recognition. Engineers, especially juniors, are now expected to take on broader responsibilities—ranging from product thinking to deployment—while the hands-on act of writing code has diminished. This shift in roles and expectations is often unacknowledged by leadership, creating a disconnect between those who use AI tools and those who manage the work. The article emphasizes the growing complexity of the engineer's job, the burnout that results from unrealistic demands, and the identity crisis many engineers face as their core role changes. It calls for better leadership, structured training, and realistic expectations to avoid the erosion of trust and talent in the industry."
Whatever. Blog posts about AI that dance around the topic are not that valuable. For a project, you'll hire somebody, not some AI (as bcachefs' author does). What matters is that you trust a person to do the job, and expecting them to churn code is useless. The real benefits of someone appear on their CV (experience, known technologies, human abilities -- communication, empathy, understanding).
AI generated content is an existential threat to human knowledge.
In what way? False positives or false negatives?
No jobs get easier with automation - they always move a step up in abstraction level.
An accountant who was super proficient at adding numbers could no longer rely on those skills once the calculator was invented.
This is the key. I haven't found that things have become harder. The hard parts are still hard, and those have been the most important and prominent parts of my job once I reached a certain level.
However, I do wonder how we will train juniors to become seniors. Perhaps the answer is that the curriculum changes from coding and data structures to architecture and design which was typically a last minute addition in college.
That said, there are plenty of amateurs who find coding approachable and system design daunting. For them, eliminating coding and moving the focus to system design would be a nightmare.
I dunno about that. Look at blogging as an example - AI took away the "easy"[1] part of blogging, and now we are left with 90% crap AI-generated "articles" like the one you just read.
I feel it's the other way around - AI took away the hard parts, of both blogging and programming, and now what we have to look forward to every single damn day is a deluge of AI slop of absolutely poor quality.
Continuing with the literature analogy (because this article was written by an AI), adding AI as a tool for authors isn't producing the next Terry Pratchett quicker, it's delaying the production of the next Terry Pratchett because the next Terry Pratchett will be drowned out by an unstoppable volume of AI slop.
After all, if you can't recognise obvious AI blog posts, what makes you think you can recognise poor code?
---------------------
[1] I am using the term as you are using it. I don't really believe that it took away the easy part.
I don't think this is true. I'm pretty sure most of them do it because it pays a good salary.
I was always a mediocre engineer, and giving up on a personal project usually happened because "feature XYZ is way too hard to build and I won't spend another three weeks on it". Nowadays anything can be built in a couple of days; scope creep plus "would be cool if it could also do XYZ" makes it harder to walk away from a project and call it done.
But of course these are personal projects, and I use them daily (like a personal workout system and tracker which I run with Claude Code, and which I love to call Claude Co-Workout). It doesn't "work" as a standalone app. It's mostly a "display system" for whatever CC outputs to me, so I can take the daily workout to the gym.
I got into software bc I liked to put out fun products and projects; I never really liked the process of writing software itself. But either way I'm still running into the "it's harder to put projects out than ever" dilemma, even though the projects are way easier to make, and higher quality than ever.
I'm wondering if it'd be fun to have an "Ask HN: Show us what you've built with (mostly) AI" thread?
These, surely, are the skills they always needed? Anyone who didn't have these skills was little more than a human chatgpt already, receiving prompts and simply presenting the results to someone for evaluation.
I'd say this: if you really want to be a real engineer, you should avoid many career paths out there. ANY position DIRECTLY facing business stakeholders is at best not a good choice, and at worst deprives you of your already remote chance to be a good engineer. The lower the level you move into, the better, because the environment FORCES you to be a true engineer - either you don't and fail, or you do and keep the job.
The scenario I'm somewhat worried about is that instead of 1 PM, 1 designer and 5 developers, there will be 1 PM, 1 designer and 1 developer. Even if tech employment stays stable or even slightly increases due to Jevons paradox, the share of software developers in tech employment will shrink.
Maybe this is not entirely true yet, but it most likely will be in the near future.
Can they really? Engineering is about keeping the whole picture in mind so that you know which lever to push and which to not push for a certain goal. Trying until you're lucky can get you to that goal, but it's costly and not sustainable. So you need someone that can work out a model for experimentation in a less costly manner.
Judgment in this case is about deciding which path to direct the project, tradeoffs is being aware that there are other paths that are better in some aspects. And responsibility is acknowledging that a bad decision will bear a personal cost.
Everyone does the above in their own domain. But I don't think I've ever seen a manager wanting to do it in the engineering domain. It's more about pushing the engineer to accept the responsibility while denying them the power of judgment.
This resonates somewhat, but for a different reason. My mental model is that there are two kinds of developers, the craftsmen and the artists.
The artist considers the act of writing code their actual fulfillment. They thrive on beautifully written code. They are often attached to their code to a point where they will be hurt if someone criticizes (or even deletes) it.
The craftsman understands that code exists to serve a purpose and that is to make someone's life easier. This can be a totally non-technical customer/user that now can get their work done better. It could be another developer that benefits from using a library we wrote.
The artist hates LLMs as it takes away their work and replaces their works of beauty with generic, templatized code.
The craftsman acknowledges that LLMs are another tool in the toolbelt and using them will make them create more benefits for their customers.
Interestingly, most jobs don't incentivize working harder or smarter, because it just leads to more work, and then burn-out.
[1] https://en.wikipedia.org/wiki/Automation#Paradox_of_automati...
What I never enjoyed was looking up the cumbersome details of a framework, a programming language or an API. It's really BORING to figure out that tool X calls paging params page and pageSize while Y offset and limit. Many other examples can be added. For me, I feel at home in so many new programming languages and frameworks that I can really ship ideas. AI really helps with all the boring stuff.
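That page/pageSize versus offset/limit mismatch is exactly the kind of boring detail a tiny adapter can absorb. A minimal sketch, assuming those two naming conventions are the only difference (the function names are mine, not from any particular framework):

```python
# Hypothetical sketch: translating between the two common pagination
# conventions mentioned above.

def page_to_offset(page, page_size):
    """Convert 1-based page/pageSize params to offset/limit."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    return {"offset": (page - 1) * page_size, "limit": page_size}

def offset_to_page(offset, limit):
    """Convert offset/limit back to the containing 1-based page."""
    if offset < 0 or limit < 1:
        raise ValueError("offset must be >= 0 and limit >= 1")
    return {"page": offset // limit + 1, "pageSize": limit}
```

For example, page 3 with a page size of 10 corresponds to an offset of 20 and a limit of 10. Looking up which name each API uses is still tedious, which is where the LLM earns its keep.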
AI makes using them a breeze.
I can actually build nice UIs as a traditional ML engineer (no more streamlit crap). People are using them and are genuinely impressed by them.
I can fly through Rust and C++ code, which used to take ages of debugging.
The main thing that is clear to me is that most of the ecosystem will likely converge toward Rust or C++ soon. Languages like Python or Ruby or even Go are just too slow and messy, why would you use them at all if you can write in Rust just as fast? I expect those languages to die off in the next several years
I think there's a big split between those who derive meaning and enjoyment from the act of writing code or the code itself vs. those who derive it from solving problems (for which the code is often a necessary byproduct). I've worked with many across both of these groups throughout my career.
I am much more in the latter group, and the past 12mo are the most fun I've had writing software in over a decade. For those in the first group, it's easy to see how this can be an existential crisis.
If you give an AI a very general prompt to make an app that does X, it could build that in any imaginable way. Someone who doesn't know how these things are done wouldn't understand what way was chosen and the trade-offs involved. If they don't even look at the code, they have no idea how it works at all. This is dangerous because they are entirely dependent on the AI to make good decisions and to make any changes in the future.
Someone who practices engineering by researching, considering their options, planning and designing, and creating a specification, leaves nothing up to chance. When the prompt is detailed, the outcome is constrained to the engineer's intent. If they then review the work by seeing that it wrote what they had in mind, they know that it worked and they know that the system design matches their own design. They know how it works because they designed it and they can modify that design. They can and have read the code so they can modify it without the help of the AI.
If you know what code you want generated, reviewing it is easy - just look and see if it's what you expected. If you didn't think ahead about what the code would look like, reviewing is hard because you have to start by figuring out what the codebase even does.
The same goes for working in small iterations rather than prompting an entire application into existence. We all know how difficult it is to review large changes and why we prefer small ones. Those same rules apply to iterations regardless of whether the code was written by a person or an AI.
AI code generation can be helpful if the engineer continues acting as an engineer. It's only when someone who isn't an engineer or when an engineer abdicates their responsibilities to the AI that we end up with an unmaintainable mess. It's no different than amateurs writing scripts and spreadsheets without a full understanding of the implications of their implementation. Good software comes from good engineering, not just generating code; the code is merely the language by which we express our ideas.
In the past, I would give them an assignment and they would take a few days to return with the implementation. I was able to see them struggling, they would learn, they would communicate and get frustrated by their own solution, then iterate.
Today, there are two kinds: 1) the ones who take a marginally smaller amount of time because they’re busy learning, testing and self reviewing, and 2) the ones who watch Twitch or Youtube videos while Claude does the job and come to me after two hours with “done, what’s next” while someone has to comb through the mess.
Leadership might see #2 and think they’re better, faster. But they are just a fucking boat anchor that drags down the whole team while providing nothing more than a shitty interface to an LLM in return.
A. Measurably demonstrate that at least 50% of code/tests are AI generated.
B. X% faster delivery timelines due to improved productivity tools.
You can't expect to make a pizza in 50% less time just because you bought a faster dough maker. Especially when you don't even know whether the dough comes out under-kneaded, over-kneaded, or as plain lumps!
That can't be right?
I stopped here. Was this written by an LLM? This sentence in particular reads exactly as if the author supplied said essay as context and this sentence is the LLM's summarization of it. Nowhere is the original article linked, either, further decreasing trust. Moreover, there's an ad at the bottom for some BS "talent" platform to hire the author. This article is probably an LLM-generated ad.
My trust is vacated.
This makes me feel that the SWE work/identity crisis is less important than the digital trust crisis.
So for me, being able to have AI write certain things extremely fast, with me just doing voice-to-text with my specific approach, is amazing.
I am all in on everything AI and have a Discord server just for openclaw and specialized per-repo assistants. When I'm busy, it really feels like I can just throw it an issue tracker number.
Then I will ssh in via VS Code or regular ssh, which forwards my ssh key from 1Password. My agents have read-only repo access and I can push only when I ssh in. Super secure. Sorry for the tangent from the article, but I have always loved coding, and now I love it even more.
> That is not an upgrade. That is a career identity crisis.
This is not X. It is Y.
> The trap is ...
> This gap matters ...
> This is not empowerment ...
> This is not a minor adjustment...
Your typical AI slop rhetorical phrasing.
Phrases like: "identity crisis", "burnout machine", "supervision paradox", "acceleration trap", "workload creep"
These sound analytical but are lightly defined. They function as named concepts without rigorous definition or empirical grounding.
There might be some good arguments in the article, but AI slop remains AI slop.
> AI is an in-context learner, not a standards enforcer.
> The AI is not judging your code. It is learning from it.
> Speed without structure is not speed. It is borrowed time.
> This is not about premature optimization or over-engineering. It is about giving the AI the patterns it needs to work effectively on your behalf.
> This is not a theoretical distinction. It is the single most important practical reality of working with AI coding tools in 2026.
Its not this, its that.
> But here is the part nobody wants to hear: the reverse is equally true.
> The result was transformative.
> Here is why.
If you want, I can provide N=3 more examples with the same AI patterns and phrases.
Can you point to examples of these patterns with the same frequency in any written content dated any time prior to 2024?
But I have no issue with your argumentation whatsoever, it is just that I think there is more than sufficient evidence, and you think there is not.
In any case, I think we should start treating the majority of code as a commodity that will be thrown away sooner or later.
I wrote something about this here: https://chatbotkit.com/reflections/most-code-deserves-to-die - it was inspired by another conversation on HN.
It never was
That's different than saying a lot of people *believed* writing code was the hardest/most important part.
LLMs can accelerate you if you use best practices and focus on provability and quality, but if you produce slop, LLMs will help you produce slop faster.
... most software engineers became engineers because they love writing code. Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
Actually surprised none of the other comments have picked up on this, as I don't think it's especially about AI. But the periods of my career when I've been actually writing code and solving complicated technical problems have been the most rewarding times in my life, and I'd frequently work on stuff outside work time just because I enjoyed it so much. But the other times when I was just maintaining other people's code, or working on really simple problems with cookie-cutter solutions, I get so demotivated that it's hard to even get started each day. 100%, I do this job for the challenges, not to just spend my days babysitting a fancy code generation tool.
Is this still true?
"This is not a minor adjustment. It is a fundamental shift in professional identity. "
"That is not empowerment. That is scope creep without a corresponding increase in compensation"
Honestly, it's lazy. At least edit the bloody thing.
A SWE who bases their entire identity and career around only writing code is not an engineer - they are a code monkey.
The entire point of hiring a Software ENGINEER is to help translate business requirements into technical requirements, and then implement the technical requirements into a tangible feature or product.
The only reason companies buy software is because the alternative means building in-house, and for most industries software is a cost-center not a revenue generator.
I don't pay (US specific) 200K-400K TCs for code monkeys, I pay that TC for Engineers.
And this does a disservice to the large portion of SWEs and former SWEs (like me) who have been in the industry because we are customer-outcome driven (how do we use code to solve a tangible customer need) and not here to write pretty code.
Look, AI/ML and especially LLMs are powerful, but there does remain a degree of instability and non-determinism which will require human intervention to remediate.
That said, there is a lot of dev work in companies that is a cost-center, and those are the portions that will start getting vibe coded and deployed in product with little-to-no oversight (eg. a support portal for SMBs at an enterprise), but the equivalent feature would have already been an afterthought even without LLMs and probably given to a couple SWEs we'd be fine re-orging in a quarter anyhow.
> but there does remain a degree of instability and non-determinism which will require human intervention to remediate.
I agree.
I mean, it depends on the feature/product and how critical it is to the health of the business.
Like I mentioned in my edited comment, there is a lot of dev work in companies that is a cost-center, and those are the portions that will start getting vibe coded and deployed in product with little-to-no oversight (eg. a support portal for SMBs at an enterprise). The equivalent feature would have already been an afterthought even without LLMs, and probably given to a couple of SWEs we'd be fine re-orging in a quarter anyhow, because we cannot justify spending $500K-750K a year (the backend cost of 3 FT SWEs or contractors for a company) on a customer form which nets $0 in revenue and is not directly tied to pipeline generation.
Leaders thinking they will basically prompt out new revenue-generating features with no human engineers to "figure it out". Not cost centers, not low-hanging fruit, etc. No, these are not giant corps like Google or whatever, and they are likely run by morons, but it was easier when they did not think they were "empowered". There is no opportunity for engineers to "think in higher abstractions" or whatever in these cases.
Yeah, and I'm telling you, as one of those leaders, that most of the leaders I am meeting with at non-tech enterprises know this is unrealistic.
I think the issue is, a lot of SWEs think their work actually matters to the bottom line (PMs and execs will massage their egos - I'm guilty of doing this as well), but in reality it doesn't, because they are working on a cost-center product or feature.
Every SWE on HN should sit down and ask themselves whether or not:
1. The feature they are working on directly generates revenue for their employer.
2. If it does, does it generate revenue equivalent to at least 1% of overall revenue per year.
3. Whether the cost of your team of SWEs+PMs is putting the feature/product in the red (ie. if you are 3 Eng and 1 PM working on a product whose revenue is only $500K/yr).
If all of those questions are negative, your product/feature is at risk from LLMs but was already at risk of being offshored.
it's all so fucking tiresome
THE MARKET WILL FILL THAT VOID
IT DOES NOT MAKE IT TRUE
Also, check out the dude's linkedin: https://www.linkedin.com/in/ivanturkovic/
Another little thing that resonated was a tweet that said "some will use it to learn everything and some so that they don't have to learn anything". Of course it's not really a hard truth. It's questionable how much you can learn without really getting your hands dirty. But I do think people treating it as a tool that helps them and/or makes them better will profit more than people looking to cut corners.
> Not managing code. Not reviewing code. Not supervising systems that produce code. Writing it. The act of thinking through a problem, designing a solution, and expressing it precisely in a language that makes a machine do exactly what you intended. That is what drew most of us to this profession. It is a creative act, a form of craftsmanship, and for many engineers, the most satisfying part of their day.
> Now they are being told to stop.
Yeah, so what I've been realizing from witnessing the Rise of the Agents™ is that there are tons of developers that actually don't like writing code and were in it for the money all along. Nothing wrong with money --- I love the green stuff myself --- but it definitely sucks to have their ambivalence (at best) or disdain (at worst) for the craft imposed on the rest of us.
Feel free to replace `writing code` for most work functions that are enjoyable for some that are being steamrolled by Big AI atm (writing, graphic design, marketing copy, etc.).
And yes, there are also traditionalists who think the old ways are the best ways.
"Write me a feature that does _x_" isn't satisfying for me, and, like the author said in the post, it sucks that people that think otherwise are telling me that my way is the "old way", as you put it.
(It's doubly-ironic for me, as I actually like writing documentation!)