(But maybe I'm out of touch!)
The amount of free training coming out on AI shows just how keen they are to push adoption to meet their targets.
Eventually this training will no longer be free as they pivot to profit.
https://en.wikipedia.org/wiki/DevOps_Research_and_Assessment
For the second time this week, I spent 45 minutes this morning reviewing a merge request where the guy had no idea what he did, didn't test, and let the LLM hallucinate a very bad solution to a simple problem.
He just had to read the previous commit, which introduced the bug, and think about it for a minute.
We are creating young people who have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.
Honestly I think AI is just a very very sharp knife. We’re going to regret this just like regretting the mass offshoring in the 2000s.
I'm a coding tutor, and the most frustrating part of my job is when my students use LLM-generated code. They have no clue what the code does (or even what libraries they're using) and just care about the pretty output. When I tried asking one of them questions about the code, he responded verbatim "I dunno" and continued prompting ChatGPT (I ditched that student afterward). Something like Warp, where the expectation is to not even interact with the terminal, is equally bad as far as I'm concerned, since students won't have any incentive to understand what's under the hood of their GUIs.
To be clear, I don't mind people using LLMs to code (I use them to code my SaaS project), but what I do mind is them not even trying to understand wtf is on their screen. This new breed of vibe coders is going to be close to useless in real-world programming jobs, which, combined with the push telling kids that "coding is the future", is going to result in a bunch of below-mediocre devs both flooding the market and struggling to find employment.
But our management has drunk the Kool-Aid and now obliges everybody to use Copilot or other LLM assistants.
...except when the C-suite is pressuring the entire org to use AI tools. Then these people are blessed as the next generation of coders.
AI usage like that is a symptom, not the problem.
This isn't about age. I'm in my 40s and my attention span seems to have gotten worse. I don't use much social media anymore either. I see it in other people too, regardless of age.
To be clear, the guy moved a Docker image back from being non-root (user 1000) to reusing a root user and `exec su`-ing into the user after doing some root things in the entrypoint. The only issue is that, looking at the previous commit, you could see that the K8S deployment using this image had wrongly changed the userId to 1000 instead of 1001.
But since the coding guy didn't take even a cursory look at why working things stopped working, he asked the LLM "I need to change the owner of some files so that they are 1001" and the LLM happily obliged in the most convoluted way possible (about 100 lines of code change).
The actual fix I suggested was:
securityContext:
- runAsUser: 1000
+ runAsUser: 1001
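For context, a minimal sketch of where that one-line change sits in a Deployment manifest, assuming a standard pod-level securityContext; the names and image here are illustrative, not from the actual repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      securityContext:
        runAsUser: 1001            # restore the uid the previous commit had wrongly changed to 1000
      containers:
        - name: app
          image: example/app:latest   # hypothetical image

The point being that the fix lives entirely on the deployment side; no root entrypoint gymnastics or 100-line chown dance needed in the image.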
If anything, AI helps expose shortcomings of companies. The strong ones will fix them. The weak ones will languish.
How do you propose that AI will do what you suggest, exposing companies' shortcomings? Right now, where it's being implemented, it's largely dictated from above, with little but FOMO driving it and no cohesive direction to guide its use.
This has nothing to do with AI, and everything to do with a bad hire. If the developer is that bad with code, how did they get hired in the first place? If AI is making them lazier, and they refuse to improve, maybe they ought to be replaced by a better developer?
There's going to be a massive opportunity for agencies that are skilled enough to come in and fix all of this nonsense when companies realize what they've invested in.
Cyberpunk future here we come baby
This is interesting -- it's helping in some cases and possibly making things worse in others. Does anyone have the details? (I haven't looked into the report yet.) Thanks.
Or the respondents have a hard time admitting AI can replace them :-)
I'm a bit cynical, but sometimes when I use Claude, it is downright frightening how good it is. Having coded for a lot of years, I'm sometimes a bit scared that my craft can, at times, be so easily replaced... Sure, it's not building all my code, it fails, etc., but it's a bit disturbing to see that something you have been trained in for a very long time can be done by a machine... Maybe I'm just feeling a glimpse of what others felt during the industrial revolution :-)
I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc. pretty fast with AI.
However, when I've asked AI to "identify performance problems or bugs in this code", I find it'll just make up nonsense, particularly if there aren't problems with the code.
And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.
That's not exactly it, I think. If you look through a repository's entire history, the deltas for the bug fixes and optimizations will be there. However, even a human who's not intimately familiar with the code and the problem will have a hard time understanding why the change fixes the bug, even if they understand the bug conceptually. That's because source code encodes neither developer intent, nor specification, nor real design goals. Which was the cause of the bug?
* A developer who understood the problem and its solution, but made a typo or a similar miscommunication between brain and fingers.
* A developer who understood the problem but failed to implement the algorithm that solves it.
* An algorithm was used that doesn't solve the problem.
* The algorithm solves the problem as specified, but the specification is misaligned with the expectations of the users.
* Everything used to be correct, but an environment change made it so the correct solution stopped being correct.
In an ideal world, all of this information could be somehow encoded in the history. In reality this is a huge amount of information that would take a lot of effort to condense. It's not that it wouldn't have value even for real humans, it's just that it would be such a deluge of information that it would be incomprehensible.
"this function should do X, spot inconsistencies, potential issues and bugs"
It's eye opening sometimes.
Much like in person, I pretend to think AI is much more powerful and inevitable than I actually think it is. Professionally it makes very little sense to be truthful. Sincerity won't pay the bills.
If only people could be genuinely critical without worrying they will be fired
And to be honest, I don't really care. It is a very comfortable position to be in. Allow me to explain:
I genuinely believe AI poses no threat to my employment. The only medium-term threat I identify is the very likely economic slowdown in the coming years.
Meanwhile, I am happy to do this silly dance while companies waste money and resources on what I see as a dead-end, wasteful technology.
I am not here to make anything better.
When I have AI generate code using features I'm very familiar with, I can see that it's okay but not premium code.
So it makes sense that I feel more productive but also a little skeptical.
Almost every person I worked with that is impressed by AI generated code has been a low performer that can’t spot the simplest bugs in the code. Usually the same developers that blindly copy pasted from stack overflow before.
Anyone got a pulse on what the art community thinks?
My (merged) PR rate is up about 3x since I started using Claude Code over the course of a few months. I correspondingly feel more productive and that I have a good grasp of what it can and cannot do. I definitely see some people use it wrong. I also see it fail on some tasks I'd expect it to succeed at, such as abstracting a singleton in an iOS app I am tinkering with, which suggests it's not merely operator error but also that its skill is uneven depending on task, ecosystem, and language.
I am curious, for those who use it regularly: have you measured your actual commit rates? That's of course still not the same as measuring long-term valuable output, but we're still a ways off from being able to determine that, imho.
I can dramatically increase my number of commits by breaking them up into very small chunks.
Typically when I am using AI, I tend to reduce the scope of a commit a lot to make it more focused and easier to handle.
1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)
2. AI does improve productivity, but only if you find your own workflow and what tasks it's good for, and many companies try to shoehorn it into things which just don't work for it.
3. AI does improve productivity, but people aren't incentivised to improve their productivity because they don't see returns from it. Hence, they just use it to work less and have the same output.
4. The previous one but instead of working less, they work at a more leisurely pace.
5. AI doesn't improve productivity; people just feel more productive because it requires less cognitive effort to use than actually doing the task.
Any of these is plausible, yet they have massively different underlying explanations... and studies don't really tell us which one is the case. I personally think it's mostly 2 and 3, but it could really be any of these.
I was very impressed when I first started using AI tools. Felt like I could get so much more done.
A couple of embarrassing production incidents later, I no longer feel that way. I always tell myself that I will check the AI's output carefully, but then end up making mistakes that wouldn't have happened if I wrote the code myself.
* Checking it closely myself, which sometimes takes just as long as it would have taken me to implement it in the first place, with just about as much cognitive load, since I now have to understand something I didn't write
* OR automating the checking by pouring on more AI, and that takes just as long or longer than it would have taken me to check it closely myself. Especially in cases where suddenly 1/3 of automated tests are failing and it either needs to find the underlying system it broke or iterate through all the tests and fix them.
Doing this iteratively has made building an app I'm trying to implement 100% with LLMs take at least 3x longer than it would have taken me to build it myself. That said, it's unclear whether I would have kept building this app without these tools. The process has kept me in the game, so there's definitely some value there that offsets the longer implementation time.
Why report to your boss that you managed to get a script to do 80% of your work, when you can just use that script quietly, and get 100% of your wage with 20% of the effort?
It is, from what I've seen. It has the same visible effect on devs as a slot machine giving out coins when it spits out something correct. Their faces light up with delight when it finally nails something.
This would explain the study that showed a 20% decline in actual productivity where people "felt" 20% more productive.
This option is insidious in that not only are the people asked about the effect initially oblivious, it is also very beneficial for them to deny the outcome altogether. Individual integrity may or may not overcome this.
In theory competition is supposed to address this.
However, our evaluation processes generally occur on human and predictable timelines, which is quite slow compared to this impulse function.
There was a theory that inter firm competition could speed this clock up, but that doesn't seem plausible currently.
Almost certainly AI will be used, extensively, for reviews going forward. Perhaps that will accelerate the clock rate.
I've personally witnessed every one of these, but those two seem like different ways to say the same thing. I would fully agree if one of them specified a negative impact to productivity, and the other was net neutral but artificially felt like a gain.
> Significant productivity gains: Over 80% of respondents indicate that AI has enhanced their productivity.
_Feeling_ more productive is in line with the one proper study I've seen.
Can we stop citing this study?
I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently (between June 13 and July 21, 2025), which means respondents were using the most recent SOTA models.
It's asking a completely different question; it is a survey of peoples' _perceptions of their own productivity_. That's basically useless; people are notoriously bad at self-evaluating things like that.
So in my case, yes, but not on the activities these sellers are usually claiming.
Yes, it's about AI. I'm interested to know what you were expecting. Was it titled or labeled differently 11 hours ago?
I am proudly part of the 10%!