Key findings:
• 85.7% have unresolved tensions (efficiency vs quality, convenience vs skill)
• Creatives struggle MOST yet adopt FASTEST
• Scientists have lowest anxiety despite lowest trust (they see AI as a tool, plain and simple)
• 52% of creatives frame AI through "authenticity" (using it makes them feel like a fraud)
Same data, different lens. Full methodology at bottom of page.
Analysis: https://www.playbookatlas.com/research/ai-adoption-explorer
Dataset: https://huggingface.co/datasets/Anthropic/AnthropicInterview...
Or maybe teach your LLM to fix itself. Starting rule set:
Everywhere I look, the adoption metrics and impact metrics are a tiny fraction of what was projected/expected. Yes, tech keynotes have their shiny examples of “success”, but the data at scale tells a very different story, and that's increasingly hard to brush under the carpet.
Given the amount of financial engineering shenanigans and circular financing, it’s unclear how much longer the present bonanza can continue before financial and business reality slams on the brakes.
People are absolutely torn. It seems that AI usage starts as a crutch, then it becomes an essential tool, and finally it takes over the essence of the profession itself. Not using it feels like a waste of time. There’s a sense of dread that comes from realizing that it’s not useful to “do work” anymore. That in order to thrive now, we need to outsource as much of our thinking to GPT as possible. If your sense of identity comes from “pure” intellectual pursuits, you are gonna have a bad time. The optimists will say “you will be able to do 10x the amount of work”. That might be true, but the nature of the work will be completely different. Managing a farm is not the same as planting a seed.
This is 180 degrees from how to think about it.
The higher the ratio of thinking to toil, the better. The more time you have to apply your intellect, with better machine execution to back it up, the more profit.
The Renaissance grand masters ran ateliers of apprentices and journeymen: the masters conceived, directed, critiqued, and integrated their work into commissioned art, signing their name at the end: https://smarthistory.org/workshop-italian-renaissance-art/
This is how to leverage the machine. It's your own atelier in a box. Go be Leonardo.
Granted, that's not everywhere. There are absolutely places where you will be recognized for doing amazing work. But I think many feel pressured to use AI to produce high volumes of sub-par work instead of small volumes of great work.
> see Bill Gates's famous quote about hiring lazy people
I think this is part of why all this is so contentious. There's been a huge culture shift over the last decade, and AI is really just a catalyst for it. We went from managers needing to stop engineers from using too much abstraction and optimizing what doesn't need to be optimized, to the engineers themselves attacking abstraction. Just look at how Knuth's "premature optimization is the root of all evil" went from "get a profiler before you optimize" to "optimization? Are you crazy?"

Fewer and fewer people I know are actually passionate about programming, and it's not uncommon to see people burned out and just wanting to do their 9-5. And I see a strong correlation between these people and embracing AI. It makes sense if you don't care and are just trying to get the job done. I don't think it's surprising things are getting buggier and innovation has slowed. We killed the passion and tried to turn it into a mechanical endeavor. It's a negative feedback loop.
LLM capabilities are tied to their model, and won't improve on their own. You learn the quirks of prompting them, but they have fixed levels of skill. They don't lie, because they don't understand concepts such as truth or deception, but that means they'll spout bullshit and it's up to you to review everything with a skeptical eye.
In this analogy, you aren't the master, you're one part client demanding work, one part the janitor cleaning up after their mistakes.
They trick the reptilian part of your brain into thinking you're dealing with something resembling a human being, but if they were one, they'd be described as a pathological liar and gaslighter. You can't go off on them for it, because they don't give a shit, and you shouldn't go off on them for it, because making a habit of that will make you a spiteful, unpleasant piece of shit for your coworkers to be around.
It's one thing when a machine or a tool doesn't function in the way you intend it to. It's another when this soulless, shameless homunculus does.
I'm a professional developer, and nothing interesting is happening to the field. The people doing AI coding were already the weakest participants, and have not gained anything from it, except maybe optics.
The thing that's suffocating is the economics. The entire economy has turned its back on actual value in pursuit of silicon valley smoke.
As an engineer who's led multiple teams, including one at a world-leading SaaS company, I don't consider myself one of the weakest participants in the field, and neither do my peers, generally. I'm long on agents for coding, and have started investing heavily in making our working environment productive not only for humans, but now for agents too.
For instance, we have a task runner in our project that provides a central point to do all manner of things like linting, building, testing, local deployment etc. The build, lint and test tasks are shared between local development and CI. The test tasks run the tests, take the TRX files and use a library to parse them to produce a report, so the agent can easily get access to the same info as CI puts out about test failures. The various test suites output reports under a consistent folder structure, and they write logs to disk under a consistent folder structure too. On failure the test tasks output a message to look at the detailed test reports and cross-reference them with the logs to debug the issue. Where possible the test reports contain correlation IDs inlined into the report.
With the above system, when the agent is working through implementing something and the tests don't pass, it naturally winds up inspecting the test reports, cross-referencing them with the logs, and solving the problem at a higher rate than if it just took a wild guess at how to run the tests and then did something random.
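To make that concrete, here's a rough sketch of what such a test task could look like (illustrative only: the folder layout, flags and hint text are assumptions, not the actual project):

    // Hypothetical "test" task for a .NET task runner. Sketch only: the folder
    // layout, flags, and hint text are illustrative, not the actual project.
    using System;
    using System.Diagnostics;
    using System.IO;

    static class TestTask
    {
        const string ReportDir = "artifacts/test-results"; // assumed report location
        const string LogDir = "artifacts/logs";             // assumed log location

        static int Main()
        {
            Directory.CreateDirectory(ReportDir);

            // Run the tests with a TRX logger so CI, humans and agents all read the same report.
            var psi = new ProcessStartInfo(
                "dotnet",
                $"test --logger \"trx;LogFileName=unit.trx\" --results-directory \"{ReportDir}\"");
            using var proc = Process.Start(psi)!;
            proc.WaitForExit();

            if (proc.ExitCode != 0)
            {
                // The message an agent sees on failure: a pointer to structured artifacts,
                // not an invitation to guess.
                Console.Error.WriteLine(
                    $"Tests failed. Inspect the TRX report under {ReportDir} and " +
                    $"cross-reference any correlation IDs with the logs under {LogDir}.");
            }
            return proc.ExitCode;
        }
    }

The design point is that the failure path always ends in a deterministic pointer to machine-readable artifacts, which is what lets the agent debug instead of guessing.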
Getting it to write its own guardrails by creating Roslyn Analyzers that fail the build when it deviates from the project architecture and conventions has been another big win.
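For a sense of what one of those guardrails can look like, here's a minimal analyzer sketch (the diagnostic ID, namespace prefix and the convention it enforces are made up for illustration; the real analyzers would encode your own project's rules):

    // Minimal Roslyn analyzer sketch. The diagnostic ID, namespace prefix, and the
    // convention it enforces (no direct reference to the persistence layer) are
    // invented for illustration.
    using System.Collections.Immutable;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using Microsoft.CodeAnalysis.Diagnostics;

    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public sealed class NoDirectPersistenceReferenceAnalyzer : DiagnosticAnalyzer
    {
        private static readonly DiagnosticDescriptor Rule = new(
            id: "PROJ0001",
            title: "Do not reference the persistence layer directly",
            messageFormat: "Namespace '{0}' must not be used outside the data access project",
            category: "Architecture",
            defaultSeverity: DiagnosticSeverity.Error, // fail the build, not just warn
            isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
            ImmutableArray.Create(Rule);

        public override void Initialize(AnalysisContext context)
        {
            context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
            context.EnableConcurrentExecution();
            context.RegisterSyntaxNodeAction(AnalyzeUsing, SyntaxKind.UsingDirective);
        }

        private static void AnalyzeUsing(SyntaxNodeAnalysisContext context)
        {
            var usingDirective = (UsingDirectiveSyntax)context.Node;
            var name = usingDirective.Name?.ToString() ?? string.Empty;

            // Hypothetical convention: only the data access project may use this namespace.
            if (name.StartsWith("MyCompany.Persistence"))
            {
                context.ReportDiagnostic(Diagnostic.Create(Rule, usingDirective.GetLocation(), name));
            }
        }
    }

Wiring an analyzer like this into the build turns a convention violation into a compile error the agent has to fix on the spot, rather than something a reviewer catches later.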
Tonnes of small things like that start to add up.
Next on my list is getting a debug MCP server, so it can set breakpoints and step through code etc.
It lets me concentrate on the parts I'm good at and ignore things I don't care about. Claude is a lot better at React than I am and I don't care.
Those are just not realistic numbers.
In my spare time I write some algorithmic C; you can check that stuff out on github (https://github.com/DelusionalLogic) if you're curious.
I was an early adopter of LLMs. I used to lurk in the old EleutherAI Discord and monitor their progress in reconstructing GPT-2 (I recall it being called GPT-J). I also played around a bunch with image generation. At that point nobody was really trying to apply them to code. We were just fascinated that it wrote back at all.
I have tried most of the modern models for development. I find they generate a lot of nonsensical and unexplainable code. I've had no success (in the 30 or so times I've tried) at getting any of the models to debug or develop even small features. They usually get lost in some "best practice" and start looping on that forever.
I don't really use stackoverflow either, I don't trust its accuracy, and it's easy to get cargo culted in software.
AI is basically a toy for 99% of us. It's a long long ways away from the productivity boost people love to claim to justify the sky high valuations. It will fade to being a background tech employed strategically I suspect - similar to other machine learning applications and this is exactly where it belongs.
I'm forced to use it (literally, AI usage is now used as a talent review metric...) and frankly, it's maybe helped speed me up... 5-10%? I spend more time trying to get the tools to be useful than I would just doing the task myself. The only true benefit I've gotten has been unit test generation. Ask it to do any meaningful work on a mature code base and you're in for a wild ride. So there's my anecdotal "sentiment".
I've said this in the past and I'll continue to say it - until the tools get far better at managing context, they will be hard locked for value in most use cases. The moment I see "summarizing conversation" I know I'm about to waste 20 minutes fixing code.
If I worked on different types of systems with different types of tasks I might feel the same way as you. I think AI works well in specific, targeted use cases, where some amount of hallucination can be tolerated and addressed.
What models are you using? I use Opus 4.5, which can one-shot a surprising fraction of tasks.
> so the model isn't the cause
Thing is, the prompts, those stupid little bits of English that can't possibly matter all that much? It turns out they affect the model's performance a ton.
I get it that some people just want to see the thing on the screen. Or your priority is to be a high-status person with a loving family, etc., etc. All noble goals. I just don't feel a sense of fulfillment from a life not in pursuit of something deeper. The AI can do it better than me, but I don't really care at the end of the day. Maybe super-corp wants the AI to do it then, but it's a shame.
And yet, the Renaissance "grand masters" became known as masters through systematizing delegation:
Surely Donald Knuth and John Carmack are genuine masters though? There's the Elon Musk theory of mastery where everyone says you're great, but you hire a guy to do it, and there's the <nobody knows this guy but he's having a blast and is really good> theory where you make average income but live a life fulfilled. On my deathbed I want to be the second. (Sorry this is getting off topic.)
Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all. Same with plenty of people we label as "masters" in hindsight. The mastery isn’t always in the craft itself.
What actually seems risky is anchoring your identity to being the best at a specific thing in a specific era. If you're the town’s horse whisperer, life is great right up until cars show up. Then what? If your value is "I'm the horse guy," you're toast. If your value is taste, judgment, curiosity, or building good things with other people, you adapt.
So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.
"taste, judgment, curiosity, or building good things with other people"
Taste is susceptible to turning into a vibes / popularity thing. I think success is mostly about (firstly just doing the basics like going to work on time and not being a dick), then ego, personality, presentation, etc... These things seem like unfulfilling preoccupations, not that I'm not susceptible to them like anyone else, so in my best life I wouldn't be so concerned about "success". I just want to master a craft and be satisfied in that pursuit.
I'd love to build good things with other people, but for whatever reason I've never found other people to build things with. So maybe I suck, that's a possibility. I think all I can do is settle on being the horse guy.
(I'm also not incurious about AI. I use AI to learn things. I just don't want to give everything away and become only a delegator.)
Edit: I'm genuinely terrified that AI is going to do ALL of the things, so there's not going to be a "survives the shift" except for having a likable / respectable / fearsome personality
I doubt Jobs would classify himself as a great programmer, so point being?
> So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.
That's like saying karate masters should drop the training and just focus on the gun? It does lose meaning.
No, they are rational. At least those with a lot of money.
> None of the investments going on have followed any semblance of fundamentals - it's all pure instinct and chasing hype
That's not what investments are about. Their fundamentals are whether they can get a good return on their money. As long as the odds of the next sucker buying them up exist, it is a good investment.
> AI is basically a toy for 99% of us.
You do pay for toys right? Toy shops aren't irrational?
So you're at the "first they laugh at us" stage then.
But I will give you this, the "first they ignore us" stage is over, at least for many people.
To contrast with this, my org tried using a simple QA bot for internal docs and has struggled to move anything beyond proof of concept. The proofs of concept have been awful: it answers maybe 60-70% of questions correctly. The major issue seems to be related to ingesting PDFs laced with images and poorly written explanations. To get decent performance from these RAG bots, a large FAQ has to be written for every question it gets wrong. Of course this is just my org, so it can't necessarily be extrapolated across the industry. However, how often have people come across a new team and found there is little to no documentation, poorly written documentation, or outdated documentation?
Where am I going with these two thoughts? Maybe the blocker to pushing more adoption within orgs is twofold: getting the correct context into the model, and having decent context to start with.
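To make the FAQ-patching workaround concrete, here's the rough shape it tends to take (a sketch with invented names; the retrieval and model calls are stand-ins for whatever the real stack uses):

    // Sketch of the "write an FAQ entry for every question the bot gets wrong" pattern.
    // Class and method names are invented; the retrieval and model calls are stand-ins.
    using System;
    using System.Collections.Generic;

    sealed class DocsQaBot
    {
        private readonly Dictionary<string, string> _curatedFaq;              // hand-written answers
        private readonly Func<string, IReadOnlyList<string>> _retrieveChunks; // stand-in for retrieval
        private readonly Func<string, IReadOnlyList<string>, string> _askModel; // stand-in for the LLM call

        public DocsQaBot(
            Dictionary<string, string> curatedFaq,
            Func<string, IReadOnlyList<string>> retrieveChunks,
            Func<string, IReadOnlyList<string>, string> askModel)
        {
            _curatedFaq = curatedFaq;
            _retrieveChunks = retrieveChunks;
            _askModel = askModel;
        }

        public string Answer(string question)
        {
            // 1. Curated overrides: every previously-wrong answer gets a hand-written entry.
            //    (A real system would match fuzzily or by embedding, not by exact string.)
            if (_curatedFaq.TryGetValue(question.Trim().ToLowerInvariant(), out var curated))
                return curated;

            // 2. Fall back to retrieval over the (often messy, image-laden) internal docs.
            var chunks = _retrieveChunks(question);
            return _askModel(question, chunks);
        }
    }

What the sketch mostly encodes is a maintenance burden: the curated layer grows with every wrong answer, which is exactly the curation lift described next.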
Extracting value from these things is going to require a heavy lift in data curation and developing the harnesses. So far most of that effort has gone into coding. It will take time for the nontechnical and technical sides to work together to move the rest of an org onto these tools, in my opinion.
The big bet of course then is ROI and time to adoption vs current burn rates of the model providers.
Agents are going to struggle with those same difficulties the way humans do too. You need to put work into making an environment productive to work in. After purposely switching my development workflow for the stuff I do outside of work to "AI first on mobile", which is such a bandwidth-constrained setup, it's really helping me find all the things to optimise for to increase the batting average and minimise the back and forth.
I'm not sure if you realise that those two sentences sound like 100% verbatim LLM output, or am I actually replying to a bot and not a human.
“Their headline?”
“Scientists are thriving. The workforce is managing. But creatives?”
“The top trust destroyer?”
“Create a web page infographic report that is convincing and boils down the essential truths of how people are feeling about AI in different professions and domains. Include statistics and numbers and some rolling/animated sound bite quotes.”
Here is my guess for the puzzle: creative work is subjective and full of scaffolding. AI can easily generate this subjective scaffolding to a "good enough" level so it can get used without much scrutiny. This is very attractive for a creative to use on a day to day basis.
But, given how much of the content wasn't created by the creative, the creative feels both a rejection of the work as foreign and a sense of being replaced.
The path is less stark in more objective fields: quality there is objective, so it's harder to just accept a merely plausible solution, and the scaffolding is just scaffolding, so who cares, as long as it does the job.
"creatives" tend to have a certain political tribe, that political tribe is well-represented in places that have this precise type of authenticity/etc. language around AI use...
Basically a good chunk of this could be measuring whether or not somebody is on Bluesky/is discourse-pilled... and there's no way to know from the study.
There is a broad range of opinions, but their expression seems to have been extremely chilled.