Anthropic Economic Index report: economic primitives (anthropic.com)
126 points | 1 day ago | 10 comments
adverbly
1 day ago
This is very cool but it's not quite what I expected out of economic primitives.

I expected to see measures of the economic productivity generated as a result of artificial intelligence use.

Instead, what I'm seeing is measures of artificial intelligence use.

I don't really see how this is measuring the most important economic primitives. Nothing related to productivity at all actually. Everything about how and where and who... This is just demographics and usage statistics...

p1necone
1 day ago
> I expected to see measures of the economic productivity generated as a result of artificial intelligence use.

>Instead, what I'm seeing is measures of artificial intelligence use.

Fun fact: this is also how most large companies are measuring their productivity increases from AI usage ;), alongside asking employees to tell them how much faster AI is making them while simultaneously telling them they're expected to go faster with AI.

xkcd-sucks
1 day ago
When your OKRs for the past year include "internal adoption of AI tools"

BobbyJo
8 hours ago
It is weird, right? I don't remember any other time in my career when I've been measured on how I'm doing the work.

In my experience, "good management" meant striving to confine measurement as much as possible to output/productivity.

brandonmenc
8 hours ago
The generous interpretation is that it's meant to incentivize "carpenters who refuse to use power tools" for their own good.

no_wizard
8 hours ago
Nice way to make all that data meaningless. I already know some people whose jobs have pushed adoption of AI tools, and it's clear that whether or not it meaningfully impacts their speed, it's not going to do them any favors to say it doesn't - even when it genuinely doesn't.

hazyc
1 day ago
productivity is such a nebulous concept in knowledge work - an amalgamation of mostly-qualitative measures that get baked into quantitative measures that are mostly just bad data

adverbly
12 hours ago
Economic productivity is absolutely not nebulous. It's a measure of GDP per hour worked.

https://ourworldindata.org/grapher/labor-productivity-per-ho...
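
For concreteness, the standard measure behind that chart is just

    \text{labor productivity} = \frac{\text{real GDP}}{\text{total hours worked}}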

mr_toad
8 hours ago
And in a business you can easily measure total profit and divide by total hours worked.

It's when you try to break it down to individual products and cost centers that it comes unstuck. It's hard, if not impossible, to measure the productivity of the various teams contributing to one product, let alone a range of different products.

reactordev
1 day ago
You can thank agile for that

janwirth
1 day ago
You don't seem to like agile, whatever that word even means.

reactordev
20 hours ago
On the contrary. I like agile for when you don't know exactly what you're building but need to react quickly to change and try to capture it.

Moving fast and breaking things: that's agile.

On the other hand, when you know what you want to build but it's a very large endeavor that takes careful planning and coordination across departments, the traditional waterfall method still works best.

You can break that down into an agile-fall process with SAFe and Scrum of Scrums and all that PM mumbo jumbo if you need to. Or just kanban it.

In the end it's just a mode of working.

PunchyHamster
11 hours ago
Knowing exactly what you want to build is pretty rare, and is pretty much limited to "rewriting an existing system" or some similarly narrow set of projects.

In general, delaying infrastructure decisions as long as possible in the process usually yields better infrastructure, because the farther along you are, the more knowledge you have about the problem.

...that being said, I do dislike how agile gets used as an excuse for not doing any planning where you really should, and where you have enough information to at least pick a direction.

reactordev
8 hours ago
If someone comes to you and says: "I want to build a platform that does WhizzyWhatsIt for my customers. It has to be on AWS so it's mingled with my existing infrastructure. It needs to provide an admin portal so that I can set WhizzyWhatsIt prices and watch user acquisition make my bank account go brrrrrtt. It needs the ability for my quasi-illegal affiliate marketing ring to whitelabel it and brand it as their own for a cut of the profits."

This is obviously satire, but there's a clear ask and some features, and from there you know what you need to have to even achieve those features. What project management process would you employ? Agile? Waterfall? Agile-fall? Kanban? Call me in 6 months?

salynchnew
1 day ago
So.... motivated reasoning makes the world go 'round?

https://en.wikipedia.org/wiki/Motivated_reasoning

Animats
7 hours ago
This is more like an internal marketing study. Nothing wrong with that, but it's being hyped as more than that.

johnrob
1 day ago
Until AI is used to generate new revenue streams (i.e. acquire new customers), I don’t think the economic impact is going to impress. My two cents.

mr_toad
8 hours ago
People used to say the same things about computers. Even back in the early 90s people still questioned the value of computers in the workplace.

no_wizard
8 hours ago
What people, exactly? You could see the introduction of desktop computing and other types of computing in industry produce double-digit increases in productivity, all other things being equal.

Any organization that properly adopted computers found out quickly how much they could improve productivity. The limiting factor was always understanding.

The trouble with AI tools is they don't have this trajectory. You can be well versed in using them, know all the best practices and where they apply, and still get at best uneven gains. This is not the introduction of desktops 2.0.

w10-1
10 hours ago
"wantin' ain't gettin'": you might find productivity more important, but they didn't sign up for that.

They define primitives as "simple, foundational measures of how Claude is used". They're not signing up to measure productivity, which would combine usage with displacement, work factoring, and a whole host of things outside their data pool.

What's the point? They're offering details on usage patterns relative to demographics that can help anyone assessing Anthropic's business and the utility of LLM-based AI. Notably, tasks and usage are concentrated in certain industries (mainly software) and localities (mainly correlated with GDP and the Gini index). This enables readers to project how much usage growth can be expected.

As far as I know, no one publicly offers this level of data on their emerging businesses - not Google, eBay, Apple, Microsoft, Amazon, Nvidia, or any of the many companies that have reshaped our technical and economic landscape in the last 30 years.

Normally we measure value with price and overall market size (productivity gains are but one way that clients can recoup the price they paid). But during this build-out of AI, investors (of all stripes) are subsidizing costs to get share, so until we have stable competitive markets for AI services, value is an open question.

But it's clear some businesses feel that AI could be a strategic benefit, and they don't want to be left behind. So there is a stampede, as reflected in the neck-and-neck utilization of chat vs API.

verisimi
2 hours ago
Reframing the meaning of words so that your product's usage becomes the metric - e.g. "economy" = "Claude" - makes this some sort of 'Claude promo pack'.

PunchyHamster
11 hours ago
I think we can surmise how bad that looked from the omission...

sinnsro
11 hours ago
I wonder if it is even possible to get such measurements. With so many things affecting output, how can one establish a baseline or avoid comparing apples to oranges?

amelius
12 hours ago
I expected a simulation of the economy using economic primitives and AI.

kurttheviking
1 day ago
agree, was similarly hoping for something akin to a total factor productivity argument

fuzzfactor
1 day ago
>expected to see measures of the economic productivity

I know what you mean.

Imagine my disappointment when I was expecting their unique approach and brainpower to have arrived at a straightforward index of overall world macroeconomic conditions rather than an internal corporate outlook for AI alone.

bicepjai
2 hours ago
Recently I listened to an interview with a serial SaaS startup CEO, and one piece of advice clicked for me: “Get out there, talk to your customers, and write blogs, lots of them.” It clarified why companies keep churning out blog posts, reports, “primitives,” even “constitutions”; content is a growth channel.

It also made me notice how much attention I’ve been giving these tech companies, almost as a substitute for the social media I try to avoid. I remember being genuinely excited for new posts on distill.pub the way I’d get excited for a new 3Blue1Brown or Veritasium video. These days, though, most of what I see feels like fingers-tired-from-scrolling marketing copy, and I can’t bring myself to care.

siliconc0w
1 day ago
Skimmed, some notes for a more 'bear' case:

* value seems highly concentrated in a sliver of tasks - the top ten accounting for 32%, suggesting a fat long-tail where it may be less useful/relevant.

* productivity drops to a more modest 1-1.2% gain once you account for humans correcting AI failures. 1% is still plenty good, especially given the historical malaise here of only ~2% growth, but it's not industrial-revolution good.

* reliability wall - a 70% success rate is still problematic, and we're down to 50% at just 2+ hours of task duration, or about "15 years" of schooling in terms of complexity, for API use. For web-based multi-turn it's a bit better, but I'd imagine that's at least partly due to task-selection bias.
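
To make the reliability wall concrete with a toy model (my own simplifying assumption, not the report's methodology): if each subtask succeeds independently with a fixed probability, end-to-end success decays geometrically with task length, which is one way to get from ~70% down toward ~50% as tasks stretch out.

    # Toy model (assumed, not from the report): a task made of n
    # independent subtasks, each succeeding with probability p,
    # completes end-to-end with probability p**n.
    def task_success(p: float, n: int) -> float:
        return p ** n

    print(task_success(0.93, 5))   # ~0.70, near the reported rate
    print(task_success(0.93, 10))  # ~0.48, the longer-task regime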

storystarling
1 day ago
I've found that architecting around that reliability wall is where the margins fall apart. You end up chaining verification steps and retries to get a usable result, which multiplies inference costs until the business case just doesn't work for a bootstrapped product.
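
The arithmetic of that blow-up (illustrative numbers only, assuming independent attempts): the number of attempts until a success at per-attempt probability p is geometric, so you expect 1/p rounds, and any verification step paid on every round multiplies in.

    # Sketch of the retry economics (the costs and rates below are
    # assumptions, not measurements): attempts-to-first-success is
    # geometric, so the expected count is 1/p; verification cost
    # rides along on each round.
    def expected_cost(attempt_cost: float, verify_cost: float, p: float) -> float:
        return (attempt_cost + verify_cost) / p

    # At a 70% per-attempt success rate, cost per usable result is
    # already ~1.9x a naive single-shot estimate.
    print(expected_cost(1.0, 0.3, 0.7))  # ~1.86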

xiphias2
1 day ago
> 1% is still plenty good, especially given the historical malaise here of only like 2% growth but it's not like industrial revolution good.

You can't compare the speed of AI improvements to the speed of technical improvements during the industrial revolution. ChatGPT is 3 years old.

reppap
8 hours ago
As long as people claim it's revolutionary it's fair to compare it to other revolutions.

xiphias2
7 hours ago
I mean you can compare, but at the start the industrial revolution also produced only super small improvements.

The main difference is that back then people had no idea of the disruption it would cause, and of course there wasn't a huge investment industry around it.

The only question is whether the ROI for investors will be positive (which depends on the timeline), not whether it is disruptive (or will be, say, 30 years from now), and I see people confusing the two here quite often.

mlsu
1 day ago
> This also highlights the importance of model design and training. While Claude is able to respond in a highly sophisticated manner, it tends to do so only when users input sophisticated prompts.

If the output of the model depends on the intelligence of the person picking outputs out of its training corpus, is the model intelligent?

This is kind of what I don't quite understand when people talk about the models being intelligent. There's a huge blind spot, which is that the prompt entirely determines the output.

TrainedMonkey
1 day ago
Humans also respond differently when prompted in different ways. For example, politeness often begets politeness. I would expect that to be reflected in training data.

mlsu
1 day ago
If I, a moron, hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.

nl
1 day ago
> hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.

In my experience with many PhDs they are just as prone to getting off track or using their pet techniques as LLMs! And many find it very hard to translate their work into everyday language too...

preciousoo
4 hours ago
The PhD can't read minds; the quality of the request from a moron would be worse than the quality of the request from someone with average intelligence, and the output would probably differ noticeably accordingly.

chasd00
8 hours ago
Unless your problem fits the very narrow but very deep area of expertise of the PhD, you're not going to get anything. The PhDs I have worked with can't tie their shoes because it wasn't in their dissertation.

Herring
1 day ago
Well, if it ever gets to be a full replacement for PhDs, you'll know, because it will have already replaced you.

HPsquared
1 day ago
I think that's what is happening. It's simulating a conversation, after all. A bit like code switching.

b00ty4breakfast
1 day ago
That seems like something you wouldn't want from your tools. Humans have that and that's fine - people are people and have emotions - but I don't want my power drill asking me why I only call when I need something.

freejazz
11 hours ago
>Humans also respond differently when prompted in different ways.

And?

zozbot234
1 day ago
What is a "sophisticated prompt"? What if I just tack on "please think about this a lot and respond in a highly sophisticated manner" to my question/prompt? Anyone can do this once they're made aware of this potential issue. Sometimes the UX layer even adds this for you in the system prompt, you just have to tick the checkbox for "I want a long, highly sophisticated answer".

nl
1 day ago
In general it will match the language style you use.

If you ask a sophisticated question (lots of clauses, college reading level or above) it will respond in kind.

You are basically moving where the generation happens in the latent space. By asking in a sophisticated way you are moving the latent space away from say children's books and towards say PhD dissertations.

aisuxmorethanhn
1 day ago
I don't find this to be true at all. You can ask it in text speak with typos and then append how you'd like the response to be phrased, and it will follow the instructions.

blackqueeriroh
9 hours ago
Yeah, because you told it explicitly how you would like the response to be phrased, which is the same thing you’re doing implicitly when you simply talk to it in a certain way.

Come on, this is human behavior 101, y’all.

mlsu
1 day ago
They have a chart that shows it. The education level of the input determines the education level of the output.

These things are supposed to have intelligence on tap. I'll imagine this in a very simple way. Let's say "intelligence" is like a fluid. It's a finite thing. Intelligence is very valuable; it's the substrate for real-world problem solving that makes these things ostensibly worth trillions of dollars. Intelligence comes from interaction with the world; someone's education and experience. You spend some effort and energy feeding someone, clothing them, sending them to college. And then you get something out, which is intelligence that can create value for society.

When you are having a conversation with the AI, is the intelligence flowing out of the AI? Or is it flowing out of the human operator?

The answer to this question is extremely important. If the AI can be intelligent "on its own" without a human operator, then it will be very valuable -- feed electricity into a datacenter and out comes business value. But if a model is only as intelligent as the person using it, well, the utility seems to be very harshly capped. At best it saves a bit of time, but it will never do anything novel, never create value on its own, independently, and never scale beyond a 1:1 ratio of "human picking outputs".

If you must encode intelligence into the prompt to get intelligence out of the model, well, this doesn't quite look like AGI does it?

mlsu
1 day ago
ofc what I'm getting at is, you can't get something from nothing. There is no free lunch.

You spend energy distilling the intelligence of the entire internet into a set of weights, but you still had to expend the energy to have humans create the internet first. And on top of this, in order to pick out what you want from the corpus, you have to put some energy in: first, the energy of inference, but second and far more importantly, the energy of prompting. The model is valuable because the dataset is valuable; the model output is valuable because the prompt is valuable.

So wait then, where does this exponential increase in value come from again?

felixgallo
1 day ago
the same place an increase in power comes from when you use a lever.

retsibsi
1 day ago
> the same place an increase in power comes from when you use a lever.

I don't understand the analogy. A lever doesn't give you an increase in power (which would be a free lunch); it gives you an increase in force, in exchange for a decrease in movement. What equivalent to this tradeoff are you pointing to?

wat10000
1 day ago
A smart person will tailor their answers to the perceived level of knowledge of the person asking, and the sophistication of the question is a big indicator of this.

thousand_nights
1 day ago
i don't know, are we intelligent?

you could argue that our input (senses) entirely defines the output (thoughts, muscle movements, etc.)

HPsquared
1 day ago
There's a bit of baked-in stuff as well. We are a full culture-mind-body[-spirit] system.

fuzzfactor
1 day ago
Fortunately we've got the full system, because even under ideal conditions nobody's actually ever been intelligent at all times, and we need the momentum from that full system to resume in an intelligent direction after an upset, when it's not all at its best.

bossyTeacher
1 day ago
The whole point of humans is the way we process the input. Every life form out there receives sound vibrations and has photons hitting its body all the time; not every one uses that information in the same way, or at all. That, plus natal reflexes and hardcoded assumptions.

dingdingdang
1 day ago
The title actually makes me cringe a bit; it reads like early report titles in academia, where young students (myself no doubt included, back when) try their hardest to make a title sound clever but in actuality only obscure their own material.

bilsbie
1 day ago
Reminds me of psychohistory.

dingdingdang
22 hours ago
I've never read the Foundation series; the concept of psychohistory makes me want to, though!

bix6
1 day ago
> These “primitives”—simple, foundational measures of how Claude is used, which we generate by asking Claude specific questions about anonymized Claude.ai and first-party (1P) API transcripts

I just skimmed, but is there any manual verification / human statistical analysis done on this, or are we just taking Claude's word for it?

sdwr
1 day ago
Looks like they are relying on Claude for it, which is interesting. I bet social scientists are going to love this approach

ossa-ma
1 day ago
I'm not an economist so can someone explain whether this stat is significant:

> a sustained increase of 1.0 percentage point per year for the next ten years would return US productivity growth to rates that prevailed in the late 1990s and early 2000s

What can it be compared to? Is it on the same level of productivity growth as computers? The internet? Sliced bread?
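
Not an economist either, but the compounding is easy to check (the ~1.5% baseline below is my assumption for recent US labor-productivity growth; the +1.0pp is the report's hypothetical):

    # Back-of-the-envelope: compare output-per-hour levels after a
    # decade of 1.5% vs 2.5% annual labor-productivity growth.
    baseline, boosted, years = 0.015, 0.025, 10
    ratio = ((1 + boosted) / (1 + baseline)) ** years
    print(f"{(ratio - 1) * 100:.1f}% higher output per hour")  # ~10.3%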

mips_avatar
1 day ago
Every single AI economic analysis talks about travel planning but none of the AI labs have the primitives (transit routing, geocoding, etc.) in a semantic interface for the models to use.
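
For what it's worth, the model-side plumbing is the easy part. A minimal sketch of exposing a hypothetical geocoding primitive as a tool via the Anthropic Python SDK (the tool name, schema, and model id are my own placeholders; the hard part - the actual routing/geocoding backend - is exactly what's missing):

    # Sketch only: "geocode" is a hypothetical tool; whoever runs this
    # must supply a backend to answer the model's tool_use requests.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

    tools = [{
        "name": "geocode",
        "description": "Resolve a free-text place name to lat/lon coordinates.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Place name or address"},
            },
            "required": ["query"],
        },
    }]

    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=1024,
        tools=tools,
        messages=[{"role": "user", "content": "Plan a day trip from Kyoto to Nara."}],
    )
    # If the model wants coordinates, response.content will contain a
    # tool_use block for "geocode" that the caller has to fulfill.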

doganugurlu
1 day ago
Unfortunately, in this context travel planning means planning a vacation. And not the travel route of a traveling salesman.

malshe
1 day ago
Coincidentally, YouTube demos on vibe coding commonly make travel planning apps!

brap
1 day ago
All of this performative bullshit coming out of Anthropic is slowly but surely making them my least favorite AI company.

We get it, guys: the very scary future is here any minute now, and you're the only ones taking it super seriously and responsibly and benevolently. That's great. Now please just build the damn thing.

ossa-ma
1 day ago
These are economic studies on AI's impact on productivity, jobs, wages, and global inequality. It's important to UNDERSTAND who benefits from technology and who gets left behind. Even putting the positive impacts of a study like this aside, this kind of due diligence is critical for them to understand developing markets and how to reach them.

futuraperdita
1 day ago
But the thing is that they really aren't rigorous economic studies. They're a sort of UX research-like sociological study with some statistics, but don't actually approach the topic with any sort of econometric modeling or give more than loose correlations to past economic data. So it does appear performative: it's "pop science" using a quantitative veneer to push a marketing message to business leaders in a way that looks well-optimised mathematically.

Note the papers cited are nearly all ones about AI use, and align more closely with management case studies than with economics.

brap
1 day ago
Ok Dario

blibble
1 day ago
> How is AI reshaping the economy?

oh I know this one!

it's created mountains of systemic risk for absolutely no payoff whatsoever!

andy_xor_andrew
1 day ago
no payoff whatsoever? I just asked Claude to do a task that would have previously taken me four days. Then I got up and got lunch, and when I got back, it was done.

I would never make the argument that there are no risks. But there's also no way you can make the argument there are no payoffs!

blibble
1 day ago
> I just asked Claude to do a task that would have previously taken me four days.

I think this probably says more about you than the "AI"

steve_adams_86
1 day ago
That's not a very constructive thought given you don't know what the task is or why it could have taken them days. In a field as large and complex as software, there are myriad reasons why any single person could find substantial time-saving opportunities with LLMs, and it doesn't have to point to their own inadequacies.