Technical Deflation (benanderson.work)
50 points | 3 days ago | 16 comments
raw_anon_1111
3 hours ago
What I don't understand is how this is automatically good for startups.

Say I have a startup that vibe codes “AI for real estate”. What about customer acquisition?

On the other hand, if I’m Zillow, why can’t I just throw a developer on the same feature and automatically have a customer base for it?

If you look at most of the YC-funded startups these days, they're just prompt engineers with no go-to-market strategy, and some don't even have any technical people; they're looking for "technical cofounders" they can underpay with a promise of equity that will, statistically, be meaningless.

zkmon
50 minutes ago
In fact, the opposite is true: there is tech inflation. Coding will get easier, but that doesn't mean tech is getting easier. It will only get more complex and drive more segregation.

Tech is dividing society and driving the wedge deeper. A huge population is being thrown to the wayside of the high-speed tech highway, which means tech is getting more and more unreachable.

AI assistants are only going to make this worse by removing the direct touch between users and the tech. Tech becomes simply unmanageable for the average person.

It's just like how you could do all the repairs on your bike as a kid, but you can't do the same for your car now. Tech never gets easier or more reachable.

Zigurd
4 hours ago
Technology has always been deflationary. But you don't put off buying a computer because it will be cheaper next year. Nobody seems to be putting off buying GPUs despite scary depreciation and a blistering pace of new product introductions that are ever cheaper, faster, and better.
vrighter
3 hours ago
Only really faster and better if you don't use them for gaming, unfortunately. Upscaling and frame generation don't make a better GPU; they're a band-aid applied to hide the fact that it didn't actually get much faster.

RTX doesn't count for me either, because that's some bullcrap pushed by GPU manufacturers that requires the aforementioned upscaling and frame-generation techniques to fake being anywhere close to what GPU manufacturers want gamers to believe.

Aurornis
1 hour ago
> Only really faster and better if you don't use them for gaming, unfortunately. Upscaling and frame generation don't make a better GPU; they're a band-aid applied to hide the fact that it didn't actually get much faster.

The generational gains haven't been as great as in past generations, but it's getting silly to claim that GPUs aren't getting faster for gaming.

Intentionally ignoring frame generation and DLSS upscaling also feels petty. Using those features to get 150-200 fps at 4K is an amazing experience, even if the purists turn their noses up at it.

The used-GPU market is fairly good at pricing in relative gaming performance: if new GPUs weren't actually faster, old GPUs wouldn't be depreciating much. Yet you can pick up 3000-series GPUs very cheaply right now (except maybe the 3090, which is prized for its large VRAM, though even that is cheap). Even the 4000 series is getting cheap.

vrighter
52 minutes ago
"Guessing what a pixel's color might be were one to actually do the work and render it" is not the same as actually rendering it. No, upscaling doesn't count.

Doing it for a whole screenful of pixels, for the majority of frames (with multi-frame generation) is even less of it.

toast0
50 minutes ago
I dunno, the kiddo went from a 1650 Super to a 3060 and it's a lot nicer looking, and I don't think frame gen and whatnot is enabled. Sure, that's up a notch on the SKU list with tons more VRAM. The 1650 Super was working with most of the games he tried, but Marvel Rivals was terrible (haven't seen him play it with the new card, though).

It does help that he has a small screen and regular DPI. It seems like everyone wants to run 4x the pixels in the same space, which needs about 4x the GPU.

mr_toad
3 hours ago
Some people do put off buying cellphones and laptops when they know a new model will come out every year.
tikhonj
2 hours ago
The overall trend has been the opposite, though, hasn't it? People used to buy a new phone (or laptop, etc.) every couple of years because the underlying tech was improving so quickly, but now that the improvements have slowed down, they're holding onto their devices for longer.

There was an article[1] going around about that recently, and I'm sure there are more, but it's also a trend I've seen first-hand. (I don't particularly care for the article's framing; I'm just linking to it to illustrate the underlying data.)

[1]: https://www.cnbc.com/2025/11/23/how-device-hoarding-by-ameri...

aleph_minus_one
1 hour ago
> Some people do put off buying cellphones and laptops when they know a new model will come out every year.

Don't confuse technical deflation with the Osborne effect:

> https://en.wikipedia.org/wiki/Osborne_effect

raw_anon_1111
1 hour ago
The Osborne effect has been heavily disputed over the years; it says so in your own citation.

But at least with iPhones there is a deflationary effect, because Apple has, since the 3GS in 2009, kept the old phone around at a reduced price. For instance, my son wanted an iPhone 16 Plus; I told him to wait until the 17 was announced, and he bought one cheaper from T-Mobile.

Ray20
3 hours ago
> But you don't put off buying a computer because it will be cheaper next year.

Why not? Sounds like a pretty reasonable strategy.

> Nobody seems to be putting off buying GPUs

Many people are doing exactly that.

detourdog
1 hour ago
Back when I first started wrestling with this issue, the question was "how much faster will the daily Photoshop operations be with a new computer?"

Now a new computer barely does anything faster for me.

vrighter
1 hour ago
"But building the same functionality has undoubtedly become simpler."

I disagree with this statement. It has become simpler, provided you don't care about the result actually being correct, don't care whether your tests really test what you think you asked for, don't care about security, and so on.

Building the same thing involves doing the things that LLMs have proven time and again they cannot do. Instead of writing it properly in the first place, you now need to look for the needle in the haystack: the subtle bug that invariably got inserted, every single time I tried to use them. Finding it requires you to deeply understand the code anyway, an understanding you would have gotten automatically (and more easily) by writing the code yourself. Developing the same thing at the same level of quality is harder with an LLM.

And the "table stakes" stuff is exactly what I would not trust an LLM with, because the risk of getting it wrong could be fatal (to the company, not the dev; that depends on the boss's temperament).

raw_anon_1111
1 hour ago
And subtle bugs don’t get inserted by humans? Did security flaws in software just start happening after LLMs were introduced?
vrighter
41 minutes ago
So? The point is that humans do it much less often.

Let's say there are 10 subtasks that need to be done.

Let's say a human has a 99% chance of getting each one of them right, by doing the proper testing, etc. And let's say the AI has a 95% chance of getting each one right (being very generous here).

0.99^10 = a 90% chance of the human getting the whole thing to work properly. 0.95^10 = only a 60% chance. Almost a coin toss.

Even with a 98% per-step success rate, the compounded success rate still drops to about 81%.

The thing is that LLMs aren't just "a little bit" worse than humans. In comparison, they're cavemen.
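
The compounding is easy to check; here's a minimal Python sketch of the same arithmetic, using the illustrative per-step rates above:

    # Overall success probability when a 10-subtask job compounds
    # per-subtask success rates (illustrative rates from above).
    STEPS = 10
    for per_step in (0.99, 0.98, 0.95):
        overall = per_step ** STEPS
        print(f"{per_step:.0%} per step -> {overall:.1%} overall")
    # 99% per step -> 90.4% overall
    # 98% per step -> 81.7% overall
    # 95% per step -> 59.9% overall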

raw_anon_1111
3 minutes ago
So humans do it much less often, yet we have 30 years of evidence to the contrary? Humans still can't figure out how to write code that isn't subject to SQL injection after 25 years, or how to commit code to GitHub without exposing admin credentials.
JacobiX
2 hours ago
In the end, the article says:

> writing functioning application code has grown easier thanks to AI.

> It's getting easier and easier for startups to do stuff.

> Another answer might be to use the fact that software is becoming free and disposable to your advantage.

For me, the logical conclusion here is: don't build a software startup!

cootsnuck
1 hour ago
Yup. I'm starting to wonder if the startup space has a pretty big blind spot: not realizing that the ease of building mostly/semi-functioning software is not a unique advantage...

I left an AI startup to do tech consulting. What do I do? Build custom AI systems for clients. (Specifically clients that decided against going with startups' solutions.) Sometimes I build it for them, but I prefer to work with their own devs to teach them how to build it.

Fast forward 3+ years and we're going to see more everyday SMBs hiring a dev to just build them the stuff in-house that they were stuck paying vendors for. It won't happen everywhere. Painful enough problems and worthwhile enough solutions probably won't see much of a shift.

But startups that think the market will lap up whatever they have to offer as long as it looks and sounds slick may be in for a rude surprise.

raw_anon_1111
1 hour ago
Of course it still makes sense to have a startup. Not because you will ever find a decent enough market, but because if you are well connected enough you can find a VC and play with other people's money for a while.

You aren't doing it to get customers; it's for investors and maybe a decent acquisition.

jackar
3 hours ago
The impulse to make the comparison makes sense. The reality is a bit different: it probably leans in the right direction, but it is buffered by learning. I'll explain.

There is no upside to delaying a purchase if you think things will get less expensive. There is, however, upside in building today even if you have to rebuild tomorrow, and that upside is learning the problem space: specifically, what is likely to be trivialized and what truly requires domain knowledge. Horizontal apps? Little domain knowledge to encode. Vertical apps? More domain knowledge to encode.

Separately, there are more ways to differentiate than distribution alone; see verifier's law. Problems that are challenging to verify are challenging for AI to trivialize.
Sevii
1 hour ago
Good explanation for why hiring stopped. If AI is improving rapidly, why hire engineers now that you might not need in 6-12 months?
locknitpicker
46 minutes ago
> Good explanation for why hiring stopped. If AI is improving rapidly, why hire engineers now that you might not need in 6-12 months?

I'm not so sure that's the reason. I mean, to believe LLMs replace engineers, you first need to believe engineers spend the bulk of their time frantically typing out code in greenfield projects. That's not compatible with reality. Although LLMs excel at generating new code from scratch, that scenario is the exception. Introducing significant changes to existing projects still requires long iterations, which ultimately consume more development time than rolling out the changes yourself.

The truth of the matter is that we are not observing an economic boom. The US is either stagnant or in a recession, and LLMs are not the cause. In an economic downturn you don't see spikes in demand for skilled workers.

sgt101
2 hours ago
The models themselves represent the biggest deflation cases I've ever seen.

The charged cost of a frontier model is ~200x lower than two years ago, and the ones we are using now are much better, although measuring that, and by how much, is challenging. Building a "better than GPT-4" model is also vastly cheaper than building GPT-4 was... perhaps 1/100th the cost?
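
Taking the ~200x figure at face value, here's a back-of-the-envelope sketch of the implied annual rate (an illustration, not a measured number):

    # If frontier-model prices fell ~200x over 2 years, the annualized
    # factor is the geometric mean of the total drop.
    total_drop, years = 200, 2
    annual_factor = total_drop ** (1 / years)   # ~14.1x cheaper per year
    annual_decline = 1 - 1 / annual_factor
    print(f"{annual_factor:.1f}x per year ({annual_decline:.1%} annual price decline)")
    # 14.1x per year (92.9% annual price decline)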

Glemkloksdjf
3 hours ago
I'm still ahead of a company starting later:

I have the legal structure, I know my colleagues, and I potentially have employees and more capacity.

The problem is not that a startup starts after you; it's that you don't give yourself time to keep an eye on AI and don't leverage it when it's helpful.

We leverage AI and ML progress constantly and keep an eye on advances. Segment Anything? Yep, we use it. Claude? Yes sir!

skeeter2020
2 hours ago
>> First, models getting better makes AI-based applications easier to build, because they can be simpler.

Don't conflate easy with simple. I'd argue they are actually easier and far more complex.

moralestapia
1 hour ago
With regard to the first two paragraphs, it's crazy how someone can be so massively brainwashed.
Joker_vD
5 hours ago
> when prices go down instead of up. It is generally considered harmful: both because it is usually brought on by something really bad (like a severe economic contraction)

Or, you know, technological improvements that increase the efficiency of production, or bountiful harvests, or generally anything else that suddenly expands supply at the current price level across the economy. Thankfully, we have mechanisms in place that keep prices inflating even when those unlikely events happen.

marcosdumay
5 hours ago
Deflation is about all prices going down. Just a few decreasing is normal.

Anyway, WTF, economics communication has a huge problem. I've seen the article's explanation repeated in plenty of places, and it's completely wrong, borderline nonsense.

The reason deflation is bad is not that it makes people postpone buying things. It's because some prices, like salaries or rent, just refuse to go down. That causes rationing of those things.

pjc50
3 hours ago
See "price stickiness" and what is simplified as "menu reprinting costs"; there's usually a cost associated with changing prices, and a cost associated with renegotiating prices for everything that's not being sold on a spot market. People cannot buy housing at spot, and while spot-labour pricing is definitely a thing for some services it's so socially destabilizing for anything skilled that most workforces operate on salary.

The reverse of this is that high inflation tends to cause a lot of strikes, because salaries refuse to go up and very high levels of inflation need salary repricing every month or even week.

igleria
3 hours ago
In Argentina I learned from a young age that prices take the elevator, but salaries take the stairs.

It got old really quickly, having to renegotiate with the boss every six months.

HPsquared
3 hours ago
Rent and salaries don't like going down because of debt. Debts are denominated in currency units and go up with inflation (interest rates have a component to correct for inflation), but they don't decrease if the currency gains value over time (that would need negative interest rates). I suppose that's something that could be done with regulation.
gus_massa
4 hours ago
I agree. It's super common for the price of vegetables to go up and down around the year, in particular due to the harvest season.
jdasdf
5 hours ago
>It's because some prices, like salaries or rent, just refuse to go down.

A common argument, but one that doesn't bear out in the absence of regulation enforcing it.

readthenotes1
3 hours ago
"One of the main problems is that if people expect prices to keep going down, they'll delay purchases and save more, because they expect that they'll be able to get the stuff for less later."

That is why we are all waiting to buy our first personal computers and our first cell phones.

Economists have managed to be ludicrous for a very long time and yet we still trust them.

einpoklum
2 hours ago
> Desktop app.... though Electron and Tauri have made it easier

Ugh. I don't like that kind of 'desktop' app: huge bloat wrapping a blip of actual app.

api
3 hours ago
There's a fun version of this in futurist space-travel speculation.

Let's say you have a fusion rocket and can hit 5% of the speed of light. You want to migrate to the stars for some reason.

So do you build a generational ship now, which is possible, or... do you wait?

Because if you build it now, someone with a much better drive may just fly right past you at 20% of the speed of light.

In this case the answer is to plot it out, under the assumption that there is no totally undiscovered major physics that would allow, say, FTL, and plot the curves of advancement against that.
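
A toy version of that wait calculation (all numbers are made-up assumptions: a 20-light-year target, drive speed doubling every 25 years, capped at half of c):

    # Toy "wait calculation": leave now at 5% c, or wait for a faster drive?
    DISTANCE_LY = 20.0     # hypothetical target distance, in light-years
    BASE_SPEED = 0.05      # starting cruise speed, as a fraction of c
    DOUBLING_YEARS = 25    # assumed doubling time for drive speed

    def arrival_year(wait: float) -> float:
        speed = min(BASE_SPEED * 2 ** (wait / DOUBLING_YEARS), 0.5)
        return wait + DISTANCE_LY / speed

    best = min(range(201), key=arrival_year)
    print(f"leave now: arrive in {arrival_year(0):.0f} years")
    print(f"wait {best} years: arrive in {arrival_year(best):.0f} years")
    # leave now: arrive in 400 years
    # wait 83 years: arrive in 123 years

Under those assumptions the latecomer really does fly right past the early launch, until the assumed speed cap makes further waiting pointless.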

So can we do this with software? We have the progress of hardware, which is somewhat deterministic, and we know something about the progress of software from stats we can gather via GitHub.

The software equivalent of someone discovering some "fantasy" physics and building a warp drive would be runaway self-improving AGI/ASI. I'd argue this is impossible for information-theoretic reasons, but what if I'm wrong?

darkerside
4 hours ago
Does anyone else agree with the premise of this article? Is it sensible to put off building things now because it will get even cheaper and faster later?

Maybe the time value of time is only increasing as we go.

blueflow
4 hours ago
Actually, yes. I wanted to get into UI programming with GTK 2, and right now I'm waiting for GTK $n to stabilize so I can commit to it.

Knowing that GTK $n-1 will soon be obsolete is reason enough not to put effort into learning it.

bootsmann
3 hours ago
I think you're right. The author is quite wrong on many points, in my view. One of the central mistakes he makes is treating the creation of a profitable startup as mostly a matter of shipping good product, i.e.

> Used to be, you had to find a customer in SO much pain that they'd settle for a point solution to their most painful problem, while you slowly built the rest of the stuff. Now, you can still do that one thing really well, but you can also quickly build a bunch of the table stakes features really fast, making it more of a no-brainer to adopt your product.

dvh
4 hours ago
I agree. I had several projects lined up and delayed one because it used the same tech as another, significantly smaller project. I learned the tech on the smaller, simpler project and then used that knowledge on the bigger one. It was beneficial not to do the bigger project first.
hahajk
4 hours ago
The conclusion that you should wait to build anything is an illustration of the very danger of economic deflation that the author started with. I'm not sure why he thinks the economic version is toxic but the technological version is a good idea.

The answer to "should we just sit around and wait for better technology" is obviously no. We gain a lot of knowledge by building with what we have; builders now inform where technology improves. (The front page has an article about Voyager being a light-day away...)

I think the more interesting question is what would happen if we induced some kind of 2% "technological inflation": every year it gets harder to make anything. Would that push more orgs to build more things? Everyone would pour everything they have into making products now, because their resources will go less far next year.

philipallstar
1 hour ago
> I think the more interesting question is what would happen if we induced some kind of 2% "technological inflation": every year it gets harder to make anything. Would that push more orgs to build more things? Everyone would pour everything they have into making products now, because their resources will go less far next year.

Government bonds already do this for absolutely everything. If I can put my money in a guaranteed bond at X%/year, then your startup, being a risky investment, has to promise much better returns to be worth my while. That's why the stock market is always chasing growth.

Ray20
3 hours ago
> it will get even cheaper and faster later

Yeah, and it will be done by somebody else. I think this is the main problem; if you get rid of it, you have a completely sensible strategy. I mean, there are many government contractors who, through corrupt connections, can guarantee that work will be awarded to them, and they very often do exactly that.
