(1) I don't think a tech company having a monopoly is necessary for a tech company to stop caring about their customers and focus on hype instead. Plenty of public tech companies do this, just to chase stock price and investors.
(2) It's weirdly the opposite of the mindset Bezos described in that famous old video, where he says that "creating the best experience for the customer" is his business strategy. Interesting. I think these companies may ultimately be misguided -- because, in fact, ChatGPT is succeeding precisely because it created a better search experience. And the hype bandwagoners may fail because, long-term, customers don't like their products. In other words, hype-chasing is a bad strategy.
(3) What's weird about AI -- and I guess all hype trains -- is how part of me feels like it's hype, but part of me also sees the value in investing in it and its potential. The hype train itself, and the crazy amount of money being spent on it, almost de facto means it IS important and WILL be important. It's as if the market and consumer interest have been created by the hype machine itself.
The DotCom bubble is an instructive historical example. Pretty much every wild promise made during the bubble has manifested, right down to delivering pet food. It's just that for the bubble to have been worthwhile, we would essentially have had the internet of 2015 or 2020 delivered in 2001.
(And because people forget, it is not too far off to say that would be like trying to deliver the internet of 2020 on machines with specs comparable to a Nintendo Wii. I'm picking a game console as a sort of touchstone, and there probably isn't a perfect comparison, but based on the machines I had in 2000, the specs are roughly in line with a Wii's, at least by the numbers. Though the Wii would have murdered my 2000-era laptop on graphics.)
I don't know that the AI bubble will have a similar 20-year lag, but I also think it's out over its skis. What we have now is extremely impressive, but it also doesn't justify the valuations being poured into it in the here and now. There's no contradiction there. In fact, history offers all sorts of similar cases of promising technologies being grotesquely over-invested in, even though they were transformative and amazing. If you want to go back further, the railroad bubble has some similarities to the Dot Com bubble too. It's not that railroads weren't a completely transformative technology; it's just that the random hodgepodge of a hundred companies slapping random sizes and shapes of track in half-random places wasn't worth the valuations they were given. The promise took decades longer to manifest.
It depresses me to think how much of the 2020 Internet (or 2025 Internet) that is actually of value really ought to be able to run on hardware that old.
Or so I imagine, anyway. I wonder if anyone's tried to benchmark simple CSS transitions and SVG rendering on ancient CPUs.
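For anyone who wants to try: here's a rough browser-console sketch of what such a benchmark could look like, just counting requestAnimationFrame ticks while a transition runs. (The #box element is a made-up example, and a real methodology would need far more care than this.)

    // Rough sketch: estimate frames per second during a CSS transition.
    // Assumes a page containing some element like <div id="box">.
    const box = document.getElementById('box');
    box.style.transition = 'transform 2s linear';
    box.getBoundingClientRect(); // force a reflow so the transition actually fires

    let frames = 0;
    const start = performance.now();

    function tick(now) {
      frames++;
      if (now - start < 2000) {
        requestAnimationFrame(tick);
      } else {
        console.log('~' + (frames / 2).toFixed(1) + ' fps during the transition');
      }
    }

    requestAnimationFrame(tick);                // start counting
    box.style.transform = 'translateX(300px)';  // kick off the transition

On a modern machine you'd expect something near 60; the interesting question is what a 2000-era CPU would report for the same page.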
Remember waiting something like an hour to watch a 60-second movie preview over dialup?
I get a reminder every time I load a modern website in an area with very poor reception. It appears not to load at all — not due to lack of connectivity, but because the speeds and latencies are too slow for the amount of crap being fetched.
GPRS and EDGE were many times faster than dialup — they must have felt like a dream at the time — but they're now utterly unusable.
Yes, it now feels like smartphones came of age overnight and were always inevitable. But it really took more than a decade to reach the level of integration and polish that we now take for granted. UI on phone apps was terrible, speeds were terrible, screen resolutions were terrible, processing power was minimal, the battery didn't last, roaming charges and spotty 3G coverage, etc. For years you couldn't copy and paste on an iPhone, stuff like that.
All these structural problems were sanded away over time and eventually forgotten. But so many of these small tweaks needed to happen before we could "fill in the blanks" and reach the level of ubiquity where something like an Uber driver navigating by phone became unremarkable.
I mean, we could totally have done that. There's nothing stopping you from delivering an experience like modern Amazon or Facebook or whatever in server-rendered HTML4. CSS3 and React get you fancy graphics and animations, and fast, no-repaint page transitions, but that's pretty much all they get you; we had everything else 25 years ago in MSIE6.
You could have built a dynamically-computed product listing grid + shopping cart, or a dynamically-computed network-propagated news feed with multimedia post types + attached comment threads (save for video, which would have been impractical back then), on top of Perl CGI-bin scripts — or if you liked, a custom Apache module in C.
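As a sketch of the shape of that (written in modern Node.js for brevity rather than Perl, but the pattern is the same as a 1998 CGI script: every click is a full server-side page render, zero client JS):

    // Minimal server-rendered product grid + cart, 1998-style.
    // Cart state rides in the query string purely to keep the sketch short;
    // a real version would use a cookie-keyed server-side session, as they did then.
    const http = require('http');

    const products = [
      { id: 1, name: 'Dog food, 20 lb', price: 19.99 },
      { id: 2, name: 'Cat litter, 10 lb', price: 7.49 },
    ];

    http.createServer((req, res) => {
      const url = new URL(req.url, 'http://localhost');
      const cart = (url.searchParams.get('cart') || '')
        .split(',').filter(Boolean).map(Number);

      const rows = products.map(p =>
        '<tr><td>' + p.name + '</td><td>$' + p.price + '</td><td>' +
        '<a href="/?cart=' + cart.concat(p.id).join(',') + '">Add to cart</a>' +
        '</td></tr>'
      ).join('');

      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<html><body><table border="1">' + rows + '</table>' +
              '<p>Cart: ' + cart.length + ' item(s)</p></body></html>');
    }).listen(8080);

Everything in the output (tables, links, query strings, server-side state) was available in 1998; only the implementation language here is anachronistic.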
And, in fact, some people did! There existed web services even in 1998 that did [various fragments of] these things! Most of them built in ASP or ColdFusion, mind you, and so limited to a very specific stack; but still, it was happening!
It was just that the results were all incredibly jank, with no UX polish... but not because UX polish would have been impossible with the tools available at the time. (As I said, HTML4 was quite capable!)
Rather, it was because all the professional HCI people were still mostly focused on native apps (with the few rare corporate UX vanguards "doing web stuff", working on siloed enterprise products like the MSDN docs); while the new and growing body of art-school "web design" types were all instead being trained mainly on the application of vertically-integrated design tools (ActiveX, Flash, maybe web layout via Photoshop 9-way slice export).
Meanwhile, on the client side, web technologies had a lot of implicit defaults that assumed pages on sites rather than apps and experiences. For example, we didn't originally have a way for JS to preserve back/forward button functionality when navigating in an SPA without abusing the URL hash fragment. Without CSS features for it, support for RTL and LTR on the same website was basically nonexistent. I won't even get started on charsets, the poor date support that persists to this day, the limited offline modes in a time when being offline was more common, or how tremendously browsers varied across platforms and versions back then, each with its own unique set of JS APIs and its own ideas about how to render a webpage.
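For the curious, that old workaround looked roughly like this: a hash-based router, which is all we had before history.pushState (and before the hashchange event existed, people even polled the hash on a timer). The route names are made up for illustration:

    // Sketch of the pre-pushState trick: store the "page" in the URL hash
    // so the back/forward buttons work without triggering a real navigation.
    function render(route) {
      document.body.textContent = 'Now showing: ' + (route || 'home');
    }

    // hashchange fires on back/forward too, which is the whole point.
    window.addEventListener('hashchange', function () {
      render(location.hash.slice(1));
    });

    // "Navigate" by setting the hash; this pushes a history entry.
    function navigate(route) {
      location.hash = route; // e.g. navigate('cart')
    }

    render(location.hash.slice(1)); // initial view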
It took the original Acid test and a bunch more that followed before we had anything close to cross-browser standards for newer web features. I still remember the snowman hack to get IE to submit forms as UTF-8, and even that wasn't as bad as quirks mode or IE 5.
Actually, maybe I disagree with most of this post. Don't get me wrong, I can see how it could have been done, but it's reductive in the extreme to say the only reason web services were jank is that UX polish didn't exist. If anything, the web is the reason UX is so good today: apps and desktop platforms have continuously copied the web for the past 28 years, from Windows 98's single-click-everywhere desktop to Spotify and the other Electron apps invading the OS. I'm not going to devalue the HIG or its equivalents, but desktop apps tended to evolve slowly, with each new OS release, while web apps evolved quickly, with each new website writing its own cross-platform conventions and thus needing its own design language.
Funnily enough, no "AI" prophet mentions that, despite it being the most useful thing about LLMs.
What I wonder is how long it will last. LLMs are being fed their own content by now, and someone will surely want to "monetize" them once the VC money starts to dry up a bit. That's at least two paths to enshittification.
Sometimes I think these things are more like JPEGs for knowledge expressed as language. They're more AM (artificial memory) than AI (artificial intelligence). It's a blurry line though. They can clearly do things that involve reasoning, but it's arguably because that's latent in the training data. So a JPEG is an imperfect analogy since lossy image compressors can't do any reasoning about images.
No.
> but it's arguably because that's latent in the training data.
The internet is just bigger than what a single human can encounter.
Plus a single human isn't likely to be able to afford to pay for all that training data the "AI" peddlers have pirated :)
It's true that something interesting is happening. GP did not dispute that. That doesn't make it reasoning, and many people still believe that words should have meaning if we are to discuss things intelligently. Language is ultimately a living thing and will inevitably change; this usually involves people fighting the change, and no one knows ahead of time which side will win.
If Claude 4 provides a detailed, logical breakdown in its "reasoning" (yeah, that usage is overloaded), then we could say that there was logical inference involved. "But wait!", I already hear someone saying, "That token output is just the result of yet another stochastic process, and isn't directing the AI in a deterministic, logical way, and thus it is not actually using logic; it's just making something that looks convincingly like logic, but is actually a hallucination of some stochastic process". And I think this is a good point, but I find it difficult to convince myself that what humans are doing is so different that we cannot use the word "reasoning".
As a sidenote, I am _very_ tired of the semantic quagmire that is the current AI industry, and I would really appreciate a rigorous guide to all these definitions.
I agree. However, they can clearly do a reasonable facsimile of many things that we previously believed required reasoning to do acceptably.
Therefore whenever they produce output that looks like the result of those things, we must either be deceived by a reasonable facsimile, or we simply misapprehended their necessity in the first place.
But, do we understand the human brain as well as we understand LLMs?
Obviously there's something different, but is it just a matter of degrees? LLMs have greater memory than humans, and lesser ability to correlate it. Correlation is powerful magic. That's pattern matching though, and I don't see a fundamental reason why LLMs won't get better at it. Maybe never as good as (smart) humans are, but with their superior memory, maybe that will often be adequate.
I think it should be abundantly clear that what ChatGPT does when you ask it to play chess is fundamentally different from what Stockfish does. It isn't just weak and doesn't just make embarrassing errors in generating legal moves (like a blindfolded human might); it doesn't actually "read" and it generates post-hoc rationalization for its moves (which may not be at all logically sound) rather than choosing them with purpose.
There are "reasoning models" that improve on this somewhat, but cf. https://news.ycombinator.com/item?id=44455124 from a few weeks ago, and my commentary there https://news.ycombinator.com/item?id=44473615 .
Yes, your "no" must be more upbeat! Even if it's correct. You must be willing to temper the truth of it with something that doesn't hurt the feelings of the masses.
> Does it mean that we’ve reached AGI? No. Does it mean the process reflects exactly what humans do? No.
But here it's fine to use a "No." because these are your straw men, right?
Is it just wrong to use a "No." when it isn't wrapped in safety padding for the overinvested?
Neither are wide-eyed claims stemming from drinking too much LLM-company Kool-Aid. Blatantly mistaken claims don't need more than a curt answer.
Why don't I go ahead and claim ChatGPT has a soul, just so I can get angry when my claim is dismissed?
You are missing the forest for the trees by dismissing this so readily.
LLMs can solve IMO-level math problems, debug quite difficult bugs in moderately sized codebases, and write prototypes for weird, one-of-a-kind coding projects. They solve difficult reasoning problems, so I find it mystifying that people still work so hard to justify the belief that they're "not actually reasoning". They are flawed reasoners in some sense, but it seems ludicrous to suggest they aren't reasoning at all when they generalize to new logical problems so well.
Do you think humans are logical machines? No, we are not. Therefore, do we not reason?
No, but we are conscious, and we know we are conscious, which doesn't require being a logical being too. LLMs on the other hand aren't conscious and there's zero evidence that they are. Thus, they don't reason, since this, unlike logic, does require consciousness.
Why not avoid redefining things into a salad of poor logic until you can pretend that something with no evidence in its favor is real?
... who can also type at superhuman speeds, but has no self-awareness, creativity or initiative.
Or perhaps because Google created a worse search experience.
And soon enough, AI is not even going to help with searching: it will eat its own s*t and offer that up as answers. Unless the AI giants find some >99.99%-effective way to filter their own toxic waste out of the training data.
A big part of hype as a business strategy is to convince potential customers that you intend to create the best experience for them. And the simplest approach to that task is to say it outright.
> in fact, ChatGPT is succeeding because they created a better search experience.
Sure. But they don't market it like that, and a large fraction of people reporting success with ChatGPT don't seem to be characterizing their experiences that way. Even if you discount the people explicitly attempting to, well, chat with it.
This hurts the article, I think. I don't disagree with his point about companies caring too much about stockholders, but this anti-LLM evangelism just comes across as Luddism.
Is there any research out there suggesting LLMs help programmers get stuff done? I can't say I follow the research closely but I have not seen any.
Googling for [ai llm productivity research] and looking at the first page of results I can't find much interesting evidence. One report says users asked to create a basic web server in JS complete the task 55% faster using an LLM. One report measures "perceived productivity". Students given text writing tasks are faster. One report measures productivity as lines of code written. The rest seem to just be projecting theoretical productivity gains.
Has anyone measured any improved productivity?
I can see this report from METR that is actually measuring something: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
> Core Result
>
> When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
But surely someone must have also found results showing productivity gains. I assume I was just not looking hard enough.
I am a happy Copilot Pro user since 2021, still.
> However, we see positive speedup for the one developer who has more than 50 hours of Cursor experience, so it’s plausible that there is a high skill ceiling for using Cursor, such that developers with significant experience see positive speedup.
Any research will be limited by what the researchers control for?
In my personal experience, when I get a well-written bug report or hands-on, detailed logs, my instinct is to feed it to an agent and let it figure it all out, and that has never disappointed me. It runs in the background while I work on things I instinctively know the agent wouldn't do a good job of. How did I develop those instincts? By using agents for like 3 days. These things (especially for code completion) are ridiculously effective for the programming languages and codebases I work in, if nowhere else.
> Has anyone measured any improved productivity? ... I am a happy Copilot Pro user since 2021, still.
Whether productivity is tanking or not, I will find it incredibly hard to stop using LLMs/agents just because a metric or three indicates I'd be better off without them. I must note, though, that it might be too soon to pass judgment on productivity: it's a function of how well new technologies are integrated into processes and workflows, which typically happens over decades, not months or years.
This comment by simonw for reference: https://news.ycombinator.com/item?id=44522772
1/ They tested on only 16 experienced developers.
2/ They only used Cursor, with no mention of other tools available at the time. Cursor is notoriously bad on bigger codebases, while even Windsurf (its competitor at the time) would have shown different results.
3/ They only allowed 30 minutes of training on Cursor.
4/ As simonw mentions, the ones who improved had some previous experience with Cursor.
5/ Yet the study is definitively cited as AI actively ruining productivity, with none of these other factors mentioned.
Ideally, capitalism would yield better products or services for a lower price. That's no longer happening. We're getting shittier products and we're paying more. But somehow we convince ourselves that it's still good, because the stock market is going up and corporate profits are going up.
If there was ever any doubt that the hype is the product, then please explain Tesla stock: 100% hype-driven, with zero correlation between the stock price and how the company is actually doing.
We live in the age of lies. You can either keep debunking them to deaf ears, or join the bandwagon and maybe make some money by fooling someone else. The whole stock market feels like a Ponzi scheme now.
Even in that setup, people can try to game the market. They can make something that looks like a good saddle and sell it to you and then it falls apart not too long afterward. They can get you to agree to a price but then tell you the stirrups aren't included even though they're attached to the demo model. They can ask for half payment up front while they custom make your item, then skip town.
And mechanisms sprang up to prevent this: regulation. Some are market-internal (reputation) and some are enforced (people can report you to the authorities for selling fraudulent goods, and you can be jailed or whatever).
The problem is mainly that nowadays companies have turned the majority of their innovation energy towards this kind of market-gaming meta-activity. It's no longer about goods, services, buyers, sellers, or any of those things. It's just about finding new ways to manipulate the market itself.
This is what the article seems to be saying, and I agree. I'm not sure I'd call it "hype", though. It's not that "the hype is the product", it's that the market activity is not oriented towards products at all. Products have become like abstract proxy tokens that are moved around to simulate what we think of as market activity, but all the real activity is happening in the meta-market.
But even in the cases where we do have a free market, we're often seeing one company fiddle with quality, maybe drop the price a little, then the rest quickly follow and price goes right back up across the board.
Capitalism works best for everybody, on average, when free markets are competitive. When they are not, markets still work; they just work better for some and worse for others (though still better than nothing), and overall worse for everybody, because markets are not zero-sum. The problem with a lot of what turns out to be left-wing and/or populist thinking on markets is the assumption that markets are zero-sum ("if there is a winner, there must be a loser"), which, while an attractive idea, turns out to be false.
The same is true of the completely overblown idea that people are not rational. People are not perfectly rational, but when it comes to parting with their money they are much more rational than not. If that weren't true, people wouldn't be living measurably better lives today than 100 or 200 years ago. There are many other sources of noise in measurement that swallow the irrationality up. (Yes, selling gambling to gambling addicts is an irrational money-printing machine, but civilization has not collapsed.)
> please explain the Tesla stock, 100% hype driven, there is zero correlation between the stock price and how the company is actually doing.
They do still have the best electric cars on the market.
We are living in a strange time because there is a combination of fear and greed in the market.
Yesterday I learned that market leverage (credit) is trending toward the levels of the dot-com bust and the Great Recession (https://en.macromicro.me/collections/34/us-stock-relative/89...)
When the tide goes out, the real swimmers remain and the tourists leave.
I quite enjoy that time in tech specifically, though I'm sure other industries face similar in-and-out flows.
Has it always been this way? Or am I just slow on the uptake?
My theory is that we are simply too wealthy. For the average rich person (the top 20-30%), the first $70,000 goes to real companies that sell real things for money. Past that, it's all just kinda funny money. In our society there are actually a ton of people making this kind of money, so there's this huge chunk of money that isn't really tied to anything real. If you lose $100k on cryptocurrency, or you spend $800 on some metaverse thing, it's fine; you can still buy food.
Of course, this is also why people who don't make this kind of money are so baffled (and angry). Hearing that some no-product company [1] was sold for billions of dollars feels insane when you're seriously weighing the difference between a $5000 and a $6000 car.
The top 20-30% cannot spend $100k per year as funny money; that is more like the top 5%, and probably more like the top 2%.
High earners in the top 20-30% also often live in expensive areas, so their dollar doesn't go as far as it would in more affordable places (housing, food, etc.).
If you make a yearly gross salary of $100K, you're already in the top 11%. With $200K you're in the top 3%. Inequality also leads to social segregation, which means we live in bubbles where most people are "like us" and it's very difficult to see how privileged we actually are.
Read Piketty's "Capital in the Twenty-First Century" to learn more about how crazily unequal the world, and especially the US, is becoming. Phenomena like Trump are easier to understand with that in mind.
With the median income around $50k and many households owning two cars... much of the country is actively living beyond its means just to get to work.
We have built some sort of pseudo-luxury economy that seemingly has to be upkept by mandatory good vibes.
Maybe 0.1% to 1% of us are, but the rest of us aren't.
Funny money.
1: All stats are 2022 tax year, and are AGI per tax return, so excluding things like 401(k), based on https://www.irs.gov/statistics/soi-tax-stats-individual-stat... and defining household as "taxpaying unit"- e.g. married filing separately would count as two, at two separate levels of income
[1]: https://www.wsj.com/livecoverage/stock-market-today-dow-sp50...
Income inequality is wild right now, but even the lower classes regularly participate in some absurd sorts of money grabs...
At the top of the market, the billion-dollar companies that have perfected this sort of private-equity capitalism have optimized every everyday luxury into something that can bleed anyone dry... not just the rich. WeWork or Wish goes down the tubes and it's on to the next one; the founders rocket off and some unlucky investors are stuck holding the bag.
Every category of daily life now has some optimized extraction mechanism.
Anyway, it's sorta always been this way. That AI is very much self-hyping might make this round worse, but on the other hand, non-European economies now have more ability to make sure the fallout is mostly confined to the investors.
Poor old people don't like this kind of change because it has historically come at their expense. Young people don't like this new tech because they recognize it (to varying degrees of correctness) as pulling up the ladder on entry-level positions, decimating the social and knowledge landscape (particularly the treacherous, abyssal seas that used to be the World Wide Web), and drawing capital away from green tech and social investment and into climate-change-exacerbating energy usage and construction. But they don't have the capital to dictate investment decisions, so no one cares.
I don't think this changes without some sort of economic "catastrophe" that redistributes wealth fast enough that the current arbiters can't get their legs back under themselves in time to prevent it. That's why you keep seeing all of these weird and novel tactics to forestall even the hint of a recession, and why so many young people are practically begging for, say, a repeat of 2008.
Edit: Also, I really like this site's layout on my phone. It feels fresh and performant.
Ageism is never a good look. There are smart and stupid people in every age group.
Refusing to take responsibility isn't a good look either. Neither is derailing with respectability gripes. Let's solve the problem at hand.
You want me to take responsibility for how other people in my age group vote (not that I am a boomer, I'm not)? That's your argument?
Thanks but no thanks for this ridiculous comment thread.
In the liberal worldview, private property has prior existence. The common good is understood as a concession, a derivative good composed of that which is ceded from private property. Human beings are viewed as atomized units, and society is consequently viewed as a fluctuating miasma of transactional relationships.
In the classical view, the common good has prior existence, and private property exists for the sake of the common good (we avoid a whole lot of grief and social strife by having private property; properly disposed, it is a successful means of distribution). Capital and labor are not construed as necessarily opposed. Rather, in a society in which cooperative relationships for mutual benefit are the rule (in place of a market driven by exploitation), capital and labor are friends. Both have skin in the game by assuming risk. In the classical view, workers are owed at least a family wage as a matter of justice. If we have a billionaire who fails to pay his employees adequately, then we have someone who quite literally has robbed his employees. Human relationships are not confined to merely the transactional, and we have duties toward society that precede our consent.
According to the liberal view, if a famine strikes a region and some guy has a warehouse full of food, it would be theft for the starving to take the food in that warehouse to survive, and theft, of course, is not morally permissible (it is absurd to claim otherwise; it's theft!). Meanwhile, according to the classical view, private property is not fixed absolutely. As you recall, it exists for the sake of the common good. So, in such a case, the food in that warehouse is not absolutely determined as private property; private property is derived and ordered toward the common good. It would not be theft for the starving to take food from that warehouse, as the food quite literally belongs to them! (This is an extreme example, but I include it to demonstrate how the consequences of each stance play out.)
> Also notable is Gregory Kavka, who argues, “Though it is rarely noticed, Hobbes is a bit of an economic liberal, that is, he believes in some form of the welfare state and in the redistributive taxation needed to support it.” .... But Kavka’s analysis has two significant flaws. First, ... Hobbes dedicates arguably greater attention to the problems associated with excessive wealth. ... The second problem with Kavka’s analysis is that it ignores Hobbes’s rich moral psychology that is integral to Hobbes’s understanding of the problems associated with wealth, poverty, and inequality.
> Hobbes’s political program prioritizes peace above all. Along these lines, it is essential when considering Hobbes on economics that one understands how poverty, concentrated wealth, and inequality can obstruct peace. This is one of the fundamental lessons Hobbes likely learned in the decades leading up to the English Civil War – a war at least partly facilitated by the economic upheaval, the impoverishment of many along with the enriching of others, the concentration of wealth, and the systemic inequality. Hobbes acknowledges some of this in his own history of the Civil War, Behemoth.
> While Hobbes is surely concerned about the problems poverty pose for his commonwealth, he expresses even greater concern about concentrated wealth. His earliest discussion of wealth can be found in his Briefe of the Art of Rhetoric, written in 1637, his summary of Aristotle’s Rhetoric.
Classical liberals are not the same people as modern libertarians or minarchists.
Tech companies are just reacting to market forces. The author invokes FB/Meta, but Zuck was blazing his own shitty trail with the metaverse until the market punished him beyond all reason (relative to the cash flow the ads business was still generating). He only jumped on the AI hype train when he was forced to. It's been a long time since fundamental analysis à la Warren Buffett was the core of prudent investing. But I think this is a result of human nature --- a large percentage of people just like gambling, even when the expected value of their wagers is negative.
I also get the anger the author feels. As a tech worker, my professional life is dictated by the whims of the market. But all the attention from financiers is also a result of the immense wealth the industry has attracted (or generated, depending on your perspective), so it's hard to complain without self-consciousness. Like many here, I probably live online a bit too much, and I try to see the silver lining in enshittification as a force that encourages me to live more in the real world, with my family, in the present.
I read The Society of the Spectacle too young and it broke me forever :(
[0] https://en.wikipedia.org/wiki/The_Society_of_the_Spectacle
https://www.youtube.com/watch?v=YZFTaEenaHM
Richard, listen--
No, you listen to me, Jack. You promised me that you would never compromise the product. So do you feel like taking some action and backing me up on this? Because me and my product feel pretty compromised right now.
Richard, I don't think you understand what the product is. The product isn't the platform, and the product isn't your algorithm either, and it's not even the software. Do you know what Pied Piper's product is, Richard?
Is--is it me?
Oh God, no, no. How could it possibly be you? You got fired! Pied Piper's product is its stock.