The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic. The article follows on with:
>In the last two years, about 60 percent of the stock market’s growth has come from AI-related companies, such as Microsoft, Nvidia, and Meta.
Which is a statement that's been broadly true since 2020, long before ChatGPT started the current boom. We had the Magnificent Seven, and before that the FAANG group. The US stock market has been tightly concentrated around a few small groups of companies for decades now.
>You see it in the business data. According to Stripe, firms that self-describe as “AI companies” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.
The current Venn diagram of "startups" and "AI companies" is two mostly concentric circles. Again, you could have written the following statement at any time in the last four decades:
> According to [datasource], firms that self-describe as “startups” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.
I'm just some guy with an opinion. I worked in startups for 20 years. Startups are called exciting or thriving or good bets because a tiny few are successful, and even fewer are trying to compete with established companies. Capital is pumped into lots of little ones regardless of the technology du jour or the market opportunity. They generally don't last. Statistically, if you were an AI startup from 6 years ago, you're long gone and you made out with what you could scrape together on the way out. Startups are thriving by feeding off dreams of grandeur, with very few happening upon the right combination of personality, capability, and market to last for a decade. Is that thriving or thrashing? Don't confuse the velocity of gambling with the volume of opportunity.
[0]: https://www.ycombinator.com/companies?batch=Summer%202025 (search for "AI" and it gives 120)
Meanwhile, I hear pretty much any startup not associated with AI is finding it harder to get funding.
What do you base this statement on? Is there data?
Part of the problem is that the remaining set of industries is pretty tough to make dynamic using technology, simply because the explosive market size isn't there for various reasons. If you wanted to disrupt aviation, for example, a plane takes tens of billions of dollars to bring to market, and an airline requires outlaying billions in capex on planes.
2. If people think they can get an abnormally high return, they will invest more than otherwise.
3. Whatever other money would've gotten invested would've gone wherever it could've gotten the highest returns, which is unlikely to have the same ratio as US AI investments. The big tech companies did share repurchases for a decade because they didn't have any more R&D to invest in (according to their shareholders).
So while it's unlikely the US would've had $0 investment if not for AI, it's probably even less likely we would've had just as much investment.
Google is facing a significant danger that its search advertising business is going to just disappear. If people are using AI to find stuff or get recommendations, then they aren't using Google. Why should a photographer continue to spend $200 a month on ads when their clients are coming from OpenAI? Especially when OpenAI is using the organic search results. Meta is facing the same sort of issue: if the eyeballs aren't on Insta, then the ad $$$ go somewhere else.
So if they have money, and can get money, they will invest it to protect their businesses - all of it.
MS and VCs are doing the opposite: they are investing with the idea of taking the attention that Google and Meta currently have, but they are also following the "I'm scared" signal that Google and Meta have put in the market.
I doubt it. Investors aren't going to just sit on money and let it lose value to inflation.
On the other hand, you could claim non-AI companies wouldn't start a new bubble, so there'd be fewer returns to reinvest, and that might be true, but it's kind of circular.
> 2. If people think they can get an abnormally high return, they will invest more than otherwise.
Sounds like a good argument for wealth taxes to limit this natural hoarding of wealth absent unreasonably good returns.
The big US software firms have the cash and they would invest in whatever the market fad is, and thus, bring it into the US economy.
I'm sorry, what? This has happened all the time, and in increasing volleys, since 2008.
This doesn't seem to align with the behavior I've observed in modern VCs. It truly amazes me the kind of money that gets deployed into silly things that are long shots at best.
For the longest time, capex at FAANG was quite low. These companies are clearly responding specifically to AI. I don't think it's realistic to expect that they would raise capex for no reason.
>a statement that's been broadly true since 2020, long before ChatGPT started the current boom
I guess it depends on your definition of "long before," but the ChatGPT release is about mid-way between now and 2020.
As for the startup vs. AI company point, have you read Stripe's whitepaper on this? They go into detail on how it seems like AI companies are indeed a different breed. https://stripe.com/en-br/lp/indexingai
They also view labor as a replaceable cost, as most accountant-run companies that no longer innovate do. They forget that if you don't hire people and pay people, you don't have any sales demand, and this grows worse as the overall concentration of money in a few hands grows. Most AI companies are funded on extreme leverage from banks that are money-printing, and this coincided with 2020, when reserve requirements were set to 0%, effectively removing fractional-reserve banking as a system.
With this recessionary behavior, it might not be that far-fetched an assumption. I'm not sure where else the money would flow, outside of being hoarded up in assets, if there weren't this big speculation everyone wants to take advantage of. Especially when you consider that there's so much money flowing into AI, but AI is not as of yet profitable.
The dollars are being diverted elsewhere.
Intel, a chipmaker that could directly serve the AI boom, has failed to deploy its 2nm or 1.8nm fabs and has instead written them off. The next-generation fabs are failing. So even as AI attracts a lot of dollars, the money doesn't seem to be going to the correct places.
The USA lost mass manufacturing (screws and rivets and zippers), but now we are losing cream-of-the-crop, world-class manufacturing (Intel vs. TSMC).
If we cannot manufacture, then we likely cannot win the next war. That's the politics at play. The last major war between industrialized nations showed that technology and manufacturing were the keys to success. Now, I don't think the USA has to manufacture everything by itself, but it needs a reasonable plan for getting every critical component in our supply chain.
In WW2, that pretty much all came down to ball bearings. The future is hard to predict but maybe it's chips next time.
Maybe we give up on the cheapest of screws or nails. But we need to hold onto elite status on some item.
Definitely not! Wasn't trying to imply this.
> If we cannot manufacture then we likely cannot win the next war.
If you think a war is imminent (a big claim!), then our only chance is to partner with specialized allies that set up shop here (e.g. Taiwan, Japan, South Korea). Trying to resurrect Intel's vertically integrated business model to compete with TSMC's contractor model is a mistake, IMO.
The US completely controls critical steps of the chip making process as well as the production of the intellectual property needed to produce competitive chips, and the lithography machines are controlled by a close ally that would abide by US sanctions.
The actual warplanes and ships and missiles are of course still built in the USA. Modern warfare with the stuff that China makes, like drones and batteries, only gets you so far. They can't make a commercially competitive aviation jet engine without US and Western European suppliers.
And the US/NAFTA has a ton of existing manufacturing capability in a lot of the “screws and rivets” categories. For example, there are lots of automotive parts and assembly companies in the US. The industry isn’t as big as it used to be but it’s still significant. The US is the largest manufacturing exporter besides China.
And the [only] way to get that explosive growth is robotics. That is the Post-Post-Industrial Revolution that we're stepping into: manufacturing stops being separate from the knowledge-based economy and instead becomes part of it, as a physical-world form of the knowledge-based economy's output.
>The dollars are being diverted elsewhere.
The dollars are going in exactly the right direction: AI. After LLMs, companies like NVDA and Google are making the next steps: foundational world models and robotics.
>Intel a chip maker who can directly serve the AI boom
Intel is a managers' gravy train - just like for example Sun Microsystems was 20 years ago. Forget about it.
>Intel ... has failed to deploy its 2nm or 1.8nm fabs and instead written them off. So even as AI gets a lot of dollars it doesn't seem to be going to the correct places.
The dollars go to NVDA instead of Intel. That seems like exactly the correct place.
> The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic.
That's the really damning thing about all of this: maybe all this capital could have been invested into actually growing the economy, instead of fueling this speculative bubble that will burst sooner or later, dragging any illusion of growth down with it.
"Amen. And amen. And amen. You have to forgive me. I'm not familiar with the local custom. Where I come from, you always say "Amen" after you hear a prayer. Because that's what you just heard - a prayer."
At this point, everyone is just praying that AI ends up a net positive, rather than bursting and plunging the world into a 5+ year recession.
Nit: if this happened, I believe the treasury yield would plummet.
This is very common, and this happens in literally every country.
But their capex would be much smaller: if you look at current capex from Big Tech, most of it is NVidia GPUs.
If a bubble is happening, then when it pops, the depreciation applied to all that NVidia hardware will absolutely melt the balance sheets and earnings of all the cloud companies, and of companies building their own data centers like Meta and X.ai.
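To make the mechanics concrete, here's a minimal sketch with entirely made-up numbers (the capex figure, useful life, and impairment haircut are all assumptions, not any company's actuals):

```python
# Straight-line depreciation of a hypothetical GPU fleet, plus a bubble-pop
# impairment. Every number here is invented for illustration.
capex = 40e9           # assumed GPU capex for one year, in dollars
useful_life_years = 5  # assumed depreciation schedule

annual_depreciation = capex / useful_life_years
print(f"Annual depreciation expense: ${annual_depreciation / 1e9:.1f}B")

# If the bubble pops after two years and the remaining book value is marked
# down to an assumed 25% of its carrying amount, the write-down hits earnings
# all at once:
book_value = capex - 2 * annual_depreciation
impairment = book_value * (1 - 0.25)
print(f"One-time impairment charge: ${impairment / 1e9:.1f}B")
```

Multiply that across every hyperscaler's fleet and you get the earnings melt described above.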
Can anyone shed light on what is going on between these two groups? I wasn't convinced by the rest of the argument in the article, and I would like something that doesn't just rely on "AI" as an explanation.
In more uncertain scenarios small companies can't take risks as well as big companies. The last 2 years have seen AI, which is a large risk these big companies invested in, pay off. But due to uncertainty smallish companies couldn't capitalize.
But that's only one possible explanation!
LOL. It's paying off right now because There Is No Alternative. But at some point, the companies and investors are going to want to make back these hundreds of billions. And the only people making money are Nvidia, and sort of Microsoft, through selling more Azure.
Once it becomes clear that there's no trillion-dollar industry in cheating-at-homework-for-schoolkids, and Nvidia stops selling more in year X than in year X-1, people will very quickly realize that the last 2 years have been a massive bubble.
And I don't know, because I have about 60 minutes a week to think about this, and also good quantitative market analysis is really hard.
So whilst it may sound like a good riposte to go "wow, I bet you make so much money shorting!" knowing that I don't and can't, it's also facile. Because I don't mind if I'm right in 12, 24, or 60 months. FWIW, I thought I'd be right in 12 months, 12 months ago. Oops. Good thing I didn't attempt to "make money" in an endeavor where the upside is capped at 100% of your wager and the downside is theoretically infinite.
Mind you, this is a view that exists - a few large hedge funds and sell side firms currently hold negative positions/views on these companies.
However, the fact of the matter is, fewer people are willing to take that bet than the opposite view. So it is reasonable to state that view with care.
You might be right at the end of the day, but it is very much not obvious that this bet has not (or will not) pay off.
At any point in time the world thinks that those top 10 are unstoppable. In the 90's and early 00's... GE was unstoppable and the executive world was filled with acolytes of Jack Welch. Yet here we are.
Five years ago, I think a lot of us saw Apple and Google and Microsoft as unstoppable. But 5-10 years from now, I bet you we'll see new logos in the top 10. NVDA is already there. Is Apple going to continue its dominance or go the way of Sony? Is the business model of the internet changing such that Google can't react quickly enough? Will OpenAI (or any foundational model player) go public?
I don't know what the future will be but I'm pretty sure it will be different.
[1] https://www.visualcapitalist.com/ranked-the-largest-sp-500-c...
Typically, you probably need to go down to the top 25 of the S&P 500 rather than the top 10.
They are also all tech companies, which had a really amazing run during Covid.
They also resemble companies with growth potential, whereas other companies such as P&G or Walmart might've saturated their markets already.
Only 8 out of the 10 are. Berkshire and JP Morgan are not. It is also arguable whether Tesla is a tech company or a car company.
Apple is 22% of BRK’s holdings. The next biggest of their investments are Amex, BoA, Coke, Chevron.
They are not a tech company.
If you’re referencing Trump’s tariffs, they have only come into effect now, so the economic effects will be felt in the months and years ahead.
IMO this is an extremely scary situation in the stock market. The AI bubble burst is going to be more painful than the Dotcom bubble burst. Note that an "AI bubble burst" doesn't necessitate a belief that AI is "useless" -- the Internet wasn't useless and the Dotcom burst still happened. The market can crash when it froths up too early even though the optimistic hypotheses driving the froth actually do come true eventually.
Once users get hooked on AI and it becomes an indispensable companion for doing whatever, these companies will start charging the true cost of using these models.
It would not be surprising if the $20 plans of today are actually just introductory rate $70 plans.
So a big concern then? (Although not a death sentence)
Shouldn't something like Kubernetes or Android's flavor of open source be on the radar? Seems like there are plenty of large players that might turn their 4th place closed source API into a first place open ecosystem.
Even today, Azure and AWS are not really cheaper or better: for most situations, they're more expensive and less flexible than what can be done with open-source infrastructure. For companies who are successful at making software, Azure is more of a kneecap and a regret. Many switch away from the cloud, despite that process being deliberately painful, in a shocking mirror of how it was switching away from the Microsoft infrastructure of the past.
Distillation for specialisation can also raise the capacity of the local models if we need it for specific things.
So it's chugging along nicely.
The benefits have just not been that wide-ranging for the average person. Maybe I'm wrong, but I don't see AI hype as a cornerstone of US jobs, so there are no jobs to suddenly dry up. The big companies are still flush with cash on hand, aren't they?
If/when the fad dies, I'd think it would die with a whimper.
Self-driving cars and intelligent robotics are the real goldmine. But we still don't seem to have the right architecture or methods.
I say that because self-driving cars are entirely stagnant despite the boom in AI interest and resources.
Personally, I think we need a major breakthrough in reinforcement learning, computer vision (which is still mostly stuck at feed-forward CNNs), and few-shot learning. The transformer is a major leap, but it's not enough on its own.
In general, I do not agree that the economy is overleveraged on AI, just like it is not overleveraged on cryptocurrency. If the money dries up, I don't expect economy-wide layoffs.
I think you might be underestimating how many non-technical people are using LLMs daily.
That’s not correct. Did you mean something else?
1. Their profits could otherwise be down.
2. The plan might be to invest a bunch up front in severance and AI integration that is supposed to pay off in the future.
3. In the future that may or may not happen, and it'll be hard to tell, because it may pay off at the same time a recession would otherwise be hitting, which smooths it out.
It's almost as if it's not that simple.
Historically, the classic example of this is railway mania of the mid 19th century.[2] That started in 1830, with the Liverpool and Manchester Railway.[3] This was when the industrial revolution got out of beta. There were earlier railroads, but with dual tracks, stations, signals, tickets, schedules, and reasonably good steam engines, the Liverpool and Manchester worked like a real service. It was profitable. Then lots of others started building and over-building railroads, with varying degrees of success. See Panic of 1847.
It was really too early for good railroads. Volume production of steel didn't exist. Early railroads were built with wood and iron, not very well. Around 1880-1900, everything was rebuilt in steel, and got bigger, better, and safer.
Consider the early Internet. We had TCP/IP working across the US in the early 1980s, but it wasn't a big deal commercially for another 10-15 years, it wasn't everywhere until the early 2000s, and it wasn't out of bubble mode until 2010 or so.
[1] https://fortune.com/2025/08/06/data-center-artificial-intell...
But the only thing I've seen in my life that most resembles what is happening with AI (the hype, its usefulness beyond the hype, vapid projects, solid projects, etc.) is the rise of the internet.
Based on this I would say we're in the 1999-2000 era. If it's true what does it mean for the future?
AI is more-or-less replacing people, not connecting them. In many cases this is economically valuable, but in others I think it just pushes the human connection into another venue. I wouldn’t be surprised if in-person meetup groups really make a comeback, for example.
So if a prediction about AI involves it replacing human cultural activities (say, the idea that YouTube will just be replaced by AI videos and real people will be left out of a job), then I’m quite bearish. People will find other ways to connect with each other instead.
For very simple jobs, like working in a call center? Sure.
But the vast majority of all jobs aren't ones that AI can replace. Anything that requires any amount of context sensitive human decision making, for example.
There's no way that AI can deliver on the hype we have now, and it's going to crash. The only question is how hard - a whimper or a bang?
I moved states and Xfinity was billing me for the month after I cancelled. I called, pressed 5 (or whatever) for billing. "It looks like your cable modem is disconnected. Power-cycling your modem resolves most problems. Would you like to do that now?" No. "Most problems can be resolved by power-cycling your modem, would you like to try that now?" No, my problem is about billing, and my modem is off-line because I CANCELLED MY SERVICE! They asked three more times (for a total of five) before I could progress. For reasons I have now forgotten, I had to call back several times, going through the whole thing again.
There are names for someone who pays no attention to what you say, and none of them are complimentary. AI is, fundamentally, an inhuman jerk.
(It turned out that they can only get their database to update once a month, or something, and despite the fact that nobody could help me, they issued me a refund in a month when their database finally updated. The local people wanted to help, but could not because my new state is in a different region and the regions cannot access each other.)
Klarna would like a word.
> Anything that requires any amount of context sensitive human decision making, for example.
That describes a significant percentage of call center work.
The only reason that we can have such nice things today like retina display screens and live video and secure payment processing is because the original Internet provided enough value without these things.
In my first and maybe only ever comment on this website defending AI, I do believe that in 30 or 40 years we might see this first wave of generative AI in a similar way to the early Internet.
This bubble also seems to combine the worst of the two huge previous bubbles; the hype of the dot-com bubble plus the housing bubble in the way of massive data center buildout using massive debt and security bundling.
These things, as they are right now, are essentially at the performance level of an intern or recent graduate in approximately all academic topics (but not necessarily practical topics), that can run on high-end consumer hardware. The learning curves suggest to me limited opportunities for further quality improvements within the foreseeable future… though "foreseeable future" here means "18 months".
I definitely agree it's a bubble. Many of these companies are priced with the assumption that they get most of the market; they obviously can't all get most of the market, and because these models are accessible to the upper end of consumer hardware, there's a reasonable chance none of them will be able to capture any of the market because open models will be zero cost and the inference hardware is something you had anyway so it's all running locally.
Other than that, to the extent that I agree with you that:
> GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives
I do so only in that not everyone wants (or would even benefit from) a book-smart-no-practical-experience intern, and not all economic tasks are such that book-smarts count for much anyway. This set of AI advancements didn't suddenly cause all cars manufacturers to suddenly agree that this was the one weird trick holding back level 5 self driving, for example.
But for those of us who can make use of them, these models are already useful (and, like all power tools, dangerous when used incautiously) beyond merely being coding assistants.
No, but GenAI in its current form is insanely useful and is already shifting the productivity gears to a higher level. Even without 100% reliable "agentic" task execution and AGI, this is already some next-level stuff, especially for non-technical people.
How do people trust the output of LLMs? In the fields I know about, the answers are sometimes impressive, sometimes totally wrong (hallucinations). When the answer is correct, I always feel like I could have simply googled the issue, and some variation of the answer lies deep in the pages of some forum or Stack Exchange or Reddit.
However, in the fields I'm not familiar with, I'm clueless how much I can trust the answer.
1. For coding (and the reason coders are so excited about GenAI): it can often be 90% right, and it's doing all of the writing and researching for me. If I can shift my work from typing/writing to reviewing/editing, that's a huge improvement day to day. And the other 10% can be covered by tests or human-written code to verify correctness.
2. There are cases where 90% right is better than the current state. Go look at Amazon product descriptions, especially things sold from Asia in the United States. They're probably closer to 50% or 70% right. An LLM being "less wrong" is actually an improvement, and while you might argue a product description should simply be correct, the market already disagrees with you.
3. For something like a medical question, the magic is really just taking plain language questions and giving concise results. As you said, you can find this in Google / other search engines, but they dropped the ball so badly on summaries and aggregating content in favor of serving ads that people immediately saw the value of AI chat interfaces. Should you trust what it tells you? Absolutely not! But in terms of "give me a concise answer to the question as I asked it" it is a step above traditional searches. Is the information wrong? Maybe! But I'd argue that if you wanted to ask your doctor about something that quick LLM response might be better than what you'd find on Internet forums.
Treat it like a brilliant but clumsy assistant that does tasks for you without complaint – but whose work needs to be double checked.
But I've seen some harnesses (i.e., whatever Gemini Pro uses) do impressive things. The way I model it is like this: an LLM, like a person, has a chance of producing wrong output. A quorum of people, plus some experiments/study, usually arrives at a "less wrong" answer. The same can be done with an LLM, and to an extent is being done by things like Gemini Pro and o3 with their agentic "eyes" and "arms". As the price of hardware and compute goes down (if it does, which is a big "if"), harnesses will become better by being able to deploy more computation, even if the LLM models themselves remain at their current level.
Here's an example: there is a certain kind of work we haven't yet quite figured out how to have LLMs do: creating frameworks and sticking to them, e.g. creating and structuring a codebase in a consistent way. But, in theory, if one could have 10 instances of an LLM "discuss" whether a function in code conforms to an agreed convention, that would solve the problem.
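A minimal sketch of that quorum idea, where `ask_llm` is a hypothetical stand-in for a call to whatever model/client you use (not a real API):

```python
# Majority vote over N independent LLM calls on a yes/no conformance question.
from collections import Counter

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: one call to your model of choice, ideally
    # with temperature > 0 so the samples are roughly independent.
    raise NotImplementedError

def quorum_verdict(question: str, n: int = 10) -> str:
    votes = Counter(ask_llm(question).strip().lower() for _ in range(n))
    return votes.most_common(1)[0][0]  # the majority answer wins

# Hypothetical usage: ask whether a function conforms to the convention.
# verdict = quorum_verdict(
#     "Does this function follow our naming convention? Answer yes or no.\n"
#     + function_source
# )
```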
There are also avenues of improvement that open up with more computation. Namely, today we use "one-shot" models: you train them, then you use them many times. But the structure, the weights of the model, aren't being retrained on the output of their actions. Doing that on a per-model-instance basis is also a matter of having sufficient computation at some affordable price. Doing that on a per-model basis is practical already today; the only limitations are legal terms, NDAs, and regulation.
I say all of this objectively. I don't like where this is going; I think this is going to take us to a wild world where most things are gonna be way tougher for us humans. But I don't want to (be forced to) enter that world wearing rosy lenses.
Of course you don't trust the answer.
That doesn't mean you can't work with it.
One of the key use cases for me other than coding is as a much better search engine.
You can ask a really detailed and specific question that would be really hard to Google, and o3 or whatever high end model will know a lot about exactly this question.
It's up to you as a thinking human to decide what to do with that. You can use that as a starting point for in depth literature research, think through the arguments it makes from first principles, follow it up with Google searches for key terms it surfaces...
There's a whole class of searches I would never have done on Google because they would have taken half a day to do properly that you can do in fifteen minutes like this.
> There are some classic supply chain challenges such as the bullwhip effect. How come modern supply chains seem so resilient? Such effects don't really seem to occur anymore, at least not in big volume products.
> When the US used nuclear weapons against Japan, did Japan know what it was? That is, did they understood the possibility in principle of a weapon based on a nuclear chain reaction?
> As of July 2025, equities have shown a remarkable resilience since the great financial crisis. Even COVID was only a temporary issue in equity prices. What are the main macroeconomic reasons behind this strength of equities.
> If I have two consecutive legs of my air trip booked on separate tickets, but it's the same airline (also answer this for same alliance), will they allow me to check my baggage to the final destination across the two tickets?
> what would be the primary naics code for the business with website at [redacted]
I probably wouldn't have bothered to search any of these on Google because it would just have been too tedious.
With the airline one, for example, the goal is to get a number of relevant links directly to various airline's official regulations, which o3 did successfully (along with some IATA regulations).
For something like the first or second, the goal is to surface the names of the relevant people / theories involved, so that you know where to dig if you wish.
For one thing, AI cannot even count. Ask Google's AI to draw a woman wearing a straw hat. More often than not, the woman is wearing a well-drawn hat while holding another in her hand. Why? Frequently she has three arms. Why? Tesla's self-driving vision couldn't differentiate between the sky and a light-colored tractor trailer turning across traffic, resulting in a fatality in Florida.
For something to be intelligent, it needs to be able to think and to evaluate the correctness of its own thinking. Not just regurgitate old web scrapings.
It is pathetic, really.
Show me one application where black-box LLM AI is generating a profit where an effectively trained human or a rules-based system couldn't do the job better.
Even if AI is able to replace a human in some tasks, this is not a good thing for a consumption-based economy with an already low labor force participation rate.
During the first industrial revolution, human labor was scarce, so machines could economically replace and augment labor and raise standards of living. In the present time, labor is not scarce, so automation is a solution in search of a problem, and a problem itself if it increasingly leads to unemployment without universal basic income to support consumption. If your economy produces too much with nobody to buy it, then economic contraction follows. Already, young people today struggle to buy a house.

Instead of investing in chatbots, maybe our economy should be employing more people in building trades and production occupations, where they can earn an income to support consumption, including of durable items like a house or a car. Instead, because of the FOMO and hype about AI, investors are looking for greater returns by directing money toward sci-fi fantasy, and when that doesn't materialize, an economic contraction will result.
I'm not sure how up to date you are, but most AIs with tool calling can do math. Image generation hasn't been producing weird artifacts since last year. Waymo sees >82% fewer injuries/crashes than human drivers [1].
RL _is_ modifying its behavior to increase its own profitability, and companies training these models will optimize for revenue when the wallet runs dry.
I do feel the bit about being economically replaced. As a frontend-focused dev, nowadays LLMs can run circles around me. I'm uncertain where we go, but I would hate for people to have to do menial jobs just to make a living.
[1]: https://www.theverge.com/news/658952/waymo-injury-prevention...
We trust them because they are intrinsically and extrinsically motivated not to mess up.
AI has no motivation.
Otherwise: common sense, a quick Google search, or letting another LLM evaluate it.
Sure, a lot of answers from LLMs may be inaccurate, but you mostly identify them as such because your ability to verify (using various heuristics) is good too.
Do you learn from asking people for advice? Do you learn from reading comments on Reddit? You still do, without trusting them fully, because you have sniff tests.
LLMs produce way too much noise and way too inconsistent quality for a sniff test to be terribly valuable in my opinion
YouTube videos aren’t much better. Minutes of fluff are added to hit a juicy 10 minute mark so you can see more ads.
The internet is a dead place.
The people who use LLMs to write reports for other people who use LLMs to read said reports? It may alleviate a few pain points, but it generates an insane amount of useless noise.
But once you get out of the tech circles and bullshit jobs, there is a lot of quality usage, as much as there is shit usage. I've met everyone from lawyers and doctors to architects and accountants who are using some form of GenAI actively in their work.
Yes, it makes mistakes, yes, it hallucinates, but it gets a lot of fluff work out of the way, letting people deal with actual problems.
I can't wait for the first massive medical mistakes from LLM reliance
What should I do with my ETF? Sell now, wait for the inevitable crash? Be all modern long term investment style: "just keep invested what you don't need in the next 10 years bro"?
This really keeps me up at night.
I don't know why Buffett sold a lot of shares over the last few years to sit on a huge pile of cash, but I could guess.
The job market looks like shit, people have no money to buy stuff, and credit card debt is skyrocketing. When people can't buy stuff, it is bad for the economy. Even if AI is revolutionary, we would still need people spending money to keep the economy going, and with more AI taking jobs, that wouldn't happen.
If AI doesn't work out, the market is going to crash, and the only companies keeping the market growing are going to wipe out all that growth.
No matter how I look at it I don't see a thriving market.
But let’s assume we can for a moment.
If we're living in a 1999 moment, then we might be on a curve like the Gartner Hype Cycle. And I assume we're on the first peak.
Which means that the "trough of disillusionment" will follow.
This is a phase in Hype Cycle, following the initial peak of inflated expectations, where interest in a technology wanes as it fails to deliver on early promises.
AI is real just like the net was real, but the current environment is very bubbly and will probably crash.
Same thing now with AI. The capital is going to dry up eventually; no one is profitable right now, and it's questionable whether they can be at a price consumers would be willing or able to pay.
Models are going to become a commodity. Just being an "AI company" isn't a moat, and yet every one of the big names is being invested in as if it is going to capture the entire market, or as if there will even be a market in the first place.
Investors are going to get nervous eventually and start expecting a return, just like in the .com era. Once everyone realizes AGI isn't going to happen, and that you aren't going to meet the expected return running a $200/month chatbot, it'll be game over.
Recent deja-vus are articles like this:
"The 20-Somethings Are Swarming San Francisco’s A.I. Boom" and
"Tech Billboards Are All Over San Francisco. Can You Decode Them?"
If I recall correctly, after the 2000 bust, folks fled Silicon Valley, abandoning their leased BMWs at SFO. The 101 had no traffic jams. I wonder if that will repeat this time around.
But if, or when, AI gets a little better, we will start to see a much more pronounced impact. The thing competent AIs will do is supercharge the rate at which profits go neither to labor nor to social security, and this time they will have a legit reason: "You really didn't use any humans to pave the roads that my autonomous trucks use. Why should I pay for the medical expenses of the humans, and generally for the well-being of their pesky flesh? You want to shut down our digital CEO? You first need to break through our lines of (digital) lawyers and ChatGPT-dependent bought politicians."
Like the Internet boom, it's both. The rosy predictions of the dotcom era eventually came true. But they did not come true fast enough to avoid the dotcom bust. And so it will be with AI.
The "bust" in this scenario would hit the valuations (P/E ratio) of both the labs and their enterprise customers, and AI businesses dependant on exponential cost/performance growth curves with the models. The correction would shake the dummies (poorly capitalized or scoped businesses) out of the tree, leaving only the viable business and pricing models still standing.
That's my personal prediction as of writing.
This chart is extremely sparse and very confusing. Why not just plot a random sample of firms from both industries?
I'd be curious to see the shape of the annualized revenue distribution after a fixed time duration for SaaS and AI firms. Then I could judge whether it's fair to filter by the top 100. Maybe AI has a rapid decay rate at low annualized revenue values but a slower decay rate at higher values, when compared to SaaS. Considering that AI has higher marginal costs and thus a larger price of entry, this seems plausible to me. If this is the case, this chart is cherry-picking.
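As a toy illustration of that cherry-picking worry, two synthetic "revenue" distributions (all parameters invented) can disagree on the median while the fatter-tailed one wins on a top-100 cut:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical cohorts: "SaaS" has the higher median, "AI" the fatter tail.
saas = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)
ai = rng.lognormal(mean=0.5, sigma=1.6, size=10_000)

for name, revenue in [("SaaS", saas), ("AI", ai)]:
    top100 = np.sort(revenue)[-100:]  # the filter the chart applies
    print(f"{name}: median={np.median(revenue):.2f}, "
          f"top-100 mean={top100.mean():.2f}")
```

Filtering on the top 100 here would make "AI" look dominant even though the typical firm does worse.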
https://x.com/dampedspring/status/1953070287093731685
> However, this pace is likely unsustainable going forward. The sharp acceleration in capex is likely behind us, and the recent growth rate may not be maintained. Any sustained weakness in final demand will almost certainly affect future investment, as AI demand ultimately depends on business revenues and profits, which are tied to nominal GDP. Realized and forecasted capex remain elevated, while free cash flow and cash and cash equivalents are declining for hyperscalers.
IMO this should be a red flag for investors: they have not been receiving their profits; instead, the profits are being dumped into CEOs' next big bets, which will fuel their stock-based compensation gains. The government is also culpable here, for creating tax incentives and for lacking laws that say profits must be returned as dividends (they can always be DRIP'd back into the company as new shares if desired; it's absurd to claim the current arrangement is better for investors when the alternative actually gives more choice).
In hindsight, it will be clear, and future generations (if any exist) will ask: "Why didn't you understand what was happening at the time?"
My answer: Noise. Just because you can find someone who wrote down the answer at the time, doesn't mean that they really understood the answer, at least not to the extent that we will understand with hindsight.
Future history is contingent.
Children in America do not starve to death. There is no famine, economically manmade or otherwise.
This is America. We will happily allow and encourage your child to go into arbitrary amounts of debt from a young age to be fed at school.
What does that even mean and what do you want changed?
And that's a whole different problem. Cheap != inexpensive.
https://www.ers.usda.gov/topics/food-nutrition-assistance/fo...
So the many people who have speculated that OpenAI has no moat are wrong.
Just like in the dot-com bubble, we'll need to wash out a ton of "unicorn" companies selling $1s for $0.50 before we see the long-term gains.
So is this just about a bit of investor money lost? Because the internet obviously didn't decline at all after 2000, and even the investors who lost a lot but stayed in the game likely recouped their money relatively quickly. As I see it, the lesson from the dot-com bust is that we should stay in the game.
And as for GPT-5 being on the exponential growth curve - according to METR, it's well above it: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
Because?
And while I do want your money, we can just look at LMArena, which does blind testing to arrive at an Elo-based score: it shows 4.0 with a score of 1318 while 4.5 has 1438, meaning the odds of 4.5 being judged better on an arbitrary prompt are roughly 2:1, and the difference is more significant on coding and reasoning tasks.
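For the record, the "roughly 2:1" figure falls straight out of the standard Elo expected-score formula applied to the two quoted scores:

```python
# Standard Elo expected-score formula, using the LMArena scores quoted above.
r_45, r_40 = 1438, 1318
p = 1 / (1 + 10 ** ((r_40 - r_45) / 400))  # P(4.5 judged better than 4.0)
print(f"P = {p:.3f}, odds = {p / (1 - p):.2f}:1")  # P = 0.666, odds = 2.0:1
```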
But this isn't 4.6. It's 5.
I can tell the difference between 3 and 4.
We have seen it before, again and again.
What about the software? What about the data? What about the models?
Shouldn't the customers' revenue also rise if AI fulfills its productivity promises?
Seems like the only ones getting rich in this gold rush are the shovel sellers. Business as usual.
Not necessarily, see the Jevons paradox.
Maybe not the profit but at least the revenue.
AI is propping up the US economy
This somewhat reflects my experience... I can totally see the back-and-forth dance taking longer in some cases.
But I also think there is more than efficiency being unlocked here. Sure, a developer might have cranked out a rough MVP in less time, but with this they're also often continuously updating a README, tests and other infrastructure as they go.
One could argue about whether that's its own footgun - relying on tests that don't really test what they should, and let critical bugs through later.
More people subscribe to/play with a $20/m service than own/admin state-of-the-art machines?! Say it ain't so /s
The problem is, $20/mo isn't going to be profitable without better hardware or more optimized models. Even the $200/month plan isn't making money for OpenAI. These companies are still in the "sell at a loss to capture market share" stage.
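Back-of-envelope, with entirely made-up numbers (the blended cost per million tokens and the usage level are assumptions, not OpenAI's actual economics), for how a flat $20/mo goes underwater on heavy users:

```python
# Flat-rate subscription revenue vs. per-token inference cost; all numbers
# here are invented for illustration.
price_per_month = 20.0    # flat subscription revenue, $/month
cost_per_1m_tokens = 5.0  # assumed blended inference cost, $ per 1M tokens
tokens_per_day = 400_000  # assumed heavy user's daily token volume

monthly_cost = tokens_per_day * 30 / 1e6 * cost_per_1m_tokens
print(f"Inference cost: ${monthly_cost:.0f}/mo vs ${price_per_month:.0f}/mo revenue")
# -> $60/mo of cost against $20/mo of revenue for this hypothetical user
```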
We don't even know if being an "AI Company" is viable in the first place - just developing models and selling access. Models will become a commodity, and if hardware costs ever come down, open models will win.
What happens when OpenAI, Anthropic, etc. can't be profitable without charging a price that consumers won't/can't afford to pay?
There, summed it up for you.