Take Google or Meta: Today Google makes a shit-tonne of money, and to make that money they need to run some servers. The servers are extremely cheap relative to the revenue they make running the business. This makes them a very attractive stock - that's the core of why SaaS looks great. Now let's assume the monopoly path. Google can win. I think they likely will win. But now they're going to be spending... how many hundreds of billions constantly training new models? The cost of providing the service suddenly isn't small relative to the revenue they're getting. So even for them it looks awful for their valuation.
We know LLM companies have, for lack of a better word, "sidestepped" the copyright on millions of works with their "transformative fair use" arguments. Are LLMs also a way to sidestep patents?
I don't understand why he thinks OpenAI can't be one half of the duopoly or become the monopoly. OpenAI's models are always the first or second best overall - usually the first. They are also leading in the consumer market by a wide margin. They also made a strategic decision that is paying off, which was committing to more compute early on, while Anthropic is hammered by a lack of compute.
PS. They've raised ~$200b total, not $1 trillion.
I could see people saying this in 2022, but now? No chance.
Chinese models keep demonstrating that SOTA can be approximated for a fraction of the cost. The innovation out of these companies keeps showing diminishing returns, with a greater emphasis on the tooling and application layer. Having the right workflow with the right data is more important than having the right model. We could freeze AI now, and I'd bet good money that the current state of things is good enough to be - not first - but competitive for the next few years.
Even if we do end up with an oligopoly situation, it'll be less like Microsoft in the 90s and more like Microsoft now, where they just give out Windows for free, have support for WSL, and the focus is on cloud services rather than their OS.
I'm constantly amazed how this AGI/monopoly narrative can be kept up so long in the West, it just doesn't make sense (unless the state creates said monopoly by forbidding competition).
In other comments people mention the "flywheel" of data and money feeding training, but there's a view that at some point the baseline open-weight models are "good enough" that the money will dry up.
> baseline open-weight models are "good enough" that the money will dry up.
I take a different view. Open-weight models aren't going to be free forever. At some point, open-weight model labs will also have to make money. My guess is that the industry will consolidate. The winners will absorb the losers and focus on generating revenue.
Therefore, there will be a growing gap between open and free models and the proprietary SOTA models.
The ones that are already released are, and they're already very good for most purposes and can be fine-tuned indefinitely, including months or years down the line when processes have been optimized and things aren't as compute-heavy as they are now.
If there is consolidation by absorption, that derisks attempting to challenge the SOTA providers, and so they will keep facing attempts.
Claude is kicking ass in the niche of coding and processes.
1 trillion is a lot of money for something that's not differentiated and protected in a massive market.
Does it look like OpenAI has that in place?
Cuban thinks they don't, and won't.
Claude is kicking ass in coding but it seems like Codex is catching up fast. Claude Code's PR has taken a hit recently due to the lack of compute forcing Anthropic to dumb down the models. Codex has been gaining momentum.
Chip manufacturing isn't really differentiated either - that didn't stop TSMC from becoming the monopoly for high-end chip nodes, capturing 90%+ of the advanced chip market. The reason they did is that Rock's Law makes it too expensive to build the next node unless you've generated enough revenue from the current node. I don't see why it isn't the same for SOTA models.
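The Rock's Law argument above can be sketched in a few lines: if the cost of each generation (fab node, or by analogy a frontier training run) roughly doubles, only players who retained enough margin from the current generation can fund the next one. This is a toy model, not a real forecast - the function names, dollar figures, and the 50% retained-margin assumption are all made up for illustration.

```python
# Toy sketch of the Rock's Law winner-takes-most dynamic.
# Assumptions (illustrative only): generation cost doubles each cycle,
# and a player can reinvest half of its current-generation revenue.

def generation_costs(initial_cost_b: float, generations: int,
                     doubling: float = 2.0) -> list[float]:
    """Cost (in $B) of each successive generation under Rock's Law."""
    return [initial_cost_b * doubling ** n for n in range(generations)]

def survivors(players: dict[str, float], costs: list[float],
              margin: float = 0.5) -> dict[str, float]:
    """Each generation, a player must fund the next build-out out of
    retained margin on its revenue ($B); everyone else drops out."""
    alive = dict(players)
    for cost in costs:
        alive = {name: rev for name, rev in alive.items()
                 if rev * margin >= cost}
    return alive

costs = generation_costs(10, 4)  # $10B, $20B, $40B, $80B per generation
players = {"leader": 200.0, "challenger": 90.0, "startup": 15.0}
print(survivors(players, costs))  # only "leader" clears the $80B bar
```

With these made-up numbers the challenger funds three generations but can't clear the fourth, which is the TSMC-style consolidation the comment describes.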
Machine learning has no real moat. There's no network effect, it's not hard (you can just throw money at the problem). It's not data, because we have an existence proof that general intelligence can be trained by a few humans and a shelf full of books. The compute to do it is generally available. As soon as one organization releases open weights, everyone can use it immediately, even on modest local hardware.
The ones killing on ads are Google, Meta, and Amazon.
I just don't see how ChatGPT will gobble up that market share - ads are increasingly tied to sales attribution, and it would require a complete shift of the market for ChatGPT to take over the role of those 3 players.
People will still try to look for content around the products they buy, or will shop for prices, or will look for feedback from other users of the product.