There's a good essay from Andrew Chen on this topic: Revenge of the GPT Wrappers: Defensibility in a world of commoditized AI models
"Network effects are what defended consumer products, in particular, but we will also see moats develop from the same places they came from the past decades: B2B-specific moats (workflow, compliance, security, etc), brand/UX, growth/distribution advantages, proprietary data, etc etc." [1]
Also check out the podcast with the team at Cursor/Anysphere for details on how they integrate models into workflows [2]
[1] https://andrewchen.substack.com/p/revenge-of-the-gpt-wrapper...
Moats are the logistics network that Amazon has: OK, spend $10bn over 5 years and then come at me, and that's assuming I sit still in the meantime.
Moats are what Google has in advertising: OK, pull 3% of the market for more money than God and see if it works.
Brand/UX is not a moat; it's table stakes.
1. Status symbols - my Lambo signifies that my disposable income is greater than your disposable income
2. Fan clubs - I buy Nikes because they do a better job at promoting great athleticism, and an iPhone to pay double for hardware from 3 years ago
3. Visibility bias - As a late adopter I use whatever the category leader is (i.e. ChatGPT = AI, Facebook = the Internet)
What you describe sounds more like market power resulting from a monopoly
Technology enables UX. When the underlying technology is a commodity, which is often the case, it's easy for competitors to copy the UX. But sometimes UX arises from the tight marriage of design and proprietary technology.
Good UX also arises from good organization design and culture, which aren't easy to copy. Think about a good customer support experience where the first agent you talk with is empowered to solve your issue on the spot, or there's perfect handoff between agents where each one has full context of your customer issue so you don't have to repeat yourself.
Except for the technical advantage of the M-series Macs, that's pretty much all of the Apple moat. The Apple brand and UX are what sell the hardware.
They make the UX depend on the number of Apple devices you have, so there's a bit of a network effect. But that's mostly still UX.
Lots of people these days just use ChatGPT to search the web. I've actually never understood Google search ads; I haven't clicked on one, even by accident, in 10+ years. If I want to buy something, I search for it within Amazon.
YouTube however, yeah, that is a stellar advertising platform.
If your visibility into the current state of AI is limited to hallucinating LLM output, it's worth digging a bit deeper; there is a lot going on right now.
I am almost in disbelief that LLMs are the thing that reached the "tipping point" for most companies to magically care about ML. The number of products that could have been built properly 5 years ago, and that exist now only in a slower form because of "reasoning" LLMs, is likely astonishing.
I'd hope that the very first commercially successful "AI media", be it a 1-minute commercial or a 10-minute TV segment or whatever, brings the lawsuits. I really want to know if I can feel any vindication about arguing about this (IP specifically) for the last 3 decades of my life.
More to your member-berries: whole swaths of interesting research disappeared, either abandoned or bought and closed-sourced. Genetic algorithms, artificial life, stuff with optics, 3-atom-thick transistors (hey, IBM patented that, but Microsoft also did basically the same thing with their STP qubits; everything has to be arranged at atomic widths or whatever). IBM also built a USS Enterprise out of atoms (unsure if it was the D; I'm not a huge fan), I forget if to scale, in like 2003. Microsoft spent 17 years playing catch-up with *the* hardware people.
yeah. is the conclusion that moneyed interests suck?
ML generally is for pattern recognition in data. That includes anomaly detection in financial data, for example. It is used in fraud detection.
Image ML/AI is used in your phone's facial recognition, in various image filtering and analysis algorithms in your phone's camera to improve picture quality or allow you to edit images to make them look better (to your taste, anyway).
AI image recognition is used to find missing children by analysing child pornography without requiring human reviewers to trawl through it - they can just check the much fewer flagged images.
AI can be used to generate captions on videos for the deaf or in text to speech for the blind.
There are tons of uses of AI/ML. Another example: video game AI. Video game upscaling. Chess and Go AI: NNUE makes Chess AI far stronger and in really cool creative ways which have changed high level chess and made it less drawish.
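To make the fraud-detection example above concrete, here is a minimal sketch (my own assumed setup, not anyone's production pipeline: each row of X stands in for one transaction's numeric features) of unsupervised anomaly detection with scikit-learn's IsolationForest:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy stand-in for transaction features (amount, hour of day, ...).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 2))
    X[:20] += 6  # a handful of unusual transactions

    # Assume roughly 0.5% of transactions are anomalous.
    model = IsolationForest(contamination=0.005, random_state=0).fit(X)
    flags = model.predict(X)              # -1 = anomaly, 1 = normal
    suspicious = np.where(flags == -1)[0]
    print(f"{len(suspicious)} transactions flagged for review")

The point is just that "AI" here is plain pattern recognition over tabular data, nothing like a chatbot.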
And even if there were, the fast follower to the Bitter Lesson is the Lottery Ticket Hypothesis: if you build a huge, general model to perform my task, I can quickly distill and fine-tune it to be better, cheaper, and faster.
"better" = "smaller, more specialized, domain-specific"
I still think the analogy between AI and databases is perfect. No database comes set up for the applications where it gets deployed, and the same is true for LLMs, except for some very broad chatbot stuff where the big players already own everything.
And if AI is just these chatbots, the technology is going to be pretty minor in comparison to database technology.
In the course of building my side project (storytveller), I've found that newer models tend to do worse at storytelling. I've tested basically every model under the sun that is available for production use, and one stands out. Now, it may be that others come along and don't do the research I've done to choose that same model, and so my application will be better than theirs in part because of the AI model I chose.
Having “AI” as part of your application will not matter as much, that I agree with, but having “the right AI” will.
Of course, the user experience will definitely matter as well, as will the marketing and other criteria: another point of agreement with the article. But that does not diminish the fact that, if your application isn't just chasing benchmarks, there is a good chance that a model that isn't the newest could still be the best for your particular use case, so you should not blindly choose the latest, shiniest model, as this article sort of implies.
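For what it's worth, "test it on your own use case" doesn't need to be elaborate. A rough harness like the sketch below is usually enough to see which model actually wins on your task (everything here is a placeholder I made up: the model names, the generate() client, and the scoring rubric are assumptions, not any particular provider's API):

    from statistics import mean

    CANDIDATE_MODELS = ["model-a", "model-b", "model-c"]   # placeholder names
    PROMPTS = [
        "Tell a short bedtime story about a lighthouse keeper.",
        "Write a two-paragraph mystery set on a night train.",
    ]

    def generate(model: str, prompt: str) -> str:
        """Placeholder: call whichever API serves this model."""
        raise NotImplementedError

    def score_story(text: str) -> float:
        """Placeholder: your own rubric (coherence, pacing, voice), human or automated."""
        raise NotImplementedError

    def rank_models():
        results = {m: mean(score_story(generate(m, p)) for p in PROMPTS)
                   for m in CANDIDATE_MODELS}
        # Highest average score on *your* prompts wins, regardless of leaderboard rank.
        return sorted(results.items(), key=lambda kv: kv[1], reverse=True)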
The hammer does matter.
Once an algorithm/technique is discovered, it becomes a free library to install.
Data and user-base are still the moat. The traffic that ChatGPT stole from Google is the valuable part.
1) Lots of players enter at the start because there are no giant walled gardens yet.
2) Being best in class will require greater and greater capex (like new process nodes) as things progress.
3) New classes of products will be enabled over time, depending on how performance improves.
There is more there, but, with regard to this post, what I want to point out is that CPUs were pretty basic commodities in the beginning, and it was only as their complexity and scale exploded that margins improved and moats became possible. I think it will play out similarly with AI: the field will narrow as efficiency and performance improve, and moats will become possible at that point.
That is true, but the current context around it (the rush to appropriate freely from others, without of course contributing or crediting anything back, not to mention the sludge that has exploded all over the internet) puts me in agreement with the OP. I'm more than sure that AI could be used in very intelligent ways by artists themselves, though, and I don't mean in a lazy way to cut corners and pump out content, but in a more deliberate way where the effort is visible (and I don't just mean visual arts).
I also believe that, just like humans, AI models will become specialized, so we'll have companies creating all kinds of special-purpose models trained on all the knowledge from particular domains and much better in those fields. Generic AI models cannot compete there either.
If (and that's a big if) LLM tech turns out to be the path to true AGI (not the OpenAI definition), everybody will be able to get there in time; the tech is well known and the research teams are notoriously leaky. If not, another AI winter is going to follow. In either case, the only ones who are going to make a major profit are the ones selling shovels during the gold rush: Nvidia. Well, and the owners who promised investors all kinds of ridiculous nonsense.
Anyway, the most important point, in my opinion, is that it's a bad idea to believe people who are financially incentivized to hype their AIs to unrealistic heights.
However, it turned out to be a very difficult and time-consuming process to move from a mostly-working MVP to a system that was safe across the vast majority of edge cases in the real world. Many competitors gave up because the production system took much longer than expected to build. Yet today, a decade or more into the race, self-driving cars are here.
Yet even for the winners, we can see some major concessions from the original vision: Waymo/Tesla/etc. have each strictly limited the contexts in which you can use self-driving, so it's not a 100% replacement for a human driver in all cases, and the service itself is still very expensive to run and maintain commercially, so it's not necessarily cheaper to get a self-driving car than a human driver. Both limitations seem likely to be reduced in the years ahead: the restrictions on where you can use self-driving will gradually relax, and the costs will go down. So it's plausible that fleets of self-driving cars will be an everyday part of life for many people in the next decade or two.
If AI development follows this path, we can expect that many vendors will run out of cash before they can actually return their massive capital investment, and a few dedicated players will eventually produce AIs that can handle useful subsets of human thoughtwork in a decade or so, for a substantial fee. Perhaps in two decades we will actually have cost-effective AI employees in the world at large.
In a limited fashion, though. We don't have generalized fully autonomous vehicles just yet.
(As always, the task of the hype man isn’t to maintain the bubble indefinitely, but just long enough for him to get his money out.)
There are more fundamental issues at play, where I see stock price fairly divorced from actual, tangible value, but the line still goes up because people are going to keep buying tulips forever, right?
It sucks because I think investing in the stock market takes away from dynamic investment in innovative startups and real R&D, and shifts capital towards shiny things.
I read the first one when it was posted here too and I don't get their point. It's a lot of words, but what are you trying to say?
If you just slap a ChatGPT backend onto your product, your competitors will do it too and you gain nothing without some additional innovation.
(2) On the other hand, Cursor's value is essentially gluing the two things together. If your data is already in the castle (e.g. my codebase and historical context of building it over time is now in Cursor's instance of Claude) then the software is very sticky and I likely wouldn't switch to my own instance of Claude. The author also addresses this noting that "how data flows in and out" has value, which Cursor does.
> The AI Code Editor - Built to make you extraordinarily productive, Cursor is the best way to code with AI.
Cursor is literally a VS Code fork + AI.
> unless you’re building or hosting foundation models, saying that AI is a differentiating factor is sort of like saying that your choice of database is what differentiated your SaaS product: No one cares.
Cursor is doing exactly what they say "no one cares" about.
It's bad writing (and thinking).
It seems Cursor did a bunch of things right, from choosing to base it on an already popular editor, to the vision and the specific ways they have integrated AI, to the flexibility of which models to use. No doubt there was some "early mover" advantage too.
Certainly the AI isn't their moat since it's mostly using freely available models (although some of their own too I believe), and it remains to be seen how much of a moat of any kind or early-mover advantage they really have. The AI-assisted coding market is going to be huge, and presumably will attract a lot more competition.
I'm old enough to remember when the BRIEF (basic reconfigurable interactive editing facility) editor (by Underware) took the world by storm, but where is it now?
Any other former BRIEF users/fans out there ?!
First mover advantage.
They are not safe against Microsoft, who have the resources to copy every feature that Cursor has into VS Code, can afford to offer it for "free" for a very long time, and have access to the exact same models as Cursor.
So not only does that tell you there is no moat; offering the best tools and models for free is exactly what "Extinguish" means in Microsoft's modern EEE strategy.
Cursor was released after Copilot
Rest assured, right now people are filing claims to the same old stuff, only now "with AI" tacked on. And rest assured, the rubber-stamping machine in the USPTO's basement is running 24/7 approving them.
Many key pieces of AI technology, like transformers, have patents. If you start trying to enforce your “…with AI” patent against Google, they’re just going to turn around and sue you for using their patented technology.
Start there, no matter the tool.
No. Stop! Please! I want my UX in an app to do the damn thing I precisely intend it to do, not what it believes I intend to do - which is increasingly common in UX design - and I hate it. It's a completely opaque black box and when the "magic" doesn't work (which it frequently does not, especially if you fall outside of "normal" range) - the UX is abysmal to the point of hilarity.
So if the UX feels increasingly framed in terms of what the "developers" see/want/believe to be profitable, and less from the actual user's perspective, that's because the UX was sketched by hustlers who see software development explicitly as a "React grind to increase MRR/ARR."
Granted, React would not be too helpful with the core video player engine, but actual video apps have lots of other features like comments, preview the next videos, behind-the-scenes notes, etc.
And then, if it's two-way video, you have a whole new level of state to track, and you'd definitely need to roll your own to make a good one.
This is pretty weird epistemological phrasing. It's a bit like saying "I want to know the truth, not just what I believe to be the truth!"
Which seems… pretty reasonable to me. Both involve the other party substituting some vaguely patronizing judgment of theirs for the first party’s.
Basically all applications these days are like this. Rather than assume users are sentient, intelligent beings capable of controlling devices and applications in order to achieve some goal, modern app design seems to be driven by a philosophy which views the operators of applications as imbeciles that require constant hand-holding and who must be given as little control and autonomy as possible. The analog world becomes more appealing every day because of this.
Nontechnical users do not have that mental model: they base their estimation of what a control should do on what problem they believe that control solves. The discrepancy starts with the misalignment between the user's mental model of the problem and how the software actually solves it. In that hypothetical system where some condition is preventing something from happening and there are one or two things you can do to mitigate it, the nontechnical user doesn't give a fuck how or why something failed if there's a different possible approach that won't fail. They just want you to solve their problem with the least possible amount of resistance, and don't have the requisite knowledge to understand what the program is telling them, let alone how it relates to the larger problem.
That's why developers often find UIs built for nontechnical users frustrating and "dumbed down". For users concerned only with the larger problem, who have no understanding of the implementation, giving them a bunch of implementation-specific controls is far dumber than trying to pivot when there's a stumbling block in the execution and still doing what needs to be done without user intervention. Moreover, even having that big mess of controls on the screen for more technically sophisticated users increases cognitive load and makes it harder for nontechnical users to figure out what they need to do.
It’s a frustrating disconnect, but it’s not some big trend to make terrible UIs as developers often assume. Rather, it’s becoming more common because UI and UX designers are increasingly figuring out what the majority of users actually want the software to do, and how to make it easier for them to do it. When developers are left to their own devices with interfaces, the result is frequently something other developers approve of, while nontechnical users find it clunky and counterintuitive. In my experience, that’s why so few nontechnical users adopt independent user-facing FOSS applications instead of their commercial counterparts.
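One way to read the "pivot instead of surfacing the failure" point in code terms (a generic sketch of my own, not taken from any app discussed here; the strategy names in the comment are made up):

    from typing import Callable, Iterable

    def solve_with_fallbacks(strategies: Iterable[Callable[[], bool]]) -> bool:
        """Try each approach in order; only involve the user if all of them fail."""
        for attempt in strategies:
            try:
                if attempt():
                    return True   # problem solved; the user never sees the detour
            except Exception:
                continue          # internal failure: pivot rather than surface it
        return False              # only now does the UI need to ask the user anything

    # e.g. solve_with_fallbacks([sync_over_wifi, sync_over_cellular, queue_for_later])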
First: AI requires an awful lot of resources, which in itself is a moat.
Second: having a moat doesn't prevent your service from being attacked. See Tesla.
Third: not having a moat doesn’t prevent your service from dominating. See TikTok.
It’s something that everyone has to implement because their products will be inferior without it. But it’s not something you can use to build a monopoly easily, and since everyone has to do it there will be many people racing to the bottom pushing the price down.
A Super Bowl Salesforce ad that a friend shared with me to get my comments. I still have no idea wtf this is or what AI has to do with it.