> AI unlocks what seems to be the future: dynamic, context-dependent generative UIs or something similar. Why couldn’t my watch and glasses be everything I need?
https://www.apple.com/watch/
https://www.apple.com/apple-vision-pro/
> The other problem is that at its core, AI is two things: 1) software and 2) extremely fast-moving/evolving, two things Apple is bad at.

Idk, my MacBook Pro is pretty great and runs well. "Fast-moving" here implies that as soon as you release something, there's some big paradigm shift that means you need to move even faster to catch up. I don't think that's the case, and where it is, the new software (LLMs) still needs to be distributed to end users and devices. So a company like Apple can pay money and build functionality to be the distributor of the latest models, and it doesn't really matter how fast they're created. Apple's real threat is a category shift in devices, which AI may or may not be part of.
I'm less certain about Amazon, but unless (insert AI company) wants to take on all the business risk of hosting governments, corporations, and hospitals on a cloud platform, I think Amazon can just publish their own models, buy someone else's, or integrate with multiple leading AI model publishers.
That makes the “category shift” difficult for Apple to execute well and difficult for competitors to gun for them. Microsoft is even worse off there because the PC OEMs relied on dying companies like Intel to deliver engineering for innovative things.
AWS, Azure, and GCP are doing the same stuff in different flavors. Google and Microsoft approach human facing stuff differently because they own collaboration platforms.
Apple and Microsoft are both flailing at the device level. Apple is ahead there as at least I can tell you what they are not doing well. Microsoft’s approach is so incoherent that it struggles to tell you what they are doing, period.
Apple could turn everything around overnight by quietly re-enabling the jailbreak community for a few years, or restoring the 2022 Hypervisor API entitlement for arbitrary VMs. Hopefully this does not have to wait for leadership changes.
Either of those actions would take the shackles off Apple's underutilized hardware and frustrated developers. The resulting innovations could be sherlocked back into new OS APIs under Apple guardrails, whence they could generate revenue via App Store software. Then retire the jailbreaks and silently thank OutsideJobs for uncredited contributions to Apple upstream.
At present, the only industry participants maximizing usage of Apple hardware are zero-day hoarders. Meanwhile, every passing day allows Qualcomm, Nvidia and Arm-generic/Mediatek to improve their nascent PC hw+OS stacks, whittling away at Apple's shrinking hardware lead.
I'm not sure Tim Cook is the guy to overrule that based on a vision of the future.
If Pixel phones (with inferior hardware) can run Debian Linux VMs with an external USB-C display, so can Apple tablets/phones. Apple and Google app stores have similar business incentives and antitrust constraints.
The problem that you and others with similar interests are running into is that you’re asking Apple to spend perhaps tens of millions of dollars to make a change that, frankly, almost nobody wants or cares about. I don’t want it or care about it whatsoever, nor does my grandma. That’s why this all plays out in court and in countries that want to stick a finger in the eye of American tech companies.
Anti-trust concerns tend to just be multi-billion dollar corporations (Apple, Meta, Epic, Netflix, etc.) arguing over who gets the slice of your wallet. None of these companies lower prices when they win court battles, experiences don’t get better, and as Apple in particular loses more and more control over the App Store they lose the ability, however flawed, to collectively bargain on behalf of regular folks against developers [1].
Can anyone point to a single major technology product/service/app, like Spotify or something, where, after Apple has ceded control over the App Store, the company has lowered prices, or perhaps instituted tougher privacy controls than Apple has demanded on the App Store?
Is there a single example?
[1] Items like forced private Sign in with Apple, or disclosing how data is used, don’t and won’t exist on “the Meta App Store” because as a single person or small group you’d rather have access to Facebook and you’ll give up data for it. But Apple can listen to users and then force Meta to comply with those demands, however flawed the situation may be and however self-serving Apple’s interests may be.
Epic?
VBucks are still $8 aren't they?
Do you wholeheartedly believe that this counts as lowering prices or providing improvements? Are the Vbucks still $8 or no?
Can you elaborate? I don't see what you're seeing.
What is the story with Copilot as an on-device feature of Windows? How does that relate to an “AI PC”? In my business, what does Copilot (on the PC) do? How about Copilot Chat? How do they both relate to Copilot for Office 365?
Answer: I have no fucking idea. It’s a big soup of stuff with the same name that dumps everything the company makes into one bowl. In a business, you’re going to make product decisions within your enterprise that fundamentally change the products, based on your privacy and security needs and what countries you are operating in.
Apple has articulated a vision/framework for what they are delivering on device, with outside 1st party help and with 3rd parties. They’ve laid out how they are accessing your proprietary data. They have also failed to deliver.
It’s complicated and difficult - I say fail in the “fail fast” sense, not as an insult. Where are the line(s) between Excel as a component of Windows, as a web service and as a node on the office graph?
If I need AI help integrated with the product to write Excel formulas, I think the way to get that from Microsoft is with Copilot for Office 365, which also accesses all of my data on the graph and can potentially leak stuff with web grounding. (Which for companies means you need to fix SharePoint governance and do lots of risk assessment #godbless)
I just go to ChatGPT.
Pretty sure that's the product requirement that drove MS Purview (previously: MS data protection?).
No business wants to take the time to do data classification. No business is going to do cool new stuff with sensitive data.
... Therefore, flip Microsoft Purview on and when you leak data you now have someone to point the finger at. And can do cool stuff.
Amazon is capturing massive amounts of the value in AI via AWS. They'll be fine. But for real I don't see a reason why Alexa is not using a good LLM now. Could just be infinitely better...
If they could gate it behind a "start chat session" command or something, I would be more excited. The basics ("play radio / start a timer / read from Audible") worked well for the longest time, and everything they do that adds friction there, by cannibalizing that experience, is extremely frustrating.
There's absolutely no reason why plugging in an LLM would break any of those features, and asking generic questions would be 100x better than "searching the web" for a shitty Quora or Alexa Answers result.
For example, I tried Google's Gemini a while ago instead of Google Assistant on my phone and it was unable to do basic things like 'open the Signal app' and would instead go on a big tangent about how it can't open the Signal app for me, but that I could open the Signal app by finding it on my home screen or if I don't have it installed I can download it from the play store etc.
I also don't typically ask generic questions. Ever, that I can remember.
Again, I don't want to dislike the idea. If people are really getting value from it, I would like them to continue to do so. But it seems to be a more expensive way to service use cases that were working just fine.
Large language models are too slow to use as real-time voice assistants. ChatGPT voice only barely works because they have to use a much worse (but faster) model to do it.
Alexa would be "a higher order infinity" better if it wasn't spying on you ...
I think AI could be commoditized. Look at DeepSeek stealing OpenAI's model. Look at the competitive performance between Claude, ChatGPT, Grok, and Gemini. Look at open weight models, like Llama.
Commoditized AI needs to be used via a device. The post argues that other devices, like watches or smart glasses, could be better positioned to use AI. But... your point stands. Given Apple's success with hardware, I wouldn't bet against them making competitive wearables.
Hardware is hard. It's expensive to get wrong. It seems like a hardware company would be better positioned to build hardware than an AI company. Especially when you can steal the AI company's model.
Supply chains, battery optimization, etc. are all hard-won battles. But AI companies have had their models stolen in months.
If OpenAI really believed models would remain differentiated then why venture into hardware at all?
They could manage years of AI-missteps while cultivating their AI "marketplace", which allows the user to select a RevShare'd third party AI if (and only if) Apple cannot serve the request.
It would keep them afloat in the AI-space no matter how far they are behind, as long as the iPhone remains the dominant consumer mobile device.
The only risks are a paradigm shift in mobile devices, and the EU which clearly noticed that they operate multiple uneven digital markets within their ecosystem...
What if [Japan|EU|US DOJ|South Korea] passes a law preventing OEMs from claiming user data as their property? If Apple really tries to go down the road of squeezing pre-juiced lemons like this, I think they're going to be called out for stifling competition and real innovation.
This is exactly what they've done: They offer SageMaker (and similar capabilities) for hosting smaller models that fit into a single instance GPU, and they have Bedrock that hosts a metric crap-ton of AWS and third party models. Many of the model architectures are supported for hosting fine-tuned versions.
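For concreteness, here's a minimal sketch of what calling a Bedrock-hosted third-party model looks like. The request body follows the Anthropic "messages" format Bedrock documents for Claude models; the specific model ID and region in the commented-out call are assumptions, not something from this thread.

```python
# Build a request body for a Claude model hosted on Amazon Bedrock.
# The "anthropic_version" and "messages" fields follow Bedrock's
# documented Anthropic messages format.
import json

def build_claude_request(prompt, max_tokens=512):
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_claude_request("Summarize our Q3 incident reports.")

# With AWS credentials configured, the actual invocation would look
# roughly like this (model ID and region are illustrative assumptions):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.invoke_model(modelId="anthropic.claude-3-5-sonnet-20240620-v1:0", body=body)

print(json.loads(body)["messages"][0]["role"])
```

The point of the commodity play is visible in the shape of the code: swapping in a different vendor's model is mostly a matter of changing the model ID and payload schema, not the hosting.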
3) massive scam
> https://www.apple.com/watch/
(I am mostly going to comment on the Watch issue, as I have one.)
Apple makes a watch, yes. But is it an AI watch? Will they manage to make it become one? Intel made all kinds of chips. Intel's chips could even be used for mobile devices... only, Intel never (even still, to today) made a great mobile chip.
I have an Apple Watch--and AirPods Pro, which connect directly to it--with a cellular plan. I already found how few things I can do with my Watch kind of pathetic, given that I would think the vast majority of the things I want to do could be done with a device like my watch; but, in a world with AI, where voice mode finally becomes compelling enough to be willing to use, it just feels insane.
I mean, I can't even get access to YouTube Music on just my watch. I can use Apple's Music--so you know this hardware is capable of doing it--but a lot of the content I listen to (which isn't even always "Music": you can also access podcasts) is on YouTube. Somehow, the Apple Watch version of YouTube access requires me to have my phone nearby?! I can't imagine Google wanted that: I think that's a limitation of the application model (which is notoriously limited). If I could access YouTube Music on my watch, I would've barely ever needed my iPhone around.
But like, now, I spend a lot of time using ChatGPT, and I really like its advanced voice mode... it is a new reason to use my iPhone, but is a feature that would clearly be amazing with just the watch: hell... I can even use it to browse the web? With a tiny bit of work, I could have a voice interface for everything I do (aka, the dream of Siri long gone past).
But, I can't even access the thing that already works great, today, with just my watch. What's the deal? Is it that OpenAI really doesn't want me to do that? These two companies have a partnership over a bunch of things--my ChatGPT account credentials are even something embedded into my iPhone settings--so I'd think Apple would be hungry for this to happen, and should've asked them, thrown it in as a term, or even done the work of integrating it for them (as they have in the past for Google's services).
This feels to me like Apple has a way they intend me to use the watch, and "you don't need to ever have your phone with you" is not something they want to achieve: if they add functionality that allows the Watch to replace an iPhone, they might lose some usage of iPhones, and that probably sounds terrifying (in the same way they seem adamant that an iPad can't ever truly compete with a MacBook, even if it is only like two trivial features away).
Apple is focusing on a privacy-first approach with smaller models that run locally. Amazon is tying its models to an AWS subscription and incentivizing use by offering discounts, making it cheaper to use their models over GPT, Opus, etc.
It is probably cheaper to simply integrate with OpenAI or Anthropic or whoever might unseat them in the future, than spend $50B on training a model. Not only is it cheaper, but it also gives them the flexibility to ride the wave of popularity, without ceding hardware or software sales.
The way I see it, models were always predicated on openness and data-sharing. That, too, will be the competitive downfall of those who poured billions into creating said models.
They won't stay caged up forever. Ultimately the only thing OpenAI has between itself and its competitors is some really strong computers.
Well... anybody can buy strong computers. This is, of course, assuming you don't believe the promise of ever-increasing cognition and eventual AGI, which I don't. The people moving fast aren't going to be the winners. The second movers, and those later still, will be. They get all of the model at 1/100th of the cost.
Ultimately models, right now, for most consumers, are nothing more than novelties. Just look at Google Pixel - I have one, by the way. I can generate a custom image? Neat... I guess. It's a cool novelty. I can edit people out of my pictures? Well... actually Apple had that a couple years ago... and it's not as useful as you would think.
It's hard to see it because we're programmers, but right now, the AI products are really lacking for consumers. They're not good for music, or TV, or Movies, or short-form entertainment. ChatGPT is neat for specific usecases like cheating on an essay or vibe coding, but how many people are doing that? Very, very few.
Let me put it this way. Do I think Claude Code is good? Yes. Do I think Claude Code is the next Madonna? No.
There's a lot of difference between OpenAI and, let's say, Facebook (Llama). The difference between them is not only strong computers. There's architectural differences to the models.
It will be a sign of a maturing market when we see vendors actually say "bad for X" and pulling away from general-purpose messaging.
You see it a little bit with the angling for code-specific products, but I think we're nowhere near a differentiated market.
This is the real death knell people should focus on. Apple buried their AI R&D to rush complete flops like the Vision Pro out the door. Now that the dust has settled, the opportunity cost of those hardware ventures is clear. Apple had more than a decade to sharpen their knives and prepare for war with Nvidia, and now they're missing out on Nvidia's share of the datacenter market. Adding insult to injury, they're probably also ~10 years behind the industry's SOTA unless they hire well-paid veterans at great expense.
Apple's chronic disdain for unprofitable products, combined with boneheaded ambition, will be the death of them. They cannot keep smothering real innovation and competition while dropping nothingburger software and hardware products clearly intended to bilk an unconscious userbase.
Think of how important it is for any AI model company to be the go-to model on the iPhone. Google pays Apple billions to be the default search engine on the iPhone.
Am I supposed to keep waiting until that changes one day?
Unlike OpenAI, Apple doesn't need to charge a subscription to get AI-based revenue. It just needs to properly integrate it into its products that billions of people are already using, to make those products more useful so people continue buying them. At that point most users don't care what model is powering it - could be GPT, Claude, Mistral etc.
And also to hop off without any penalty if/when the wave collapses.
Edit: Yes it exists, seems to be built off qwen2.5 coder. Not sure it proves the point I thought it was, but diffusion LLMs still seem neat
Why? Because everyone else is doing it (and not making a profit btw)?
Something about incentive for people to buy a phone that looks and acts identical to a 5 year old phone otherwise
Source?
So is everyone else, to be fair. Chat is a horrible way to interact with computers — and even if we accept worse is better its only viable future is to include ads in the responses. That isn't a game Apple is going to want to play. They are a hardware company.
More likely someday we'll get the "iPhone moment" when we realize all previous efforts were misguided. Can Apple rise up then? That remains to be seen, but it will likely be someone unexpected. Look at any successful business venture and the eventual "winner" is usually someone who sat back and watched all the mistakes be made first.
Why? We interact with people via chat when possible. It seems pretty clear that's humanity's preferred interaction model.
We don't know what is better for this technology yet, so it stands to reason that we reverted to the lowest common denominator again, but there is no reason why we will, or will want to, stay there. Someone is bound to figure out a better way. Maybe even Apple. That business was built on being late to the party. Although, granted, it remains to be seen if that is something it can continue absent Jobs.
That's a good supporting argument, but I don't think McDonald's adequately represents more complex discussions.
It is better than nothing. It is arguably the best we have right now to make use of the technology. But, unless this is AI thing is all hype and goes nowhere, smart minds aren't going to sit idle as progression moves towards maturity.
"What burgers do you have?"
(expands to show a set of pictures)
"I'll have the thing with chicken and lettuce"
"What burgers do you have?"
(Thinking...) (4 seconds later:)
(expands to show a set of pictures)
"Sigh. I'll have the thing with chicken and lettuce"
(Thinking...) (3 seconds later:)
> "Do you mean the Crispy McChicken TM McSandwich TM?"
"Yes"
(Thinking...) (4 seconds later:)
> "Would you like anything else?"
"No"
(Thinking...) (5 seconds later:)
> "Would you like to supersize that?"
"Is there a human I can speak with? Or perhaps I can just point and grunt to one of the workers behind the counter? Anyone?"
It's just exasperating, and it's not easy to overcome until local inference is cheap and common. Even if you do voice recognition on the kiosk, which probably works well enough these days, there's still the round trip to OpenAI and then the inference time there. And of course, this whole scenario gets even worse and more frustrating anywhere with subpar internet.
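The latency budget the comment describes can be roughed out with some arithmetic. All the numbers below are illustrative assumptions about a cloud-backed voice kiosk, not measurements:

```python
# Rough per-turn latency budget for a voice kiosk backed by a hosted LLM.
# Every figure here is an illustrative assumption.
STAGES_MS = {
    "speech_to_text": 300,      # ASR on the kiosk or a streaming service
    "network_round_trip": 150,  # kiosk <-> hosted LLM API
    "llm_inference": 2500,      # time to finish a short reply
    "text_to_speech": 400,      # synthesizing the spoken response
}

def total_latency_ms(stages):
    return sum(stages.values())

per_turn = total_latency_ms(STAGES_MS)
print(f"per turn: {per_turn} ms")                     # 3350 ms
print(f"5-turn order: {5 * per_turn / 1000:.1f} s")   # 16.8 s
```

Even with generous assumptions, a multi-turn ordering flow spends a noticeable chunk of its time just waiting, which is exactly the "(Thinking...)" pauses in the dialogue above. Local inference would remove the network hop and shrink the biggest line item.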
But, now, remember, unlike humans, AI can do things like materialize diagrams and pictures out of "thin air" and can even make them interactive right on the spot. It can also do a whole lot of things that you and I haven't even thought of yet. It is not bound by the same limitations of the human mind and body. It is not human.
For what reason is there to think that chat will remain the primary mode of using this technology? It is the easiest to conceive of way to use the technology, so it is unsurprising that it is what we got first, but why would we stop here? Chat works, but it is not good. There are so many unexplored possibilities to find better and we're just getting started.
Chat is like the command line, but with easier syntax. This makes it usable by an order of magnitude more people.
Entertainment tasks lend themselves well to GUI type interfaces. Information retrieval and manipulation tasks will probably be better with chat type interfaces. Command and control are also better with chat or voice (beyond the 4-6 most common controls that can be displayed on a GUI).
I kinda disagree with this analogy.
The command line is precise, concise, and opaque. If you know the right incantations, you can do some really powerful things really quickly. Some people understand the rules behind it, and so can be incredibly efficient with it. Most don't, though.
Chat with LLMs is fuzzy, slow-and-iterative... and differently opaque. You don't need to know how the system works, but you can probably approach something powerful if you accept a certain amount of saying "close, but don't delete files that end in y".
The "differently-opaque" for LLM chatbots comes in you needing to ultimately trust that the system is going to get it right based on what you said. The command line will do exactly what you told it to, if you know enough to understand what you told it to. The chatbot will do... something that's probably related to what you told it to, and might be what it did last time you asked for the same thing, or might not.
For a lot of people the chatbot experience is undeniably better, or at least lets them attempt things they'd never have even approached with the raw command line.
Exactly. Nobody really wants to use the command-line as the primary mode of computing; even the experts who know how to use it well. People will accept it when there is no better tool for the job, but it is not going to become the preferred way to use computers again no matter how much easier it is to use this time. We didn't move away from the command-line simply because it required some specialized knowledge to use.
Chatting with LLMs looks pretty good right now because we haven't yet figured out a better way, but there is no reason to think we won't figure out a better way. Almost certainly people will revert to chat for certain tasks, like people still use the command-line even today, but it won't be the primary mode of computing like the current crop of services are betting on. This technology is much too valuable for it to stay locked in shitty chat clients (and especially shitty chat clients serving advertisements, which is the inevitable future for these businesses betting on chat — they can't keep haemorrhaging money forever and individuals won't pay enough for a software service).
Mac, iPad and iPhone, eventually Watch and Vision. Which makes sense since Apple is first and foremost a hardware company.
Aws is making strides but in a different area.
But it's complicated because commodities don't carry brand weight, yet there's obviously a brand power law. I (like most other people) use ChatGPT. But for coding I use Claude and a bit of Gemini, etc. depending on the problem. If they were complete commodities, it wouldn't matter much what I used.
A part of the issue here is that while LLMs may be trending toward commodity, "AI" isn't. As more people use AI, they get locked into their habits, memory (customization), ecosystem, etc. And as AI improves if everything I do has less and less to do with the hardware and I care more about everything else, then the hardware (e.g. iPhone) becomes the commodity.
Similar with AWS if data/workflow/memory/lock-in becomes the moat I'll want everything where the rest of my infra is.
People like screens. They like seeing IG pictures, they like scrolling through TikTok, they like seeing pictures/videos their friends/family send/post. I doubt many people will want to see pictures/videos on a watch screen or in glasses (which still have a ways to go).
Also I don't buy the premise of this article that Apple is deciding to take a backseat in AI, they were late to the party but they are trying (and failing it seems) to build foundational models. Reaching for OpenAI/Anthropic/etc while they continue to work on their internal models makes a lot of sense to me. It acknowledges they are behind and need to rely on a third-party but doesn't mean they won't ever use their own models.
Unless something changes (which is absolutely possible) it does seem we are headed towards LLMs being commodities. We will see what OpenAI/Ive end up releasing, but I don't see a near future where we don't have screens in our pockets, and for that Google and Apple are best placed. With the GPT-5 flop (it's 4.6 at best IMHO) I have fewer concerns about LLMs growing as quickly as predicted.
You can't get a consumer-grade GPU with enough VRAM to run a large model, but you can do so with MacBooks.
I wonder if doubling down on that and shipping devices that let you run third party AI models locally and privately will be their path.
If only they made their unified memory faster as that seems to be the biggest bottleneck regarding LLMs and their tk/s performance.
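The VRAM claim is easy to sanity-check with back-of-the-envelope weight-memory arithmetic. This sketch counts only the weights and ignores KV cache and activation overhead, so real requirements are somewhat higher:

```python
def weight_memory_gb(params_billion, bits_per_weight):
    """Approximate memory for the weights alone (no KV cache, no activations)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A 70B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{weight_memory_gb(70, bits):.0f} GB")
# 16-bit needs ~140 GB and even 4-bit needs ~35 GB, beyond any single
# consumer GPU's VRAM but within reach of a high-RAM Mac's unified memory.
```

The same arithmetic explains the tokens/sec complaint: at these sizes, generation is typically memory-bandwidth-bound, since every token requires streaming the full weight set through memory.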
You can if you're willing to trust a modded GPU with leaked firmware from a Chinese backshop
We may care about running LLMs locally, but 99% of consumers don't. They want the easiest/cheapest path, which will always be the cloud models. Spending ~$6k (what my M4 Max cost) every N years since models/HW keep improving to be able to run a somewhat decent model locally just isn't a consumer thing. Nonviable for a consumer hardware business at Apple's scale.
A pair of MaxSun/Intel Arc B60 48GB GPUs (dual 24GB B580's on one card) for $1200 each also outperforms the M4 Max.
The tangible hardware you point out is $2,400 for two niche-specific components vs the Apple hardware which benefits more general use cases.
please point me to the laptop with these
Of course nobody knows how this will eventually play out. But people without inside information on what these big organizations have/possess, cannot make such predictions.
It is? I haven't seen anything about this.
looks like there will be several good options "soon"?
But Nvidia isn't that far behind, and has already moved to regain some space with their PRO 6000 "workstation" GPUs. You get 96GB of VRAM for ~$7.5k, which is more than a comparable-RAM Mac, but not the $30k you previously had to shell out for top-of-the-line GPUs. So you get a "prosumer" 5090 with a bit more compute and 3x the VRAM, in a computer that can sell for <$10k and beat any Mac at both inference and training, for things that "fit" in that VRAM.
Macs still have the advantage for larger models, tho. The new DGX spark should join that market soon(tm). But they allegedly ran into problems on several fronts. We'll have to wait and see.
What matters for the future is what killer apps can be built on this commodity (tokens)?
Right now we've got content/text generation and ... nothing else?
Software operators. LLMs can operate any kind of software (check MCP). If you reduce this to just "text generation", what else is left?
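The basic "software operator" loop is easy to sketch. The tool names below are hypothetical, and this toy skips the real MCP JSON-RPC transport; only the shape of the loop is the point:

```python
# Toy version of the "LLM operates software" loop: the model emits a
# structured tool call, the host executes it, and the result would go
# back into the model's context. Tool names here are made up.
import json

TOOLS = {
    "create_event": lambda args: f"event '{args['title']}' created",
    "search_files": lambda args: f"3 files matching '{args['query']}'",
}

def handle_model_output(raw):
    """The host acts only when the model emits a well-formed tool call."""
    try:
        call = json.loads(raw)
        tool = TOOLS[call["tool"]]
    except (json.JSONDecodeError, KeyError):
        # This branch is where hallucinated tools and free-text replies land.
        return "error: unrecognized tool call"
    return tool(call["arguments"])

print(handle_model_output('{"tool": "create_event", "arguments": {"title": "standup"}}'))
print(handle_model_output("Sure! I'll create that event for you."))  # free text, not a call
```

Note that the host-side plumbing is trivial; the hard part, as the reply below this comment points out, is whether the model emits the right call reliably.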
But they can't be trusted to do so in a reliable and sane way. Until the issues with hallucination and prompt adherence are resolved, including the issues with context injection, LLMs aren't any more useful for general software operation than `/dev/urandom` is.
Every time we get a "Gemini deleted my production database!" story everyone goes "well obviously you shouldn't have trusted it with production access". I don't play russian roulette with my CLI, why would I add a layer of uncertainty on top?
AI chatbots in pdf viewers!
Oh wait.. we already have that and it's useless.
"I see you're trying to kill children, may I recommend the following methods:
- zyklon B
- hammer
- shotgun"
From milquetoast write-by-committee text generation engines that allow shitty writers to pump out terrible business boilerplate in an email client to customer service representatives tacked onto storefronts that don't do anything except waste your time, using anything AI feels like wading through a waist-deep swamp that ten thousand cows have shit in.
In a way, Apple having terrible AI is a plus to me. It recognizes that there's a cute puppy in a photo in my photo library and that's what I need, and want, it to do.
> Interestingly, Intel still reached new highs for a decade after missing mobile before it all collapsed
That's because their problems weren't due to missing mobile but rather taking too much risk on a fab tech transition that they then fumbled. This put them permanently behind and they were unable to catch up.
> Amazon’s AWS is predicated on the idea of commoditized infrastructure at scale where price is the priority
Since when does AWS compete on price? AWS is predicated on the idea of many proprietary services running on commodity hardware, and charging high prices for the privilege of not spending time on sysadmin work.
Your comment on Intel is correct, but it's also true that TSMC could invest billions into advanced fabs because Apple gave them a huge guaranteed demand base. Intel didn't have the same economic flywheel since PCs/servers were flat or declining.
That's a good clarification on Amazon, running on commodity hardware with competitive pricing != competing on price alone. It would have been better to clarify this difference when pointing out that they're trying the same commodity approach in AI.
If you get something this mundane wrong from the start I don't know how I could trust anything else from the post either.
Apple could simply sell earpods with an AI voice interface, the UI/UX and solution space is evolving too quickly for a company like Apple to sensibly try to define how it will use the technology yet.
See Google and Microsoft's failed integrations of AI with search for an example of how it should not be done.
The more I use AI the more I think I don't need a laptop and would settle for a comfortable VR setup as I do far less typing of symbols.
We more and more turn into cyborgs, wearing all kinds of processors and sensors on our bodies. And we need this hardware and the software that runs on it to be trustworthy.
Being the hardware producer and gatekeeper for trustworthy software to run on it is probably big enough of a market for Apple.
Even more so if their business of getting 15% to 30% of the revenue apps generate on the iPhone continues.
It has yet to be seen what type of company becomes most valuable in the AI stack. It could very well be that it does not operate LLMs. Similar to how the most valuable company in the hospitality industry does not operate hotels. It just operates a website on which you book hotels.
It’s also an opportunity to disrupt… build hardware specifically for ai tasks and reduce it down to just an asic.
I could be wrong and we could be seeing the ladders being pulled up now, moats being filled with alligators and sharks, and soon we'll have no choice but to choose between providers. I hope we can keep the hacking/tinkering culture alive when we can no longer run things ourselves.
But also, we are seeing models leapfrog each other; the best model this week is often not the best model next month and certainly not next quarter. I still see merit to the idea that cloud providers should focus on being the place where companies put their data, and make it easy for companies to bring models and tools to that data.
I agree with the article that Apple is probably cooked if they continue on their current path for another couple of years.
The phone is where the data is and Ai will find its usefulness in that data (and interface).
But your comment about the phone could have been about horses, or the notepad or any other technology paradigm we were used to in the past. Maybe it'll take a decade for the 'perfect' AI form factor to emerge, but it's unlikely to remain unchanged.
This blog post only really makes sense if you wholesale buy into the concept that generative AI is going to do everything its most ardent boosters claim it will, on the timeline they say it will, none of which has really borne out as true thus far. For anything less, Apple and Amazon’s strategy of commoditizing the models themselves makes sense.
That being said, do I have nitpicks over their respective strategies? You betcha. Amazon is so focused on an Apple-like walled-garden approach for enterprise compute that they run the risk of being caught up in shifting tides of geopolitics and nationalism along with increased attention on cost reductions. Apple, on the other hand, is far too exposed with the iPhone being the center of their ecosystem that a fundamentally new device could easily render them the next Nokia.
Between the two, Apple at least seems keen on innovating at its own pace and hoovering up competition that hits the true moonshots - not that I expect that strategy to keep working as antitrust scrutiny mounts against Big Tech. AWS, by comparison, is seemingly taking the Microsoft and VMware approach of just adding yet another service icon to the catalog and letting customers figure it out for themselves, which, well, just go ask the old guard of Big Tech how that strategy worked out for them when everyone ran into the public cloud.
Neither strategy has long (or even mid) term viability, but AI almost certainly won’t be the tech that pierces their moat.
It’s a smart approach. Get the CAPEX done while there’s appetite and political will for it.
Barring some major methodology breakthrough, the next decade for AI will be about building software and hardware that actually makes use of AI, beyond chatbots, which Apple and Amazon are positioned to do as well as anyone.
My guess is that they're also adapting to the changing ecosystem, and since they move very slowly, the trends seem archaic (like Apple Intelligence featuring image and summary generation 12-18 months after anyone found it novel).
I'm hoping that they lean in hard on their intents API forming a sort of "MCP" like ecosystem. That will be very powerful, and worth spending extra time perfecting.
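As a rough sketch of what an "MCP-like" intents ecosystem could mean (all names here are illustrative; this is not Apple's App Intents API or the real MCP protocol): each app registers its intents in a catalog that an on-device assistant can enumerate and then invoke, much like an MCP server exposes tools.

```python
# Hypothetical sketch of an intents catalog an assistant could discover
# and call into. Names and structure are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Intent:
    name: str
    description: str          # what the assistant reads to pick a tool
    handler: Callable[..., str]

@dataclass
class IntentRegistry:
    intents: dict[str, Intent] = field(default_factory=dict)

    def register(self, intent: Intent) -> None:
        self.intents[intent.name] = intent

    def list_tools(self) -> list[str]:
        # What an assistant sees when it asks "what can this app do?"
        return [f"{i.name}: {i.description}" for i in self.intents.values()]

    def invoke(self, name: str, **kwargs) -> str:
        return self.intents[name].handler(**kwargs)

registry = IntentRegistry()
registry.register(Intent(
    name="set_brightness",
    description="Set a lamp's brightness (0-100)",
    handler=lambda level: f"brightness set to {level}",
))

print(registry.invoke("set_brightness", level=40))  # brightness set to 40
```

The interesting part is the discovery step: a model that can read `list_tools()` output doesn't need app-specific integration, which is roughly the bet MCP makes.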
Leaks seem to indicate they're working on some home AI assistant, which fits their typical lagging timeline with the goal of quality.
Voice input isn't suitable for many cases, and physical input seems generally superior to AR -- I've used a Vision Pro, and it's very impressive, but it's nowhere near the input-performance of a touchscreen or a mouse and keyboard. (To its credit: it's not aiming for that.)
Unless the argument is that you will never have to be precise, or do something that you don't want everyone within earshot to know about?
Also, a "dynamic, context-dependent generative UI" sounds like another way to describe a UI that changes every time you use it depending on subtle qualities of exactly how you reached it this time, preventing you from ever building up any kind of muscle-memory around using it.
Cost is not the priority with AWS. To quote my collaborator, "I just scaled up to 600 PB for our event".
When I think AWS, I think speed, scale-up, scale-out, capacity, and dynamic flexibility are the keys.
AWS is not the "cheapo commodity option", nor is Azure.
The problem is with your expectations. Apple is no longer winning all the time, so relatively, it feels like it's losing. And Amazon is the quiet beast lurking, it's just doing a poor job at marketing.
I don't think they are missing out on anything. Everyone wants products from Apple or Amazon, only some power users and managers want "AI".
Apple is in a very favorable position with its control of arguably the most important platform of all. They can force app developers to allow Apple AI to automate their apps, and prevent other AI providers from doing the same, and they make a strong privacy argument about why this is necessary.
They continue building a distributed data-acquisition and edge data-processing network with their iPhone sales, where the customer keeps paying for both hardware and traffic.
They constantly take measures to ensure that the data their customers generate is only available to Apple and is not siphoned away by the users themselves.
The moment they finish the orchestration of this distributed worldwide iOS AI-cluster, they will suddenly have both the data and the processing at comparatively low operational cost.
The biggest risk? A paradigm shift where the smartphone is no longer the main device (smart glasses being a major contender), and some other player takes the lead.
Apple now only cares about revenue and the retirement of all those who made the iPhone and Macs great. They are rich, so they don't need to innovate big until they end up like Intel is now. But they try creating toys like Vision Pro, and the self-driving car that was coming for a decade. Just all for the fun of it.
Old company with an old leader and zero hunger for success. The opposite of all the big AI startups today.
Which AI products are Apple lacking that put them so far behind?
As opposed to what exactly? If the author really believes that a watch and glasses are preferred over a smartphone, I don't know what to say. Also, note that the author doesn't know what this new form factor will look like. In that case, what's the point in declaring that Apple and Amazon will miss the boat?
I think a much bigger threat to Apple is a smartphone with an AI driven operating system. But what do I know?
What we're gonna see from Apple, IMHO, is a horde of smaller models working on different tasks on the device itself, plus heavier stuff running on Private Cloud Compute instances.
We won't see Apple AI as anything besides a sticker; it'll just try to do its work in the background.
Oh, people balk at auto-word-complete in the latest macOS, but I find it very useful. It saves a little time here and there, and it adds up.
Are phones and AI on the opposite ends of some axis where success in one precludes success in the other? Does the use of AI reduce the use of compute and data? I have my own opinions on the topic, but beyond the eye-catching title this article didn’t inform one way or the other.
An LLM controls all my house lights right now.
It's funny how the unexpected is way more impressive. I tried out the voice commands on my new car (BYD) and after it correctly interpreted and implemented my request, I politely said "thank you!" (as I always do, it's important to set a good example) and the car responded "you're welcome!". 100% could just have been a lucky setup for a scripted response... but...
Intel lost the hardware fight. But it was losing all along, because the most profitable part is the software, or whoever controls the ecosystem: Microsoft and Apple, not Intel.
It isn't that AMD cannot beat Nvidia on hardware; what AMD lost is the AI ecosystem (though not the gaming one).
Now, what ecosystem have Apple and Amazon lost… and can they avoid Nvidia controlling their AI ecosystem? I doubt it very much.
If you earnestly believed that dynamic generative UIs are the future, surely you'd be betting on Apple: the company with fully integrated hardware, already capable of cost-effectively running near-frontier generative models on end-user hardware.
Apps would expose their functionality to the Smart Siri. Maybe there's already something like this with Shortcuts. Maybe I could give Claude Code style blanket permissions to some actions, while others I would like to review.
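A minimal sketch of that permission model (hypothetical names; nothing here is a real Siri or Shortcuts API): each action gets a policy, blanket-approved actions run immediately, and everything else is queued for the user to review, Claude Code style.

```python
# Illustrative per-action permission policy for an assistant.
# All names are invented for this sketch.
from enum import Enum

class Policy(Enum):
    ALLOW = "allow"  # blanket permission: runs immediately
    ASK = "ask"      # held for the user to review first

# Policies the user has configured (illustrative)
POLICIES = {
    "read_calendar": Policy.ALLOW,
    "send_message": Policy.ASK,
}

def dispatch(action: str, pending: list[str]) -> str:
    """Run the action or queue it, depending on its policy."""
    policy = POLICIES.get(action, Policy.ASK)  # unknown actions need review
    if policy is Policy.ALLOW:
        return f"ran {action}"
    pending.append(action)
    return f"queued {action} for review"

pending: list[str] = []
print(dispatch("read_calendar", pending))  # ran read_calendar
print(dispatch("send_message", pending))   # queued send_message for review
```

Defaulting unknown actions to review is the important design choice: it keeps the blanket permissions opt-in rather than opt-out.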
I'd have to disagree with this. There are a number of general consumer interfaces and none of them have much of a moat or brand loyalty.
Also, Apple is in the best position by far to make consumer hardware that can do inference on-device. No other company even comes close.
https://www.theregister.com/2025/08/18/generative_ai_zero_re...
Because this is not a science fiction future, but a corporate one, where neither do those magical UIs exist, nor do you have enough power in wearables for them to be everything you need?
The MacBook could come with some models, and the brew ecosystem would supply others.
I think they’ll be fine. Amazon is an investor in Anthropic, and Apple has an agreement with OpenAI. I’d consider that inorganic growth.
The real question is how do we continue the grift? AI's a huge, economy-sustaining bubble, and there's currently no off-ramp. My guess is we'll rebrand ML: it's basically AI, it actually works, and it can use video cards.
AI is a great feature funnel in terms of like, "what workflows are people dumping into AI that we can write purpose-built code for", but it has to transition from revenue generator to loss leader. The enormity of the bubble has made this very difficult, but I have faith in us.
The real, intrinsic value of AI is essentially zero compared to the hype and tech-biz cargo culting. If I were John Apple, I would simply sit back and wait while all your competition dump all their money into the AI bonfire. Once the dust settles and all the hyped-up AI startups are dead, you can come in, pick up whatever worked best, and have a stellar AI product at no real cost. Assuming such a product can exist at all, which still isn't clear.
I don't think companies not literally setting billions of dollars on fire is a bad thing.
Facebook has a data advantage and a user base, but at the end of the day they will always need to make Nvidia or a cloud provider rich to run/train their models.
They are doing something different and it might fail but I’m glad different large tech companies are trying different strategies.
I for one am a fan of Apple’s local-first approach. If they can get it to work it aligns more with my ideals. If it fails, at least they kept things interesting.
Maybe they won't be so big in monetary value anymore? So what? There are endless states between huge and nothing, actually most organizations live there and most serve people well.
The deeper issue, in my view, is that Apple is violating its own philosophical belief, articulated by Steve Jobs himself: that Apple should always own the primary technology behind its products (e.g., multi-touch, the click wheel). This is central to Apple's "we need to make it ourselves" approach.
Camera lenses are commodities. AI models are foundational. Apple's own executive leadership likened models to the internet, and said, well surely we wouldn't try to build and own the internet! This misplaces AI as infrastructure when in fact it's foundational to building and serving useful applications.
They further see "chat" (and, by extension, voice) as an app, but I think it's more like a foundational user interface. And Apple has certainly always owned the user interface.
When Siri was first announced, there was excitement that voice might be the next paradigm shift in user interface. Partly because Siri is so bad for a decade now, and partly because people didn't feel like talking to their screens, Apple may have learned a very unhelpful lesson: that Siri is just a feature, not a user interface. In this age though, chat and voice are more than features, and yet Apple doesn't own this either.
Apple should not buy Perplexity. Perplexity is smoke and mirrors, and there's nothing defensible about its business. Apple cannot get the talent and the infrastructure to catch up on models.
So what then?
OpenAI is not for sale. Anthropic is likely not for sale, but even if it were, Apple wouldn't buy it: Anthropic is very risky to Apple's profit margin profile and Apple can't unlock Anthropic's bottleneck (capacity).
In fact, to Apple's advantage, why not let the VCs and competitors like Microsoft and Amazon and Google light their money on fire while this industry takes shape, provided you have an option in the end to sell a product to the consumer?
The best option, in my view, is Search Deal Redux: partner deeply with Google and Gemini. Google very obviously wants as much distribution as possible, and Apple has more hardware distribution than any company in history.
Partnership is the only path because, yes, Apple missed this one badly. There is one area where I agree with Tim Cook, though: being late doesn't guarantee you lose, even in AI.
I don't. The foundational interface hasn't been created yet. Let's be honest, chat isn't great. It is the best we have right now to leverage the technology, yes, but if it is our future — that we cannot conceive of anything better, we've failed miserably. We're still in the "command-line age". No doubt many computing pioneers thought that the command-line too was foundational, but nowadays most users will never touch it. In hindsight, it really wasn't important — at best a niche product for power users.
> Apple may have learned a very unhelpful lesson: that Siri is just a feature, not a user interface.
That is the right lesson, though. Chat sucks; be that through voice, typing, or anything else. People will absolutely put up with it absent better options, but as soon as there is a better option, nobody is preferring chat. This is where Apple has a huge opportunity to deliver the foundational interface. Metaphorically speaking, they don't need to deliver the internet at all, they just need to deliver the web browser. If they miss that boat, then perhaps there is room for concern, but who knows what they have going on in secret?