Apple Intelligence for iPhone, iPad, and Mac
1092 points
1 month ago
| 172 comments
| apple.com
| HN
TechnicolorByte
1 month ago
[-]
Have to say, I was thoroughly impressed by what Apple showed today with all this Personal AI stuff. And it proves that the real power of consumer AI will be in the hands of the platform owners where you have most of your digital life in already (Apple or Google for messaging, mail, photos, apps; Microsoft for work and/or life).

The way Siri can now perform actions based on context from emails and messages like setting calendar and reservations or asking about someone’s flight is so useful (can’t tell you how many times my brother didn’t bother to check the flight code I sent him via message when he asks me when I’m landing for pickup!).

I always saw this level of personal intelligence to come about at some point, but I didn’t expect Apple to hit it out of the park so strongly. Benefit of drawing people into their ecosystem.

Nevermind all the thought put into private cloud, integration with ChatGPT, the image generation playground, and Genmoji. I can genuinely see all this being useful for “the rest of us,” to quote Craig. As someone who’s taken a pessimistic view of Apple software innovation the last several years, I’m amazed.

One caveat: the image generation of real people was super uncanny and made me uncomfortable. I would not be happy to receive one of those cold and impersonal, low-effort images as a birthday wish.

reply
ethbr1
1 month ago
[-]
> I always saw this level of personal intelligence to come about at some point, but I didn’t expect Apple to hit it out of the park so strongly. Benefit of drawing people into their ecosystem.

It's the benefit of how Apple does product ownership. In contrast to Google and Microsoft.

I hadn't considered it, but AI convergence is going to lay bare organizational deficiencies in a way previous revolutions didn't.

Nobody wants a GenAI feature that works in Gmail, a different one that works in Messages, etc. -- they want a platform capability that works anywhere they use text.

I'm not sure either Google or Microsoft are organizationally-capable of delivering that, at this point.

reply
TreetopPlace
1 month ago
[-]
"AI convergence is going to lay bare organizational deficiencies in a way previous revolutions didn't`'

Your quote really hit me. I trust Apple to respect my privacy when doing AI, but the thought of Microsoft or Google slurping up all my data to do remote-server AI is abhorrent. I can't see how Microsoft or Google can undo the last 10 years to fix this.

reply
harry8
1 month ago
[-]
> "I trust Apple..."

I'm actually a little gobsmacked anyone on this forum can type those words without physically convulsing.

The even more terrible part is I'm sure it's common. And so via network externalities the rest of us who do NOT trust any of these companies on the basis that all of them, time and again, have shown themselves to be totally untrustworthy in all possible ways, will get locked into this lunacy. I now can't deal with the government without a smartphone controlled by either google or apple. No other choice. Because this utter insanity isn't being loudly called out, spat upon, and generally treated with the withering contempt that these companies have so richly and roundly earned this decision is being made for all society by the most naive among us.

reply
derefr
1 month ago
[-]
I don't think the GP meant "trust" as in "I think Apple has my best interests at heart."

Rather, I think they meant "trust" as in "Apple is observably predictable and rational in how they work toward their own self-interest, rarely doing things for stupid reasons. And they have chosen to center their business on a long-term revenue strategy involving selling high-margin short-lifetime hardware — a strategy that only continues to work because of an extremely high level of brand-image they've built up; and which would be ruined instantly if they broke any of the fundamental brand promises they make. These two factors together mean that Apple have every reason to be incentivized to only say things if they're going to mean them and follow through on them."

There's also the much simpler kind of "trust" in the sense of "I trust them because they don't put me in situations where I need to trust them. They actively recuse themselves from opportunities to steal my data, designing architectures to not have places where that can happen." (Of course, the ideal version of this kind of trust would be a fully-open-source-hardware-and-software, work-in-the-open, signed-public-supply-chain-ledger kind of company. You don't get that from Apple, nor from any other bigcorp. Apple's software is proprietary... but at least it's in your hand where you can reverse-engineer it! Google's software is off in a cloud somewhere where nobody can audit changes to it.)

reply
rekoil
1 month ago
[-]
For me it's more "I think Apples business interests more closely align with my wishes as a customer" as opposed to any other megacorp.
reply
WorldMaker
1 month ago
[-]
At the heart of it: I feel like I'm Apple's customer in a way that I never feel like Google's customer (in everything they do it always seems like their real customers are Ad Buyers, even when you are ostensibly paying for services). (And Microsoft is in the middle and all over the map where some divisions treat you like a customer and others don't depending on the prevailing winds and the phase of the moon.)
reply
dfxm12
1 month ago
[-]
What leads you to feel this way?

They are anti right to repair & they have their walled garden on their mobile devices. Their vertically integrated model also leads to unusually high prices. This website in particular would also directly feel the pain of Apple killing apps to implement themselves later, the greedy apple store cut and also not allowing use of hardware features that Apples themselves can use. Consumers feel this indirectly (higher prices, less competition).

Also, don't get it twisted, Apple is still collecting all of your data, even if you ask them not to [0].

0 - https://mashable.com/article/apple-data-privacy-collection-l...

reply
WorldMaker
1 month ago
[-]
There's absolutely several axes in play here. You have very different concerns than I do, and that's valid.

Their vertically integrated model leads to very good customer service. I don't pay extra for Apple Care and I still get treated like an adult if I show up to an Apple Store with some need.

Even when Apple makes a mistake and collects more data than they should, I don't expect that data to influence ads that I see or to be sold to the highest bidder. (As a developer myself, I find that I can be quite lenient on app internal telemetry.) I can also see that ad review is barely a small side hustle to them in their Quaterly Reports and I can also see that most of their ad revenue is from untargeted campaigns. (Microsoft is a bigger ad company than Apple. Google is an ad company deep into its DNA at this point with everything else a side hustle.)

There is a beauty to a well maintained walled garden. Royalty invested a lot of money into walled gardens and Apple maybe doesn't treat you exactly like royalty, but there's a lot of similar respect/dignity there in their treatment of customers, even if they want you to trust them not to touch the flowers or dig below the walls too much. They want you to have a good time. They want their garden to be like a Disney World, safer than the real world.

You may not appreciate those sorts of comforts and that's fine. Plenty of people prefer the beauty and comfort of a walled garden than the free-for-all of a public park (or the squatter's rights of an abandoned amusement park if you don't mind playing unpaid mechanic more often than not). There's a lot of subjective axes to evaluate all of this on.

reply
dfxm12
1 month ago
[-]
I don't expect that data to influence ads that I see or to be sold to the highest bidder.

You should temper your expectations: https://gizmodo.com/apple-iphone-france-ads-fine-illegal-dat...

France’s data protection authority, CNIL, fined Apple €8 million ... for illegally harvesting iPhone owners’ data for targeted ads without proper consent.

If ads are a small percentage of Apple's quarterly reports, that isn't sound reasoning; after all, they make a lot of money in general in a lot of areas.

reply
disgruntledphd2
1 month ago
[-]
And if ads are leading to services growth (which they are), you should expect them to do more, scummier ad related things over time.

Fundamentally, ads is an incredibly high margin business with lots of room for growth (particularly when you own the platform and can handicap your competitors) so over time, all tech companies will become ads companies.

reply
rekoil
1 month ago
[-]
Holy shit that is so depressing...
reply
rekoil
1 month ago
[-]
Yeah that's pretty much how I feel as well.
reply
Octoth0rpe
1 month ago
[-]
> high-margin short-lifetime hardware

I don't think this applies to their watch or tablet business where the limiting factor on lifetime in the market is security/os updates. Most alternatives in that space have significantly worse support cycles.

This used to be true of their phones as well, but the android market seems to be catching up in ways that tablets/wearables have not (see google's 7 year commitment for pixels).

Not sure if it applies to general purpose. Certainly there are non mac computers that we can throw linux on and use for 10+ years and there are examples of apple laptops getting cut off earlier than I'd like (RIP my beloved 12" macbook), but there are often some pretty serious tradeoffs to machines older than 7 years anyway. Also, I'm not sure if apple's strategy re: support lifecycles on products after the AS migration have shifted. It wouldn't surprise me if the first gen m1 products get 10 years of security updates.

reply
caycep
1 month ago
[-]
it's not that I blindly trust apple; it's more that they're the one FAANG company where I am the actual customer and their incentives align/depend on keeping me happy. Google/MS could care less how I feel; and I am well aware that I am most certainly not their customer.
reply
NBJack
1 month ago
[-]
> it's more that they're the one FAANG company where I am the actual customer and their incentives align/depend on keeping me happier

Do they though? Battery performance that 'lies' to you intentionally, planned obsolescence, locked in ecosystems, overtly undercutting the alternatives, marketing that hypes up rather bland features...I admit I don't see your point.

Apple, if anything, seem about as user hostile as Microsoft is these days.

reply
audunw
1 month ago
[-]
> Battery performance that 'lies' to you intentionally, planned obsolescence ...

Everything is relative. Apple generally supports their devices with OS updates for longer than most Android phone makers. Their incentives here are well aligned: they get a decent profit from Apple Store no matter how long you use their phone.

I think a lot of the reporting on Apples actions is very click-baity and lack nuance. Take the case when Apple throttled CPU performance of their phones when the battery got old and degraded. It was reported as a case of planned obsolescence, but it was in fact the exact opposite: by limiting the power consumption of the CPU they avoided unexpected phone shutdown due to battery voltage going too low during bursts of high power consumption. A phone that randomly shuts down is borderline use-less. A phone that is slower can at least be used for a while longer. Apple didn't have to do this. They would have spent less R&D money, and had a much lower chance of bad PR backlash, if they just simply did nothing. Yet they did something to keep old phones useful for longer.

> locked in ecosystems

That's a fine balance. Creating a good ecosystem is part of what makes Apple so user friendly. And it's a lot harder to create open ecosystems than having closed ones. Especially when you factor in security and reliability. If Apple diverted resources to making their ecosystems more open I think their ecosystem integration would have been significantly worse, which would have made them lose the thing most users considers Apples primary advantage.

Apple is a mixed bag. They were one of the first to go all-in on USB-C. Sometimes they push aggressively for new open standards that improve user experience. Yet they held on to Lightning for far too long on their phones. But here you get back to the planned obsolescence factor: there's a HUGE amount of perfectly fine Lightning accessories out there that people/companies are using with iPhones. If they killed Lightning too fast I can guarantee you they would have gotten a lot of hate from people who couldn't use their Lightning accessory anymore. With laptops that wasn't a big issue. Adapters are significantly less convenient to use with phone accessories.

reply
skydhash
1 month ago
[-]
Apple is tinker-hostile, but they’re great at getting-things-done for the majority of people. It’s frustrating when you have the knowledge to build custom workflows, but the happy path and the guardrails work great for many.

Microsoft have no consistency and Google wants you to pray at the altar of advertising.

reply
AnAfrican
1 month ago
[-]
Shouldn't Microsoft be somewhere between Google and Apple ? After all, they do rely on you buying their software in a way Google does not.
reply
realfeel78
1 month ago
[-]
Who do you think is Netflix's customer if not you?
reply
tl
1 month ago
[-]
> I'm actually a little gobsmacked anyone on this forum can type those words without physically convulsing.

Apple tells a pretty compelling lie here. Rather than execute logic on a server whose behavior can change moment to moment, it executes on a device you "own" with a "knowable" version of its software. And you can absolutely determine no network traffic occurs during the execution of the features from things announced this week and going back a decade.

The part that Apple also uploads your personal information to their servers on separate intervals both powering their internal analytics and providing training data is also known, and for the most part, completely lost on people.

reply
ducadveritatem
1 month ago
[-]
Are you claiming Apple uses personal user data (e.g someone’s photos or texts) as training data for their server-side models? That’s a massive claim and there are some journalists you should definitely shoot a message to on signal if you have proof of that and aren’t just blowing smoke.
reply
tl
1 month ago
[-]
Apple's claim (per public statements) is:

- They upload your data to their servers. This is a requirement of iCloud and several non-iCloud systems like Maps.

- Where analytics is concerned, data is anonymized. They give examples of how they do this like by adding noise to the start and end of map routes.

- Where training is concerned, data is limited to purchased data (photos) and opted-in parties (health research).

My point is that Apple's code executing on device can be verified to execute on device. That concept does not require trust. Where servers are involved and Apple does admit their use in some cases, you trust them (as much as you trust Google) their statements are both perfectly true and ageless. Apple transitions seamlessly between two true concepts with wildly different implications.

reply
wraptile
1 month ago
[-]
Apple's marketing and branding is truly impressive when even Hackernews crowd, who'd you assume are very tech savvy, are eating it up all of the propaganda.
reply
kstrauser
1 month ago
[-]
“Wow, so many of these people neck deep into tech, privacy, and law disagree with me. It must be because they’re all suckers.”
reply
wraptile
1 month ago
[-]
Despite your snark - you'll never win an argument where trusting a for profit corporation is some sort of win over transparent secure system. Yes Apple might be better of the lesser evils but is this really where us as a privileged class of people who actually understand all this give up and give in? This is sad.
reply
kstrauser
1 month ago
[-]
You’ll also never win an argument when assuming that the people you disagree with haven’t thought this through and come up with a different conclusion. The snark is from the implication that we’re either clueless or blind to it. The more likely explanation is that we have different priorities, and that we’re viewing the question from a different angle. That doesn’t make us ignorant or unprincipled, any more than you disagreeing with me makes you naive or unserious.

We have different ideas. That’s all. There’s no need to look down on each other for it.

reply
umanwizard
1 month ago
[-]
There’s not really any transparent secure system that competes with Apple.
reply
vundercind
1 month ago
[-]
Yeah, at this point, for me, it’s “use Apple stuff” or “barely use computers in my personal life”. I did the Linux and (later) Android tinkering thing for a good long while, and I’m over it. Losing all the features and automation and integration I get with no time lost, for a bunch of time consuming and janky DIY that still wouldn’t get me all of it, isn’t something I’ll do these days. I’d just avoid computers.
reply
kstrauser
1 month ago
[-]
I’ve been diagnosed with ADHD. I lost so much time to screwing around with a million config options. Recompiling with slightly different flags to make things 2% faster. Keeping my system bleeding-edge up to date. All that nonsense was fun but it was a way to avoid getting started on what I was suppose to be doing.

Going back to a Linux desktop would be the end of me. I know it.

reply
newdee
1 month ago
[-]
Same. Aside from a few self hosted services, I use Apple for my phone and work machines because I just want it to fucking work all the time. My parents understand that if they want IT support from me that they must use a Mac or iPhone - because then I rarely have to help them with anything.

The one exception I made recently was to dump Windows and move to a Fedora immutable build after seeing how capable the Steam Deck (and Linux) was for all the games I play. I’ll get shot of that if it causes me grief though - or just stop playing games on my PC.

I just don’t have the energy to mess about with it all these days and Apple is the 2nd best option in lieu of that.

reply
ikety
1 month ago
[-]
Tried that route, and while it is quite viable in 2024, I reverted to dual booting. I can turn on my computer to work or to play games, and those two environments are completely separate. Nvidia terrible corporation, but similar with Apple in the #justworks category. Getting much better on linux recently, but you lose stuff like the new hdr feature, and occasionally have to worry about anticheat
reply
newdee
1 month ago
[-]
Yeah, I’m lucky in that I’m pretty stable now in which games I’m playing and they all run flawlessly via Proton or Lutris. It seems to mostly be the controversial kernel-level anticheats which don’t play ball or where devs don’t enable the Linux support flags for things like EAC which cause a problem. For those, I kinda just have to suck it up and play something else.

I did have a ton of issues with NVidia in the same environment, but after putting a Radeon card in it has been smooth sailing. That’s to be expected I guess.

I doubt I would have tolerated this even a few years ago though and would have ended up like yourself with a dual boot setup.

reply
fsflover
1 month ago
[-]
Define "competes". Sent from my GNU/Linux phone Librem 5.
reply
dingnuts
1 month ago
[-]
Uh, friend, this is still just an internet technology enthusiast forum. Popular opinion here is equally as reliable as Reddit. If you are taking hn upvotes as some kind of expert input, you're in for a rough time.
reply
kstrauser
1 month ago
[-]
No argument from me. I was replying to someone who couldn't believe the readers here, "who'd you assume are very tech savvy", didn't agree with their opinions.
reply
jrm4
1 month ago
[-]
"Wow, so many of these people disagree with me, it might be because they have a huge dangerous blind spot because of a lack of knowledge and/or experience and/or have trouble seeing things from the outside"

...is a thing I experience on a regular basis (and that I only really gained confidence in once I actually saw the mistakes cause problems, e.g. password managers)

reply
adrian_b
1 month ago
[-]
I would give multiple upvotes to this, were it possible.

I do not have either Google or Apple accounts and I do not intend to ever open such accounts (despite owning some Android smartphones and having owned Apple laptops).

Because of this, I am frequently harassed by various companies or agencies, which change their interaction metods into smartphone apps and then deprecate the alternatives.

Moreover, I actually would be willing to install such apps, but only if there would be some means to download them, but most of them insist on providing the app only in the official store, from which I cannot install it, because I have no Google account.

I have been forced to close my accounts in one of the banks that I use, because after using their online banking system in the browser for more than a decade, including from my smartphone, they have decided to have a custom app.

In the beginning that did not matter, but then they have terminated their Web server for online banking and they have refused to provide directly their app, leaving the Google store as the only source of it.

I have been too busy to try to fight this legally, but I cannot believe that their methods do not break any law. I am not an US citizen, I live in the European Union, and when an European bank (a Societe Generale subsidiary) refuses to provide its services to anyone who does not enter in a contractual relationship with a foreign US company, such discrimination cannot be legal.

reply
LamaOfRuin
1 month ago
[-]
I sympathize with the plight, as I have also occasionally tried to fight this fight.

However, to quibble with your last analysis, you're almost certainly entering an agreement with the EU registered legal entity of a multinational company, and you almost certainly already had to do that to obtain the hardware, run the OS, use the browser, etc. The degree to which any of those contracts are enforceable is another matter.

reply
adrian_b
1 month ago
[-]
Even if Google were treated as a local company, that does not change anything.

I find unbelievable that a bank has the arrogance of conditioning their services on whether their customers accept to do business or not with some third party.

I see no difference between the condition of having a Google account and for instance a condition that I should buy my car from Audi or from any other designated company, instead of from wherever I want. It is none of my bank's business what products or services I choose to buy or use (outside of special circumstances like when receiving bank credits).

reply
PaulRobinson
1 month ago
[-]
Could you provide an alternative model where you get what you want, that is economically viable for vendors and manufacturers to invest in, and that does not require me to teach my parents how to sysadmin their phones to keep them safe?

I trust Apple more than I trust Google to not share my data with a large group of corporate entities who want to sell me things I do not wish to buy.

I believe both - and if required, organizations like Mozilla, Ubuntu, Redhat/Oracle, whoever - to comply with law enforcement requests made of them to hand over any data relating to me that they might hold. I'm OK with that. I think Apple has less of that data than Google, and works actively to have less of it. Google works actively to increase the amount of data they have about me.

I think even if you had a functional device using entirely open software, that any organisation you share that data with or use to communicate with using that device - including cloud service providers, network providers, and so on - would also comply with law enforcement.

"Ah!", you say, "But I get to choose which crypto to use! I know it won't have backdoors!". To which I will reply you are unlikely to have read and truly understood the source code to the crypto software you're using, and that such software is regularly shown to have security issues. It's just not true that open source means that all bugs become shallow, and the "many eyes" you're hoping for to surface these issues are likely employed at, err, Apple, Google, Redhat, Ubuntu, Mozilla...

I look at the landscape and I conclude that true open source environments have a ton of issues, Google/Android have far more (for my taste), and that I am more confident in Apple than I am in either myself (even as an experienced tech expert), or Google, or Microsoft, to keep my data private to me to the greatest extent legally permissible.

Do I think "legally permissible" should be extended? Sure. Do I wish a multi-billionaire would throw 50% of the net worth at making open source compete on the same level? Yeah, cool. Do I think any of that is realistic in the next 5 years? No. So, I make my bets accordingly, eyes wide open, balancing the risks...

reply
umanwizard
1 month ago
[-]
Do you have any examples of Apple being untrustworthy to back up your rather extreme reaction?
reply
adrian_b
1 month ago
[-]
You should remember that in December 2023 it was revealed that the "Apple Silicon" CPUs have some undocumented testing features, which have unbelievably remained enabled in the Apple devices for many years until being notified by the bug finders, instead of being disabled at the end of production.

Using the undocumented but accessible control registers, all the memory protections of the Apple devices could be bypassed. Using this hardware backdoor, together with some software bugs in the Apple system libraries and applications, for many years, until the end of 2023, it has been possible to remotely take complete control of any iPhone, with access to its storage and control of the camera and microphone, in such a way that it was almost impossible for the owner to discover this (the backdoor bugs have been discovered only as a consequence of analyzing some suspicious Internet traffic of some iPhones that were monitored by external firewalls).

It is hard to explain such a trivial security error as not disabling a testing backdoor after production, for a company that has claimed publicly for so long that they take the security of their customers very seriously and that has provided a lot of security theater features, like a separate undocumented security processor, while failing to observe the most elementary security rules.

It is possible that the backdoor was intentional, either inserted with the knowledge of the management at the request of some TLA, or by a rogue Apple employee who was a mole of such a TLA, but these alternative explanations are even worse for Apple than the explanation based on negligence.

reply
saagarjha
1 month ago
[-]
I don't think this demonstrates untrustworthiness.
reply
talldayo
1 month ago
[-]
Sure. Next you'll say that POPCOUNT and the Intel Management Engine are actually perfectly trustworthy too.
reply
neonsunset
1 month ago
[-]
Wait, what's wrong with BMI1 instructions?
reply
immibis
29 days ago
[-]
Allegedly added to the instruction set by request of the NSA, who uses it for things. https://vaibhavsagar.com/blog/2019/09/08/popcount/
reply
neonsunset
29 days ago
[-]
Good thing they did that then, it’s a very useful instruction!
reply
saagarjha
29 days ago
[-]
popcount is actually a secret instruction to detect dissidents who use Hacker News. Wake up sheeple
reply
mixmastamyk
1 month ago
[-]
Forgot the "screeching minority" who values privacy quote already?

https://www.howtogeek.com/746588/apple-discusses-screeching-...

reply
sho
1 month ago
[-]
> all of them, time and again, have shown themselves to be totally untrustworthy in all possible ways

Sorry, but this seems like a very vague claim to me. Can you specifically point out a time where Apple proved itself untrustworthy in a way that impacts personal privacy?

When Apple says they treat my data in a specific way, then yes I do trust them. This promise is pretty central to my usage of them as a company. I'd change my mind if there was evidence to suggest they're lying, or have betrayed that trust, but I haven't seen any, and your post doesn't provide any either.

reply
throwaway0223
1 month ago
[-]
It depends on what you'd consider "untrustworthy", but some (myself included) feel it's hypocritical for Apple to position itself as a privacy conscious choice, and use its marketing / PR machine to give the impression it only makes money on devices/subscriptions, when they're silently managing an ads-funded cash cow, with billions of dollars that go directly to the bottom line, as pure profit.

Here's a few pointers, to get you up to speed [1-5]. Of course there's nothing wrong with monetizing their own user base and selling ads based on their 1PD (or, in the case of Safari, monetizing the search engine placement). But I find it ironic that they make a ton of money by selling ads based on the exact same practices they demonize others for -- user behavior, contextual, location, profile.

[1] https://searchads.apple.com/

[2] Apple’s expanding ad ambitions: A closer look at its journey toward a comprehensive ad tech stack - https://digiday.com/media-buying/apples-expanding-ad-ambitio...

[3] Apple’s Ad Network Is The Biggest Beneficiary Of Apple’s New Marketing Rules: Report -- https://www.forbes.com/sites/johnkoetsier/2021/10/19/apples-...

[4] Apple Privacy Suits Claim App Changes Were Guise to Boost Ad Revenue - https://www.hollywoodreporter.com/business/business-news/app...

[5] Apple is becoming an ad company despite privacy claims - https://proton.me/blog/apple-ad-company

reply
dwaite
1 month ago
[-]
> they're silently managing an ads-funded cash cow, with billions of dollars that go directly to the bottom line, as pure profit

Advertising isn't anti-privacy. Apple's fight was with tracking by third parties without user knowledge or consent. That is independent of, but often used for, advertising purposes.

This is different from say Google determining ads on Youtube based on what you are watching on Youtube.com, and from Amazon or Apple promoting products based on your product searches solely within their respective stores.

reply
codedokode
1 month ago
[-]
> Advertising isn't anti-privacy.

Advertising works much better when there is no privacy.

reply
WorldMaker
1 month ago
[-]
Tracking-based Ad targeting is blip in the history of advertising and goes against previous decades of "common sense" in advertising that the best ads cast the widest net and catch the eye of people you (and they) don't even know are potential targets.

I hope this current fad dies and people return to that older marketing "common sense". Over-targeting is bad for consumers and bad for advertisers, the only people truly benefiting seem to be Google and Meta.

reply
nozzlegear
1 month ago
[-]
Your truism doesn’t refute their point.
reply
fragmede
1 month ago
[-]
The fact that Advanced Data Protection on iCloud wasn't forced is sus.
reply
ZaoLahma
1 month ago
[-]
As someone who has to help my father with his personal tech as his mental health deteriorates (several brain tumors), I'm thrilled every time I find something that ISN'T locked down behind pin codes, passwords or other authentication methods that he no longer remembers or can communicate.

His current state really has made me think about my own tech, about what should be locked down and what really should not be - things that we lock down out of habit (or by force) rather than out of necessity.

reply
dclowd9901
1 month ago
[-]
Given the rate at which the elderly find themselves swindled out of money due to scams, hacks or any other method of invasion, I really don’t think loosening controls makes the most sense.

Might be interesting if companies offered the ability for someone to be a “steward” over another when it came to sensitive choices (like allowing new logins, sending money, etc). Of course that itself is a minefield of issues with family members themselves taking advantage of their elderly members. But maybe power of attorney would have to be granted?

reply
ZaoLahma
1 month ago
[-]
What I hinted at was more granularity in how we treat different types of data, or other accesses, in response to the idea of being forced to turn on "Advanced Data Protection on iCloud".

Rather than putting all of our personal data and accesses under a thick virtual fire blanket, perhaps it is perfectly fine if some of it isn't protected at all, or is protected in ways that could be easily circumvented with just a tiny bit of finagling.

This is now how I'm approaching my own digital foot print, that some not secret things are nowadays wide open, unencrypted and you just need to know where to look to access all of it.

reply
WorldMaker
1 month ago
[-]
Relatedly, I think a lot of us under-estimate/under-appreciate physical security in our threat models. A desktop tower that never leaves my house and would be a pain for anyone but a dedicated burglar to steal maybe doesn't need the same sort of security/encryption/authentication requirements for physical access in person that a phone or laptop might need. Certainly there are plenty of fears of people targeting me specifically and getting physical access to my house, but there are also more legal protections from some of those. Threat models are all about trade-offs and physical security/physical access restrictions trade-offs can be under-appreciated as places to make choices that can be in your favor.
reply
dclowd9901
1 month ago
[-]
I understand what you mean but I think maybe your example wasn’t terrific given I think the elderly are actually frequent and vulnerable targets for crims. I’ve actually had scenarios where my parents were unable to log into an account and when I asked why they needed to, it was to give some “support specialist” information they were asking for. Is it a pain in the ass to help your parents install a mobile app sometimes? Yeah I guess. I’m just glad someone didn’t drain their bank account on them.

There is sometimes a point to inconvenience in that it requires time and assessment.

reply
seec
1 month ago
[-]
Yeah, the thing about "security" is that there is a lot more chance that it will come to bite you in the ass later down the road than being successful (actually prevent an issue). I have some funny stories about unrecoverable drives because of forgotten encryption keys.

For most people the only security they need is actually access to their money, everything else is mostly irrelevant, nobody really cares about weird habits or whatever.

reply
sho
1 month ago
[-]
Not when you understand the tradeoffs being made. If you enable Advanced Data Protection and lose or forget your password, Apple cannot help you recover it. It makes sense that it's opt-in and users make a conscious choice to make that trade-off.
reply
matwood
1 month ago
[-]
Have you ever done tech support?
reply
suneater921
1 month ago
[-]
Yeah, you’re right. Apple’s approach to privacy is like one of those fairytale genies. On paper, and in many technical aspects, class-leading, but useless because anyone powerful and/or determined enough to hurt you will be able to use the backdoors that they willingly provide.

End to end encryption? Sure, but we’re sending your location and metadata in unencrypted packets.

Don’t want governments to surveil your images? Sure, they can’t see the images - but they’ll send us hashes of illegal images, and we’ll turn your images into hashes, check them against each other, and report you to them if we find enough.

Apple essentially sells unbreakable locked doors while being very careful to keep a few windows open. They are a key PRISM member and have obligations under U.S. law that they will fulfil. Encryption backdoors aren’t needed when the systems that they work within can be designed to provide backdoors.

I fully expect that Apple Intelligence will have similar system defects that won’t be covered properly, and will go forgotten until some dissident gets killed and we wonder why.

For a look at their PR finesse in tricking media, see this, over the CSAM fiasco that has been resolved, in Apple’s favour.

https://sneak.berlin/20230115/macos-scans-your-local-files-n...

reply
realfeel78
1 month ago
[-]
> Sure, they can’t see the images - but they’ll send us hashes of illegal images, and we’ll turn your images into hashes, check them against each other, and report you to them if we find enough.

> I fully expect that Apple Intelligence will have similar system defects

Being able to scan devices for CSAM at scale is a "defect" to you?

reply
parl_match
1 month ago
[-]
Yes, it is a defect. For many reasons

- it's anti-user: a device spying on you and reporting back to a centralized server is a bad look

- it's a slippery slope: talking about releasing this caused them to get requests from governments to consider including "dissident" information

- it's prone to abuse: within days, the hashing mechanism they were proposing was reverse engineered and false positives were embedded in innocent images

- it assumes guilt across the population: what happened to innocent by default?

and yes, csam is a huge problem. And btw, apple DOES currently scan for it- if you share an album (and thus decrypt it), it is scanned for CSAM.

reply
ladzoppelin
1 month ago
[-]
Yeah but Google and MS have the same problems.. What your talking about is the reality of using a computer connected to the internet since 2003.
reply
seec
1 month ago
[-]
But they don't bullshit about it as much and their offerings are much cheaper and it's easier to not have to pay as much (either with data or money).

There is just a general hypocrisy about Apple that is hilarious.

reply
NBJack
1 month ago
[-]
This is true, but your examples aren't directly trying to pretend they are the better alternatives for that. Apple is doing its best to paint itself as some golden company when reality dictates they are no better (if honestly worse in some categories).
reply
jajko
1 month ago
[-]
Don't expect balanced objective opinions on Apple on HN, that was never ever the case. Some of it are tech enthusiasts, some are maybe employees or investors, some is paid PR.

Nothing wrong there per se, its just good to realize it.

reply
dyauspitr
1 month ago
[-]
What government needs you to have a smartphone from Apple or Google?
reply
jay_kyburz
1 month ago
[-]
The Australian Government required you to have an app called MyGovID to do you business taxes and other administrative tasks. This app is only Apple or Android, there is no web interface.
reply
kolinko
1 month ago
[-]
That’s crazy. Why no web interface? In Poland for taxes we have a web interface that is mobile and stationary, and for many other things it’s a choice between an app and web and paper.
reply
jay_kyburz
1 month ago
[-]
The tax part is all web, it just mandatory 2 factor authentication to login that requires the app.
reply
kolinko
1 month ago
[-]
Ahl. In our case, we have mobile app 2fa, but also an sms 2fa, and authentication through bank login - it’s quite neat that the government struck deal with a bunch of banks, and they serve as identity providers too.
reply
kolinko
1 month ago
[-]
Not a requirement, but in Poland a ton of administration things can be done from a dedicated iphone/android app - including using your official ID. It is optional though, and you can alway do the same stuff (ID aside) from the web, or using paper and going places in person.
reply
nozzlegear
1 month ago
[-]
> Because this utter insanity isn't being loudly called out, spat upon, and generally treated with the withering contempt that these companies have so richly and roundly earned this decision is being made for all society by the most naive among us.

Ah yes, blame the simple-minded plebes who foolishly cast their noses up at Windows Phone. If only Ballmer were still in charge, surely he'd have saved us from this horrible future of personal, privacy-respecting AI at the edge...

reply
cchance
1 month ago
[-]
Have to agree, apple seems to put a really strong emphasis above all else on your shit is your shit and we don't want to see it.
reply
m463
1 month ago
[-]
But this is not true. that's the thing.

Apple is very intrusive. Macos phones home all the time. ios gives you zero control (all apps have internet access by default, and you cannot stop it)

Apple uses your data. you should be able to say no.

And as for your data, they do other things too, a different way. Everything goes to icloud by default. I've gotten new devices and boom, it's uploading everything to icloud.

I've seem privacy minded parents say no, but then they get their kid an iphone and all of their stuff goes to icloud.

I think apple should allow a personal you-have-all-your-data icloud.

reply
dwaite
1 month ago
[-]
> Apple is very intrusive. Macos phones home all the time.

The platform is heavily internet-integrated, and I would expect it to periodically hit Apple servers. There are a lot of people claiming to be security researchers reporting what Little Snitch told them. There are drastically fewer who would introspect packets and look for any gathered telemetry.

I really haven't seen evidence Apple is abusing their position here.

> Everything goes to icloud by default. I've gotten new devices and boom, it's uploading everything to iCloud.

You need to enable iCloud. You are prompted.

Also, a new device should have next to nothing to upload to iCloud, as its hard disk is still in the factory configuration.

> I think apple should allow a personal you-have-all-your-data iCloud

They have desktop backup. Maybe they should allow third party backup Apps on iPhone, although I suspect data would be encrypted and blinded to prevent abuses by third parties, and recovery would be challenging because today recovery is only possible on a known-state filesystem. The recovery aspect is what really has limited it to the handful of approaches implemented directly by Apple.

reply
sgarland
1 month ago
[-]
A key difference is that Apple isn’t then selling the info it has on you to advertisers.

I don’t think any large tech company is morally good, but I trust Apple the most out of the big ones to not do anything nefarious with my info.

reply
cstejerean
1 month ago
[-]
None of the tech companies are selling your data to advertisers. They allow advertisers to target people based on the data, but the data itself is never sold. And it would be dumb to sell it because selling targeted ads is a lot more valuable than selling data.

Just about everyone else other than the tech companies are actually selling your data to various brokers, from the DMV to the cellphone companies.

reply
wcfields
1 month ago
[-]
> None of the tech companies are selling your data to advertisers.

First-hand account from me that this is not factual at all.

I worked at a major media buyer agency “big 5” in advanced analytics; we were a team of 5-10 data scientists. We got a firehose on behalf of our client, a major movie studio, of search of their titles by zip code from “G”.

On top of that we had clean roomed audience data from “F” of viewers of the ads/trailers who also viewed ads on their set top boxes.

I can go on and on, and yeah, we didn’t see “Joe Smith” level of granularity, it was at Zip code levels, but to say FAANG doesn’t sell user data is naive at best.

reply
cstejerean
1 month ago
[-]
> we didn’t see “Joe Smith” level of granularity, it was at Zip code levels

So you got aggregated analytics instead of data about individual users.

Meanwhile other companies are selling your name, phone number, address history, people you are affiliated with, detailed location history, etc.

Which one would you say is "selling user data"?

reply
tsunamifury
1 month ago
[-]
They absolutely are. And they give it to governments upon request.

Their privacy stories are marketing first.

reply
itissid
1 month ago
[-]
I don't think they sell it like Google or Samsung. For example Apple does not have a location intelligence team dedicated to driving revenue for store brands or targeting users that go there using precise geo location data.

Google and Samsung do.

reply
jhanschoo
1 month ago
[-]
Give me a source that they are selling your data, not targeted ads.
reply
nativeit
1 month ago
[-]
reply
freetanga
1 month ago
[-]
I _trust_ Google to attempt to do so, and fail sadly along the way…

They went from “Don’t be evil” to a cartoonish “Doctor Evil” character in a decade.

reply
dwaite
1 month ago
[-]
> And they give it to governments upon request.

So in other words, "companies operating within a nation are expected to abide by the laws of that nation"?

Apple structures their systems to limit the data they can turn over by request, and documents what data they do turn over. What else do you believe they should be doing?

reply
tsunamifury
1 month ago
[-]
Actually under US rule of law you don’t just turn over things upon request.

Much like every other tech company you test the request.

Apple never does.

reply
nativeit
1 month ago
[-]
> Apple never does.

Citation needed?

reply
rootusrootus
1 month ago
[-]
They are selling data to advertisers? I would like to know more about that.
reply
mgiampapa
1 month ago
[-]
Google isn't. They are the advertising engine and sell to advertisers for reach, just like Facebook does.

I trust Apple about as far as I can throw them too. They are inherently anti-consumer rights everywhere in their ecosystem. The "Privacy" angle is just PR.

reply
etempleton
1 month ago
[-]
I would say it is PR as much as it is a strategic differentiation. Their business model is too sell products and services directly to consumers. This is different from Microsoft who is selling to businesses who need data protection, but actually want to be able to monitor their employees and Google who wants to leverage your data / behavior to allow advertisers to effectively target you with ads.

None of the big companies expressly sell your information. Not because they are altruistic, but because it is an asset that they want to protect so they can rent to the next person.

reply
tsunamifury
1 month ago
[-]
Yo Apple is an ad company as well now. They do both.
reply
etempleton
1 month ago
[-]
They all do a little bit of everything. Google sells devices too, but they are not predominantly a physical device company.
reply
mgiampapa
1 month ago
[-]
At 3 trillion Dollars of market cap they are a capitalistic hellscape and do everything in their power to benefit their own interests, which are not yours.
reply
tsunamifury
1 month ago
[-]
It’s amazing how many here don’t understand that.
reply
nativeit
1 month ago
[-]
There are several very nuanced comments much farther up this chain who clearly do understand that, and lay out their informed reasoning for why they have chosen to use Apple devices for themselves. It’s amazing how many here seem to have ignored them.
reply
jfoster
1 month ago
[-]
Anyone who disagrees with you about this should buy a Mac and try not enabling iCloud. There's constant nags and as far as I could find, no way to turn them off.
reply
musicale
1 month ago
[-]
1) Have you tried installing Linux? ;-)

2) I have booted macOS VMs without iCloud. I'm not sure of the nags though. I believe signing out of iCloud will prevent iCloud from contacting Apple.

https://support.apple.com/en-us/104958

reply
m463
1 month ago
[-]
1) yes:)

2) that is entirely NOT true. You should install little snitch and see what happens even if you NEVER sign into icloud. note that the phone home contact is not immediate, it happens in the background at random intervals from random applications.

just some random services blocked by little snitch on a mac:

accountsd, adprivacyd, airportd, AMPLibraryAgent, appstoreagent, apsd, AssetCacheLocatorService.xpc, cloudd, com.apple.geod.xpc, com.apple.Safari.SafeBrowsing.Service, commerce, configd, familycircled, mapspushd, nsurlsessiond, ocspd, rapportd, remindd, Safari, sntp, softwareupdated, Spotlight, sutdentd, syspolicyd, touristd, transparencyd, trustd, X11.bin

(never signed into an apple id)

reply
nativeit
1 month ago
[-]
Tell me more about how dastardly it is that Safari communicates with Apple servers. Type it from your browser that doesn’t communicate directly with its developers.
reply
nativeit
1 month ago
[-]
Judging from a lot of these comments, most of the folks here are reading/commenting via telnet.
reply
dyauspitr
1 month ago
[-]
I’ve never used iCloud since it came out. I can’t think of a single nag. Where do you see it on your iPhone or Mac?
reply
jfoster
1 month ago
[-]
There's several of them. The most annoying for me was getting intermittent notifications to sign in to iCloud.

There's also this one: https://discussions.apple.com/thread/250727947

I eventually just gave in to stop the nags.

reply
Takennickname
1 month ago
[-]
I have an iPad (not iPhone or Mac). If you don't set up Icloud, there's always an annoying bright red circle in settings that tells you to "finish setting up your iPad".

Doesn't have to be bright red, or even there at all.

reply
kstrauser
1 month ago
[-]
Last time I had that on a laptop I was going to wipe soon afterward and didn’t want to fully set up, I clicked the “finish setting up” link and canceled out. Voila, red circle gone.
reply
jfoster
1 month ago
[-]
Gone until a few days or a week later, when it comes back.
reply
dyauspitr
1 month ago
[-]
Yes, that’s the only one I’ve seen. But it’s not much of a nag.
reply
musicale
1 month ago
[-]
> all apps have internet access by default, and you cannot stop it

Technically you can by turning off wi-fi and disabling cellular data, bluetooth, location services, etc. for the app.

To your point though, wi-fi data should also be a per-app setting, and it is an annoying omission. macOS has outgoing firewalls, but iOS does not (though you could perhaps fake it with a VPN.)

reply
krrrh
1 month ago
[-]
> Apple is very intrusive

> Apple uses your data.

> they do other things too, a different way

What specifically do you mean? Their frankly quite paranoid security and privacy white papers are pretty comprehensive and I don’t think they could afford to lie in those.

> Apple should allow a personal you-have-all-your-data iCloud

Advanced Data Protection[0] applies e2ee for basically everything, with the exception email, and doesn’t degrade the seamless multi-device experience at all. For most people this is the best privacy option by a long shot, and no other major platform can provide anything close.

They’ve hampered product experience for a long time because of their allergy against modelling their customers on the cloud. The advent of AI seems to have caught them a bit off guard but the integrated ecosystem and focus on on-device processing looks like it may pay off, and Siri won’t feel 5 years behind Google Assistant or Alexa.

[0] https://support.apple.com/en-ca/102651

reply
FireBeyond
1 month ago
[-]
> What specifically do you mean? Their frankly quite paranoid security and privacy white papers are pretty comprehensive and I don’t think they could afford to lie in those.

A couple of years ago Apple was busted when it was discovered that most Apple first-party apps weren't getting picked up by packet sniffer or firewalls on macOS.

Apple tried deflecting for a while before finally offering up the flimsy claim that it "was necessary to make updates easier". Which isn't a really good explanation when you're wondering why TextEdit.app needs a kernel network extension.

reply
kalleboo
1 month ago
[-]
What actually happened was Apple removed support for kernel extensions that these firewall apps used.

The user-mode replacement APIs allowed by sandboxed apps had a whitelist for Apple's apps, so you couldn't install some App Store firewall app that would then disable the App Store and screw everything up.

After the outrage, in a point release a few months later, they silently emptied out the whitelist, resolving the issue.

They never issued any kind of statement.

reply
FireBeyond
1 month ago
[-]
So their "fix", as described here, removed protection from "having the App Store disabled and everything screwed up"?

That makes no sense.

Even if it did, the app the would need protection is the App Store, not every single Apple app. In many cases, the fix for the worst case scenario would be "remove firewall app".

Also, given that TextEdit was not an AppStore app, for but one example, but a base image app.

> They never issued any kind of statement.

Shocking. I've had at least two MBPs affected by different issues that were later subject to recall, but no statement there. radar.apple.com may well be read by someone, but is largely considered a black hole.

reply
thomassmith65
1 month ago
[-]
The lack of an iOS setting to deny specific apps network access is absurd. It doesn't feel like much of a privacy-focused platform when every day in my network logs I see hundreds of attempted connections from 'offline' iOS apps.
reply
parl_match
1 month ago
[-]
For what it's worth, those platform investments are the difference between Apple being applauded for this, and Microsoft being pilloried for Recall's deficiencies.
reply
jeswin
1 month ago
[-]
> I trust Apple to respect my privacy when doing AI...

Depends on where you are. Apple will bend over backwards when profits are affected, as you can see in China.

Ironically, the only time a large company took a stand at the cost of profits was in 2010 when Google pulled out of China over hacking and subsequently refused to censor. Google has changed since then, but that was the high watermark for corporates putting principles over profits. Apple, no.

reply
dwaite
1 month ago
[-]
> Google pulled out of China over hacking and subsequently refused to censor

My impression is that they had little chance to survive in a Chinese market, competing with a severely limited product against state-sponsored search products while also being a victim of state-sponsored cyberattacks.

It was the morally correct decision, but I don't know if they were leaving any money on the table doing so. I suspect the Google of today would also decide not to shovel cash into an incinerator.

reply
bayindirh
1 month ago
[-]
When enrolling physical security keys to my accounts, only Google's process requested extra, identifiable fields in my key, generating a warning in Firefox, which can anonymize these fields.

Google wants to track even my physical security key across sites to track me.

How can I trust their AI systems with my data?

reply
dwaite
1 month ago
[-]
The attestation on (FIDO certified) security keys is a batch attestation, and meant to correspond to a a batch size of at least 100,000.

So they were effectively asking for the make and model.

There are non-certified authenticators which may have unfortunate behaviors here, such as having attestations containing a hardware serial number. Some browsers maintain a list and will simply block attestations from these authenticators. Some will prompt no matter what.

There is also a bit of an 'Open Web' philosophy at play here - websites often do not have a reason to make security decisions around the make and models of keys. Having an additional prompt in a user conversion path discourages asking for information they don't need, particularly information which could be used to give some users a worse experience and some vendors a strong 'first-mover' market advantage.

In fact, the motivator for asking for this attestation is often for self-service account management. If I have two entries for registered credentials, it is nice if I have some way to differentiate them, such as knowing one of them is a Yubico 5Ci while the other is an iPhone.

Many parties (including Google) seem to have moved to using an AAGUID lookup table to populate this screen in order to avoid the attestation prompt. It also winds up being more reliable, as software authenticators typically do not provide attestations today.

reply
bayindirh
1 month ago
[-]
Both devices are Yubikey 5 series, and none of the other services asked for anything similar, or triggered any warnings.

Moreover, none of the service providers auto-named my keys with make/model, etc.

> If I have two entries for registered credentials, it is nice if I have some way to differentiate them, such as knowing one of them is a Yubico 5Ci while the other is an iPhone.

First, Google doesn't differentiate the security keys' name even if you allow for that data to be read, plus you can always rename your keys to anything you want, at any of the service providers I enrolled my keys, so it doesn't make sense.

Moreover, Firefox didn't warn me for any other services which I enrolled my keys, and none of them are small providers by any means.

So, it doesn't add up much.

reply
4m1rk
1 month ago
[-]
Google is not trusted because it was an AI company and needed your data. Apple just joined the club.
reply
anileated
1 month ago
[-]
Since Apple is building ChatGPT integration into its devices, it’s clear that Apple’s users’ data is going to be slurped by Microsoft via ClosedAI servers now.

It’s unlikely latency would permit them to proxy every request to fully mask end-user IPs (it’s unclear what “obscured” means), and they would probably include device identifiers and let Microsoft maintain your shadow profile if that could improve ChatGPT output (it may not require literally storing your every request, so denying that is weasel phrasing).

reply
kstrauser
1 month ago
[-]
I’ve been browsing with Private Relay since the day it became available. What’s this intolerable latency you’re talking about?
reply
anileated
1 month ago
[-]
Browsing is not the same as using a personal assistant.

First, it takes much less compute to serve a page than to run an LLM query. LLMs are slow even if you eliminate all network.

Second, your expectations when browsing are not the same as when using a personal assistant.

Right now even when I simply ask Siri to set a timer it takes more than a couple of seconds. Add an actual GPT in the mix and it’s laughable.

In any case, even with a private relay, Apple’s phrasing does not deny sending device identifiers and allowing ClosedAI/Microsoft to build your shadow profile (without storing requests verbatim).

reply
kstrauser
1 month ago
[-]
Nope, you’re moving the goalposts. You were talking about the latency of making a network call. I pointed out that Apple’s current proxying architecture has low latency for web browsing, with orders of magnitude larger requests moving through it. We’re not going to bring GPT slowness into the mix because that’s not what we were discussing.
reply
anileated
1 month ago
[-]
No, I meant the cumulative latency that increases with every hop. You can’t fool physics. Not proxying is just faster and in case of an already super-slow server these seconds matter to any UX designer worth their salt.
reply
highwaylights
1 month ago
[-]
Ironically, I feel like Apple might have lost me as a customer today. It won't matter to Apple, obviously, but so much of what they showed today I just felt was actively pushing me out of the ecosystem.

I first bought some devices for myself, then those devices got handed off to family when I upgraded, and now we're at a point where we still use all of the devices we bought to date - but the arbitrary obsolescence hammer came down fairly hard today with the intel cut-off and the iPhone 15+ requirement for the AI features. This isn't new for Apple, they've been aging perfectly usable devices out of support for years. We'll be fine for now, but patch support is only partial for devices on less-than-latest major releases so I likely need to replace a lot of stuff in the next couple of years and it would be way too expensive to do this whole thing again. I'll also really begrudge doing it, as the devices we have suit us just fine.

Some of it I can live without (most of the AI features they showed today), but for the parts that are sending off to the cloud anyway it just feels really hard to pretend it's anything other than trying to force upgrades people would be happy without. OCLP has done a good job for a couple of homework Macs, I might see about Windows licenses for those when they finally stop getting patches.

I'd feel worse for anyone that bought the Intel Mac Pro last year before it got taken off sale (although I'm not sure how many did). That's got to really feel like a kick in the teeth given the price of those things.

reply
hmottestad
1 month ago
[-]
From rumours of Apple buying lots of GPUs from Nvidia not that long ago I think management got a nice little scare when OpenAI released GPT-3.5 and then GPT-4. It takes several years to bring a CPU to market. Apple probably realised far too late that they needed specific features in their SOCs to handle the new AI stuff, so it wasn’t included in anything before the A17 Pro. For the M1, M2 and M3 I believe that Apple is willing to sacrifice heat and battery to achieve what they want. The A17 Pro is probably very efficient at running LLMs so it can do so in a phone with a small battery and terrible thermal performance. For their Macs and iPads with M1, M2, M3 they will just run the LLMs on the AMX or the GPU cores and use more power and produce more heat.

Could also be a memory problem. The A17 Pro in the iPhone 15 Pro comes with 8 GB of memory while everything before that has 6 GB or less. All machines with the M1 or newer come with at least 8 GB of memory.

PS: The people who bought the Intel Mac Pro after the M1 was released knew very well what they were getting into.

reply
morvita
1 month ago
[-]
It worth noting that the power of the neural engine doubled between the A16 and A17 chips (17 vs 35 TOPS, according to Wikipedia), while the A15 to A16 was a much more modest increase (15.8 to 17 TOPS). So it does seem like they started prioritizing AI/ML performance with the A17 design.
reply
jwr
1 month ago
[-]
Apple started including the neural engine back with A11 Bionic. In 2017.
reply
talldayo
1 month ago
[-]
And at .6 TOP/second of performance, that Neural Engine is practically useless today. You can go buy a $50 Rockchip board with an NPU 10x faster.

Which introduces a funny aspect, of the whole NPU/TPU thing. There's a constant stairstepping in capability; the newer models improving only obsoletes older ones faster. It's a bit of a design paradox.

reply
jwr
1 month ago
[-]
Yes. But I was responding to "Apple probably realised far too late". I think they were in fact way ahead of everyone else, it's just that the hardware of 2017 can't keep up with the demands of today.
reply
hmottestad
1 month ago
[-]
It was specifically the LLM stuff. Their neural engines were never designed for running LLMs. The question is if the new neural engine in the A17 Pro and M4 actually have the required features to run LLMs or not. That’s at least what I suspect.
reply
talldayo
1 month ago
[-]
> I think they were in fact way ahead of everyone else,

This would be a lot easier to argue if they hadn't gimped their Neural Engine by only allowing it to run CoreML models. Nobody in the industry uses or cares about CoreML, even now. Back then, in 2017, it was still underpowered hardware that would obviously be outshined by a GPU compute shader.

I think Apple would be ahead of everyone else if they did the same thing Nvidia did by combining their Neural Engine and GPU, then tying it together with a composition layer. Instead they have a bunch of disconnected software and hardware libraries; you really can't blame anyone for trying to avoid iOS as an AI client.

reply
easyThrowaway
1 month ago
[-]
I'm genuinely wondering why the NU was added in first place, I can't think of any app that made egregious use of that outside of the Gallery and Photo App. They didn't even allow any access from third parties in a few initial iterations.
reply
hmottestad
1 month ago
[-]
On-device voice to text for Siri. Facial recognition in the photos app. Text recognition in photos. Scene recognition and adaptation in the camera app. And FaceID.

Not all those were available from day, except FaceID.

reply
Takennickname
1 month ago
[-]
You have no idea what you're talking about. This is painful to read.
reply
methodical
1 month ago
[-]
While I mostly agree with your point of Apple being rather aggressive with forced upgrading, I don't think the device requirements for these features were based solely on the desire to push out people with older devices, but rather due to the hardware requirements for a lot of the ML/AI features being based on the Apple Silicon, at least for the Mac side of things. As to why they drew the line at the iPhone 15, perhaps it's a similar reason regarding performance requirements. While obviously, I'm not intimately knowledgeable of their basis for the device requirements, I'd wait a few more years to see how the device requirements for these new features cascade. If they continue requiring newer and newer devices, only supporting the trailing generation or so, then I'd agree wholeheartedly with your sentiment.
reply
guhcampos
1 month ago
[-]
I'm with you here. As a proud owner of an iPhone 13 Mini, I refuse to switch to anything bigger than that, but I do concede that any moderately useful AI pipeline will require more power than my aging phone is capable to provide.
reply
dwaite
1 month ago
[-]
I'll always slightly regret not getting a Mini, but 2020 was a really bad year for it to launch (when I didn't feel like the extra work needed to see one in person) and 2021 I actually needed a better camera.

In retrospect though, it may be best that I don't know what I missed.

reply
Aperocky
1 month ago
[-]
It'll get you when the battery life drops to 4 hours screen time.
reply
jomohke
1 month ago
[-]
Yes, it's not arbitrary at all — they're only offering it on devices with at least 8GB of memory.

The iPhone 15 Pros were the first iPhones with 8GB. All M1+ Macs/iPads have at least 8GB of ram.

LLMs are very memory hungry, so frankly I'm a little surprised they support such low memory requirements (especially knowing that the system is running other tasks, not just ML). Microsoft's Copilot+ program has a 16GB minimum.

reply
themadturk
1 month ago
[-]
It's odd...I've gotten along fine without AI in my iPhone 13, and it will continue to work just as I have come to expect with the new iOS.

The new AI features will be available on the iPad Air I just ordered, and on my M1 MacBook Air, and I'll be able to play with them there until I'm ready to upgrade my phone. I think these new features sound great, but I'm not in any hurry to adopt them wholesale.

reply
gumby
1 month ago
[-]
> I think these new features sound great, but I'm not in any hurry to adopt them wholesale.

And if you don’t like them you don’t have to use them. I don’t use Siri and it doesn’t bother me that Apple includes it on all their machines.

reply
hansvm
1 month ago
[-]
> I don't use Siri

That's likely true. Unless you were careful to do a lot more than just disabling it, it does use you though, slurping up quite a bit of data.

reply
whoiskevin
1 month ago
[-]
reference? Proof? How is it slurping data if I have it turned off and don't even use it?
reply
hansvm
1 month ago
[-]
You have to disable the data slurping for each app. The main toggle just governs whether it responds to voice commands. Previous discussion [0].

Separately, the data Siri sends isn't held to the same differential privacy standards as some of Apple's other diagnostics. They just give you a unique ID and yolo it [1]. Unless personalized device behaviors are somehow less identifiable than all the other classes of data subject to deanonymization attacks (demographics, software/hardware version fingerprints, ...), that unique ID is just to be able to pretend to the courts that they tried (give or take Hanlon's razor).

[0] https://news.ycombinator.com/item?id=39927657

[1] https://www.apple.com/legal/privacy/data/en/ask-siri-dictati...

reply
flemhans
1 month ago
[-]
How?
reply
hansvm
1 month ago
[-]
reply
evantbyrne
1 month ago
[-]
I'm not following. What business did they lose if you weren't planning to upgrade? Maybe there is a misunderstanding of what's being gated in the release. I have an iPhone 13, and it was not a surprise to see that AI upgrades would require new hardware. Maybe I'll get a 16 if reviews confirm that it's good.
reply
shrew
1 month ago
[-]
iOS18 will still be available for older devices right? From the looks of the preview, it’ll go back to phones from 2018 which is fairly standard for Apple. And I’d imagine older iOS versions will continue to receive security updates for several years after they’re dropped from the latest version.

What is it about this release that has lost your support? Specifically gating the Apple Intelligence stuff to the most modern hardware?

reply
cchance
1 month ago
[-]
I mean, your pushed out to what? Lol your acting like android doesn't obsolete the shit out of their past cycle phones. I don't really get what you wanted them to do here, they're deploying AI in the OS and ecosystem where they can and the features that the hardware supports are being brought in, i don't see where they're implementing features that the hardware supports and are blocking "because"... I don't think anywhere they clarified what part of the cloud tools wont work on older versions. But at the end of the day old hardware is old... its not gonna support everything especially on generational shifts like how much better arm was over intel, or the fact that NPU's don't just manifest inside of old silicon
reply
dwaite
1 month ago
[-]
I'm confused. My understanding is that they didn't drop support for all Intel Macs in Sequoia. My 2018 MBP for example is still supported. The last Intel Mac Pro in your example is also still supported.

My MBP hasn't hasn't been _fully_ supported for many years. The M1-specific features started rolling out in 2021 - ability to run iPad apps being the most obvious one, the ability to get the globe view in maps being the most questionable. IIRC, my MBP did not yet have a M1 Pro/Max version available for sale when they announced the M1-specific features.

The point being, having AI features unavailable doesn't make the Mac unusable any more than it makes an iPhone 15 unusable. Those parts should continue to operate the way they do today (e.g. with today's Siri).

reply
graeme
1 month ago
[-]
Apple isn’t magic and can’t defy physics. The chips on the older devices aren’t powerful enough to run the new features.

Hardware matters again now in a way it hasn’t for a couple of decades.

reply
umanwizard
1 month ago
[-]
Requiring a new device for new features is not the same thing as removing support for older devices.
reply
iknowstuff
1 month ago
[-]
What? Your phone does everything it did when you bought it and will keep receiving important updates for years to come. How entitled are you to expect to receive every upcoming feature? And where else are you gonna get that? Lol
reply
highwaylights
1 month ago
[-]
Not sure this is accurate - a lot of devices have been culled from receiving future updates with these releases. This is not anything new this year, I get that, and I don't really mind not getting the new AI features, but having devices that will stop being supported and which can't have any other software installed because of being locked down is really not a fun situation to be on this side of.

The old Macs can still install Linux/Windows/ChromeOS Flex. iPads/iPhones not so much.

reply
tjmc
1 month ago
[-]
It will be interesting to see if there's an Osborne Effect on iPhone 15 (non-pro) sales now that the model is effectively stuck with brain-dead Siri.
reply
ManuelKiessling
1 month ago
[-]
It’s ironic how the one company that is WAY over the top wrt secrecy — not only to the public, but also and especially internally (they’ve even walled the hardware team from the software team while developing the iPhone!) — is at the same time the one company that really nails integration.
reply
Jtsummers
1 month ago
[-]
The key difference is that Apple (as an organization) appears to have an overarching roadmap (that spans multiple product lines). The secrecy is irrelevant as long as the leadership of each division is aligned (it hurts, but does not prevent success). Google, MS, and others are less organized at the top, so subdivisions of the overall org are left to plan for themselves and between each other, which leads to conflicts. Resolution may be achieved when things get pushed high enough, but only if it surfaces at the top for a leader (if such people exist in their org structure) to declare a resolution and focus for the groups involved.
reply
Spartan-S63
1 month ago
[-]
This reads like a critique of centralized versus decentralized control, but I think it’s more about lack of clear intention.

Apple has a clear intent that allows the subsequent groups to work towards and contribute to it. Google and Microsoft don’t. They have a vague idea, but not something tangible enough for subordinate leaders to meaningfully contribute to.

reply
xw390112
1 month ago
[-]
As the chess players know, ‘a bad plan is better than none at all’. https://www.chesshistory.com/winter/extra/planning.html
reply
dwaite
1 month ago
[-]
Apple is odd in other ways. For example: Calculator on iPad. Once they had a few iPad releases without a calculator, they needed a sufficient _reason_ to release a calculator app for iPad. The product was gated by the narrative.

There was also likely no team on calculator at all (are there bugs that justify a maintenance team?), so it needed a big idea like 'Math Text' to be green-lit or it would simply never come. This is despite missing calculator being an obvious deficiency, and solving it via a port to be a relatively tiny lift.

reply
xethos
1 month ago
[-]
People will scoff and say "Yeah, but all kinds of companies have internal firewalls, big deal". But no, these were literal walls that would appear over a weekend and suddenly part of the campus was off-limits to those not on the iPhone project.
reply
kolinko
1 month ago
[-]
Wow. Any place to read about that?
reply
xethos
1 month ago
[-]
I found it in Fred Vogelstein's "Dogfight: How Apple and Google Went to War and Started a Revolution" [0]. Decent read, not amazing IMO, but with fun tidbits about the iPhone / Android launches

[0] https://libgen.is/book/index.php?md5=60B714243482AE3D7B9A83B...

reply
gumby
1 month ago
[-]
Any book about the development of the iphone or articles in the business press from a few months after product introduction (if any are still online…try Fortune).
reply
kudokatz
1 month ago
[-]
I heard they did this for the Amazon Fire Phone, too
reply
bbor
1 month ago
[-]
Well tbf I’m not sure Google does project ownership… I was shocked how many seemingly important conversations ended with “well, I guess these days that functionality is owned by the community of relevant stakeholders…” (aka: owned by nobody at all). I think they’re only able to do what they’ve done through the sheer concentration of brilliant overpaid engineers, in spite of such “innovation”.

Totally agree on the AI points. Google may have incredible research, but Apple clearly is playing to their strengths here.

reply
vjulian
1 month ago
[-]
Could you please explain ‘lay bare organisational deficiencies’? I ask without skepticism.
reply
theshrike79
1 month ago
[-]
Most companies don't have an unified platform they can build this on. And even if they seem to have superficially, the internal organisation is so splintered that it'll never happen.

Like what's going on inside Google, it's getting stupidly political and infighty. If someone tries to build a comprehensive LLM that touches on gmail, youtube, docs, sheets etc, it's going to be an uphill battle.

reply
Jtsummers
1 month ago
[-]
And even if they did, there'd be five competing efforts, two would be good or at least decent, four would be deployed (not the best one though), and all would be replaced in three years.

None of them would work on-device, all would leak your data into the training set.

reply
matwood
1 month ago
[-]
And you forgot, if it was Google they would also all have their own internal chat service.
reply
theshrike79
1 month ago
[-]
And two of them would be killed within a year and/or renamed/rebranded :)
reply
sakisv
1 month ago
[-]
They did do that. Back in 2013-2015 or so. It was called Google Now, and it was a bit like magic.

It showed you contextual cards based on your upcoming trips/appointments/traveling patterns. E.g. Any tube delays on your way home from work. How early you should leave to catch your flight.

This alongside Google Inbox was among the best and most "futuristic" products.

I was glad to see today Apple implementing something similar to both of these.

reply
simonh
1 month ago
[-]
But it was a decent new Google product, hence all the past tenses.
reply
drcongo
1 month ago
[-]
Google Then™
reply
theshrike79
1 month ago
[-]
Google Now was actually a great idea, it was the only "social" media I actually understood.

No wonder they killed it.

reply
jbl0ndie
1 month ago
[-]
I've always been baffled why those two got canned. They were both really useful.
reply
camjohnson26
1 month ago
[-]
Canceling Google Inbox was when I started to move off their platform, it was their best product in years and finally got a handle on email chaos, and then they just killed it with no follow up, insane.
reply
dwaite
1 month ago
[-]
Nothing compared to the fighting between whether the Office team or the Platform team at Microsoft owned the AI 'client' work, if we were back in the Windows 7 days.

There'd be constant sabotage.

reply
ahmeneeroe-v2
1 month ago
[-]
I assume they mean: expose internal corporate silos/VP-fiefdoms that don't work seamlessly together despite being marketed under the same brand
reply
dmix
1 month ago
[-]
Google is quite notorious for having this issue from various blog posts and HN comments I've read.

Lots of middle management power groups that would prevent a cohesive top down vision from easily being adopted.

reply
yen223
1 month ago
[-]
This is the norm, not the exception for big companies.

The more I spend time in mid-large companies, the more I'm amazed that that Apple somehow managed to avoid releasing three different messaging apps that do the same thing.

reply
ljm
1 month ago
[-]
The same as any enterprise company. It’s all office politics and bureaucracy.

Make no mistake, Google is Enterprise.

reply
drevil-v2
1 month ago
[-]
Now it makes sense why Elon Musk trims the fat on his companies ruthlessly and regularly
reply
gumby
1 month ago
[-]
The phenomenon is called “Conway’s Law”: a product reflects the organizational structure.

https://en.wikipedia.org/wiki/Conway%27s_law

reply
epolanski
1 month ago
[-]
Why wouldn't Microsoft be able to?

Anyway, while I see all of your points, none of the things I've read in the news make me excited. Recapping meetings or long emails or suggesting how to write are just...not major concerns to me at least.

reply
dagw
1 month ago
[-]
Why wouldn't Microsoft be able to?

Microsoft seems to have lost all internal cohesion and the ability to focus the entire company in one direction. It's just a collection of dozens of small fiefdoms only caring about hitting their own narrow KPIs with no overall strategic vision. Just look at the mess of competing interest that Windows 11 and Edge have turned into.

reply
vundercind
1 month ago
[-]
They can’t even get marketing on the same page, such that they counter-productively confuse the hell out of their customers who might be considering giving them more money.

Quick, what’s “copilot”?

reply
prewett
1 month ago
[-]
Automates the tedious/boring parts of flying between regions in Microsoft Flight Simulator. On higher difficulty levels, can use voice recognition to accept tasks ("Copilot, we are losing fuel, find the nearest airport we can land at", "Copilot, what is the VFR frequency for the airport?", etc.) Sometimes misunderstands tasks and/or will give erroneous information, to increase fidelity to real-world situations.
reply
ethbr1
1 month ago
[-]
+ Teams, which includes a feature to build entire apps... inside Teams.
reply
Onawa
1 month ago
[-]
Oh God, when my partner started exploring using Power Apps for Teams to build a platform for running a clinical study, I was intrigued... Then horrified as I tried to help her get it setup. https://learn.microsoft.com/en-us/power-apps/teams/create-fi...
reply
ToucanLoucan
1 month ago
[-]
Maybe it's just my overly-cynical ass but when the parent comment said Teams+ lets you build apps inside Teams, I physically shuddered.
reply
ethbr1
1 month ago
[-]
'We call it Apps for Teams Live 365+!'
reply
mlinsey
1 month ago
[-]
The flip side, is that they would not have been able to execute so well with Azure etc if the Windows org had too much of a say about pushing Windows as the OS of choice everywhere. Winning in a brand new space, especially one that might be disruptive to other business units, sometimes necessitates letting the new org do its own thing.
reply
ArchOversight
1 month ago
[-]
"execute" and "well" in the same sentence when referring to Azure is a bit weird to see.
reply
tstrimple
1 month ago
[-]
Only to the ignorant. Tens of billions of dollars a year are spent on the platform. Either all of those customers are misinformed about the capabilities of Azure or you are. I’m going to rely Occam’s Razor here.
reply
__MatrixMan__
1 month ago
[-]
Yeah, it's hard to believe that VSCode and Windows are products from the same company. Very different vibes.
reply
araes
1 month ago
[-]
Is VS Code the same as Visual Studio? Super confused.

Visual Studio Code, appears to be a code editor with support for C, C#, C++, Fortran, Go, Java, JavaScript, Node.js, Python, Rust, and Julia

Visual Studio, appears to be a code editor with support for 36 languages including C, C++, C++/CLI, Visual Basic .NET, C#, F#, JavaScript, TypeScript, XML, XSLT, HTML, and CSS.

Visual Studio Code, appears to be liked by almost every user and the favorite in a bunch of online polls.

Visual Studio, appears to be unusable junk, widely hated in almost every survey, and unable to even display its own error messages correctly in 2022.

Visual Studio Code is supposedly a "related product" according to Wikipedia: https://en.wikipedia.org/wiki/Visual_Studio#Related_products

How are these related? They seem like Microsoft's internal fiefdoms again.

reply
__MatrixMan__
1 month ago
[-]
Definitely different things, I've used them both. Visual Studio is from before they went on their Microsoft <3 Open Source campaign. It has pricey licenses and is pretty much an ad for .net

It's a traditional windows application. .net/WPF I think. Configured via XML

VSCode is free, an electron app, has a plugin store with lots of niche language plugins. Configured via json.

Surely they're using VSCode to exert influence in their Microsofty way, but it feels much less like a prison.

VSCode still feels like a bit much to me (though of a monster than Visual Studio). I'm pretty happy with helix.

reply
araes
1 month ago
[-]
Cool. Thanks. Started in with the newest version of VS 2022 recently for C++, and then find out there's apparently something better in a different internal fiefdom that people actually like.
reply
sunaookami
1 month ago
[-]
reply
joking
1 month ago
[-]
> Why wouldn't Microsoft be able to?

they are irrelevant on the mobile ecosystem, a place where almost all this features are most relevant and useful

reply
ethbr1
1 month ago
[-]
I've heard Microsoft has gotten better, but I think this still rings true. https://www.reddit.com/r/ProgrammerHumor/comments/6jw33z/int...
reply
themadturk
1 month ago
[-]
>suggesting how to write

As a(n amateur) fiction writer who pays too much for ProWritingAid each year, I'd love to see if this feature is any good for fiction. I take very, very few of PWA's AI-suggested rewrites, but they often help me see my own new version of a bit of prose.

reply
kelsey98765431
1 month ago
[-]
"Anyone serious about software should be making their own hardware - Alan Kay" - Steve Jobs
reply
tstrimple
1 month ago
[-]
Microsoft is trying and I feel they are in a much stronger position than Google. The same advantage that Apple has for personal docs and images Microsoft has across business content. Seamless AI integration across teams and outlook and sharepoint and other office products offer huge platform benefits.
reply
kaba0
1 month ago
[-]
What personal data do you have on Microsoft? It doesn’t even know where my photos are as the folder structure is at the end completely arbitrary - how could it execute “call the person on this photo”, and similar level of integration?

This Apple AI presupposes their strong ecosystem, which no one has anything similar to. Google was in a good position years ago, but they are criminally unfocused nowadays.

reply
tstrimple
1 month ago
[-]
I know reading comprehension is difficult based on the overall quality of internet content these days. But I explicitly called out the business data Microsoft has in this domain. So why does Microsoft’s control of personal data factor into this at all? Do you have anything of value to add? Did you even bother to read what you are responding to? But sure. Go off on your anti Microsoft rant completely disconnected from the topic at hand.
reply
jerieljan
1 month ago
[-]
For Google in particular, this was honestly something they could've done far earlier. They had the Pixel phones, they had the Tensor stuff, and then Gemini came along.

But for some reason, they decided to just stick to feature tidbits here and there and chose not to roll out quality-of-life UI features to make Gemini use easier on normal apps and not just select Google apps. And then it's also limited by various factors. They were obviously testing the waters and were just as cautious, but imho it was a damn shame. Even summarization and key points would've been nice if I could invoke it on any text field.

But yeah, this is truly the ecosystem benefit in full force here for Apple, and they're making good use of it.

reply
smileysteve
1 month ago
[-]
Google couldn't figure out which messaging platform of theirs would succeed; imagine if the team working on hangouts or meet v1 had worked on RCS or context first.

RCS isn't fair, Google wanted the carriers to work on that, but in a disparate ecosystem they also couldn't come to a decision

reply
kaba0
1 month ago
[-]
Android CPUs are not playing in the same ballpark as Apple’s, there is always at least one generation difference between - and the core of these AI features is on-device intelligence, possibly by having managed to fit a good chunk of an LLM to an iphone. It being able to determine when to go online is the crucial part.

Also, apple bought up more ML startups than google or microsoft.

reply
dwighttk
1 month ago
[-]
How long until the EU decides that making this a platform capability is a fineable offense?
reply
ants_everywhere
1 month ago
[-]
That's actually exactly what you want. No one company should know what you do on all apps.
reply
ethbr1
1 month ago
[-]
At this point, I trust {insert megacorp} more than {insert App Dev LLC} + {insert megacorp}.

Neither is great, but at least the megacorp has a financial incentive to maintain some of my privacy.

reply
Sai_
1 month ago
[-]
I don't quite understand this comment. Are you encouraging us to use your comment as some sort of template and insert our own preferred corporate names?

Sounds like some crazy level of meta where your brilliance is applicable to any pair of mega corps...which I don't buy.

reply
tempestn
1 month ago
[-]
Neither is Apple unless one buys wholly into the Apple ecosystem. I want open ai tools that I can truly use with all my text. But I'm not holding my breath.
reply
theshrike79
1 month ago
[-]
"Open tools" and "Integration" is really hard to do.

I'd _love_ to be able to pull down my email from Fastmail, Calendars from iCal, notes from Google Notes etc to a single LLM for me to ask questions from, but it would require all of the different sources to have a proper API to fetch the data from.

Apple already has all it on device and targeted by ML models since multiple iPhone versions ago. Now they just slapped a natural language model on top of it.

reply
WheatMillington
1 month ago
[-]
>I always saw this level of personal intelligence to come about at some point, but I didn’t expect Apple to hit it out of the park so strongly

That's a little premature, let's try not to be so suckered by marketing.

reply
theshrike79
1 month ago
[-]
Apple is again going where Google (the world's largest ad company) cannot follow: 100% user privacy.

They really hammered in the fact that every bit is going to be either fully local or publicly auditable to be private.

There's no way Google can follow, they need the data for their ad modeling. Even if they anonymise it, they still want it.

reply
WheatMillington
1 month ago
[-]
They literally announced their partnership with OpenAI today, and I've seen no sign of this data being "publicly auditable" - can you share this with me?
reply
kalleboo
1 month ago
[-]
The OpenAI integration is a side-feature.

All the stuff that works on your private data is Apple models that are either on-device or in Apple's private cloud (and they are making that private cloud auditable).

The OpenAI stuff is firewalled off into a separate "ask ChatGPT to write me this thing" kind of feature.

reply
dialup_sounds
1 month ago
[-]
reply
labcomputer
1 month ago
[-]
> I've seen no sign of this data being "publicly auditable" - can you share this with me?

They announced it in the same keynote where they announced the partnership with OpenAI (and stated that sharing your data with OpenAI would be opt-in, not opt-out).

reply
cchance
1 month ago
[-]
WTF are you talking about, the guy literally said that to connect to Apple Intelligence servers the client side verifies a publically registered audit trail for the server. He then followed up saying no data on chatgpt will keep session information regarding who the data came from.

Apples big thing is privacy, i doubt they'd randomly lie about that

reply
cromka
1 month ago
[-]
This still runs on external hardware which can be spoofed at the demand of authorities. It may be private as in they themselves won’t monetize it but your data certainly won’t be safe
reply
cchance
1 month ago
[-]
Ahhh cool encryption doesn't exist, MTLS doesn't exist i forgot
reply
verandaguy
1 month ago
[-]
I can't speak towards Apple's or $your_government's trustworthiness, but MTLS wouldn't protect against an attack where Apple collaborates with a data requester.

There are people and orgs out there who (justifiably or not) are paranoid enough that they factor this into their threat model.

This is a bit academic right now, but it's also worth mentioning that in the coming years, as quantum computing becomes more and more practical, snapshots of data encrypted using quantum-unsafe cryptography, or with symmetric keys protected by quantum-unsafe crypto (like most Diffie-Hellman schemes) will be decryptable much more easily. Whether a motivated bad actor has access to the quantum infrastructure needed to do this at scale is another question, though.

reply
cromka
1 month ago
[-]
How about you Google DMA Memory Attacks, VM Escape attacks, Memory scraping and sniffing, Memory Bus Snooping and so on.

As long as the data is processed externally, no software solutions make it safe, unless you yourself are in control of the premises.

reply
Spod_Gaju
1 month ago
[-]
"100% user privacy."

That is a huge stretch and a signal as to how good Apple is with their marketing.

If they are still letting apps like GasBuddy to sell your location to insurance companies then they are no where near "100% privacy".

reply
its_ethan
1 month ago
[-]
GasBuddy is an optional app, right? Apple is very up front about what apps are going to get access to things like location, with user prompts to allow/deny. Meaning you are opting in to a lack of privacy, which is very expected behavior?

The default Apple apps (maps, messaging, safari) are solid from a privacy perspective, and I don't think you can say the same about the default apps on competitors phones.

reply
Spod_Gaju
1 month ago
[-]
I am sorry I used GasBuddy as an example since I agree it is a stretch, but still not one I disagree with.

But let's get back to Apple...if it was functioning at "100% user privacy" would it be able to give access to your data to law enforcement? As an example, I consider MullvadVPN to be 99% user privacy.

reply
its_ethan
1 month ago
[-]
reply
Spod_Gaju
1 month ago
[-]
No.

That was concerning unlocking the phone. I’m talking about the data that they store on iCloud.

reply
krrrh
1 month ago
[-]
I already linked to this article on Advanced Data Protection for iCloud (e2ee for most things) in a different comment, but it feels like a lot of people don’t know about this feature. It literally has zero effect on the user experience (except janky access to iCloud via the web, but shrug). Apple’s competitors don’t have anything close and their business models mean they probably never will.

https://support.apple.com/en-ca/102651

reply
jachee
1 month ago
[-]
The data stored on iCloud is locked with the key from the devices’ Secure Enclave. They’d have to unlock your device to get access to decrypt the iCloud data.
reply
Daneel_
1 month ago
[-]
Based on Apple's previous track record, the answer is very likely "no".
reply
Spod_Gaju
1 month ago
[-]
reply
cyberpunk
1 month ago
[-]
Why should apple be in control of what individual apps do with your location data? You explicitly grant the app access to your data, and agreed to the terms.

The difference between that and this is extremely clear is it not?

reply
Spod_Gaju
1 month ago
[-]
If I want a device that’s giving me apps on a locked in platform why shouldn’t they care about what the apps do with my information?

Imagine if we had a smart phone maker that Cared about this so we didn’t have to worry about it all the time?

reply
themadturk
1 month ago
[-]
Gas Buddy, like all 3rd party apps, has their privacy practices detailed on their App Store page. It's true that not all vendors are completely truthful with this information, but Gas Buddy (for one) appears to be pretty up-front: everything in the app is shared with the developers or others except (they say) diagnostic information. Apple set up a privacy-disclosure rule, Gas Buddy seems to be following it, and it's the user's choice whether to install Gas Buddy.

Apple has done its privacy work here; now it's up to the end user to make the final choice.

reply
Damogran6
1 month ago
[-]
It's the potential for the model. Everyone else is hoovering the internet to model everything and Apple is sticking with their privacy message and saying 'how can I model your stuff to help you.'

That's tangibly different.

reply
bboygravity
1 month ago
[-]
I beg to differ.

Example that should be super trivial: try to setup a sync of photos taken on your Iphone to a laptop (Mac or Windows or Linux) without going through Apple's cloud or any other cloud?

With an Android phone and Windows laptop (for example) you simply install the Syncthing app on both and you're done.

My point is not "Apple is worse", instead I'm just trying to point out that Apple definitely seems eager to have their users push a lot of what they do through their cloud. I don't see why their AI will be any different, even if their marketing now claims that it will be "offline" or whatever.

reply
jameshart
1 month ago
[-]
Apple is interested in providing products that they can guarantee will work, and meet actual user requirements.

"Sync my files without using Apple's cloud" is not a user requirement. Delivering features using their cloud is a very reasonable way for Apple to provide services.

Now, "Sync my files without compromising my privacy" is a user requirement. And Apple iCloud offers a feature called 'advanced data protection" [1] that end to end encrypts your files, while still supporting photo sharing and syncing. So no, you can't opt out of using their cloud as the intermediary, but you can protect your content from being decrypted by anyone, including Apple, ooff your devices.

It has the downside that it limits your account recovery options if you lose the device where your keys are and screw up on keeping a recovery key, so it isn't turned on by default, but it's there for you to use if you prefer. For many users, the protections of Apple's standard data protection are going to be enough though.

[1] https://support.apple.com/en-us/102651#:~:text=Advanced%20Da....

reply
gigel82
1 month ago
[-]
I'm a user and I require that feature. Transferring photos over a USB cable to a PC has been a feature in all portable electronics with a camera for the past 25+ years, yet Apple is still getting it wrong.
reply
jameshart
1 month ago
[-]
Wires? Oh yeah, I remember when things had wires. Good times.
reply
RunSet
1 month ago
[-]
> Wires? Oh yeah, I remember when things had wires. Good times.

Last I checked the more expensive Macbooks had three USB ports, and the cheap ones have two.

Since Macbooks no longer have ethernet ports, those USB ports are useful for plugging in the dongle when I want to connect the Macbook to an ethernet wire. Good times.

reply
labcomputer
1 month ago
[-]
> Example that should be super trivial: try to setup a sync of photos taken on your Iphone to a laptop (Mac or Windows or Linux) without going through Apple's cloud or any other cloud?

The first hit on Google makes it look trivial with iPhone too?

https://support.apple.com/guide/devices-windows/sync-photos-...

> With an Android phone and Windows laptop (for example) you simply install the Syncthing app on both and you're done.

And with iPhone you just install the "Apple Devices" app: https://apps.microsoft.com/detail/9np83lwlpz9k

reply
Damogran6
1 month ago
[-]
iCloud synchronizes all my stuff between all my devices (windows too) now. They've always been privacy-forward. I could completely see a container that spins up and AI's my stuff in their datacenter, that they don't have visibility into. The impact of them getting it wrong is pretty significant.
reply
pacifika
1 month ago
[-]
> Example that should be super trivial: try to setup a sync of photos taken on your Iphone to a laptop (Mac or Windows or Linux) without going through Apple's cloud or any other cloud?

Install jottacloud and enable the photos backup feature.

reply
Daneel_
1 month ago
[-]
I just plug my iphone into my windows laptop and use the photo import tool built into windows. It works completely fine.

I also sync my photos onto my NAS via sftp, using the Photosync app.

reply
dereg
1 month ago
[-]
Apple Intelligence stuff is going to be very big. iOS is clearly the right platform to marry great UX AI with. Latching LLMs onto Siri have allowed the Siri team to quickly atone for its sins.

I think the private compute stuff to be really big. Beyond the obvious use the cloud servers for heavy computing type tasks, I suspect it means we're going to get our own private code interpreter (proper scripting on iOS) and this is probably Apple's path to eventually allowing development on iPad OS.

Not only that, Apple is using its own chips for their servers. I don't think the follow on question is whether it's enough or not. The right question to ask is what are they going to do bring things up to snuff with NVDIA on both the developer end and hardware end?

There's such a huge play here and I don't think people get it yet, all because they think that Apple should be in the frontier model game. I think I now understand the headlines of Nadella being worried about Apple's partnership with OpenAI.

reply
wayeq
1 month ago
[-]
> allowed the Siri team to quickly atone for its sins.

Are we sure there is a Siri team in Apple? What have they been doing since 2012?

reply
dereg
1 month ago
[-]
Learning how to write llm function calls.
reply
rvnx
1 month ago
[-]
There is also this thing with Siri and Google Assistant that a lot of the answers are manually entered (the jokes, etc), so the switch to an LLM could be a massive improvement.
reply
cyberpunk
1 month ago
[-]
I don't get this at all, how does integrating siri with a llm mean you get an interpreter and allowing development?
reply
hmottestad
1 month ago
[-]
As much as I hoped for Xcode on the iPad, I still don’t think any of this AI stuff or “private cloud” is related.

Though I don’t know if I would use my iPad for programming even if it was possible, when I have a powerful Macbook Pro with a larger screen.

reply
constantcrying
1 month ago
[-]
I do believe much of what they showed was impressive. It actually seems to realize the "personal digital secretary" promise that personal computing devices throughout the decades were sold on.

The most important question to me is how reliable it is. Does it work every time or is there some chance that it horribly misinterprets the content and even embarrasses the user who trusted it.

reply
dom96
1 month ago
[-]
Yeah, reliability is the crucial bit. Like that example he showed where it checked whether he can make an appointment (by checking driving times), a lot can go wrong there and if the assistant tells you "Yes, you can" but you cannot then I can see lots of people getting angry and not trusting it for anything.
reply
discordance
1 month ago
[-]
In the context of off-device processing, it's worth keeping in mind that US surveillance laws have recently expanded in their scope and reach:

https://www.theguardian.com/us-news/2024/apr/16/house-fisa-g...

reply
ENGNR
1 month ago
[-]
For this reason, I really hope we can self-host our "private cloud" for use with apple devices. That would truly, properly allow end to end privacy. I don't trust Apple given the legislation you've just linked to, both claims obviously can't be correct.
reply
doug_durham
1 month ago
[-]
Only a diminishingly small percentage of users have the ability to do this properly. I have 40 years of development experience and I don't trust my self to set up and properly run these types of servers.
reply
ENGNR
1 month ago
[-]
Fair, but we could conceivably have an ecosystem of providers, like ProtonMail or whoever the user feels comfortable with. If it's just Apple we're headed for honeypot
reply
richardw
1 month ago
[-]
I’ve been waiting for Apple to arrive. They bring so much polish and taste.

Two features I really want:

“Position the cursor at the beginning of the word ‘usability’”

“Stop auto suggesting that word. I never use it, ever”

reply
teh_infallible
1 month ago
[-]
Apple auto suggest can be ducking annoying
reply
RheingoldRiver
1 month ago
[-]
legitimately good voice recognition would probably be the "killer feature" to get me to switch from android to iOS after all this time. I'm so frustrated with the current state of voice recognition in android keyboards, but ChatGPT's recent update is amazing at voice recognition. I type primarily by voice transcribing and I would be so happy if I could go from 70% voice 30% I need to type to 95% voice 5% I need to type.
reply
deepGem
1 month ago
[-]
One really powerful use case they demoed was that of meeting conflicts.

"Can you meet tonight at 7?" Me "oh yes" Siri "No you can't, your daughter's recital is at 7"

It's these integrations which will make life easier for those who deal with multiple personas all through their day.

But why partner with an outside company ? Even though it's optional on the device etc, people are miffed about the partnership than being excited by all that Apple has to offer.

reply
tonyabracadabra
1 month ago
[-]
The image generation is dalle 2.5 level and feels really greasy to me, beyond that I think the overall launch is pretty good! I also congratulate rabbit r1 for their timely release months before WWDC https://heymusic.ai/music/apple-intel-fEoSb
reply
thomasahle
1 month ago
[-]
The generated image of two dice (https://x.com/thomasahle/status/1800258720074490245) was dalle 1 level.

Just randomly sprinkled eyes on the sides. I wonder why they chose to showcase that.

reply
kaba0
1 month ago
[-]
What eyer are you talking about? That’s two “hand-sketched” dice, isn’t it?
reply
thomasahle
1 month ago
[-]
Did you look at the eyes/pips?

On the side with 5, they are overlapping. On the side with 4, some of them are half missing. On the side with 3, they are arranged in triangle instead of a straight line.

Not to talk about that 2 and 5 should be on opposing sides, same with 3 and 4.

It's basically like early AI being unable to generate hands, or making 6 fingers.

reply
wwalexander
1 month ago
[-]
Yeah, the image generation felt really…cheap?…tasteless? but everything else was really impressive.
reply
mholm
1 month ago
[-]
Personalization really feels like the missing link here. The images it creates are highly contextual, which increases their value dramatically. Nobody on Reddit wants to see the AI generated T. rex with a tutu on a surfboard, but in a group chat where your dancer buddy Rex is learning to surf, it’s a killer. The image AI can even use photos to learn who a person is. That opens up a ton of cool ways to communicate with friends
reply
chockablock
1 month ago
[-]
> in a group chat where your dancer buddy Rex is learning to surf, it’s a killer.

Maybe, but this class of jokes/riffs is going to get old, fast.

reply
cchance
1 month ago
[-]
It's what i expected they weren't going to open the pandoras box of realistic photogen on imessage lol, thats why the limit to illustration, cartoon etc, is there to limit the liability of it going wild, they can add more "types" later as they get things more tested, realistically its just prompts hidden behind bubbles, but allows them to slowly roll out options that they've heavily vetted.
reply
tonyabracadabra
1 month ago
[-]
I think that basically stretched the limit of what local model can achieve today, which also makes their image API almost useless for any serious generative art developers.
reply
foolofat00k
1 month ago
[-]
Fwiw I don't think "serious generative art developers" are the target audience at this point, that's probably on the order of .01% of their users
reply
pmarreck
1 month ago
[-]
> The way Siri can now perform actions based on context

Given that this will apparently drop... next year at the earliest?... I think it's simply quite a tease, for now.

I literally had to install a keyboard extension to my iPhone just to get Whisper speech to text, which is thousands of times better at dictation than Siri at this point, which seems about 10 years behind the curve

reply
QuinnyPig
1 month ago
[-]
Ooh, which keyboard extension is this?
reply
pmarreck
1 month ago
[-]
Auri AI. Note that it's not free, and the way it works is via the clipboard, so it's a bit hacky, but mostly works well.
reply
gavmor
1 month ago
[-]
> the platform owners where you have most of your digital life

Yup! The hardest part of operationalizing GenAI has been, for me, dragging the "ring" of my context under the light cast by "streetlamp" of the model. Just writing this analogy out makes me think I might be putting the cart before the horse.

reply
zer00eyz
1 month ago
[-]
The UI design part? The integration part? The iteration part?

Apple products tend to feel thoughtful. It might not be a thought you agree with, but it's there.

With other companies I feel like im starving, and all they are serving is their version of grule... Here is your helping be sure to eat all of it.

reply
rootveg
1 month ago
[-]
Whenever I read that expression I have to think about the Porsche commercial from a few years back. I guess it’s not always a bad idea :)

https://assets.horsenation.com/wp-content/uploads/2014/07/dw...

reply
paganel
1 month ago
[-]
> but I didn’t expect Apple to hit it out of the park so strongly.

No-one is hitting anything out of the park, this is just Apple the company realising that they're falling behind and trying to desperately attach themselves to the AI train. Doesn't matter if in so doing they're validating a company run by basically a swindler (I'm talking about the current OpenAI and Sam Altman), the Apple shareholders must be kept happy.

reply
kcplate
1 month ago
[-]
> No-one is hitting anything out of the park

I kind of feel like their walled garden and ecosystem might just have created the perfect environment for an AI integrated directly to the platform to be really useful.

I’m encouraged, but I am already a fan of the ecosystem…

reply
nox101
1 month ago
[-]
I have no confidence this will work as intended. The last MacOS upgrade had the horrible UX of guessing which emoji you want and being wrong 95% of the time. I don't expect this to be any better. Demos are scripted.

I also expect it to fail miserably on names (places, restaurants, train stations, people), people that are bilingual, non-English, people with strong accents from English not being their first language, etc.

reply
everdrive
1 month ago
[-]
Do you think Apple could develop an AI so powerful that it would allow me to uninstall Siri from my iPhone?
reply
citizen_friend
1 month ago
[-]
You can turn it off in settings
reply
MetaWhirledPeas
1 month ago
[-]
>The way Siri can now perform actions based on context from emails

I did not see the announcement. Can Siri also send emails? If so then won't this (like Gemini) be vulnerable to prompt injection attacks?

Edit: Supposedly Gemini does not actually send the emails; maybe Apple is doing the same thing?

reply
dudus
1 month ago
[-]
It doesn't look like it does. It seems to only write the email for you but not send. At least yet.
reply
theshrike79
1 month ago
[-]
It just writes the content, it doesn't actually send anything.

We'll find out later if there's an API to do something like that at all or are external communications always behind some hard limit that requires explicit user interaction.

reply
Loveaway
1 month ago
[-]
Some of it will undoubtly be super useful. Things like:

- Proofread button in mail.

- ChatGPT will be available in Apple’s systemwide Writing Tools in macOS

I expect once you'll get used to it, it'll be hard to go without it.

reply
iLoveOncall
1 month ago
[-]
> The way Siri can now perform actions based on context from emails and messages like setting calendar and reservations

I can't think of something less exciting than a feature that Gmail has supported for a decade.

Overall there's not a single feature in the article that I find exciting (I don't use Siri at all, so maybe it's just me), but I actually see that as a good thing. The least they add GenAI the better.

reply
theshrike79
1 month ago
[-]
The difference is that this is on-device and private. Gmail just feeds your emails to Google's servers and they do the crunching. And in the meanwhile train their systems to be better using your content.
reply
iLoveOncall
1 month ago
[-]
It changes nothing about the impressiveness (or lack thereof) of the feature.

Detecting an appointment from an email doesn't even require AI.

You're also over-indexing on the fact that some processing will be done on device. The rest will go to Apple's servers just the same as Google. And you will never know how much goes or doesn't.

reply
chipotle_coyote
1 month ago
[-]
Apple Mail has been able to detect appointments and reservations from email for years, just like Gmail -- and at least in my experience, Apple Mail pulls more useful information out of the mail when it creates the calendar entry. What they showed today is, in theory, something different. (I presume the difference is integrating it into the Siri assistant, not the mail application.)
reply
theshrike79
1 month ago
[-]
The "AI" bit (a word they didn't mention during the keynote BTW) is the processing of a natural language user command to something the existing ML model can understand.

Most of the things shown during the keynote, can already be done with older iPhones - on device, but they need to be "talked to" like a computer, not with natural language that's not completely perfect.

reply
iLoveOncall
1 month ago
[-]
> Most of the things shown during the keynote, can already be done with older iPhones - on device, but they need to be "talked to" like a computer, not with natural language that's not completely perfect.

That's only half true. If you get a text saying "Yo let's meet tomorrow at lunch?" it will offer an option to create an event from it, so even now it's possible in non-perfect scenarios.

Now the real question is: does getting the next 5% that wasn't possible justify sending potentially all you data to Apple's servers? I think the answer is a pretty resounding "fuck no".

Overall the announcement is extremely low value proposition (does anyone really use their stupid Bitmoji thing?) but asks for a LOT from the user (a vague "hey some stuff will be sent to our servers").

reply
PaulHoule
1 month ago
[-]
It is really “the app” that has to die in order for AI to show its potential in user interfaces. If you want to, say, order from a restaurant, your personal agent should order it for you, any attempt for the restaurant to “own” the consumer by putting an app in his face has to end.
reply
l1tany11
1 month ago
[-]
I don’t think I am understanding what you mean, but isn’t one of the potential use cases of AI to say: “Siri order me the thing I always get from _restaurant_ and it navigates the app for you in the background? Potentially this can be done without API integration; the AI synthetically operates the app. Maybe it “watches” how you used the app before (which options you choose, which you dismiss, etc) to learn your preferences and execute the order with minimal interaction with the user. This way annoying, bad, UI can be avoided. AI “solves” UI in this way?

Are you saying this type of scenario kills the app, or are you saying the app needs to die, replaced by an API that AIs can interact with, thus homogenizing the user experience, and avoiding the bad parts of Apps?

reply
PaulHoule
1 month ago
[-]
Preferably the latter but if the agent can use a crappy app for you (maybe using accessibility APIs in some cases) that’s better than the bad experience that “download our app” usually is.

Better yet the system should know about all the commercial options available to you and be a partner in getting food you like, taking advantage of discounts, all of that.

reply
brundolf
1 month ago
[-]
An interesting consequence: I started to think about how I'll be incentivized to take more pictures of useful information, and might even try setting up a Proton Mail proxy so I can use the iOS Email app and give Siri more context
reply
krrrh
1 month ago
[-]
I’m curious if simply running the proton mail bridge on a Mac at home would allow the native mail app feed “semantic” context across devices to iOS.
reply
brundolf
1 month ago
[-]
The native mail app would have my full inbox, so it could (presumably) do whatever local analysis it would do with a normal email account
reply
Jayakumark
1 month ago
[-]
Google is doing this as well but they are doing it on single app like gmail assuming all info is there and also across websites with agents but not cross apps like apple is doing across mails, messages, maps etc.
reply
ethbr1
1 month ago
[-]
100%. Based on what I've seen so far, unified context is king.

Which at the backend means unifying necessary data from different product silos, into organized and usable sources.

reply
cchance
1 month ago
[-]
Not to mention tied into their underlying SDK API that basically the whole system is based on, and seems they are using those same API's for the internal integrations so they can feel whats missing themselves as well.
reply
tomcam
1 month ago
[-]
I’ll be thoroughly impressed when Siri learns my wife’s name for good. Yes, I trained it, but somehow the lesson was forgotten.
reply
eppsilon
1 month ago
[-]
You can set a relationship type in her contact card. I think Siri uses that data.
reply
BoringTimesGang
1 month ago
[-]
Ah, you mean my good friend 'heart emoji wife heart emoji'
reply
dgellow
1 month ago
[-]
We will see, in practice Siri has been pretty much useless even if hyped in demos. I keep pretty low expectations
reply
uhtred
1 month ago
[-]
The willingness you seem to have to sacrifice all your privacy for a few gimmicks is astounding
reply
imabotbeep2937
1 month ago
[-]
"brother didn’t bother to check the flight code I sent him via message when he asks me when I’m landing for pickup"

Yeah but what about people going to the wrong airport, or getting scammed by taking fake information uncritically? "Well it worked for me and anyway AI will get better.". Amen.

reply
lancesells
1 month ago
[-]
Even moreso why does brother take the time to bring up Siri if he can't read the flight code? It's the same thing correct?
reply
cchance
1 month ago
[-]
You do know siri works while driving and other times when you don't want to go fumbling around?
reply
b33j0r
1 month ago
[-]
I will believe it when siri isn’t the stupidest decade old idea ever. I’m sorry if I sound anything but snarky, but they have had Star Trek abilities this whole time, nerfed for “safety” and platform product integrity —from my iPhone
reply
baby
1 month ago
[-]
I just wanted a folding iPhone
reply
Hippocrates
1 month ago
[-]
The AI/Cartoony person being sent as a birthday wish was super cringey, like something my boomer father would send me. I'm a fan of genmoji. That looks fun. Less a fan of generated clip art and "images for the sake of having an image here", and way, way less into this "here, I made a cornball image of you from other images of you that I have" feature. It's as lame as Animoji but as creepy as deepfakes.
reply
jnaina
1 month ago
[-]
Aimed at a different demographics. Peepaw and Meemaw are absolutely going to love it.
reply
wwalexander
1 month ago
[-]
Yeah, the genmoji feel like a proper Apple feature, but the full images feel cheap and pointless.
reply
cchance
1 month ago
[-]
LOL you haven't been in group chats with idiot drunk friends apparently shit like that kills, i had a friend who hates iphones, i sent a dozen bing ai images of him as a cartoon doing... things... to the phone... entire chat was dieing for days.
reply
amelius
1 month ago
[-]
What do you mean into hands of platform owners? The point of having an Apple device is that you can run stuff on your device. The user is in control, not any platforms.
reply
dialup_sounds
1 month ago
[-]
I think what they're getting at is that the platform owners have power because they can actually leverage the data that users give them to be useful tools to those users.

I would contrast this with the trend over the last year of just adding a chatbot to every app, or Recall being just a spicy History function. It's AI without doing anything useful.

reply
verdverm
1 month ago
[-]
I take it as 3rd party alternatives will have a much harder time because they have to ask the user to share their data with them. Apple / Google already have that established relationship and 3rd parties will unlikely have the level of integration and simplicity that the platformers can deliver.
reply
Tagbert
1 month ago
[-]
Apple owns the platform. The user owns the device that embodies the platform.
reply
gowld
1 month ago
[-]
Apple owns the software platform. Can I run my non-Apple Intelligence software on the data in "my" iPhone?
reply
alwillis
1 month ago
[-]
Of course. There will be plenty of APIs that 3rd parties can use access the same data Apple Intelligence has access to.
reply
jagger27
1 month ago
[-]
They did mention they’re adding support for other providers.
reply
croes
1 month ago
[-]
>Private Cloud Compute

But it runs in their cloud.

reply
Nition
1 month ago
[-]
Aside from the search and Siri improvements, I'm really not sure about the usefulness of all the generative stuff Apple is suggesting we might use here.

If you spend an hour drawing a picture for someone for their birthday and send it to them, a great deal of the value to them is not in the quality of the picture but in the fact that you went to the effort, and that it's something unique only you could produce for them by giving your time. The work is more satisfying to the creator as well - if you've ever used something you built yourself that you're proud of vs. something you bought you must have felt this. The AI image that Tania generated in a few seconds might be fun the first time, but quickly becomes just spam filling most of a page of conversation, adding nothing.

If you make up a bedtime story for your child, starring them, with the things they're interested in, a great deal of the value to them is not in the quality of the story but... same thing as above. I don't think Apple's idea of reading an AI story off your phone instead is going to have the same impact.

In a world where you can have anything the value of everything is nothing.

reply
bredren
1 month ago
[-]
I've got a fairly sophisticated and detailed story world I've been building up with my kid, it always starts the same way and there are known characters.

We've been building this up for some time, this tiny universe is the most common thing for me to respond to "will you tell me a story?" (something that is requested sometimes several times a day) since it is so deeply ingrained in both our heads.

Yesterday, while driving to pick up burritos, I dictated a broad set of detailed points, including the complete introductory sequence to the story to gpt-4o and asked it to tell a new adventure based on all of the context.

It did an amazing job at it. I was able to see my kid's reaction in the reflection of the mirrors and it did not take away from what we already had. It actually gave me some new ideas on where I can take it when I'm doing it myself.

If people lean on gen ai with none of their own personal, creative contributions they're not going to get interesting results.

But I know you can go to the effort to create and create and create and then on top of that layer on gen AI--it can knock it out of the park.

In this way, I see gen AI capabilities as simply another tool that can be used best with practice, like a synthesizer after previously only having a piano or organ.

reply
Nition
1 month ago
[-]
That's a very valid rebuttal to my comment. I think this kind of "force multiplier" use for AI is the most effective one we have right now; I've noticed the same thing with GPT-4 for programming. I know the code well enough to double check the output, but AI can still save time in writing it, or sometimes come up with a strategy that I may not have.

Maybe the fact that you did the dictation together with your child present is also notable. Even though you used the AI, you were still doing an activity together and they see you doing it for them.

reply
baapercollege
1 month ago
[-]
In fact, by allowing people to generate photos for birthday wishes, apple is elevating the bottomline not lowering the topline. The person who wants to put in the effort and send a hand-drawn image would often not want to resort to a ready-made machine creation. OTOH, the simple "HBD Mom" sender would now send "Happy Birthday Mom <genmoji>" and an image...
reply
s3p
1 month ago
[-]
Oh god.. if someone sent me Ai generated slop for my birthday I would be bothered. A simple happy birthday is fine!
reply
solarmist
1 month ago
[-]
What about things like GIPHY reactions? I’m guessing you’re not a fan of those either or using quotes from well known people. There shortcuts have existed as long as people have been writing or drawing. They just get easier and more powerful over time.

I view this as just extending that to custom reactions and making them more flexible expanding their range of uses.

reply
bradgessler
1 month ago
[-]
The best articulation of what the industry is currently calling “AI” is “augmented intelligence”—this wording captures that these are tools that can enhance intelligence, not replace it or come up with its own ideas.
reply
tkgally
1 month ago
[-]
Meta comment: This back-and-forth between Nition and bredren is one of the best exchanges I’ve read on HN recently. Thanks to both of you.
reply
jjfoooo4
1 month ago
[-]
Do you think as much creativity and effort would have gone into the story if you had access to AI from the start?
reply
bredren
1 month ago
[-]
Given current interfaces, yes. AR via smartphone or otherwise remain invasive to interpersonal communication in 2024.
reply
rising-sky
1 month ago
[-]
You could say the same thing for sending a Happy Birthday text, versus a hand written letter or card. Nothing is stopping a person from sending the latter today, and yes they are more appreciated, but people also appreciate the text. For example, if you're remote and perhaps don't have that deep of a relationship with them
reply
sensanaty
1 month ago
[-]
If a friend of mine sent me some AI generated slop for my birthday I'd be more offended than if they just sent me a text that only contains the letters "hb"
reply
Almondsetat
1 month ago
[-]
Birthday cards are slop too
reply
sensanaty
1 month ago
[-]
The messages inside of them, which are presumably not AI-generated, aren't however.
reply
Nition
1 month ago
[-]
I guess the question is, is sending an AI Happy Birthday image better than sending a Happy Birthday text?
reply
cchance
1 month ago
[-]
Nope their identical, but the AI one at least looks cool lol
reply
anon22981
1 month ago
[-]
Your analogy does not apply at all.
reply
skybrian
1 month ago
[-]
The value of a gift isn't solely on how much you worked on it or what you spent on it. It can also be in picking out the right one, if you picked something good.

Context will be more important when the gift itself is easy.

reply
nperrier
1 month ago
[-]
I would argue the same thing applies when you buy a card from Hallmark
reply
Nition
1 month ago
[-]
I sometimes think the physical world has been going through a similar time, where most of what we own and receive is ephemeral, mass-produced, lacking in real significance. We have a lot more now but it often means a lot less.
reply
rldjbpin
1 month ago
[-]
having been bombarded with forwards of "good morning" image greetings from loved ones on a daily basis, i can definitely attest to this sentiment.

ai spam, especially the custom emoji/stickers will be interesting in terms of whether they will have any reusability or will be littered like single-use plastic.

reply
cchance
1 month ago
[-]
LOL that image you painstakingly created is also forgotten not long after being given to most people, just because you know the effort that went in doesn't mean the receiving person does 99.9% of the time.

Same thing for your kid, the kid likes both stories, gives 0 shit that you used GenAI or sat up for 8 hours trying to figure out the rhyme, those things are making YOU feel better not the person receiving it.

reply
frereubu
1 month ago
[-]
I think it would be clear that the picture was drawn for the person - I imagine most people would explicitly say something like "I drew this for you" in the accompanying message. And I don't know what kind of kids you've been hanging around, but my daughter would definitely appreciate a story that I spent some time thinking up rather than "here's something ChatGPT came up with". I guess that assumes you're not going to lie to kids about the AI-generated being yours, but that's another issue entirely.
reply
cchance
1 month ago
[-]
You go into "HOW" you write a poem for your daughter? are you also explaining and rubbing in how hard you work to get her food on the table? Like wow, the amount of people here that want their "effort" calculated into the "love" cost of something is insane.

I was brought up that the thought matters, if i think to call my mom she appreciates it i don't need to make some excess effort to show her i love her or show her more love.

You read your daughter a book off your phone you got for free, is that somehow worth less than a book you went to barnes and noble and paid full price for?

reply
Nition
1 month ago
[-]
With my original bedtime story example, I was actually thinking about the kind of story you make up on the spot. Like the topic request comes at bedtime, and maybe the child even has feedback on how the story should go as you're making it up. The alternative of the parent quickly asking ChatGPT on their phone for a story on the selected topic just seems not as fun and meaningful.

I guess in Apple's example it looks like they're writing it as a document on MacOS, so I suppose they are writing it ahead of time.

reply
its_ethan
1 month ago
[-]
This is because there actually is a calculation that people do between "effort" and "love" (it's not some 1:1 ratio and you can't calculate it, it's real). At least for the vast, vast majority of people with functional interpersonal skills...

It's the difference between calling your mom and just saying "Hi mom, this is me thinking to call you. bye." vs calling her with a prepared thing to say/ask about that you had to take extra time to think about before calling. Effort went into that. You don't need to tell her "HOW" you came up with what you wanted to talk about, but there is a difference in how your call will be received as a result.

If you really believe that sending a text versus a hand written card will have no difference on how the message is interpreted, you should just know that you are in the minority.

reply
tines
1 month ago
[-]
> those things are making YOU feel better not the person receiving it

I don't think this is true at all. Love is proportional to cost; if it costs me nothing, then the love it represents is nothing.

When we receive something from someone, we estimate what it cost them based on what we know of them. Until recently, if someone wrote a poem just for us, our estimation of that would often be pretty high because we know approximately what it costs to write a poem.

In modern times, that cost calculation is thrown off, because we don't know whether they wrote it themselves (high cost) or generated it (low/no cost).

reply
cchance
1 month ago
[-]
Love is proportional to cost?!?!?!?! Holy shit thats fucking weird, it costs me 0 to love my mom, i love my mom lol, that doesn't change that fact. Broke mother that can't afford to take care of her kid doesn't not love the kid or vise versa.

If your calculating "cost" for if someone is showing nuts, i feel sad for you lol, if my wife buys or makes me something or just says "i love you" they are equivalent, I don't give a shit if she "does something for me that costs her something" she loves me she thought of me.

The thought is what matters, if you put extra worth on people struggling to do something meaning more love... thats... ya

reply
tines
1 month ago
[-]
I think I see what tripped you up in my comment. I said

> if it costs me nothing, then the love it represents is nothing.

You could read this as meaning that every action has to be super costly or else the love isn't there. I admit that it's poorly phrased and it's not what I meant.

What I should have said is that if it costs you nothing, then it doesn't in itself indicate love. It costs me nothing to say "I love you" on its own, and you wouldn't believe me if I just walked up to you in the street and said that. But your mom has spent thousands of hours and liters of blood, sweat and tears caring for you, so when she says "I love you," you have all that to back those words up. It's the cost she paid before that imbues the words with value: high cost, deep love.

Hopefully that makes more sense.

reply
robertjpayne
1 month ago
[-]
He uses cost in a context of time and effort not directly financial
reply
its_ethan
1 month ago
[-]
I can't help but be baited into responding to this comment too lol

You are obviously willfully misinterpreting what the OP meant by "cost".

You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.

It "costs" you mental space to love your mom, and that can definitely happen with $0 entering the equation.

And with respect to "extra worth on people struggling to do something meaning more love" - if you spend the time to sit down and write a poem, when that's something that you don't excel at, someone will think: "oh wow you clearly really love me if you spent the time to write a poem, I know this because I know it's not easy for you and so you must care a lot to have done so anyway". If you can't see that... thats... ya

reply
raincole
1 month ago
[-]
> You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.

So if you sit down and thinking of what to write to express your love to your mom for two hours, then you love your mom twice than the person who only sit down for one hour loves his mom?

It's what "proportional" means. Words have meanings.

reply
its_ethan
1 month ago
[-]
I never said that spending two hours means you love someone twice as much as spending one hour, I'm not sure where you're getting that from.

You also may be shocked to learn this, but "proportional" doesn't mean 1:1. It can mean 1:2, 5:1, or x:x^e(2*pi). All of those are proportions. Words do have meanings, and you'll note that - while I didn't even misuse the word proportional - the quotations also indicate I'm using it more loosely than it's textbook definition. You know, like how a normal person might.

I'm getting the vibe from you and the other commentator that, to you, this is about comparing how much two people love their respective mothers. That's not at all what this is even about? You can't compare "how much" two people love a respective person/thing because love isn't quantifiable.

I'm really not sure what you're even taking issue with? The idea that more time and effort from the giver corresponds to the receiver feeling more appreciation? That is not exactly a hot take lmfao

reply
Nition
1 month ago
[-]
Consider this scenario: Your friend sends you an image of some art they produced. It looks very impressive. You ask them how long it took them to create. They say oh, only a minute or so, I made it with a MidJourney prompt.

Do you feel disappointed in that answer? If yes, then surely you see that appreciation of something can be relative to effort.

reply
raincole
1 month ago
[-]
Love is somewhat related to cost, but "proportional" is definitely not the word you want.

If love is proportional to cost, then rapists and psychos who kill their SOs are the true lovers since the cost is 20 years of jail time to life sentence. Do you want to live by this standard?

reply
tines
1 month ago
[-]
Actually you prove my point; the psycho loves himself so much that he will risk prison to get what he wants (or keep others from having it), but he doesn't love his SO enough to pay the cost of letting her go.
reply
Nition
1 month ago
[-]
I don't truly agree with your take here, but let's assume you are correct and creating real things in your life only benefits you and no-one else. If you create a painting or story or piece of furniture, others prefer the more professional AI or mass-produced version.

In that scenario certainly there'll be times when using the AI option will make more sense, since you usually don't have hours to spare, and you also want to make the stories that your kid likes the most, which in this scenario are the AI ones.

But even then there's still that benefit to yourself from spending time on creating things, and I'd encourage anyone to have a hobby where they get to make something just because they feel like it. Even if it's just for you. It's nice to have an outlet to express yourself.

reply
cromka
1 month ago
[-]
What a cynical take!
reply
asimpletune
1 month ago
[-]
Their demos looked like how I imagined AI before ChatGPT ever existed. It was a personalized, context aware, deeply integrated way of interacting with your whole system.

I really enjoyed the explanation for how they planned on tackling server-enabled AI tasks while making the best possible effort to keep your requests private. Auditable server software that runs on Apple hardware is probably as good as you can get for tasks like that. Even better would be making it OSS.

There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff that I think we all imagined an AI world would look like. I'm really impressed with the vision they described and I think they honestly jumped to the lead of the pack in an important way that hasn't been well considered up until this point.

It's not just the raw AI capabilities from the models themselves, which I think many of us already get the feeling are going to be commoditized at some point in the future, but rather the hardware and system-wide integrations that make use of those models that matters starting today. Obviously how the experience will be when it's available to the public is a different story, but the vision alone was impressive to me. Basically, Apple again understands the UX.

I wish Apple the best of luck and I'm excited to see how their competitors plan on responding. The announcement today I think was actually subtle compared to what the implications are going to be. It's exciting to think that it may make computing easier for older people.

reply
thefourthchime
1 month ago
[-]
Until this gets into reviewers' hands, I think it's fair to say that we really have no idea how good any of this is. When it comes to AI being able to do "all kinds of things," it's easy to demo some really cool stuff, but if it falls on its face all the time in the real world, you end up with the current Siri.

Remember this ad? https://www.youtube.com/watch?v=sw1iwC7Zh24 12 years ago, they promised a bunch of things that I still wouldn't trust Siri to pull off.

reply
fckgw
1 month ago
[-]
These are all very basic commands that Siri pulls off flawlessly whenever I use it.
reply
seec
1 month ago
[-]
Most of the stuff shown are much faster to do yourself if you have your hands free and if you don't you have to pray the gods that Siri doesn't fuck up for whatever reason.

Even something as simple as setting the time, Siri will bork it at least 1 in 10 times. I know that for sure, since I worked at a friend's restaurant 2 summers ago and was heavily using Siri's timer to time french fries blanching (many batches for at least 2 hours every day or every 2 days); this dam thing would regularly use wrong time or not understand at all even though it was always the same dam time and the conditions were always similar.

On the other hand, the Google home at my cousin's place operates at my command without mistakes even though he doesn't even have the luxury of knowing my voice.

People who think Siri is good either are delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple fans friends; I will say it's the former.

I myself use iPhone/Apple Watch/Macs since forever so it's not like I'm free hating. I just goddam suck like too many Apple stuff recently...

reply
fckgw
1 month ago
[-]
> People who think Siri is good either are delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple fans friends; I will say it's the former.

Or maybe they just have good experiences? Why do they have to be delusional?

reply
seec
1 month ago
[-]
Because they usually are in a tech bubble where Apple is the best there is on everything and they never really tried any alternatives.

So, they think it's good and it's a delusion because it would not be objectively considered good if it was compared side by side with the competition.

I know that for sure because I have spent a lot of time with people like that and I used to be a bit like that. It's much easier to see the world in black and white for most, just like religions are with good/bad and people who really like Apple stuff are very often like that.

reply
wilg
1 month ago
[-]
I think too many people assumed that because ChatGPT is a conversation interface that that's how AI should be designed, which is like assuming computers would always be command lines instead of GUIs. Apple has done a good job of providing purpose-built GUIs for AI stuff here, and I think it will be interesting to watch that stuff get deeper.
reply
epolanski
1 month ago
[-]
> There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff that I think we all imagined an AI world would look like.

I can't but feel all of this super creepy.

reply
TillE
1 month ago
[-]
We're really just describing an on-device search tool with a much better interface. It's only creepy if you treat it like a person, which Apple is pretty careful not to do too much.
reply
cchance
1 month ago
[-]
Yep it's an assistant, they didnt add some weird app where you can talk to virtual granny lol
reply
iLoveOncall
1 month ago
[-]
Yep.

I remember vividly the comment on Windows Recall that said if the same was done by Apple it would be applauded. Here we are.

reply
doctor_eval
1 month ago
[-]
At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.

Microsoft on the other hand… well, I understand they just pulled the recall feature after it was discovered the data wasn’t even encrypted at rest?!

reply
iLoveOncall
1 month ago
[-]
If anything Recall is MORE privacy respectful than this since everything is stored and processed on your device and you can access (and easily alter) the database, exclude specific applications, websites (for Edge for now), etc.

I'm not saying it's not an awful feature, I will disable it as soon as it is installed.

The fact that it's not encrypted at rest really is the least of my concerns (though it does show the lack of care and planning). For this to be a problem, an attacker already has all the necessary accesses to your computer to either get your encryption key or do devastating damage anyway.

> At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.

"Not perfect" is enough to be concerned. I would also not be surprised that their good reputation is more due to their better ability at hiding their data collection and related scandals rather than due to any care for the user.

reply
zarzavat
1 month ago
[-]
I thought that the problem with Recall is that it takes screenshots (potentially of sensitive things like passwords, or private browsing sessions) and stores new data that you never intended to store in the first place.

This Apple AI is not storing anything new, it’s just processing the data that you already stored. As long as they pay close attention to invalidation on the index when things get deleted.

The cloud processing is a little concerning but presumably you will be able to turn it off, and it doesn’t seem much different to using iCloud anyway.

reply
seec
1 month ago
[-]
The screenshots are not storing anything new, it's just a visual trail of an already existing activity. It literally just makes it easier to browse the history, that's it. Someone motivated could just recompose activity from logs/histories of the various softwares.

The distinction is made by people who seem hell bent on trashing Microsoft for everything and glorifying everything Apple does.

reply
doctor_eval
1 month ago
[-]
I strongly disagree. My expectation of what’s on my screen is that it’s ephemeral unless I take a screen shot.

Here’s an example. I always use a random password when creating accounts for (eg) databases, but not every UI supports this, so I have a little shell script that generates one. I then copy and paste it from the terminal. Once I close the terminal window and copy something else, that password is stored only once.

With recall, it’s now stored permanently. Someone who gets access to my screen history is a step closer to getting into my stuff.

Of course there are workarounds. But the expectation I have around how my screen work informs the actions I take on it.

Here’s another example. I recently clicked on a link from HN and ended up on a page containing explicit images. At work. As soon as I realised what I was looking at, I clicked away.

How long until my visual history is to be interrogated, characterised, and used to find me guilty of looking at inappropriate material in the workplace? Such a system is not going to care about my intentions. Even if I’m not disciplined, I’d certainly be embarrassed.

reply
l33tman
1 month ago
[-]
I don't think the above poster was really referring to who does it, but that it's creepy that you're having a conversation about your mom with your phone to begin with
reply
lannisterstark
1 month ago
[-]
As opposed to what: If you hired an actual human assistant, it wouldn't be?
reply
mirsadm
1 month ago
[-]
Having other people read through my stuff and respond for me is creepy regardless.
reply
solarmist
1 month ago
[-]
This is what executive assistants do all day.

Some people view house keepers the same way. “I can’t let someone going through and touch all of my personal belongings. That’s just creepy.”

There’s a wide range of what people find creepy and also what people can and do get used to.

reply
mirsadm
1 month ago
[-]
Assistants are generally limited to people who can afford to have one. I think that's a fair assumption. Out of all those people not everyone in that group is going to have one. Which leaves a very very few people that do have one.

Why would this translate to everybody wanting to have one?

reply
solarmist
1 month ago
[-]
What does that have to do with creepiness?
reply
sensanaty
1 month ago
[-]
How many people out there are hiring personal assistants?
reply
cchance
1 month ago
[-]
This something else is it pushes people to even more heavily dive into the ecosystem, if it works how they show you really want it to understand your life, so you'll want all your devices able to help build that net of data to provide your context to all your devices for answering about events and stuff, meaning hey maybe i should get an appletv instead of a chromecast so that siri knows about my shows and stuff too.
reply
gnatolf
1 month ago
[-]
I'm just unhappy that this will mostly end up to make the moat larger and the platform lock-in more painful either way. iPhones have been going up in price, serious compute once you're deep in this will be simply extortion, as leaving the apple universe is going to be nigh impossible.

Also no competitor is going to be as good at integrating everything, as none of those have as integrated systems.

reply
rldjbpin
1 month ago
[-]
i'd be skeptical of the marketing used for the security/privacy angle. won't be surprised if there is subopena-able data out of this in some court case.

i might have missed it but there has not been much talk about guardrails or ethical use with their tools, and what they are doing about it in terms of potential abuse.

reply
ducadveritatem
1 month ago
[-]
You can read the details of their approach for the privacy/security aspects of the cloud compute portion here. https://security.apple.com/blog/private-cloud-compute/
reply
OptionOfT
1 month ago
[-]
Question I have is how deeply it is integrated in non-Apple apps. Like Signal (still no Siri support) or Outlook.
reply
solarmist
1 month ago
[-]
It sounds like the app creators need to built in the support using SiriKit and app intentions. If they're using either already a fair bit of integration will be automatic.
reply
__loam
1 month ago
[-]
Hope I can keep apples fingers from getting "deeply integrated" with my personal data.
reply
maz1b
1 month ago
[-]
Gotta say, from a branding point of view, it's completely perfect. Sometimes things as "small" as the letters in a companies name can have a huge impact decades down the road. AI == AI, and that's how Apple is going to play it. That bit at the end where it said "AI for the rest of us" is a great way to capture the moment, and probably suggests where Apple is going to go.

imo, apple will gain expertise to serve a monster level of scale for more casual users that want to generate creative or funny pictures, emojis, do some text work, and enhance quality of life. I don't think Apple will be at the forefront of new AI technology to integrate those into user facing features, but if they are to catch up, they will have to get into the forefront of the same technologies to support their unique scale.

Was a notable WWDC, was curious to see what they would do with the Mac Studio and Mac Pro, and nothing about the M3 Ultra or M4 Ultra, or the M3/M4 Extreme.

I also predicted that they would use their own M2 Ultras and whatnot to support their own compute capacity in the cloud, and interestingly enough it was mentioned. I wonder if we'll get more details on this front.

reply
peppertree
1 month ago
[-]
I think the biggest announcement was the private compute cloud with Apple Silicon. Apple is building up internal expertise to go after Nvidia.
reply
dmix
1 month ago
[-]
Can you explain what that means for someone who missed part of the video today?
reply
theshrike79
1 month ago
[-]
The Apple Intelligence cloud system uses Apple's own M-series chips, not Nvidia.
reply
ismepornnahi
1 month ago
[-]
Because they will be running inference using much smaller models than GPT 4.
reply
int_19h
1 month ago
[-]
At least they are honest about it in the specs that they have published - there's a graph there that clearly shows their server-side model underperforming GPT-4. A refreshing change from the usual "we trained a 7B model and it's almost as good as GPT-4 in tests" hype train.

(see "Apple Foundation Model Human Evaluation" here: https://machinelearning.apple.com/research/introducing-apple...)

reply
theshrike79
1 month ago
[-]
Yea, their models are more targeted. You can't ask Apple Intelligence/Siri about random celebrities or cocktail recipes.

But you CAN ask it to show you all pictures you took of your kids during your vacation to Cabo in 2023 and it'll find them for you.

The model "underperforms", but not in the ways that matter. This is why they partnered with OpenAI, to get the generic stuff included when people need it.

reply
yborg
1 month ago
[-]
Isn't it also that Nvidia chips are basically unobtainable right now anyway?
reply
solarmist
1 month ago
[-]
Yeah, but Apple wouldn’t care either way. They do things for the principle of it. “We have an ongoing beef with NVIDIA so we’ll build our own ai server farms.”
reply
teruakohatu
1 month ago
[-]
Apple have a long antagonist relationship with NVIDIA. If anything it is holding Apple back because they don’t want to go cap in hand to NVIDIA and say “please sir, can I have some more”.

We see this play out with the ChatGPT integration. Rather than hosting GPT-4o themselves, OpenAI are. Apple is providing NVIDIA powered AI models through a third party, somewhat undermining the privacy first argument.

reply
hmottestad
1 month ago
[-]
Rumours say that Apple has bought a lot of GPUs from Nvidia in the last year or so in order to train their own models.
reply
s3p
1 month ago
[-]
Not really. They use ChatGPT as a last resort for a question that isn't related to the device or an Apple-related interaction. Ex: "Make a recipe out of the foods in this image" versus "how far away is my mom from the lunch spot she told me about". And in that instance they ask the user explicitly whether they want to use ChatGPT.
reply
hawski
1 month ago
[-]
I see what they did here and it is smart, but can bring chaos. On one side it is like saying "we own it", but on the other hand it is putting a brand outside of their control. Now I only hope people will not abbreviate it with ApI, because it will pollute search results for API :P
reply
buildbot
1 month ago
[-]
Yeah I feel like we are getting the crumbs for a future hardware announcement, like M4 ultra. They’ll announce it like “we are so happy to share our latest and greatest processor, a processor so powerful, we’ve been using it in our private AI cloud. We are pleased to announce the M4 Ultra”
reply
samatman
1 month ago
[-]
It was speculated when the M4 was released only for the iPad Pro that it might be out of an internal need on Apple's part for the bulk of the chips being manufactured. This latest set of announcements gives substantial weight to that theory.
reply
buildbot
1 month ago
[-]
Yeah that seems very reasonable/likely. The release of the training toolkit for Apple silicon too points that way: https://github.com/ml-explore/mlx-examples/tree/main/transfo...
reply
tonynator
1 month ago
[-]
Yeah real smart move to make your products initials unusable and unsearchable. Apple has done it again
reply
iansinnott
1 month ago
[-]
Indeed. I suppose they are hoping people will associate the two letters with their thing rather than the original acronym.
reply
zarzavat
1 month ago
[-]
People will just call it Apple AI like ATM machine.
reply
jspann
1 month ago
[-]
I remain skeptical until I see it in action. On the one hand, Apple has a good track record with privacy and keeping things on device. On the other, there was too much ambiguity around this announcement. What is the threshold for running something in the cloud? How is your personal model used across devices - does that mean it briefly moves to the cloud? How does its usage change across guest modes? Even the phrase "OpenAI won’t store requests" feels intentionally opaque.

I was personally holding out for a federated learning approach where multiple Apple devices could be used to process a request but I guess the Occam's razor prevails. I'll wait and see.

reply
yencabulator
1 month ago
[-]
> Apple has a good track record with privacy and keeping things on device.

Apple also has a long track record of "you're holding it wrong". I don't expect an amazing AI assistant out of them, I expect something that sometimes does what the user meant.

reply
thebruce87m
1 month ago
[-]
> Apple also has a long track record of "you're holding it wrong".

And yet this was never said.

Closest was this:

> Just don't hold it that way.

Or maybe this:

> If you ever experience this on your iPhone 4, avoid gripping it in the lower left corner in a way that covers both sides of the black strip in the metal band, or simply use one of many available cases.

reply
yencabulator
1 month ago
[-]
It's merely the instance that gave the name to the phenomena, not the only time it happened.
reply
thebruce87m
1 month ago
[-]
What phenomena?
reply
hmottestad
1 month ago
[-]
When Apple published a webpage about how other phones also got reduced reception when you held them in a particular way, but then basically immediately pulled it. And then a while later they offered a free bumper case to mitigate the whole issue.
reply
thebruce87m
1 month ago
[-]
None of that suggests any malice. We don’t know what happened internally, other than the arial designer was eventually let go. That engineer could have been pushing the “every phone has the problem” narrative and brushing it off. At some point the pressure from customer feedback could have meant they were overruled and ordered to retest, or test under the specific conditions.

The fact that Apple changed their stance from “here’s a workaround” to “here’s a free bumper” is a sign they reacted to something, and that could have been anything from the conclusion of internal testing to a PR job to keep customers happy.

If they had said there was no design flaw from the start and stuck with that the whole way then I’d understand people’s reaction, but all I see is a company that said “don’t hold it that way” as a workaround then eventually issued free bumpers, thus confirming the issue. That doesn’t suggest they were blaming the user for doing something wrong. The sentiment just wasn’t there.

reply
fennecfoxy
1 month ago
[-]
Apple don't react to anything until there's a large enough outcry about it, rather than immediately address the issue they wait to see how many people complain to decide if it's worth the negative press and consumer perception or not.

Everyone makes fun of Sammy batteries exploding, but forget antennagate, bendgate, software gimping of battery life, butterfly keyboards, touch disease, yellow screens (which I believe were when Apple had to split supply Samsung/LG), exploding Macbook batteries (not enough to cause a fuss tho). Etc.

Other companies can of course be ne'er-do-wells, but people actively defend Apple for the company's missteps.

reply
thebruce87m
1 month ago
[-]
I rarely see anyone defending Apple, but I do see people constantly applying logic to them specifically that they don’t seem to apply to other companies. Take this:

> Apple don't react to anything until there's a large enough outcry about it, rather than immediately address the issue they wait to see how many people complain to decide if it's worth the negative press and consumer perception or not.

You can’t immediately address any issue. You need time to investigate issues. You might not even start investigating until you hit some sort of threshold or you’ll be chasing your tail in every bit of customer feedback. It takes time to narrow down anything to a bad component, bad batch, software bug or whatever it is.

As for weighing whether the issue is worth addressing at all - this is literally every company. If you did a recall of every bit of hardware at the slightest whiff of an issue you’d go bankrupt very quickly. There are always thresholds.

I wish we would just criticise apple in the same way we do with other companies. There is no need to invent things like “you’re holding it wrong” or intentionally misunderstanding batterygate into “they slowed down phones to sell you a new one”. They already do other crappy things, inventing fake ones isn’t necessary.

reply
robbyking
1 month ago
[-]
> What is the threshold for running something in the cloud?

To be fair, this was just the keynote -- details will be revealed in the sessions.

reply
epolanski
1 month ago
[-]
> has a good track record with privacy

They repeated this so many times they've made it true.

reply
theshrike79
1 month ago
[-]
Do you have proof otherwise? Compared to the competition, who openly use everything about you to build a profile.
reply
lern_too_spel
1 month ago
[-]
The iPhone will let you install an app only if you tell Apple about it. It will let you get your location only if you also give that location to Apple. The only way to get true privacy is to give users control, which even Google-flavored Android builds provide more of than iOS.
reply
mendigou
1 month ago
[-]
After so many years, so many people still believe in this user control paradigm.

Giving users control works for the slim percentage of power users. Most users will end up obliterated by scammers and other unsavory characters.

Perhaps there is a way to give control to today's users (that includes my non-technical mother) and still secure them against the myriad of online threats. If anyone knows of a paper or publication that addresses this, I'd love to read it.

reply
lern_too_spel
1 month ago
[-]
If you want privacy, that's the only way to get it. As Apple has demonstrated, giving the platform owner control means eroding your privacy with no recourse and still getting obliterated by scammers.
reply
kaba0
1 month ago
[-]
> The iPhone will let you install an app only if you tell Apple about it

That’s not 100% true, and where it is, there is a good reason, and pretty much every other store does it (being able to revoke malware)

reply
lern_too_spel
1 month ago
[-]
It's 100% true. On Android, you don't have to use a store, and you don't have to tell anybody anything if you don't use a store.
reply
themadturk
1 month ago
[-]
I get the sense there's still a lot of work to be done over the next few months, and we may see some feature slippage. The betas will be where we see their words in action, and I'll be staying far away from the betas, which will be a little painful. I think ambiguity works in their favor right now. It's better to underpromise and overdeliver, instead of vice versa.
reply
tmpz22
1 month ago
[-]
They need to provide a mechanism to view the data being uploaded by you
reply
machinekob
1 month ago
[-]
Same they say privacy so many times i got Facebook PTSD.
reply
boringg
1 month ago
[-]
I mean theres a difference between these companies on their privacy stance historical and current.
reply
KaiserPro
1 month ago
[-]
> Apple has a good track record with privacy and keeping things on device.

I mean they have great PR, but in terms of privacy, they extract more information from you than google does.

reply
theshrike79
1 month ago
[-]
Do you have a source for this?

Google is an ad company, they have a full model of what you like and dont like at different states of your life built.

What does Apple have that's even close?

reply
manquer
1 month ago
[-]
Apple is also an ad company,

they generate between $5-10B on ads alone a year now and more importantly that is one their fastest growing revenue segment .

Add the context of declining revenue from iPhone sales. That revenue and its potential will have enormous influence on decision making .

The thesis that Apple doesn’t have ads business so there is no use to collect the data is dead for 5years now

reply
theshrike79
1 month ago
[-]
Talking about billions is disingenuous, you should be talking about percentages of revenue. Ten billion _sounds_ like a lot but really isn't.

For Google, over 80% of their revenue comes from ads.

Apple's revenue is around 380 billion, 5-10 billion in ads is in the "other" category if you draw a pie chart of it... They make 30 billion just selling iPads - their worst selling product.

Apple can lose the ad category completely and they won't even notice it. If Google's ads go away because of privacy protections, the company will die.

reply
manquer
1 month ago
[-]
Talking absolutes is not accurate either. Not all revenue is equal.

There is reason why NVDIA, TSLA or stocks with growth[1] potential gets the P/E multiple that their peers do not or an blue chip traditional company can only dream of. The core of Apple revenue the biggest % chunk of iPhone sales is stagnant at best falling at worst. Services is their fastest growing revenue segment and already is ~$100 B of the $380B. Ads is a key component of that, 5 years back Ads was less < $1B, that is the important part.

Also margins matter, even at Apple where there is enormous margin for hardware, gross margins for services is going to be higher, that is simple economics of any software service cost of extra user or item is marginal. The $100B is worth lot more than equivalent $100B in iPhone, iPad sales where significant chunk will go to vendors.

Executives are compensated on stock performance, stock valuation depends on expected future growth a lot. Apple's own attempts and the billions invested to get into Auto, Healthcare, or Virtual Reality are a testament to that need to find new streams of revenue.

It would be naive to assume a fast growing business unit does not get outsized influence, any middle manager in a conglomerate would know how true this is.

A Disney theme park executive doing even 5x revenue as say the Disney+ one will not get the same influence, budgets,resources or respect or career paths.

[1] Expected Growth, doesn't have to be real,when it does not materialize then market will correct as is happening to an extent with TSLA.

reply
KaiserPro
1 month ago
[-]
> Google is an ad company, they have a full model of what you like and dont like at different states of your life built.

Thats not what I was saying. I was saying that Apple extract more information than google does. I was not saying that Apple process it to make a persona out of you. Thats not the issue here. Apple is saying that they are a "Privacy first" company. To be that, you need to not be extracting data in the first place.

Yes, they make lots of noise about how they do lots of things on device. Thats great and to be encouraged. But Apple are still extracting your friend list, precise location, financial records, various biometrics, browsing and app history. ANd for sure, they need some of that data to provide services.

But whats the data life cycle? are they deleting it on time? who has access to it, what about when a new product wants to use it? how do they stop internal bad actors?

All I want you to do is imagine that Facebook has made iOS, and the iphone, and is now rolling out these features. They are saying the same things as Apple, do you trust them?

Do you believe what they say?

I don't want Apple to fail, I just want people to think critically about a very very large for profit company. Apple is not our friend, and we shouldn't be treating them like they are.

reply
its_ethan
1 month ago
[-]
I think what he's getting at is that Apple does collect a lot of very similar data about it's users. Apple Maps still collects data about where you've driven - the difference is that they don't turn around and sell that data like Google loves to do.

I believe (but could be wrong) they also treat that data in a way that prevents it from being accessed by anyone besides the user (see: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...

reply
corps_and_code
1 month ago
[-]
Can you explain what you mean with "extract more information from you than google" here?

Not saying you're wrong, I'm just curious what sources or info you're using to make that claim.

reply
KaiserPro
1 month ago
[-]
on iOS Apple record:

o who you message, when you message.

o Your locations (find my devices)

o your voice (siri)

o the location of your items (airtags)

o what you look at (App telemetry)

o What websites you visit (Safari telemetry)

o what you buy (Apple Pay)

o Who your with (location services, again)

o your facial biometrics (apple photos tags people with similar faces, something FAcebook got fined for)

o Who emails you, who you email

With these changes, you'll need to allow apple to process the contents of the messages that you send and receive. If you read their secuirity blog it has a lot of noise about E2E security, then admit that its not practical for things other than backups and messaging.

they then say they will strive to make userdata ephemeral in the apple private cloud.

I'm not saying that they will abuse it, I'm just saying that we should give apple the same level of scrutiny that we give people like Facebook.

Infact, personally I think we should use Facebook as the shitty stick to test data use for everyone.

reply
ducadveritatem
1 month ago
[-]
What do you mean by the “record”? It seems like you think this means Apple somehow has access and stores all that information in their cloud and we just have to hope/trust that they don’t decide they want to poke around in it?

You should look more into their security architecture if you’re curious about stuff like this. The way Secure Enclave, E2EE (including the Advanced Data Protection feature for all iCloud data), etc. The reality is that they use a huge range of privacy enhancing approaches to minimize what data has to leave your device and how it can be used. For example the biometrics you mention are never outside the Secure Enclave in the chip on your phone and nobody except you can access them unless they have your passcode. Things like running facial recognition on your photos library is handled locally on your device with no information going up to the cloud. FindMy is also architected in a fully E2E encrypted way.

You can browse their hundreds of pages of security and privacy documentation via the table of contents here to look up any specific service or functionality you want to know more about: https://support.apple.com/guide/security/welcome/web

reply
KaiserPro
1 month ago
[-]
by record, I mean precisely that. Apple stores this data. As the Key bearer it has significant control.

Moreover, because apple has great PR, you don't hear about privacy breeches. Everyone seems to forget they made a super cheap and for a long time undetectable stalking service. Despite the warnings. (AirTag)

Had that been Facebook or Google, it would have been the end of the feature. They have improved the unauthorised tracking flow, but its really quite unreliable with ios, and really bad in android still.

> You should look more into their security architecture if you’re curious about stuff like this.

I have, and its a brilliant manifesto. I especially love the documentation on PCC.

but, its crammed full of implied actions that aren't the case For example: https://support.apple.com/en-gb/108756

> If you choose to enable Advanced Data Protection, the majority of your iCloud data – including iCloud Backup, Photos, Notes and more – is protected using end-to-end encryption.

Ok good, so its not much different to normal right?

> When you turn on Advanced Data Protection, access to your iCloud data on the web at iCloud.com is disabled

Which leads me to this:

> It seems like you think this means Apple somehow has access and stores all that information in their cloud and we just have to hope/trust that they don’t decide they want to poke around in it?

You're damn right I do. Its the same with Google, and Facebook. We have no real way of verifying that trust. People trust Apple, because they are great at PR. But are they actually good at privacy? We have no real way of finding out, because they also have really reactive lawyers.

and thats my point, we are basically here: https://www.reddit.com/r/comics/comments/11gxpcu/our_little_... but with apple.

reply
davidbarker
1 month ago
[-]
So, if I've got this correct there's:

1. On-device AI

2. AI using Apple's servers

3. AI using ChatGPT/OpenAI's services (and others in the future)

Number 1 will pass to number 2 if it thinks it requires the extra processing power, but number 3 will only be invoked with explicit user permission.

[Edit: As pointed out below, other providers will be coming eventually.]

reply
SoftTalker
1 month ago
[-]
I see no real difference between 2 and 3. Once the data has left your device, it has left your device. There is no getting it back and you no longer have any control over it.
reply
Terretta
1 month ago
[-]
> I see no real difference between 2 and 3.

This #2, so-called "Private Cloud Compute", is not the same as iCloud. And certainly not the same as sending queries to OpenAI.

Quoting:

“With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests. These models run on servers powered by Apple silicon, providing a foundation that allows Apple to ensure that data is never retained or exposed.“

“Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.”

“Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.”

reply
jeroenhd
1 month ago
[-]
"We make the hardware and we pinky promise that we will protect your data and will open source part of it" means nothing for privacy. Especially when things like warrants come into play.
reply
axoltl
1 month ago
[-]
How would a warrant work in this case? Using Silicon Data Protection[0] the hash of the currently running firmware (of both the AP _and_ the SEP) is locked into hardware registers in the PKA engine used by the SEP. This hash perturbs the key derivation, and the PKA engine can also attest to the running firmware hash(es) by using an EC key only available to it (they call this BAA, Basic Attestation Authority).

iOS won't send any data to a PCC that isn't running a firmware that's been made public in their transparency logs and compute nodes have no way to be debugged in a way that exposes user data[1]

And at the end of the day, this is going to give the warrant holder a handful of requests from a specific user? Why wouldn't they use that same warrant to get onto the target's device directly and get that same data plus a ton more?

0: https://help.apple.com/pdf/security/en_US/apple-platform-sec... 1: https://security.apple.com/blog/private-cloud-compute/

reply
jeroenhd
1 month ago
[-]
Apple controls the hardware and the private keys baked into the hardware. If one of their servers can decrypt the payload, they can intercept, duplicate, and decrypt the payload and its response. I'm sure this'll start a long fight between law enforcement and Apple after the first warrant hits and Apple claims it can't comply.

Warrants to hack devices are a lot less common and generally harder to obtain. That's why police will send Google warrants for "give us info on every device who has been in a radius of x between y and z time".

I'm sure Apple did their very best to protect their users, but I don't think their very best is good enough to warrant this kind of trust. A "secure cloud" solution will also tempt future projects to use the cloud over local processing more, as cloud processing is now readily available. Apple's local processing is a major advantage over the competition but I doubt that'll stay that way if their cloud solution remains this integrated.

reply
axoltl
1 month ago
[-]
Apple actually does not control the private keys baked into the hardware, see the "Root Cryptographic Keys" section of their security whitepaper: https://help.apple.com/pdf/security/en_US/apple-platform-sec...

Your example indicates a situation where law enforcement does not know which device belongs to their suspect, if they even have one. That's a very different scenario from a targeted "tell us the requests belonging to this individual".

Warrants to search a device are extremely common place, otherwise the likes of Grayshift and Cellebrite would not be around.

From a threat modeling perspective compromising PCC is high risk (Apple's not just going to comply and the fight will be very public, see the FBI San Bernardino fight) , high effort (Long protracted court case), low reward (I only see requests that are going to get shipped off to the cloud). If I were law enforcement I'd explore every other avenue available to me before I go down that particular rabbit hole which is exactly what this design is intended to achieve.

reply
theshrike79
1 month ago
[-]
If we can't trust independent audits of code and hardware, what can we trust?
reply
l33tman
1 month ago
[-]
I think it boils down to that it doesn't matter what they promise, if you send a videocap of all you ever do on your computer to some company on the internet, you just have to take your chances. Would you put mics and cameras in all of your rooms in your home that send data to Apple (or someone else) to analyze "for your benefit" even if they say and promise they won't do anything bad with the feeds?

At least with gmail and chat clients etc. things are somewhat put in compartments, one of the services might screw up and do something with your emails but your Messenger or WhatsApp chats are not affected by that, or vice versa. But when you bake it into the OS (laptop or phone) you're IMHO taking a much bigger risk, no matter what the intentions are.

reply
simondotau
1 month ago
[-]
There is nothing which Apple Intelligence can do that a hypothetically evil Apple couldn't have done before, given sufficiently treacherous code in their operating systems. Thus if you use an Apple device, you're already trusting Apple to not betray you. These new features don't increase the number of entities one must place their trust in.

Whereas with apps like Gmail and WhatsApp on an iPhone, you must trust Google and Meta in addition to Apple, not in place of Apple. It doesn't distribute who you trust, it multiplies it.

reply
l33tman
1 month ago
[-]
I still think it's a big difference between trusting existing OS'es and apps, which are under scrutiny by hundreds of security researchers and thousands of security nerds all the time, and willingly sending away all your data to a party who promises they will treat it well (I know it doesn't work like this in this case, but just for the sake of argument).

In essence, what you're doing is training an assistant to learn all of your details of your life and habits and the question is if that "assistant" is really secure forever. Taken to the extreme, the assistant becomes a sort of "backup" of yourself. But yeah it's an individual decision with the pro's and con's of this.

reply
simondotau
1 month ago
[-]
If you were already in on iCloud, that one residual distinction is moot.
reply
ENGNR
1 month ago
[-]
Open code, inspected and used by a large number of users, hosted on hardware you physically control
reply
ryr11
1 month ago
[-]
I think that's fair, but impractical for most users. I have a number of Home Assistant integrations with locally hosted AI models for smart home features, but I wouldn't expect my grandma to set up a server and a few VMs when she could just give her HomePod a prompt that works with AI and have no worries about the implementation. Do you feel like Apple's "independent" auditing is insufficient?
reply
ENGNR
1 month ago
[-]
> Do you feel like Apple's "independent" auditing is insufficient?

Yeah, pretty much

Also, your grandma might not setup a VM, but it sounds like the off-device processing is essentially stateless, or at most might have a very lightweight session. It seems like the kind of thing one person could setup for their family (with the same tamper-proof signatures, plus physical security), or provide a privacy focused appliance for anyone to just plug into a wall, if they wanted to.

reply
threeseed
1 month ago
[-]
Most open source code isn't inspected though.

There have been many cases recently of compromised code being in the wild for quite some time and then only known about by accident.

reply
spywaregorilla
1 month ago
[-]
100% of closed code is not inspected
reply
theshrike79
1 month ago
[-]
I have been involved in security audits for 110% closed code, code that's secret even within the company.

Auditing helps the company writing it, the auditors are usually experts in breaking stuff in fun ways, and it's good for business - we could slap "code security audited by XXX" on the sales pitch.

reply
spywaregorilla
1 month ago
[-]
> and it's good for business - we could slap "code security audited by XXX" on the sales pitch.

You're on the precipice of discovering the problem of incentives when it comes to audits.

Audits are good, but they're inferior to source available

reply
sodality2
1 month ago
[-]
By you. Three comments above references "independent audits". Meaning professional cybersecurity firms
reply
jeroenhd
1 month ago
[-]
From. what I can tell, Apple doesn't actually provide the source code itself, or provides the (cryptographically verified) binaries and VMs to run it. Reverse engineering will still need to take place, it seems.
reply
jeroenhd
1 month ago
[-]
I will trust independent audits of local code and local hardware. There are still plenty of opportunities for someone to send out malicious patches, but the code running can (and probably will) be analysed by journalists looking for a scoop and security researchers looking for a bug bounty.

I have no idea what code is running on a server I can't access. I can't exactly go SSH into siri.apple.com and match checksums. Knowing Apple's control freak attitude, I very much doubt any researcher permitted to look at their servers is going to be very independent either.

Apple is just as privacy friendly as ChatGPT or Gemini. That's not necessarily a bad thing! AI requires feeding lots of data into the cloud, that's how it works. Trying to sell their service as anything more than that is disingenuous, though.

reply
ReverseCold
1 month ago
[-]
> I have no idea what code is running on a server I can't access.

That's like... the whole point? You have some kind of hardware-based measured boot thing that can provide a cryptographic attestation that the code it's running is the same as the code that's been reviewed by an independent auditor. If the auditor confirms that the data isn't being stored, just processed and thrown away, that's almost as good as on-device compute for 99.999% of users. (On-device compute can also be backdoored, so you have to trust this even in the case that everything is local.)

The presentation was fairly detail-light so I don't know if this is actually what they're doing, but it's nice to see some effort in this direction.

E: I roughly agree with this comment (https://news.ycombinator.com/item?id=40638740) later down the thread -- what exactly the auditors are verifying is the key important bit.

reply
jeroenhd
1 month ago
[-]
I do like Apple's attempts to make this stuff better for privacy, but a pinky promise not to leak any information is still just that.

Apple has developed some of the strongest anti tampering compute on existence to prevent people from running code they don't want on hardware they produce. However, that protection is pretty useless when it comes to protection from Apple. They have the means to bypass any layer of protection they've built into their hardware.

It all depends on what kind of auditing Apple will allow. If Apple allows anyone to run this stuff on any Mac, with source or at least symbols available, I'll give it the benefit of the doubt. If Apple comes up with NDAs and limited access, I won't trust them at all.

reply
BlobberSnobber
1 month ago
[-]
Exactly, Apple has barely any oversight or accountability for their privacy claims. Sad to see so many people taking their word at face value.
reply
subjectsigma
1 month ago
[-]
Isn’t this basically what Signal does? Legitimately asking; I thought parts of their server implementation were closed source.
reply
jeroenhd
1 month ago
[-]
I don't think so. Signal regularly stops committing code to the public repos (https://github.com/signalapp/Signal-Server) when they're working on some kind of big reveal (cryptocurrency integration and such), but the server code is out there for you to run yourself.

Signal has the added benefit that it doesn't need to read what's in the messages you send. It needs some very basic routing information and the rest can be encrypted end to end. With AI stuff, the contents need to be decrypted in the cloud, so the end-to-end protections don't apply.

reply
subjectsigma
1 month ago
[-]
I meant more regarding their server setup but now that I think about it you are correct, it matters a lot more if the query/message/whatever isn’t encrypted before hitting the cloud.
reply
dudus
1 month ago
[-]
It's a big mental gymnastics to do the same as Google and Microsoft while claiming moral superiority.

Apple's thrown stones come back to hunt their glass ceiling.

reply
cyberpunk
1 month ago
[-]
Eh with modern processor features like secure enclaves it's definitely possible to build systems in which the operators CANNOT access the information. (I worked on such a system using SGX for a large car producer, even physical access to the machines/hypervisors/raw memory would not give you access, perhaps the nsa has some keys baked in to extract a session key you may generate inside an enclave, but it would be very surprising if they burned that backdoor on anything as low fruit as this).
reply
jeroenhd
1 month ago
[-]
SGX has been broken by speculative execution bugs, though. Had something to do with people extracting DRM keys, if I recall correctly, not exactly a nation state attack. Since then, SGX has been removed from modern Intel processors (breaking some Blurays and software products for newer chips in the process).

Secure enclave stuff can be used to build a trust relationship if it's designed well, but Apple is the party hosting the service and the one burning the private keys into the chip.

reply
cyberpunk
1 month ago
[-]
Yep, it was broken a few times but fixed with microcode patches (afaik). It's still a part of the server processors and in wide use already. I'm not saying it's a golden bullet or otherwise infallable, but it sure beats cat /dev/mem by quite some way.
reply
bpye
1 month ago
[-]
If you produce the hardware you necessarily have access to the signing key to say update the microcode or the firmware. Intel is in the TCB for SGX, but your cloud operator wouldn’t be. In this case Apple is both the hardware manufacturer and the operator.
reply
SoftTalker
1 month ago
[-]
Yes, that's all well and good but assumes no mistakes and no National Security letters ordering them to describe it that way and no changes of control or business strategy at some point in the future.

Once the data is out of your possession it's out of your control.

reply
theshrike79
1 month ago
[-]
There are VERY few things that can keep your information safe if a TLA wants it. You need to go full Edward Snowden with phones in faraday cages and typing passwords under a sheet -levels of paranoia to be fully safe.

Drow "nation state is after me" from the threat model and you'll be a lot happier.

reply
ENGNR
1 month ago
[-]
True, but there's a difference between

- TLA agency deploys scarce zero days or field ops because you're particularly interesting, vs..

- TLA agency has everything about you in a dragnet, and low level cop in small town searches your data for a laugh because they know you, and leaks it back to your social circle or uses it for commercial advantage

reply
transpute
1 month ago
[-]
> nation state ... threat model

The history of tech is the history of falling costs with mass production. Expensive TLA surveillance tech for nation states can become broadly accessible, e.g. through-wall WiFi radar sensing sold to millions via IEEE 802.11bf WiFi 7 Sensing in NPU/AI PCs [1], or USB implant cables [2] with a few zeros lopped off the TLA price.

Instead of adversary motives, threat models can be based on adversary costs.

As adversary costs fall, threat models need to evolve.

[1] https://www.technologyreview.com/2024/02/27/1088154/wifi-sen...

[2] https://shop.hak5.org/products/omg-cable

reply
Terretta
1 month ago
[-]
> Once the data is out of your possession it's out of your control.

Actually, once your e2e key that encrypts your data is out of your possession, it's out of your control.

Over the past decade it's become commercially feasible to be NSL-proof.

reply
mostlysimilar
1 month ago
[-]
Not everyone has nation states in their threat models. I want privacy from corporations / surveillance capitalism, not the US government. Apple's privacy promises are focused on keeping my data out of the hands of bad actors like Google etc. and that's more than enough for me.
reply
magicalist
1 month ago
[-]
A threat model of Google getting your email revision makes these statements sillier. TLS and existing privacy policies are sufficient.
reply
StrLght
1 month ago
[-]
That's too many words with surprisingly little meaning. I'd suggest to wait for more technical details and to treat this as marketing until then.
reply
HeladeraDragon
1 month ago
[-]
You can read more here: https://security.apple.com/blog/private-cloud-compute/

But in summary 1. The servers run on Apple Silicon hardware which have fancier security features 2. Software is open source 3. iOS verifies that the server is actually running that open source software before talking to it 4. This is insane privacy for AI

The security features are meant to prevent the server operator (Apple) from being able to access data that's being processed in their farm. The idea is that with that + E2E encryption, it should be way closer to on-device processing in terms of privacy and security

reply
StrLght
1 month ago
[-]
Thanks! That's great and sounds like they're really trying to go as far as possible with it.

Here's also a great summary from Matthew Green: https://x.com/matthew_d_green/status/1800291897245835616

reply
privacyking
1 month ago
[-]
You do realise that already happens though? If you read apple's privacy policy they send a lot of what you do to their servers.

Furthermore how private do you think Siri is? Their privacy policy explicitly states they send transcripts of what you say to them. That cannot be disabled.

reply
underlogic
1 month ago
[-]
That's the problem. These AI features may be "free" but is there an option to disable them system wide from rummaging through all your data and building a profile in order to be helpful? If not I won't update. And I mean one tickbox not a separate switch for every app and feature like siri has making it nearly impossible to disable
reply
sodality2
1 month ago
[-]
> Furthermore how private do you think Siri is? Their privacy policy explicitly states they send transcripts of what you say to them. That cannot be disabled.

Ten minutes ago i set up a new Apple device and it not only asked me if I wanted to enable Siri, but whether I wanted to contribute audio clips to improve it. What, exactly, cannot be disabled?

reply
redwall_hp
1 month ago
[-]
You can trivially find it in the Settings app after setup, too: Privacy & Security -> Analytics & Improvements -> scroll to the Improve Siri & Dictation toggle that explains that it controls whether Apple can store and review audio of interactions with Siri and the dictation function. Plenty of other options to review in the vicinity too, since the first party privacy settings are basically all in the same place.
reply
privacyking
1 month ago
[-]
That is the option for the audio itself. The transcripts of the audio (you do know what transcripts are, right?) are always sent to apple as per their privacy policy.

"When you use Siri, your device will indicate in Siri Settings if the things you say are processed on your device and not sent to Siri servers. Otherwise, your voice inputs are sent to and processed on Siri servers. In all cases, transcripts of your interactions will be sent to Apple to process your requests."

It's pretty clear and not in dispute that your transcripts are always sent to Apple.

reply
sodality2
1 month ago
[-]
That’s because Siri doesn’t run on-device - phones like the iPhone 6 can’t run that level of analysis. They “collect transcripts” insofar as they need to process your request.

Nonetheless, Siri is trivial to disable altogether.

reply
privacyking
1 month ago
[-]
Yeah if you asked the average person on the sheet (e.g. you) if they thought Siri was 100% private, they'd say yes because Apple has mislead them and said it is. That's the point. Apple says everything is private but then secretly collects data via their privacy policies.
reply
coob
1 month ago
[-]
Most people are happy for (2) already - iCloud Photos, Device backups, iCloud Messages… email.

Those that won’t use those won’t use this either.

reply
stnmtn
1 month ago
[-]
Certainly there's a difference. You are right that the jump is big between 1 and 2, but it is negligent to say that Apple, a company which strives for improved privacy and security, and ChatGPT have the same privacy practices.
reply
drpossum
1 month ago
[-]
No, that's not the point. The point is neither of those companies could have the same values you have for your data and you are then leaving the security of that data in the hands of someone else. Even Apple, who is better than most, values your privacy with a dollar value representing your custom and their reputation. That is not how I value (nor most people value) their data. The latter point applies to any company, regardless of intention because security breaches are a matter of when, not if, and if anyone says otherwise they should not be talking about security.
reply
solarkraft
1 month ago
[-]
Apple has demonstrated to be relatively trustworthy about privacy while most AI companies have demonstrated the opposite, so I do see a significant difference.
reply
SoftTalker
1 month ago
[-]
Google was considered very cool and trustworthy at one point also. "Do no evil" and all that.
reply
doctor_eval
1 month ago
[-]
Google was cool, once upon a time, but they always used your personal info pretty openly. The CEO a himself famously said, “The Google policy on a lot of things is to get right up to the creepy line and not cross it.”

Apple has taken a markedly different approach, and has done so for years - E2E encryption, hashing and segmenting routes on maps, Secure Enclave, etc.

While I think it’s perfectly reasonable to “trust no one”, and I fully agree that there may be things we don’t know, I don’t think there it’s reasonable to put Apple on the same (exceedingly low) level as Google.

reply
cchance
1 month ago
[-]
No they never were, they were "do no evil" but at the exact same time everyone knew they were an advertising company and most people in the field could see where it was heading eventually, or at least i'd hope.

Apples motives are different, selling premium hardware and MORE premium hardware, they wouldn't dare fuck that up, their nestegg is hardware and slowly more services tied to said hardware ecosystem (icloud subs, tv subs etc). Hence the privacy makes sense to pull people into the ecosystem.

Google... everything google does even phones, is for more data gathering for their advertising revenue.

reply
babypuncher
1 month ago
[-]
Google's entire buisness model was built on hoovering up and selling access to user data in the form of AdSense. Without that data, their business falls apart.

Apple's business model is to entice people into a walled garden ecosystem where they buy lots of expensive hardware sold on high margins. They don't need user data to make this work, which is why they can more comfortably push features like end-to-end and no-knowledge encryption.

reply
cchance
1 month ago
[-]
#2 is publicaly auditable, 100% apple controlled and apple hardware servers, tied to your personal session (probably via the ondevice encryption), i'd imagine ephemeral docker containers or something similar for requests that just run for each request or some form of Encrypted AI Lambdas.
reply
babypuncher
1 month ago
[-]
The difference is Apple and OpenAI's privacy policies.
reply
madeofpalk
1 month ago
[-]
I think that is completely fair.

I think also a bunch of people will trust Apple’s server more (but not completely) than other third parties.

reply
r00fus
1 month ago
[-]
Hopefully they have some toggle in settings for this.
reply
terramex
1 month ago
[-]
Lvl 3 is supposed to support other models and providers in the future too. I hope it will support every server with simple, standard API so I can run self-hosted LLama 3 (or whatever will be released in next 6-12 months).
reply
hmottestad
1 month ago
[-]
Or Groq. They can do 1250 tokens/s with Llama 3 8B.
reply
spike021
1 month ago
[-]
It sounded like 3 is meant for non-personal stuff. Basically like a search engine style feature. When you want to look up things like say sports records and info, or a movie and info about it, etc.
reply
gigel82
1 month ago
[-]
The problem is they don't explicitly define when 1 can pass to 2 and whether we can fully and categorically disable it. As far as I know, 1 can pass to 2 when governments ask for some personal data or when Apple's ad model needs some intimate details for personalization.
reply
Tagbert
1 month ago
[-]
The information provided for level two is end to end encrypted and not stored so the risk level is pretty low here.
reply
coolspot
1 month ago
[-]
Ent-to-end encrypted means that the other end (Apple/NSA) has access to it.
reply
cyberpunk
1 month ago
[-]
Imagine the memory on their server is encrypted with an on-processor key (something like intel SGX) -- reading OS memory, e.g dumping from linux or hardware, you can't read it unless you somehow extract the key (which are different on each chip) from the physical chip. Now, the process running using that encrypted memory generates TLS keys for you to send the data, and operates on it only inside this secure enclave.

There is no way to access it without destroying the chip, and even in this scenario it will be extremely expensive and imo unlikely, certainly impossible at scale. Some scientists may be able to do it once in a lab.

reply
cyberpunk
1 month ago
[-]
BTW there is an entire industry popping up around exactly this sort of use case, it's called 'confidential computing' and CNCF have some software in the works (confidential containers iirc). I'm pretty excited to see what risc-v is going to bring to the party enclave wise.
reply
davidcbc
1 month ago
[-]
Ok, I'm imagining.

Now, is any of that actually true?

reply
sodality2
1 month ago
[-]
reply
Tagbert
1 month ago
[-]
It does need to process the data. The server has no persistent storage and no remote shell. It is a limited and locked down special-purpose iOS.
reply
gigel82
1 month ago
[-]
Maybe, maybe not. I would like to purchase one of those servers and put it in my rack, so I can monitor all network traffic. Is that an option?
reply
frenchie4111
1 month ago
[-]
That was my sense as well. I would have appreciated some clarification on where the line between 1 and 2 was, although I am sure a YouTuber will deep dive on it as soon as they have it in their hands
reply
sc077y
1 month ago
[-]
I'm skeptical of the on-device AI. They crave edge compute but I'm doubtful their chips can handle a 7B param model. Maybe ironically with Microsoft's phi 3 mini 4k you can run this stuff on a cpu but today it's no where near good enough.
reply
KaiserPro
1 month ago
[-]
I don't know how they are going to square the privacy circle when at worst its a RAG based firehose to OpenAI, and at best you can just ask the model to leak your personal info.
reply
nerdjon
1 month ago
[-]
Said this in the other thread, but I am really bothered that image generation is a thing but also that it got as much attention as it did.

I am worried about the reliability, if you are relying on it giving important information without checking the source (like a flight) than that could lead to some bad situations.

That being said, the polish and actual usefulness of these features is really interesting. It may not have some of the flashiest things being thrown around but the things shown are actually useful things.

Glad that ChatGPT is optional each time Siri thinks it would be useful.

My only big question is, can I disable any online component and what does that mean if something can't be processed locally?

I also have to wonder, given their talk about the servers running the same chips. Is it just that the models can't run locally or is it possibly context related? I am not seeing anything if it is entire features or just some requests.

I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.

reply
skybrian
1 month ago
[-]
Regarding image generation, it seems the Image Playground supports three styles: Animation, Illustration, or Sketch.

Notice what's missing? A photorealistic style.

It seems like a good move on their part. I'm not that wild about the cartoon-ification of everything with more memes and more emojis, but at least it's obviously made-up; this is oriented toward "fun" stuff. A lot of kids will like it. Adults, too.

There's still going to be controversy because people will still generate things in really poor taste, but it lowers the stakes.

reply
maronato
1 month ago
[-]
I noticed that too, but my conclusion is that they probably hand-picked every image and description in their training data so that the final model doesn’t even know what the poor taste stuff is.
reply
0xfae
1 month ago
[-]
Exactly. And, I'm assuming, it will reject any prompts or words in poor taste.
reply
skydhash
1 month ago
[-]
> I am worried about the reliability, if you are relying on it giving important information without checking the source (like a flight) than that could lead to some bad situations.

I think it shows the context for the information it presents. Like the messages, events and other stuff. So you can quickly check if the answer is correct. So it's more about semantic search, but with a more flexible text describing the result.

reply
latexr
1 month ago
[-]
> I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.

I bet that’s going to be the case. I think they added the servers as a stop-gap out of necessity, but what they see as the ideal situation is the time when they can turn those off because all devices they sell have been able to run everything locally for X amount of time.

reply
solarmist
1 month ago
[-]
I’m betting 100% on this. And I think the new sidecar controlling your phone is an example of where they’re going in reverse.

If you have a M6 MacBook/ipad pro it’ll run your AI queries there if you’re on the same network in two-four years.

reply
kylehotchkiss
1 month ago
[-]
> I am worried about the reliability, if you are relying on it giving important information without checking the source (like a flight) than that could lead to some bad situations.

I am worried at the infinite ability of teenagers to hack around the guardrails and generate some probably not safe for school images for the next 2 years while apple figures out how to get them under control.

reply
intended
1 month ago
[-]
They hid the workaround for this - it’s going to be available in US english first, and then other locations over the coming year.

This can be never. LLMs fail fast as you move away from high resourced languages.

reply
cube2222
1 month ago
[-]
This seems really cool.

They said the models can scale to "private cloud compute" based on Apple Silicon which will be ensured by your device to run "publicly verifiable software" in order to guarantee no misuse of your data.

I wonder if their server-side code will be open-source? That'd be positively surprising. Curious to see how this evolves.

Anyway, overall looks really really cool. If it works as marketed, then it will be an easy "shut up and take my money". Siri seems to finally be becoming what it was meant to be (I wonder if they're piggy-backing on top of the Shortcuts Actions catalogue to have a wide array of possible actions right away), and the image and emoji generation features that integrate with Apple Photos and other parts of the system look _really_ cool.

It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.

reply
zitterbewegung
1 month ago
[-]
You don't even have to buy a new device since it's backwards compatible with A17 Pro and M1, M2, M3 and M4. It feels like the integration of the services are using existing models and integrating the API used traditionally originally from AppleScript but, extending it to LLM or stable diffusion systems. It seems that they want the M4 as soon as possible though for the gaming and cloud pushes.
reply
cube2222
1 month ago
[-]
For those curious, there is in fact a ChatGPT integration.

The way it works is that when the on-device model decides "this could better be answered by chatgpt" then it will ask you if it should use that. They described it in a way which seems to indicate that it will be pluggable for other models too over time. Notably, ChatGPT 4o will be available for free without creating an OpenAI account.

reply
ladams
1 month ago
[-]
I don't think that 4o will actually be available for free. It seemed like they were quite careful in choosing their words. My guess is 3.5 is free without an account, and accessing 4o requires linking your OpenAI account.
reply
skygazer
1 month ago
[-]
They only mentioned 4o, but they mentioned it explicitly at the start, well before they mentioned one can also tie in to their openAI account, if you have one, at the end of the presentation.

To me that implies 4o by default, but I guess we'll find out.

reply
kokon
1 month ago
[-]
reply
ukuina
1 month ago
[-]
Apple must be paying OpenAI a pretty penny to give the best LLM tokens away for free.
reply
ukuina
1 month ago
[-]
It seems no money is exchanging hands!
reply
losvedir
1 month ago
[-]
I'm really curious about this. Framing it as "running a large language model in the cloud" is almost burying the lede for me. Is this saying that in general the client will be able to cryptographically ascertain somehow the code that the server is running? That sounds incredibly interesting and useful outside of this.
reply
localhost
1 month ago
[-]
It seems like this is an orchestration layer that runs on Apple Silicon, given that ChatGPT integration looks like an API call from that. It's not clear to me what is being computed on the "private cloud compute"?
reply
cube2222
1 month ago
[-]
If I understand correctly there's three things here:

- on-device models, which will power any tasks it's able to, including summarisation and conversation with Siri

- private compute models (still controlled by apple), for when it wants to do something bigger, that requires more compute

- external LLM APIs (only chatgpt for now), for when the above decide that it would be better for the given prompt, but always asks the user for confirmation

reply
localhost
1 month ago
[-]
The second point makes sense. It gives Apple optionality to cut off the external LLMs at a later date if they want to. I wonder what % of requests will be handled by the private cloud models vs. local. I would imagine TTS and ASR is local for latency reasons. Natural language classifiers would certainly run on-device. I wonder if summarization and rewriting will though - those are more complex and definitely benefit from larger models.
reply
windowshopping
1 month ago
[-]
The "Do you want me to use ChatGPT to do that?" aspect of it feels clunky as hell and very un-Apple. It's an old saw, but I have to say Steve Jobs would be rolling over in his grave at that. Honestly confused as to why that's there at all. Could they not come up with a sufficiently cohesive integration? Is that to say the rest ISN'T powered by ChatGPT? What's even the difference? From a user perspective that feels really confusing.
reply
dmix
1 month ago
[-]
I thought it was the smartest and most pragmatic thing they've announced.

Being best in class for on-device AI is a huge market opportunity. Trying to do it all would be dumb like launching Safari without a google search homepage partnership.

Apple can focus on what they are good at which is on device stuff and blending AI into their whole UX across the platform, without compromising privacy. And then taking advantage of a market leader for anything requiring large external server farms and data being sent across the wire for internet access, like AI search queries.

reply
FinnKuhn
1 month ago
[-]
I think they also announced the possibility to integrate Siri with other AI platforms than ChatGPT so this prompt would be especially useful to make clear to the user which of these AIs Siri wants to use.
reply
theshrike79
1 month ago
[-]
From a user perspective it's 100% clear.

If the system doesn't say "I'm gonna phone a friend to get an answer for this", it's going to stay either 100% local or at worst 100% within Apple Intelligence, which is audited to be completely private.

So if you're asking for a recipe for banana bread, going to ChatGPT is fine. Sending more personal information might not be.

reply
windowshopping
1 month ago
[-]
I just don't think the average user cares enough to want this extra friction. It's like if every time you ran a google search it gave you lower-quality results and you had to click a "Yes, give me the better content" option every time to get it to then display the proper results. It's just an extra step which people are going to get sick of very fast.

You know what it's really reminiscent of? The EU cookies legislation. Do you like clicking "Yes I accept cookies" every single time you go to a new website? It enhances your privacy, after all.

reply
internetter
1 month ago
[-]
reply
IMTDb
1 month ago
[-]
In theory there isn't. In practice > 99% of the website I visit have a cookie banner thingy. Including the EU own website (https://european-union.europa.eu/index_en).

Think about it: even a government agency isn't able to produce a simple static web page without having to display that cookie banner. If their definitions of "bad cookies that require a banner" is so wide that even they can't work around it to correctly inform citizens, without collecting any private data, displaying any ad or reselling anything; maybe the definition is wrong.

For all intent and purposes, there is a cookie banner law.

reply
FinnKuhn
1 month ago
[-]
They could not have a cookie banner, but their privacy policy states pretty clearly why they want your consent. It is to "gather analytics data (about user behaviour)". Additionally you don't need to consent to this and can access everything without them "collecting any private data, displaying any ad or reselling anything". The only reason they ask for consent is to gather analytics, which is similar to you being asked for your postal code when paying while shopping.
reply
theshrike79
1 month ago
[-]
The cookie banners are a cargo cult.

Someone somewhere figured out that it might be a good idea and others just copied it.

reply
adrianmsmith
1 month ago
[-]
It's interesting you phrase it that way, because that's sort of how DuckDuckGo works with their !g searches. I'm not saying that's good or bad, it's just an observation.
reply
rohitpaulk
1 month ago
[-]
Still involves friction. A more "seamless" way for Apple to do this would've been to license GPT-4's weights from OpenAI and run it on Apple Intelligence servers.
reply
asadm
1 month ago
[-]
but that restricts it to just openai then.

I want to use perplexity from siri too!

reply
fckgw
1 month ago
[-]
It's a clear delineation between "My data is on my device or within Apple's ecosystem" and "My data is now leaving Apple and going to a 3rd party"
reply
0xCMP
1 month ago
[-]
At the core of everything they presented is privacy. Yes the point is that most questions are answered locally or via the Private Compute system.

More specifically "is openai seeing my personal data or questions?" A: "No, unless you say it's okay to talk to OpenAI everything happens either on your iPhone or in Private Compute"

reply
chrisBob
1 month ago
[-]
Apple is touting the privacy focus of their AI work, and going out to ChatGPT breaks that. I would be reluctant to use any of their new AI features if it weren't for that prompt breaking the flow and making it clear when they are getting results from ChatGPT.
reply
dag11
1 month ago
[-]
What? The original Siri asked if the user wanted to continue their search on the web if it couldn't handle it locally. It was one of the last things from the Jobs era.
reply
xanderlewis
1 month ago
[-]
I agree. Quite odd and not very Apple-ish. I wonder if there’s some good reason for it; it must have been debated internally.
reply
empath75
1 month ago
[-]
They'll probably add an option to disable that prompt at some point. I'm glad it is the default behavior, though.
reply
cube2222
1 month ago
[-]
This seems really cool.

They said the models can scale to "private cloud compute" based on Apple Silicon which will be ensured by your device to run "publicly verifiable software" in order to guarantee no misuse of your data.

I wonder if their server-side code will be open-source? That'd be positively surprising. Curious to see how this evolves.

Anyway, overall looks really really cool. If it works as marketed, then it will be an easy "shut up and take my money". Siri seems to finally be becoming what it was meant to be (I wonder if they're piggy-backing on top of the Shortcuts Actions catalogue to have a wide array of possible actions right away), and the image and emoji generation features that integrate with Apple Photos and other parts of the system look _really_ cool.

It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.

reply
TillE
1 month ago
[-]
> I wonder if their server-side code will be open-source

No, but they said it'll be available for audit by independent experts.

reply
TheFragenTaken
1 month ago
[-]
I don't understand why people act like this is a new way of working. Hundreds of ISO certifications require independent audit. Functionally this can be done in many ways, like source code access by human reviewers, or static scanning with signed results. What's important is not who looks, be it PwC, Deloitte, or industry peers. It's important whats being looked for, and what standards are being followed.
reply
anonbanker
1 month ago
[-]
How do we sign up to be an independent expert? We need about 50,000 eyeballs on this at all times.
reply
theshrike79
1 month ago
[-]
How many independent eyeballs are on Gemini's servers or OpenAI's?
reply
ENGNR
1 month ago
[-]
They're not making the privacy claim
reply
ru552
1 month ago
[-]
~It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.

They specifically stated it required iPhone 15 Pro or higher and anything with a m1 or higher.

reply
htrp
1 month ago
[-]
> Apple Intelligence is free for users, and will be available in beta as part of iOS 18, iPadOS 18, and macOS Sequoia this fall in U.S. English. Some features, software platforms, and additional languages will come over the course of the next year. Apple Intelligence will be available on iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac with M1 and later, with Siri and device language set to U.S. English. For more information, visit apple.com/apple-intelligence.

iphone 15 Pro 8 GB RAM (https://www.gsmarena.com/apple_iphone_15_pro-12557.php)

iphone 15 6 GB Ram (https://www.gsmarena.com/apple_iphone_15-12559.php)

reply
Jtsummers
1 month ago
[-]
Along with a 2GB RAM difference, they have different processors (A17 vs A16).

https://en.wikipedia.org/wiki/Apple_A17

Per the comparison table on that page, the "Neural Engine" has double the performance in the A17 compared to the A16, which could be the critical differentiator.

reply
iLoveOncall
1 month ago
[-]
That's a good reason not to upgrade my iPhone 13!
reply
dudus
1 month ago
[-]
English only? That is surprising
reply
c1sc0
1 month ago
[-]
The platform talk had a bit more architectural details and it looks like they heavily optimize / compress the Foundation model to run for specific tasks on-device. I'm guessing that sticking to US English allows them to compress the foundation model further?
reply
theshrike79
1 month ago
[-]
As long as they don't geolock it to "english speaking" countries, I'm fine with that.
reply
TillE
1 month ago
[-]
As far as I'm aware, the only time Apple has implemented that kind of restriction is with their DMA compliance. Like, I used the US App Store (with a US credit card) while physically in Europe for many years.
reply
theshrike79
1 month ago
[-]
And Apple Fitness and Apple News.

I can follow workout instructions in english, as can my kids. But Apple has decided that Apple One is more shit over here for some reason.

reply
elAhmo
1 month ago
[-]
I am quite disappointed that 14 pro is not supported. So much power, but they decided to not support any of the older chips.
reply
hbn
1 month ago
[-]
The 15 Pro's SoC has an extra 2GB of RAM which could very well be make-or-break for running a local model which tends to be very memory-constrained
reply
luigi23
1 month ago
[-]
it's about 15 having 2x more powerful neural engine
reply
kolinko
1 month ago
[-]
It’s a matter of RAM most likely - models require crazy amounts of ram, and I bet they had to struggle to fit them on 15’s pro 8GB.
reply
doawoo
1 month ago
[-]
Happy as long as there is a switch to toggle it all off somewhere. I find very little of this useful. Maybe someone does, and that’s great!

And my concern isn’t from a privacy perspective, just a “I want less things cluttering my screen” perspective.

So far though it looks like it’s decent at being opt-in in nature. So that’s all good.

reply
Optimal_Persona
1 month ago
[-]
My thoughts exactly, as someone who manages 145 iPhones for a health-care org, all of this stuff needs to be completely blockable and granularly manageable in Mobile Device Management or things could go very, very wrong compliance-wise.
reply
LogHouse
1 month ago
[-]
Strong agree here. Features are cool, but I value screen real estate and simplicity. Plus, the gpt app works fine for me. I don’t need it built into other things yet.
reply
doutatsu
1 month ago
[-]
I feel like this is actually the thing you want when you say "less things cluttering my screen".

Siri can now be that assistant, that summarises or does things, that would instead make you go through various screens or apps. Feels like it rescues clutter, not increases it to me imo

reply
doawoo
1 month ago
[-]
I simply cannot agree, but again, it's a personal thing. I never ever find voice interfaces useful though...

Aside: When the presenter showed the demo of her asking Siri to figure out the airport arrival time and then gloat it "would have taken minutes" to do on her own... I sat there and just felt so so strongly that I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox".

I understand value of the tools. But I think overall nothing about them feels very worth showing even more menus for me to tick through to make the magic statistical model spit out the tone of words I want... when I could have just sat there and thought about my words and the actual, real, human person I'm talking to, and rephrase my email by hand.

reply
deergomoo
1 month ago
[-]
> I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox"

Completely agree. My first thought on seeing this stuff is that it suggests we, as an industry, have failed to create software that fulfils users’ needs, given we’re effectively talking about using another computer to automate using our computers.

My second thought is that it’s only a matter of time before AI starts pushing profitable interests just like seemingly all other software does. How long before you ask some AI tool how to achieve something and it starts pitching you on CloudService+ for only 4.99 per month?

reply
theshrike79
1 month ago
[-]
It's actually taking LESS screen space, because "Siri" is now just a glowing edge on your screen.

And good news! You can clear your homescreen too fully from all icons now =)

reply
jl6
1 month ago
[-]
I can see people using Rewrite all the time. In the grim darkness of the AI future, your friends speak only in language that is clean, sanitized, HR-approved, and soulless.
reply
twoWhlsGud
1 month ago
[-]
At work, yes. However, it won't be long until the language you speak will become a feature of your ML driven consumer language service. There will likely be products that reflect your style/ identity/ whatever. And once you reach a certain socioeconomic level, you'll speak a highly customized bespoke dialect that reflects your station in life, just like today but much, much weirder…
reply
mcpar-land
1 month ago
[-]
reply
dinkleberg
1 month ago
[-]
That perfectly describes how I feel about all of this.

I'm sure that there will be lots of genuinely useful things that come out of this AI explosion, but can't help but be a bit saddened by what we're losing along the way.

Of course I can choose not to use all of these tools and choose to spend time with folks of a similar mindset. But in the working world it is going to be increasingly impossible to avoid entirely.

reply
kylehotchkiss
1 month ago
[-]
Young people already seem bothered by how pristine/flawless modern photography looks and seem increasingly obsessed with using film cameras/camcorders to be more authentic or whatever pleasing attribute they find in that media. I think they'll respond with more misspellings and sloppier writing to appear more authentic
reply
tavavex
1 month ago
[-]
As one of these young people, you're way overestimating the popularity of these trends. There are always some "we gotta go back"-centered communities lingering in the background, but digital vs analogue photography isn't even a close match-up. People who want to get more into photography are far more likely to buy a good digital camera than a film camera.
reply
TillE
1 month ago
[-]
I feel like this is an awful feature for your native language, but fantastically exciting for a second language where you're not quite fluent and need to be able to write coherently.
reply
glial
1 month ago
[-]
People already use words like 'product', 'content', 'feature', and 'vehicle' in everyday conversation. It makes me shudder every time.
reply
PodgieTar
1 month ago
[-]
Few thoughts:

It seems like this is what Rabbit's LAM was supposed to be. It is interesting to see it work, and I wonder how it will work in practice. I'm not sold on using voice for interacting with things still.

Image Generation is gross, I really didn't want this. I am not excited to start seeing how many horrible AI images I'm going to get sent.

I like Semantic Search in my photos.

This does seem like the typical Apple polish.I think this might be the first main stream application of Gen AI that I can see catching on.

reply
Tagbert
1 month ago
[-]
I like that they finally brought typing interaction to Siri. You won't always need to use voice.

This does look like a real-world implementation of the concept promoted by Rabbit. Apple already had the App Intents API mechanism in place to give them the hooks into the apps. They have also publish articles about their Ferret UI LLM that can look at an app's UI and figure out how to interact with it, if there are no available intents. This is pretty exciting.

reply
throwanem
1 month ago
[-]
Text as a Siri interface has been available for a while now. Long-press the sleep/wake button to raise the prompt.
reply
rcdemski
1 month ago
[-]
It has, but it’s presently an accessibility affordance you have to enable first. It’s found under device Settings > Accessibility > Siri > “Type to Siri” On/Off
reply
throwanem
1 month ago
[-]
Oh, go figure. Let that be a lesson: if you don't check out the accessibility options, you're missing at least half the cool stuff your phone can do to actually make your life easier.
reply
abrichr
1 month ago
[-]
I wonder how they will extend this to business processes that are not in their training set.

At https://openadapt.ai we rely on users to demonstrate tasks, then have the model analyze these demonstrations in order to automate them. The goal is similar to Rabbit's "teach mode", except it's desktop only and open source.

reply
tr3ntg
1 month ago
[-]
I had similar reactions, a couple add-ons to make:

1. Yes, App Intents feel like the best version of a LAM we'll ever get. With each developer motivated to log their own actions for Siri to hook into, it seems like a solid experience.

2. Image Gen - yeah, they're pretty nasty, BUT their focus on a "emoji generator" is great. Whatever model they made is surprisingly good at that. It's really niche but really fun. The lifelessness of the generations doesn't matter so much here.

3. Polish - there's so much polish, I'm amazed. Across the board, they've given the "Intelligence" features a unique and impressive look.

reply
nsbk
1 month ago
[-]
This on-device, private cloud compute, and tight hardware-software integration take may be first useful for all genAI we’ve seen so far.

Apple may have actually nailed this one.

Edit: except for image generation. That one sucks

reply
whimsicalism
1 month ago
[-]
> Edit: except for image generation. That one sucks

Say more? it's just the media thing about intellectual property rights?

reply
c1sc0
1 month ago
[-]
I asked up thread and people apparently have concerns about the IP and think that AI images often lack taste.
reply
whimsicalism
1 month ago
[-]
it's fascinating to see this resurgence of people who now like copyright/IP law
reply
rimunroe
1 month ago
[-]
I don't know why it's that fascinating. Lots of people think some amount of copyright is reasonable but that life of the author + 70 years is far too long.
reply
bbatha
1 month ago
[-]
Meanwhile Adobe is in hot water for generating images in the style of Ansel Adams... https://www.theverge.com/2024/6/3/24170285/adobe-stock-ansel...
reply
Aloisius
1 month ago
[-]
Adobe didn't generate them. Someone uploaded them to their stock photos site.

Nor was their issue that the images were in the style of Ansel Adams, but rather that they used his name. That's not a copyright issue. It's a trademark one.

reply
jwells89
1 month ago
[-]
Exactly, nuance is important. Short term protections are good, but companies shouldn’t be able to keep works from entering public domain for decades on end.
reply
nsbk
1 month ago
[-]
I was referring to the limitations of the feature which can only generate images in three pre-canned cartoony styles
reply
Jtsummers
1 month ago
[-]
I wonder if those are to avoid two of the big image generation controversies:

1. Imitation of artists' styles (Make an image in the style of...). The restricted styles are pretty generic, so harder to pin down as being a copy of or imitation of some artist.

2. It's cartoony, which avoids photorealistic but fake images being generated of real people.

reply
dmix
1 month ago
[-]
Native smartphone integration was always going to be the most important UI for genAI.

Followed by maybe search engines once it gets to a certain level of quality (which we seem to be a bit far from).

Then either desktop or home(alexa).

reply
doctoboggan
1 month ago
[-]
Apple trying to rebrand AI = "Apple Intelligence" is a totally Apple thing to do.

I'll be curious to see if Apple gets caught with some surprise or scandal from unexpected behavior in their generative models. I expect that the rollout will be smoother than Bing or Google's first attempts, but I don't think even Apple will prove capable of identifying and mitigating all possible edge cases.

I noticed during the livestream that the re-write feature generated some pretty bad text, with all the hallmarks of gen AI content. That's not a good sign for this feature, especially considering they thought it was good enough to include in the presentation.

reply
lacy_tinpot
1 month ago
[-]
It's bad branding because they can't use the "AI" abbreviation. It's too commonly used to be appropriated by Apple. Honestly calling it "Apple Intelligence" just feels a little lazy.
reply
ukuina
1 month ago
[-]
> the re-write feature generated some pretty bad text, with all the hallmarks of gen AI content

This is the curse of small-language models. They are better suited for constrained output like categorization. Using them for email generation takes... well, that takes courage.

Thankfully, there is an option to use GPT-4o for many of the text generation tasks.

reply
tr3ntg
1 month ago
[-]
I noticed this too. I quickly skimmed a rewritten email and it was totally wooden. It's the one where they ask the AI to reword their rant that includes tons of all-caps words.

The output is almost worse, dripping in a passive aggressive tone.

reply
tonynator
1 month ago
[-]
It's good enough for the typical Apple consumer.
reply
xanderlewis
1 month ago
[-]
Please don't sneer, including at the rest of the community.

- HN Guidelines.

reply
nottorp
1 month ago
[-]
I have only one question: can I turn it off?
reply
latexr
1 month ago
[-]
You can turn Siri off, so I wouldn’t be surprised if this is the same: a toggle on by default that they present you when upgrading the OS. Perhaps even just the same toggle for Siri controls all of this as a whole.
reply
everfree
1 month ago
[-]
Turn off what part and why? They announced several new systems, much of which runs on-device, one of which is simply an improved Siri. I was surprised by how considerate they seemed about AI data privacy, even for Apple.
reply
pessimizer
1 month ago
[-]
> Turn off what part and why?

Assume any part, and assume none of your business.

reply
everfree
1 month ago
[-]
It's not reasonable to expect to be able to turn off "any part" of a piece of software, unless it's open source and you're digging through the code yourself to remove sections of it, refactor and re-compile everything.

That said, Apple generally gives people very fine-grained controls over what software features they want enabled, at least compared to other closed-source software vendors.

My question "what part and why" was intended to open up a discussion about privacy in regards to Apple's AI. But if your answer is simply "none of your business", then my answer to the question "can I turn it off" is simply "nobody has any way of knowing yet." Neither of those answers are great discussion openers.

Your username seems to check out.

reply
demondemidi
1 month ago
[-]
I don't want any part of my personal data (what I write, what I photograph, what I record, what I jot down) to be viewed by anything by my own eyes or the encryption algorithm converting it to ciphertext to send across a secure channel WITHOUT MY CONSENT.

Period.

Reason: again, nobody's business.

If you don't get this then, a) you're not in a high-risk group for discrimination, or b) you've never been subjected systemic polices designed to keep you "in line".

reply
everfree
1 month ago
[-]
I do get this. I don't know why you'd assume I don't.

And the sentiment behind your comment seems very reasonable, reading past its non-sequitur tone.

reply
nottorp
1 month ago
[-]
I do wonder if my privacy awareness has a connection with the fact that I lived the first 13 years of my life under an eastern block dictatorship ...

However in this case I'm also concerned about needless power consumption. Especially on battery.

reply
theshrike79
1 month ago
[-]
In most cases the one using most power in a modern smartphone is the display.

And knowing Apple, the RAG-stuff will be done overnight when the phone is charging, not during use.

reply
visarga
1 month ago
[-]
It's one thing to have private information at rest, another to have it indexed, and interpreted by a LLM. What if some virus orders the LLM to search for blackmail material and email it to them? The very act of putting a LLM near your data is a security concern. If someone else orders your Siri to reveal something, it can get to the prize in seconds, with AI help.
reply
slashdave
1 month ago
[-]
A virus can use its own LLM, so I guess you don't want indexing at all. Makes it hard to find stuff.
reply
dividedbyzero
1 month ago
[-]
It did sound like it would be opt-in. I think the current iteration of Siri already is, so it would make sense if they kept it that way.
reply
red_admiral
1 month ago
[-]
Microsoft's recall is going to have that feature, according to the latest updates on the matter. I hope apple won't lag behind on implementing this one.
reply
martimarkov
1 month ago
[-]
Which part? The online or offline capabilities?
reply
duskhorizon2
1 month ago
[-]
Nope. I afraid AI future is mandatory ;)
reply
rockemsockem
1 month ago
[-]
You're an apple user, you decided a long time ago that they know what's best for you
reply
nottorp
1 month ago
[-]
I almost shed a tear, then I remembered the alternative is Google...
reply
hot_gril
1 month ago
[-]
Apple has decided to allow users to disable various features, so the question is, do they let you disable this.
reply
gkoberger
1 month ago
[-]
Why? Things are secure (outside of the explicit OpenAI calls via Siri) and mostly seem subtly integrated. You don't have to use each feature, but why blindly disable all AI having not even tried it?
reply
upon_drumhead
1 month ago
[-]
I've tried a number of these things and I honestly don't see the value in them. I have to double check everything they do and it takes longer to describe what I want and double check everything then just to do it myself.

I'll be disabling everything I can. I don't use Siri or anything of that sort as well.

reply
seydor
1 month ago
[-]
"It will automatically find a picture of your drivers license, read the number and add it to your text"

This is scary stuff that should not be happening on anything that is closed-source and unaudited publicly. The pervasiveness of surveillance it enables is astounding.

reply
gkoberger
1 month ago
[-]
How is it any more dangerous than having a picture of your ID on your phone? It uses a local model for finding and extracting data, and confirms before autofill.

Should we start auditing wallets next? People's driver licenses are sitting insecure and unencrypted in their pockets! Anyone could grab it!

Security is important, but being alarmist toward thoughtful progress hurts everyone.

reply
Spivak
1 month ago
[-]
What's different about this from the current implementation of searching photos for 'driver's license' and it pulling up pictures of your license? iOS has already been using "AI" image recognition for years on your photos.
reply
Tagbert
1 month ago
[-]
Yes, this is an extension of that feature and a further integration with other enhancements. Apple has been doing “machine learning” for years for features like this. Now they are starting to bring those features together using other models like LLMs.
reply
HWR_14
1 month ago
[-]
Why take up processor time or memory for a feature I don't want? Or the increase in threat space?
reply
Almondsetat
1 month ago
[-]
This could be said for literally every single feature of a smartphone, down to the out of order execution of the CPU
reply
HWR_14
1 month ago
[-]
Yes. I would like to be able to disable other smartphone features I don't use. But that's already the case. Like the GPS, for instance, is disabled unless I'm using the map. And even that can be set to "never" if I want.
reply
nottorp
1 month ago
[-]
I also have animations toned down from the accessibility settings, yes :)
reply
hot_gril
1 month ago
[-]
That's exactly why I wait extra long to install updates.
reply
Voloskaya
1 month ago
[-]
Not wanting to send that much data to Apple's server no matter the pinky promise they make about caring for our data? That's a legit ask.
reply
nottorp
1 month ago
[-]
Apple's? You mean OpenAI's...
reply
hot_gril
1 month ago
[-]
Stuff that I don't use can get in the way.
reply
nerdjon
1 month ago
[-]
I am pretty unhappy with Apple doing the image generation, was really hoping that just would not happen.

But a lot of the other features actually seem useful without feeling shoehorned in. At least so far.

I am hoping that I can turn off the ability to use a server while keeping local processing, but curious what that would actually look like. Would it just say "sorry can't do that" or something? Is it that there is too much context and it can't happen locally or entire features that only work in the cloud?

Edit:

OK how they handle the ChatGPT integration I am happy with. Asks me each time if I want to do it.

However... using recipe generation as an example... is a choice.

reply
c1sc0
1 month ago
[-]
What’s wrong with the image generation?
reply
PodgieTar
1 month ago
[-]
I think there's still a myriad of concerns around the ethics of using others uncredited images to power models that aim to disenfranchise artists.

but my biggest concern is that I think they look tacky, and putting it right in the messaging apps is gonna be ... irritating.

reply
Toutouxc
1 month ago
[-]
Emoji, Memoji, stickers, now gen images. Can’t wait to start receiving them from my dad and my mother-in-law in the most absurd of contexts. Like honestly, I like how much the older relatives enjoy weird, tacky stuff like this.
reply
matwood
1 month ago
[-]
Tech people make fun of tacky stuff like this, but it's a big driver for regular people to upgrade quickly.
reply
bee_rider
1 month ago
[-]
People have ethical concerns about all the public images that were scraped. Regardless of whether or not we agree with them, it is a pretty popular stance to take.
reply
threetonesun
1 month ago
[-]
Most of the generated images shown were terrible but my kid is going to love that emoji generator.
reply
theshrike79
1 month ago
[-]
It was Google who wanted to put glue on pizza, not ChatGPT :)
reply
nerdjon
1 month ago
[-]
I feel like I remember there being plenty of examples of bad ChatGPT recipe generation.

Regardless, even if it wasn't ChatGPT, given the recent problems I would not have used that as one example given that regardless of who it came from.

reply
burningChrome
1 month ago
[-]
Am I only person who's reached their threshold on companies forcing and shoving AI into every layer and corner of our lives?

I don't even look at this stuff any more and see the upside to any of it. AI went from, "This is kinda cool and quaint." to "You NEED this in every single aspect of your life, whether you want it or not." AI has become so pervasive and intrusive, I stopped seeing the benefits of any of this.

reply
lancesells
1 month ago
[-]
I feel like this WWDC kind of solidified that these corporations really don't know what to do with AI or aren't creative enough. Apple presented much better AI features that weren't called AI than the "summarize my email" and "generate an ugly airbrushed picture you buy at the mall kiosk to send to your mom".

All of these "make your life easier" features really show that no tech is making our lives simpler. Task creation is maybe easier but task completion doesn't seem to be in the cards. "Hey siri, summarize my daughters play and let me know when it is and how to get there" shows there's something fundamentally missing in the way we're living.

reply
acjohnson55
1 month ago
[-]
I'm resistant, too. I think from a number of reasons:

- So far, the quality has been very hit or miss, versus places where I intentionally invoke generative AI.

- I'm not ready to relinquish my critical thinking to AI, both from a general perspective, and also because it's developed by big companies who may have different values and interests than me.

- It feels like they're trying to get me to "just take a taste", like a bunch of pushers.

- I just want more/better of the right type of features, not a bunch of inscrutable magic.

reply
wilg
1 month ago
[-]
The new generative AI stuff has been barely implemented in most products, I don't know how you are experiencing it as pervasive and intrusive. Are you sure you're not just cynical from all flood of negative news stories about AI?
reply
mcpar-land
1 month ago
[-]
this being a news thread about Apple integrating AI into all their operating systems and apps aside... Chrome has started prompting me to use generative AI in text boxes. Twitter (X) has an entire tab for Grok that it keeps giving me popup ads for. Every single productivity suite (Notion, Monday, Jira) are perpetually prompting me to summarize my issue with AI. Github has banner ads for Copilot. It is everywhere.
reply
ethbr1
1 month ago
[-]
Summarization was implemented everywhere because it was the easiest AI feature to ship when a VP screamed "Do AI, so our C-suite can tell investors we're an AI company!"
reply
kristofferR
1 month ago
[-]
Summarization is damn useful, though. It has solved clickbait and TLDR-spam, now you can always know if something is worth watching/reading before you do.
reply
ukuina
1 month ago
[-]
Agreed, the dehyping of article titles is one of the main reasons I built hackyournews.com, and the avoidance of clickbait via proactive summarization is consistently rewarding.
reply
lottin
1 month ago
[-]
AI doesn't have to be intrusive but this "personal assistant" stuff, which is what they're marketing to the general public at the moment, certainly is.
reply
thuuuomas
1 month ago
[-]
Are you sure you’re not optimistic just bcuz you stand to materially benefit from widespread adoption of chatgpt wrappers?
reply
wilg
1 month ago
[-]
How would I materially benefit?
reply
epistasis
1 month ago
[-]
Currently, AI use has a "power user" requirement. You have to spend a lot of time with it to know what it is and is not capable of, how to access those hidden capabilities, and be very creative at applying it in your daily life.

It's not unlike the first spreadsheets. Sure, they will some day benefit the entire finance department, but at the beginning only people who loved technology for the sake of technology learned enough about them to make them useful in daily life.

Apple has always been great at broadening the audience of who could use personal computing. We will see if it works with AI.

I think it remains to be seen how broadly useful the current gen of AI tech can be, and who it can be useful for. We are in early days, and what emerges in 5-10 years as the answer is obvious to almost no one right now.

reply
moralestapia
1 month ago
[-]
You're in for a ride.

This barely scratches the surface on how much AI integration there's going to be in the typical life of someone in the 2030s.

reply
pndy
1 month ago
[-]
> Am I only person who's reached their threshold on companies forcing and shoving AI into every layer and corner of our lives?

After a random update my bank's app has received AI assistant out of blue to supposedly help their clients.

At first I was interested how these algorithms could enhance apps and services but now, this does indeed feels like shoving AI everywhere it's possible even if it doesn't makes any sense; as if companies are trying to shake a rattle over your baby's cradle to entertain it.

Aside above, I was hoping that after this WWDC Siri would get more languages so I could finally give it instructions in my native language and make it actually more useful. But instead there are generated emoticons coming (I wonder if people even remember that word). I guess chasing the hottest trends seems more important for Apple.

reply
warkdarrior
1 month ago
[-]
They are not making it mandatory to use, just widely available through various interfaces. I see this closer to how spellcheck was rolled out in word processors, then editors, then browsers, etc.
reply
blibble
1 month ago
[-]
if I can't turn 100% of this botshit off then my iphone's going in the bin

I'll go back to a dumbphone before I feed the AI

reply
dieortin
1 month ago
[-]
You’re not feeding anything by having this feature turned on
reply
blibble
1 month ago
[-]
I have zero confidence in any privacy or contractual guarantees being respected by the parasitic OpenAI
reply
ru552
1 month ago
[-]
you have to acknowledge a pop up authorizing your request be sent to OpenAI every single time it happens. it's not going to happen by mistake.
reply
theswifter01
1 month ago
[-]
And they’re parasitic how exactly? Even if they do collect every single of my prompts the benefit of chatGPT outweighs my data being sold
reply
ethagnawl
1 month ago
[-]
Right. This thread on the other hand ...
reply
blibble
1 month ago
[-]
I have curtailed my internet commenting considerably in the last 12 months

it is now almost exclusively anti-AI, which funnily enough I don't mind them training on

reply
hawski
1 month ago
[-]
This ramping up AI war will leave no prisoners. I am not an Apple customer in any way, I am in Google's ecosystem, but I feel that I need to make an exit, at least some essentials, preferably this year.

My e-mail, my documents, my photos, my browsing, my movement. The first step for me was setting up Syncthing and it was much smoother than I initially thought. Many steps to go.

reply
sircastor
1 month ago
[-]
I haven’t adopted passcodes, and moved all my email out of gmail to a private domain. Photos backup t to my NAS. I’m terrified of the automated systems deciding I’m a bad actor.

I can’t help but think it’ll get worse with AI

reply
its_ethan
1 month ago
[-]
Not that you shouldn't do it, but too much of an active effort or obsession with not using standard e-mail services or photo back ups is probably a faster way to get flagged as suspicious lol
reply
jwrallie
1 month ago
[-]
For things that don’t leave your system it’s ok, but the moment you send something to others it will go into the systems that you try to avoid anyway.

Mostly I see no point in things like email self hosting if half my contacts are on Gmail and the other half on Microsoft.

My suggestion (as someone that tried to escape for some time) is to build a private system for yourself (using private OS and networks) and use a common system to interface with everyone else.

reply
ayakang31415
1 month ago
[-]
There was one part that I didn't understand about AI compute: For certain tasks, server side compute will be done as on-device chip is not powerful enough I suppose. How does this ensure privacy in verifiable manner? How do you CONFIRM that your data is not shared when cloud computing is involved with AI tasks?
reply
tom1337
1 month ago
[-]
Your data is being shared. But they've shown that it is being done in a way where only required data leaves the devices and there are some protections in place which try to minimize misuse of the data (the OS will only communicate with publicly signed versions of the server for example). The call to Apples "Private Compute Cloud" is intransparent to the user, ChatGPT calls need permission if I understood it correctly.
reply
ayakang31415
1 month ago
[-]
So it is not really private then.
reply
Spivak
1 month ago
[-]
I think it's a semantic thing at this point. If for you private can't mean plaintext living on a computer you don't control then no. If it's private in the way your iCloud photos are private then yes, and seemingly more so.
reply
pertymcpert
1 month ago
[-]
What does private mean? If I store my children's photos on iCloud encrypted, that's not private?
reply
AshamedCaptain
1 month ago
[-]
> the OS will only communicate with publicly signed versions of the server for example

This hardly increases security, and does not increase privacy at all. If anything it provides Apple with an excuse that they will throw at you when you ask "why can't I configure my iOS device to use my servers instead of yours?" , which is one of the few ways to actually increase privacy.

This type of BS should be enough to realize that all this talk of "privacy" is just for the show, but alas...

reply
ducadveritatem
1 month ago
[-]
Before you write off their claims I encourage you to read more about the detailed specifics (if you have the technical footing and inclination to do so). While the approaches should certainly be probed and audited, it’s clearly more than performative. https://security.apple.com/blog/private-cloud-compute/

Also a good thread from Matthew Green, a privacy/cryptography wonk, on the subject: https://x.com/matthew_d_green/status/1800291897245835616?s=4...

reply
theshrike79
1 month ago
[-]
Can you configure a Google phone to use your servers instead of theirs for Google Assistant requests?
reply
AshamedCaptain
1 month ago
[-]
I don't know what your argument was going to be if I said "no", but in any case, the answer is yes, you can. You can even entirelly uninstall Google Assistant and replace it with your own software, and you do not lose any functionality of the device nor require access to private hooks to do that. I do that myself.
reply
mfiguiere
1 month ago
[-]
> Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests. ChatGPT’s data-use policies apply for users who choose to connect their account.

> ChatGPT will come to iOS 18, iPadOS 18, and macOS Sequoia later this year, powered by GPT-4o.

reply
talldayo
1 month ago
[-]
> and OpenAI won’t store requests.

What's a promise from Sam Altman worth, again?

reply
aaronharnly
1 month ago
[-]
That's not a "promise from Sam Altman", that's a contractual term between Apple, Inc. and OpenAI, LLC.

So I think it's worth as much as Apple is willing to spend enforcing it, which I imagine would be quite a bit.

reply
hot_gril
1 month ago
[-]
Idk about Sam Altman in particular, but OpenAI pulled the bait-and-switch you can still see in its name. We don't know what the contract says exactly, but there are always loopholes, and I would not assume anything OpenAI says to be in good faith.

I also don't really care, but it's understandable why some people do.

reply
shbooms
1 month ago
[-]
> that's a contractual term between Apple, Inc. and OpenAI, LLC.

do you have a source on this or are you just assuming?

reply
buildbot
1 month ago
[-]
Do you think this is all running off the standard openai API and they picked a dev at random in Apple to use their accounts API keys?

Of course there is some agreement…

reply
Tagbert
1 month ago
[-]
It would be a very surprising business arrangement if that was not explicitly called out. Apple is not going to leave this to chance.
reply
talldayo
1 month ago
[-]
> Apple is not going to leave this to chance.

How much would you be willing to bet, on a statement like this? I love a sporting chance.

reply
theshrike79
1 month ago
[-]
If we find out in the next 12 months that OpenAI has been storing requests from Apple/Siri AND Apple doesn't come down on them with a 10 ton lawyer hammer, I'll pay you $500.

Can you match it the other way around? :)

reply
pertymcpert
1 month ago
[-]
crickets from OP
reply
pertymcpert
1 month ago
[-]
I will bet around $10,000 FWIW.
reply
spacebanana7
1 month ago
[-]
Even if the promise were made in good faith, I fear it may be hard to resist pressure from law enforcement etc.
reply
Turing_Machine
1 month ago
[-]
If Apple is sitting in the middle proxying the IP addresses, and not keeping any logs for longer than they absolutely need to, law enforcement could go pee up a rope, right?
reply
talldayo
1 month ago
[-]
You'd hope so, but corporate resistance against domestic intelligence has a bumpy track record: https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
reply
Handy-Man
1 month ago
[-]
That's just the enterprise guarantee. The same applies to Azure OpenAI services and the API services provided by OpenAI directly.
reply
wilg
1 month ago
[-]
What broken Sam Altman promises are you referring to?
reply
avtar
1 month ago
[-]
Personally I would say the disparity between what was in their founding agreement "be open-source—freely and publicly available for anyone to use, modify and distribute" https://archive.ph/R0LBL to the current state of affairs.

But I guess the list of grievances could be longer:

https://garymarcus.substack.com/p/what-should-we-learn-from-...

reply
zoky
1 month ago
[-]
Leaving OpenAI, for one.
reply
minimaxir
1 month ago
[-]
The promise is from Apple, not OpenAI, and likely contractual.

If OpenAI actually went against that, Apple would unleash the mother of all lawsuits.

reply
educasean
1 month ago
[-]
Tim Cook doesn't seem to mind hanging his reputation on sama's promise, so at least that's something
reply
shironononon
1 month ago
[-]
"let's store responses and a hash of the request intent in a kvp then"
reply
talldayo
1 month ago
[-]
Please, an encrypted key-value store. The private key is only shared between you, Apple, and relevant law-enforcement agencies. It's as private as you can ask for, these days!
reply
ukuina
1 month ago
[-]
Storing the response and *a GPT-summarized request* would not violate the spirit or letter of the statement here, either.
reply
whalee
1 month ago
[-]
I am deeply disturbed they decided to go off-device for these services to work. This is a terrible precedent, seemingly inconsistent with their previous philosophies and likely a pressured decision. I don't care if they put the word "private" in there or have an endless amount of "expert" audits. What a shame.
reply
tr3ntg
1 month ago
[-]
They didn't have a choice. Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device. Which would have looked even worse for them.

Their brand is equally about creativity as it is about privacy. They wouldn't chop off one arm to keep the other, but that's what you're suggesting they should have done.

And yes, I know generative AI could be seen specifically as anti-creativity, but I personally don't think it is. It can help one be creative.

reply
roncesvalles
1 month ago
[-]
I don't think it would've looked bad for their brand to have not participated. Apple successfully avoided other memes like touchscreens laptops and folding phones.
reply
adpirz
1 month ago
[-]
Siri is bad and is bad for their brand. This is making up for that ground.
reply
Terretta
1 month ago
[-]
> Doing everything on-device would result in a horrible user experience. They might as well not participate in this generative AI rush at all if they hoped to keep it on-device.

On the contrary, I'm shocked over the last few months how "on device" on a Macbook Pro or Mac Studio competes plausibly with last year's early GPT-4, leveraging Llama 3 70b or Qwen2 72b.

There are surprisingly few things you "need" 128GB of so-called "unified RAM" for, but with M-series processors and the memory bandwidth, this is a use case that shines.

From this thread covering performance of llama.cpp on Apple Silicon M-series …

https://github.com/ggerganov/llama.cpp/discussions/4167

"Buy as much memory as you can afford would be my bottom line!"

reply
philjohn
1 month ago
[-]
Yes - but people don't want to pay $4k for a phone with 128GB of unified memory, do they?

And whilst the LLM's running locally are cool, they're still pretty damn slow compared to Chat-GPT, or Meta's LLM.

reply
theshrike79
1 month ago
[-]
Depending on what you want to do though.

If I want some help coding or ideas about playlists, Gemini and ChatGPT are fine.

But when I'm writing a novel about an assassin with an AI assistant and the public model keeps admonishing me that killing people is bad and he should seek help for his tendencies, it's a LOT faster to just use an uncensored local LLM.

Or when I want to create some people faces for my RPG campaign and the online generator keeps telling me my 200+ word prompt is VERBOTEN. And finally I figure out that "nude lipstick" is somehow bad.

Again, it's faster to feed all this to a local model and just get it done overnight than fight against puritanised AIs.

reply
chuckadams
1 month ago
[-]
To say nothing of battery life.
reply
wilg
1 month ago
[-]
You are deeply disturbed by the idea that some services can be better implemented server-side? Who do you think pressured them, the illuminati?
reply
grishka
1 month ago
[-]
Here's a shocking suggestion: maybe wait some time before these services could be implemented on-device, and implement them on-device, instead of shipping this half-baked something? Apple seems to be the perfect company to make it happen, they produce both the hardware and the software, tightly integrated with each other. No one else is this good at it.
reply
wilg
1 month ago
[-]
They implemented way more on the device than anyone else is doing, and I don't see how it makes it "half-baked" that it sometimes needs to use an online service. Your suggestion is essentially just not shipping the product until some unspecified future time. That offers no utility to anyone.
reply
grishka
1 month ago
[-]
It is, however, very much Apple's philosophy to wait it out and let others mature a technology before making use of it.
reply
TeMPOraL
1 month ago
[-]
At the current rate of advancement, we might get a runaway AGI before the technology "matures".
reply
grishka
1 month ago
[-]
Or we might not. LLMs are remarkably dumb and incapable of reasoning or abstract thinking. No amount of iterative improvement on that would lead to an AGI. If we are to ever get an actual AGI, it would need to have a vastly different architecture, at minimum allowing the parameters/weights to be updated at runtime by the model itself.
reply
TeMPOraL
1 month ago
[-]
Right. But there's so much effort, money and reputation invested in various configurations, experimental architectures, etc. that I feel something is likely going to pan out in the coming months, enabling models with more capabilities for less compute.
reply
dleink
1 month ago
[-]
It offers utility to user privacy.
reply
qeternity
1 month ago
[-]
Here’s a shocking suggestion: if you’re not comfortable using it, don’t use it.
reply
electriclove
1 month ago
[-]
I like their approach. Do everything possible on device and if it can only be done off-device, provide that choice.
reply
NewJazz
1 month ago
[-]
You misunderstand.

They will go off device without asking you, they just ask if you want to use ChatGPT.

reply
rahkiin
1 month ago
[-]
No: they do on device, ask to do off device in their private cloud. Chatgpt is then a separate integration / intent you ask can for
reply
NewJazz
1 month ago
[-]
I don't see anything to that effect in tfa, and a few people in the comments have claimed otherwise.
reply
pertymcpert
1 month ago
[-]
Yeah that's how it works.
reply
unshavedyak
1 month ago
[-]
Are they giving us a choice? I thought the choice was primarily about using ChatGPT? It sounded like everything in apples "Private Cloud" was being considered fully private.
reply
cedws
1 month ago
[-]
Circa 2013 Snowden says the intelligence agencies are wiretapping everything and monitoring everyone.

In 2024 they don't have to wiretap anything. It's all being sent directly to the cloud. Their job has been done for them.

reply
buildbuildbuild
1 month ago
[-]
I hear you but caution against such oversimplification. Advanced Data Protection for iCloud is a thing. Our culture of cloud reliance is truly dangerous, but some vendors are at least trying to E2E data where possible.

There are big risks to having a cloud digital footprint, yet clouds can be used “somewhat securely” with encryption depending on your personal threat model.

Also, it’s not fair to compare clouds to wiretapping. Unless you are implying that Apple’s infrastructure is backdoored without their knowledge? One does not simply walk into an Apple datacenter and retrieve user data without questions asked. Legal process is required, and Apple’s legal team has one the stronger track records of standing up against broad requests.

reply
hu3
1 month ago
[-]
iCloud end-to-end encryption is disabled by default.

So by default, user data is not protected.

https://support.apple.com/en-us/102651

reply
theshrike79
1 month ago
[-]
Yes, because the UX is better that way.

With ADP if your mom loses her encryption keys, it's all gone. Forever. Permanently.

And of course it's Apple's fault somehow. That's why it's not the default.

reply
ggamecrazy
1 month ago
[-]
Broadly, in the US, the Federal Wiretap Act of 1968 still applies. You're going to have to convince a judge otherwise.

Yes, perhaps broad dragnet type of might be scoffed down by some judges (outside of Patriot act FISA judges ofc)

I would warn you about the general E2E encryption and encrypted at rest claims. They are in-fact correct, but perhaps misleading? At some point, for most, the data does get decrypted server-side - cue the famous ":-)"

reply
lancesells
1 month ago
[-]
It's been going to the cloud since at 2013 as well.
reply
drexlspivey
1 month ago
[-]
That’s a necessary temporary step until these powerful LLMs are able to run locally. I’m sure Apple would be delighted to offload everything on device if possible and not spend their own money on compute.
reply
giancarlostoro
1 month ago
[-]
They prompt you before you go off-service, which makes the most sense.
reply
Me1000
1 month ago
[-]
They prompt you before they send your data to OpenAI, but it's clear that they prompt you before they send it to Apple's servers (maybe they do and I missed it?). And their promise that their servers are secure because it's all written in Swift is laughable.

Edit:

This line from the keynote is also suspect: "And just like your iPhone , independent experts can inspect the code that runs on the servers to verify this privacy promise.".

First off, do "independent experts" actually have access to closed source iOS code? If so we already have evidence that this is sufficient (https://www.macrumors.com/2024/05/15/ios-17-5-bug-deleted-ph...).

The actual standard for privacy and security is open source software, anything short of that is just marketing buzz. Every company has an incentive to not leak data, but data leaks still happen.

reply
ethbr1
1 month ago
[-]
They're promising to go farther than that.

>> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection. Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.

reply
pdpi
1 month ago
[-]
That promise made my ears perk up. If it actually stands up to scrutiny, it's pretty damn cool.
reply
ethbr1
1 month ago
[-]
I look at things like that from a revenue/strategic perspective.

If Apple says it, do they have any disincentives to deliver? Not really. Their ad business is still relatively small, and already architected around privacy.

If someone who derives most of their revenue from targeted ads says it? Yes. Implementing it directly negatively impacts their primary revenue stream.

IMHO, the strategic genius of Apple's "privacy" positioning has been that it doesn't matter to them. It might make things more inconvenient technically, but it doesn't impact their revenue model, in stark contrast to their competitors.

reply
Hizonner
1 month ago
[-]
Their disincentive to delivering it is that it's not actually possible.
reply
warkdarrior
1 month ago
[-]
It's certainly possible through remote attestation of software. This is basically DRM on servers (i.e., the data is not decrypted on the server unless the server stack is cryptographically attested to match some trusted configuration).
reply
Hizonner
1 month ago
[-]
That requires trusting that the attestation hardware does what it says it does, and that the larger hardware system around it isn't subject to invasion. Those requirements mean that your assurance is no longer entirely cryptographic. And, by the way, Apple apparently plans to be building the hardware.

It could be a very large practical increase in assurance, but it's not what they're saying it is.

reply
ethbr1
1 month ago
[-]
I haven't read all the marketing verbage yet, but even 'Our cloud AI servers are hardware-locked and runtime-checked to only run openly auditable software' is a huge step forward, IMHO.

It's a decent minimum bar that other companies should also be aiming for.

Edit: ref https://security.apple.com/blog/private-cloud-compute/

reply
ducadveritatem
1 month ago
[-]
reply
curious_cat_163
1 month ago
[-]
I agree with you about this being a bad precedent.

However, to me, the off-device bit they showed today (user consent on every request) represents a strategic hedge as a $3T company.

They are likely buying time and trying to prevent people from switching to other ecosystems while their teams catch up with the tech and find a way to do this all in the “Apple Way”.

reply
evrenesat
1 month ago
[-]
I hope at some point they start selling a beefy Mac mini variant that looks like a HomePod to work as an actual private AI server for the whole family.
reply
pram
1 month ago
[-]
This is called a Mac Studio
reply
evrenesat
1 month ago
[-]
It would be great if they let us install the private cloud server on our Macs, but I’m not holding my breath. Then again, in the name of more privacy, maybe they want to sell a dedicated local AI hub as another hardware product. They could even offer it for an affordable upfront cost that can be amortized into a multi-year iCloud subscription.
reply
29athrowaway
1 month ago
[-]
Siri and other assistants already do this no?
reply
re
1 month ago
[-]
Yes. Siri debuted with the iPhone 4s (running iOS 5) in 2011. It wasn't until iOS 15 in 2021 that Siri gained the ability to do some things without an internet connection, on devices with the A12 Bionic chip (the 2018 iPhone XR/XS or later).
reply
hrdwdmrbl
1 month ago
[-]
Yes
reply
steve1977
1 month ago
[-]
You can’t charge for a service so easily if it runs on-device.
reply
karaterobot
1 month ago
[-]
> Apple sets a new standard for privacy in AI,

That does not necessarily mean better, just different. I reserve judgment until I see how it shakes out.

but if I don't like this feature, and can't turn it off, I guess it's sadly back to Linux on my personal laptops.

reply
theshrike79
1 month ago
[-]
It's just Siri, but with better context.

If you don't specifically activate it, it won't do shit.

reply
nkotov
1 month ago
[-]
While I think it's cool and I appreciate Apple crafting better stories on why this is helpful, I still think for the everyday person, they won't really care if it's AI or not.
reply
dylan604
1 month ago
[-]
But Apple's integration means you can use it and not care if it is AI or not. It'll just become part of using iOS (let's face it, that's were the majority of Apple's users will be). From creating a new "genmoji" to any of the other examples of allowing people to do this without know WTF huggingface or the other equally ridiculously named products are. They don't need accounts. They just type a message and decide to put in a new image.

Of course we've only seen examples from an overly produced hype/propaganda video, but it looks to me of yet another example of Apple taking products and making them usable to the masses

reply
amne
1 month ago
[-]
Watched Tim Cook and MKBHD interview and Tim did say something along the same lines: the average smartphone user doesn't care about the technology branding but what can it do and that's what Apple aims for.

And I agree with their goal here:

Dall-e how much now? it can do what? text to image? can it do emojis? vs Genmoji: oh .. it can do emojis. nice!

same with "Rewrite" and so on.

reply
hot_gril
1 month ago
[-]
There's value in OS integration, but again, what are the real use cases? Memoji or whatever doesn't qualify. Apple has added a ton of features in recent years that I haven't used once. If it's going to manage my calendar in a way I can rely on or autocorrect will be smarter, that's useful.
reply
dylan604
1 month ago
[-]
If only they had shown examples of how they are integrating with calendars, emails, texts, photos. You should reach out to Apple's marketing department about producing better release videos that have examples of how the new features will be used. I bet they'd think it was a great idea!
reply
hot_gril
1 month ago
[-]
I only read the article, didn't watch the 2hr video, and it's only marketing material. What it really does in my hands is tbd.
reply
FireBeyond
1 month ago
[-]
> but it looks to me of yet another example of Apple taking products and making them usable to the masses

This is a bit obsequious to Apple. I find it hard to give a cogent argument of how ChatGPT is not "usable to the masses" at this point (and being -used- by the masses).

reply
theshrike79
1 month ago
[-]
It doesn't integrate to anything, you need to explicitly give it context every time you ask it something.

You can't just log in to ChatGPT and ask it what was on your calendar 2 weeks ago.

reply
dylan604
1 month ago
[-]
The fact that someone was even making this argument suggest they didn't fully comprehend the presentation or missed some salient details. How anyone could confuse anybody's current integration of AI tools be it chat or generative images into something so central to user's everyday life is beyond me. I would ask for examples of anything else the comes close
reply
ru552
1 month ago
[-]
This is exactly what they are going for. You can just ask Siri now "what day did my wife say the recital is?" and Siri spits the answer out without requiring you to go scroll through your messages. Who cares that an LLM did the work?
reply
gkoberger
1 month ago
[-]
Agreed! And the UI seemed pretty focused on not really clarifying too much; I think they just mentioned AI a lot since it was WWDC.
reply
criddell
1 month ago
[-]
If Apple does a really good job of this, then the everyday person probably shouldn't care if it's AI or not.

Who cares how your flight information shows up at the right time in the right place? the only thing that should matter is that it does.

reply
anonbanker
1 month ago
[-]
And nobody cares about how absolutely terrifying your statement truly is, because the shiny benefits obfuscate the destruction of privacy, despite Apple's reassurances.
reply
criddell
1 month ago
[-]
The upsides are obvious and concrete, the downsides are mostly hypotheticals.

People already carry around a device with a GPS, camera, and microphone that has access to most of their intimate and personal communications and finances. Adding AI capabilities doesn't seem like a bridge too far, that's for sure.

reply
smith7018
1 month ago
[-]
Most of Apple's announcements today featured AI but the term wasn't explicitly mentioned. I think the last portion of the keynote that focused on AI was merely for investors tbh
reply
algesten
1 month ago
[-]
"Semantic Index" sure is a better name than "Recall". Question is whether I can exfiltrate all my personal data in seconds?
reply
anonbanker
1 month ago
[-]
I'm sure a simple Webkit vulnerability (there's none of those, ever, right?) will definitely not ensure that Semantic Index is featured in a future pwn2own competition.
reply
fmbb
1 month ago
[-]
I mean I can already search my photos for “dog” or “burger” or words in text on photos. Adding an LLM to chat about it is just a new interface is it not?
reply
algesten
1 month ago
[-]
I think the important thing is that the semantic index tracks all you do through all your apps.
reply
qeternity
1 month ago
[-]
They are likely implemented very differently. I’m not certain but I imagine the current photos app uses an image model to detect and label objects which you can search against. I expect Semantic Index (by virtue of the name) to be a vector store of embeddings.
reply
gigel82
1 month ago
[-]
It's all in the "private cloud". "Trust me bro", it's like totally private, only us and a handful of governments can read it.
reply
algesten
1 month ago
[-]
Yeah. It's going to be great. Selected experts are saying so.
reply
the_arun
1 month ago
[-]
Wouldn't this reduce sales for Grammerly? If Apple packs the same feature for every application in iOS, it is kinda cool.

Private Cloud - Isn't this what Amazon did with their tablet - Fire? What is the difference with Apple Private Cloud?

reply
terramex
1 month ago
[-]
> Wouldn't this reduce sales for Grammerly?

https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...

> Sherlocked as a term

> The phenomenon of Apple releasing a feature that supplants or obviates third-party software is so well known that being Sherlocked has become an accepted term used within the Mac and iOS developer community.[2][3][4]

reply
thisarticle
1 month ago
[-]
1Password too.
reply
jerbear4328
1 month ago
[-]
Well, the Passwords app is just the Passwords section in Settings moved out into its own app. It already exists on Windows, too, but maybe they are updating it to allow autofill without using a Chrome extension or add other features. It isn't the biggest change, just bringing attention to an existing feature that already competes with 1Password et al.
reply
cueo
1 month ago
[-]
It would be good if they add support for third-party browsers. Bitwarden (or other apps) can feel clunky sometimes compared to Keychain / Passwords.
reply
kfinley
1 month ago
[-]
Which browser are you using?

I switched to Apple Passwords and have been using the official Chrome extension for a few months. It's not as seamless as some of the password manager extensions, but has been working well enough.

https://chromewebstore.google.com/detail/icloud-passwords/pe...

reply
gherkinnn
1 month ago
[-]
After years and years of annoying ads, Grammarly taking a hit is the least they deserve
reply
kylehotchkiss
1 month ago
[-]
Agreed! I'm happy to not have to hear about them anymore.
reply
xnx
1 month ago
[-]
Grammarly is great example of the classic adage, "a feature, not a product".
reply
secfirstmd
1 month ago
[-]
TBH I'd say the same about Notion.
reply
nehal3m
1 month ago
[-]
> Wouldn't this reduce sales for Grammerly?

There's a term for that, it's called being Sherlocked: https://www.howtogeek.com/297651/what-does-it-mean-when-a-co...

reply
tomjen3
1 month ago
[-]
How many use Grammerly on a Mac exclusively? My guess is that most of their accounts are students through schools and companies. But yeah, there is a risk in any business that a better competitor comes along.
reply
PodgieTar
1 month ago
[-]
It jumped out to me that I had to highlight and ask it to check my grammar, rather than have it be an automatic process.

I don't use Grammarly, really, but I think at least that one is more automatic?

reply
insane_dreamer
1 month ago
[-]
I would not bet on Grammerly's future.
reply
JohnMakin
1 month ago
[-]
I am excited to try Siri with this technology enabled. I can't really remember a time when siri ever really worked, although recently I actually got her to play a song on youtube for me after a few attempts and was pretty pleased with that. Outside of "set my alarm for 4:30" kind of stuff, she's never really been that useful, and if you are even kind of disabled, this feature can be really useful to the point of life changing if it is done properly.
reply
skilled
1 month ago
[-]
They did a lot of work for this release, and the number of integrations is beyond what I expected. In a few years time you might not need to hold your phone at all and just get everything done with voice - kind of cool, actually.

Auto transcripts for calls (with permission) is another feature I really liked.

I was a little surprised to see/hear no mention of inaccuracies, but for ChatGPT they did show the "Check for important facts" notice.

reply
Tagbert
1 month ago
[-]
There is a lot less fodder for inaccuracies if the data and processing are all on your device. A lot of the inaccuracies in Gemini and ChatGPT arise because they are using the web for answers and that is a much less reliable source than your own emails and messages.
reply
culopatin
1 month ago
[-]
That sounds like what Humane is trying to do. But I would honestly hate to do everything by voice and have everyone around me know what I’m doing and hear everyone around me talk to their phones all the time. Sounds like a nightmare
reply
realfeel78
1 month ago
[-]
> That sounds like what Humane is trying to do.

The only thing Humane was trying to do was scam their investors. Let us never speak of them again.

reply
Tagbert
1 month ago
[-]
I would expect it to be situational. I also was happy to see that they introduced a typed interface to Siri so you can do this without speaking.
reply
amne
1 month ago
[-]
So far the only reasonable place I can think of where I could find myself actually using voice to control anything is on the toilet. that's it
reply
runeb
1 month ago
[-]
Walking, cycling, running, driving, relaxing on the couch
reply
amne
1 month ago
[-]
This is a large numbers problem that is not yet visible. You can't have everybody walking and talking. There would be too much noise (think crickets, cicadas, toads, etc.)

I'm not going into cycling/running and talking .. that's just not how things work when you need to breathe.

Driving and talking to a phone to then have it recite back to you 10 minutes of details you can just glance at but would be dangerous to?

If I'm relaxing on a couch .. i'm using a device. And please don't come back with "play me a chill song" as a fancy use case.

What I'm trying to say is that voice is not it and the only other kind of interaction I'm looking forward to see evolve is neuralink-style. In the sense that it needs to be wireless / non-invasive for mass adoption. That's it.

reply
TIPSIO
1 month ago
[-]
So the future of computing really is AI agents doing everything:

- siri text chat now on the lock screen

- incoming a billion developer app functions/APIs

- better notifications

- can make network requests

Why even open any other app?

reply
terramex
1 month ago
[-]
> Why even open any other app?

This was my first thought when I saw Rabbit r1 - will all of us become backend developers just glueing various API between legacy services and LLMs? Today seems like another step in that direction.

reply
imabotbeep2937
1 month ago
[-]
The whole world will be headless content. There won't be any web pages, or bank sites, or TV networks. Nobody will be a developer. We'll all just be content authors, like Google Maps Guides basically being unpaid interns checking restaurant data for Google.

You open your phone, it just shovels content. And it does absolutely nothing but optimize on addiction.

No apps, only masters.

reply
ragazzina
1 month ago
[-]
The year is 2040. I pick up my iPhone. I ask Siri Pro to be entertained. She makes me a mix of Instagram Reels, TikToks, Youtube Shorts and Netflix trailers, not only handpicked for me, but each of those re-cut and re-edited to match my tastes.

When I ask Siri Pro what I'm doing on the weekend, she plans a dinner with a mix of friends and compatible strangers. Any restaurant is fine: the food is going to be personalized anyway.

reply
stackedinserter
1 month ago
[-]
I hope 2040 iPhone will be able to not start playing fucking music every time when I sit in a car.
reply
singularity2001
1 month ago
[-]

  >> - incoming a billion developer app functions/APIs
That would be cool, but the App Intents API is severely crippled. Only a few hardcoded use cases are supported.

So any _real agent_ which has full access to all Apps can still blow Siri out of the water.

reply
imabotbeep2937
1 month ago
[-]
Nobody is realizing this coming singularity.

Your phone won't do anything else. For 99% of people, they pick up their phone, AI will just decide what they want to see. And most will accept it.

Someday everyone in the room will all pick up their phones when they all ring at once. It will be some emotional trigger like a live feed from a school shooting. Everyone in the room will start screaming at the totally different experiences they're being presented. Evil liberals, clueless law enforcement, product placement being shown over the shooter's gun. You'll sit horrified because you returned to a dumbphone to escape.

That will be the reality if this AI assistant stuff isn't checked hard now. AI is getting better at addiction an order of magnitude faster than it's getting better at actual tasks.

reply
c1sc0
1 month ago
[-]
Not necessarily, that entirely depends on the reward function being used, but I get your point.
reply
KolmogorovComp
1 month ago
[-]
The WWDC is still ongoing and the stream can be followed here: https://www.apple.com/apple-events/event-stream/

(Sharing because I had trouble finding it).

reply
ryankrage77
1 month ago
[-]
The image generation seems really bad. Very creepy, offputting, uncanny-valley images. And that's the the best cherry-picked examples for marketing.

I'm curious to try some of the Siri integrations - though I hope Siri retains a 'dumb mode' for simple tasks.

reply
teolandon
1 month ago
[-]
I wish there was a way to leverage my M1 Mac to use this on my iPhone Pro 14. Like a private connection between my phone and computer to use the more powerful chip, even if it's limited to when I'm at home on the same Wi-Fi. Latency shouldn't be too bad.

But I think Apple is going to limit iPhones from doing something like that to boost sales of the 15 Pro and the future gens.

reply
ahmeneeroe-v2
1 month ago
[-]
Yes, I would love the escalation path to be: on-device -> owned Mac -> "private cloud"
reply
thimabi
1 month ago
[-]
Oh, well, many apps will have a hard time competing with “Apple Intelligence” features. Why bother downloading a third-party app if some feature you want is included by default in the OS?

Better yet, no more dealing with overpriced subscriptions or programs that do not respect user privacy.

Kudos to the Apple software team making useful stuff powered by machine learning and AI!

reply
ukuina
1 month ago
[-]
It was amusing to see the Duolingo product placement when their entire product is just a prompt in ChatGPT.
reply
throwaway71271
1 month ago
[-]
Amazing how Microsoft, Google and now Apple are racing to 'generate' more and more text and images, and they also race to 'summarize' the now generated texts because everything is just noise. Like an anxious digital beehive.

By the end of the year maybe 1% of the content you interact with will be human made.

Even now in HN maybe 20-30% of the comments are generated by various transformers, but it seems every input box on every OS now has a context aware 'generate' button, so I suspect it will be way more in few months.

The Eternal September is coming. (and by ironic coincidence it might actually be in September haha)

reply
markus_zhang
1 month ago
[-]
TBH, I think the IT industry is too concentrated at eating itself. We are happily automating our jobs away and such while the other industries basically just sleep through.

I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that. I need the other industries (medical/government/education) to wake up and let us automate them.

Do you know that my family doctor ONLY take calls? Like in the 1970s I guess? Do you know it takes hours to reach a government office, and they work maybe 6 hours a day? The whole world is f**ing sleeping, IT people, hey guys, slow down on killing yourselves.

AI is supposed to get rid of the chores, now it leaves us with the chores and take the creative part away. I don't need such AI.

reply
skilled
1 month ago
[-]
I wonder if Apple ever approached Google about using Gemini as the flagship integration. I say that because during the keynote I kept thinking to myself, this could be the moment that Google realises it needs to stick to what it knows best - Search - and all they have to do is sit back and watch the hype fade away.

But that’s in a perfect world.

Even to this day, post ChatGPT, I still can’t imagine how I would ever use this AI stuff in a way that really makes me want to use it. Maybe I am too simple of a mind?

Maybe the problem is in the way that it is presented. Too much all at once, with too many areas of where and how it can be used. Rewriting emails or changing invitations to be “poems” instead of text is exactly the type of cringe that companies want to push but it’s really just smoke and mirrors.

Companies telling you to use features that you wouldn’t otherwise need. If you look at the email that Apple rewrote in the keynote - the rewritten version was immediately distinguishable as robotic AI slop.

reply
barkerja
1 month ago
[-]
My understanding is that Apple's approach to this integration is adaptable; much like how you would change your browser's search engine, you'll be able to change which external AI model is utilized. ChatGPT, Gemini, Claude, etc.
reply
rurp
1 month ago
[-]
I don't think the choice of integration really matters for GP's point. Regardless of which model is used, how useful is the ability to rewrite an email in AI Voice really going to be? If I'm struggling over how to word an email there's usually a specific reason for it; maybe I'm trying to word things for a very particular audience or trying to find a concise way to cover something complicated that I have a lot of knowledge of. General purpose language model output wouldn't help at all in those cases.

I'm sure there are usecases for this and the other GenAI features, but they seem more like mildly useful novelties than anything revolutionary.

There's risk to this as well. Making it easier to produce low value slop will probably lead to more of it and could actually make communication worse overall.

reply
markus_zhang
1 month ago
[-]
TBF I was too harsh in my original comment. I did use ChatGPT to automate away the chore part of the coding (boiler plate for example). But I have a gut feeling that in maybe 5-10 years this is going to replace some junior programmer's job.

My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

reply
nomel
1 month ago
[-]
> My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.

The first company to offer their models for offline use, preferably delivered in shipping container you plug in, with the ability to "fine tune" (or whatever tech) with all their internal stuffs, wins the money of everyone that has security/confidentiality requirements.

reply
kolinko
1 month ago
[-]
Unless the company handles national security, the existing cloud tos and infrastructure fulfill all the legal and practical requirements. Even banks and hospitals use cloud now.
reply
nomel
1 month ago
[-]
The context here is running third party LLM, not running arbitrary things in the cloud.

> the existing cloud tos and infrastructure fulfill all the legal and practical requirements

No, because the practical requirements are set by the users, not the TOS. Some companies, for the practical purposes of confidentiality and security, DO NOT want their information on third party servers [1].

Top third party LLM are usually behind an API, with things like retention, in those third party servers, for content policy/legal reasons. On premise, while being able to maintain content policy/legal retention on premise, for any needed retrospection (say after some violation threshold), will allow a bunch of $$$ to use their services.

[1] Companies That Have Banned ChatGPT: https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/

edit: whelp, or go this route, and treat the cloud as completely hostile (which it should be, of course): https://news.ycombinator.com/item?id=40639606

reply
worldsayshi
1 month ago
[-]
If it can automate a junior away it seems as likely it will just make that junior more capable.

Somebody still needs to make those decisions that it can't make well. And some of those decisions doesn't require seniority.

reply
jonathankoren
1 month ago
[-]
That’s not what happens.

What happens is if you don’t need junior people, you eliminate the junior people, and just leave the senior people. The senior people then age out, and now you have no senior people either, because you eliminated all the junior people that would normally replace them.

This is exactly what has happened in traditional manufacturing.

reply
TexanFeller
1 month ago
[-]
> this could be the moment that Google realises it needs to stick to what it knows best - Search

In my mind Google is now a second class search like Bing. Kagi has savagely pwned Google.

reply
notpachet
1 month ago
[-]
> this could be the moment that Google realises it needs to stick to what it knows best - Search

You misspelled "ads"

reply
triyambakam
1 month ago
[-]
> AI is supposed to get rid of the chores, now it leaves us with the chores and take the creative part away. I don't need such AI.

You know I hadn't considered that and I think that's very insightful. Thank you

reply
matt-attack
1 month ago
[-]
This quote has been circulating recently:

> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes

reply
Dakizhu
1 month ago
[-]
Seems kind of silly. Laundry machines and dishwashers exist. The issue with the last mile is more robotics and control engineering than strictly AI. It's getting annoying seeing AI used as an umbrella term for everything related to automation.
reply
mensetmanusman
1 month ago
[-]
That’s not possible yet, moving atoms is much more difficult than moving bits.
reply
__loam
1 month ago
[-]
I feel like you've hit this industry in the nose without realizing it. How much actual value is the tech industry producing?
reply
mensetmanusman
1 month ago
[-]
Absolutely, doing easier things at massive scale can still have more value than harder things at small scale.

I actually think exactly what should happen is already happening. All the low hanging fruit from software should be completed over the next decades, then the industry will undergo massive consolidation and there will be a large number of skilled people not needed in the industry anymore and they can move onto the next set of low hanging hard(er) problems for humanity.

reply
tavavex
1 month ago
[-]
Given that almost every major thing in the world runs on some kinds of computers running some software - a lot, probably. The fact that we don't have perfect and infallible robots and universal ways to manipulate the unpredictable, chaotic environment that is the real world (see also - self-driving cars) simply doesn't affect a lot of industries.
reply
lioeters
1 month ago
[-]
Bits will move bots, and hopefully do laundry and dishes too.
reply
kolinko
1 month ago
[-]
As for government - depends on a country. In Poland we have an mCitizen (mObywatel) mobile app that allows to handle more things tear by year, and we have internet sites with unified citizen login for most of the other government interactions.

The last time our IRS wanted sth from me, they just e-mailed me, I replied and the issue was solved in 5 minutes.

Oh, and you don’t need any paper ids within the country - driver license, car registration and official citizen id are apps on your phone, and if you don’t have your phone when say police catches you, you give them your data and they check it with their database and with your photo to confirm.

reply
tavavex
1 month ago
[-]
Sounds similar to what the Ukrainian government did with the Diya app (lit. "Act/Action" but also an abbreviation of "the country and me") a few years ago. It's an interesting trend to see Eastern Europe adopt this deep level of internet integration before the countries that pioneered that same internet.
reply
nox101
1 month ago
[-]
> The last time our IRS wanted sth from me, they just e-mailed me, I replied and the issue was solved in 5 minutes.

Lol, that will never happen in the USA. We have companies like Intuit actively lobbying against making things easy because their entire business is claiming to deal with the complexity for you.

reply
kolinko
1 month ago
[-]
Yeah. Another cool thing is that we have government tax forms that are web/mobile, and it takes like 2 min and 8 clicks to fill them - including login. (for private people)
reply
gambiting
1 month ago
[-]
You don't have to walk to the local government office to get car registration plates anymore? That was always annoying as hell.
reply
kolinko
1 month ago
[-]
That’s one of the remaining bastions, sadly. The pickup I can understand, but the need for signup is weird.

On the upside, they are removing the requirements to change plates when you buy a used car, so there’s that.

reply
whizzter
1 month ago
[-]
In Sweden doctors have a fair bit of automation/systems around them, the sad part is that much of it has been co-opted for more stringent records keeping,etc that's just making doctors unhappy and ballooning administration costs instead of focusing on bringing better care for patients.

In essense, we've saved 50 lives a year by avoiding certain mistakes with better record keeping and killed 5000 since the medical queues are too long due to busy doctors so people don't bother getting help in time.

reply
mihaaly
1 month ago
[-]
I have a faint to noticable but persistent back pain. It should be checked out but I do not want to cause bigger pain and mental strain than caused by the back pain by talking to 3-4 persons sending me around and putting me in phone queues weeks apart just to see a doctor sometime in the future - with my embarrassingly low priority issue - making mountains of paperworks bored having too little time to diagnose me (that have the risk of leading to even bigger pile of paperwork). It's a different country, life is all the same.
reply
TheKarateKid
1 month ago
[-]
I completely agree, especially with the taking away the creative part and leaving us with the chores.

Doctors have exams, residencies, and limited licenses to give out to protect their industry. Meanwhile, tech companies will give an engineering job to someone who took a 4 month bootcamp.

reply
runeb
1 month ago
[-]
I share your frustration on services that won’t let you automate them, but to me that’s precisely what generative AI will let you do. You don’t need an API at the family doctors to have AI automate it for you. It just rings them up and sorts it out at your command. AI is like obtaining an API to anything
reply
acchow
1 month ago
[-]
AI is skipping software integrations the same way cell phone towers (and Starlink) skipped phone wire deployment.
reply
iLoveOncall
1 month ago
[-]
> Do you know that my family doctor ONLY take calls?

And despite that it's still your family doctor.

I fully agree with your vision. It's obvious once laid out in words and it was a very insightful comment. But the incentives are not there for other industries to automate themselves.

reply
hot_gril
1 month ago
[-]
I like a family doctor who only takes calls. Good doctors are responsive or have responsive staff. One time a doctor was locked into booking and communicating via this One Medical app that's a total piece of shit and just made things harder, so I went elsewhere. If someone makes a truly better solution, AI or not, doctors will use it without being forced.

And government offices don't even care to begin with, you have no other choice.

reply
dionian
1 month ago
[-]
> I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that.

If someone can do that more productively with Gen AI, do you care?

reply
xnx
1 month ago
[-]
> Do you know that my family doctor ONLY take calls? ... Do you know it takes hours to reach a government office, and they work maybe 6 hours a day?

Google has a few different features to handle making calls on your behalf and navigating phone menus and holds.

reply
prepend
1 month ago
[-]
I’ve had some success with google assistant calling restaurants to make reservations, when they are phone only. I expect it’s a matter of time until they can camp on my doctors office. Or call my insurance and pretend to be me.
reply
sigmoid10
1 month ago
[-]
>some success with google assistant calling

The funny thing is, these auto-callers don't even need to be successful. They just need to become common enough for restaurants and doctors to get annoyed to the point where they finally bring their processes to the 21st century.

reply
heywire
1 month ago
[-]
I know this wasn’t really your point, but most physicians around me use Epic MyChart, so I can book all that online. I also almost exclusively use email to communicate with our school district, and we’re in a small town.
reply
brundolf
1 month ago
[-]
Social problems are the hard ones, information problems are the easy ones. So the latter are the low-hanging fruit that gets solved first
reply
preezer
1 month ago
[-]
Ohhhh yes. That's why I was so hyped about Google Duplex or duo?! Never heard of it again....
reply
themacguffinman
1 month ago
[-]
It's available today, it's just not a product called "Duplex". Android has call screening and "hold my call" and phone menu tree detection. On select Google Maps listings, you can make reservations by clicking a button which will make a phone call in the background to make a reservation.
reply
pms
1 month ago
[-]
Great points!

The only thing I'd add: I don't think the responsibility for lack of automation is solely on these other industries. To develop this kind of automation, they need funds and IT experts, but (i) they don't have funds, especially in the US, since they aren't as well funded as IT industry, (ii) for the IT industry this kind of automation is boring, they prefer working on AI.

In my view, the overall issue is that capitalism is prone to herding and hype, and resulting suboptimal collective decision-making.

reply
segmondy
1 month ago
[-]
The world has never cared about what you want. Your life has always revolved around the world. Don't like it, you vs the world. Beat it if you can.
reply
markus_zhang
1 month ago
[-]
I agree. It's just some rant. Whatever, better bury it under the other comments...
reply
dombili
1 month ago
[-]
None of these features seem to be coming to Vision Pro, which I think is quite baffling. Arguably it's the device that can use them the most.
reply
ducadveritatem
1 month ago
[-]
Word on the street (someone who was talking to Apple employees at WWDC) is that the Vision Pro doesn’t have enough headroom on the processor for it. It’s driving that sucker really hard just doing its “regular” thing.
reply
atlex2
1 month ago
[-]
baffling indeed- seems like they should be over-investing in AVP right now, not under-investing
reply
rdl
1 month ago
[-]
I'm super excited about how the apple private compute cloud stuff works -- I tried to build this using intel TXT (predecessor of SGX) and then SGX, and Intel has fucked up so hard and for so long that I'm excited by any new silicon for this. AWS Nitro is really the direct competition, but having good APIs which let app developers do stuff on-device and in some trustworthy/private cloud in a fairly seamless way might be the key innovation here.
reply
xnx
1 month ago
[-]
Credit where credit is due for co-opting the components of the "AI" acronym.
reply
latexr
1 month ago
[-]
Agreed. Got to hand it to them that marketing was sharp on the name. Unless, of course, it doesn’t really work as advertised and then every “AI <negative>” search specifically bubbles Apple stories to the top.
reply
PodgieTar
1 month ago
[-]
It is funny to think about how many Apps have probably built text-generation into their product, just to get it enabled on Apple Devices for free.
reply
visarga
1 month ago
[-]
It's either "Apple Intelligence" or "Generative Intelligence", not "Artificial Intelligence" and "Generative Models"... so silly to brand common ideas with a small twist.

Basically all your information is sucked into a semantic system, and your apps are accessible to a LLM. All with closed models and trusted auditors.

Also funny how they pretend it's a great breakthrough when Siri was stupid-Siri for so many years and only now is lately coming to the AI party.

I really hope those gen-images won't be used to ridicule and bully other people. I think it's kind of daring to use images of known people without their consent, relying on the idea that you know them.

And it's dawning on me that we are already neck-deep in AI. It's flowing through every app and private information. They obliterate any privacy in this system, for the model.

reply
stnmtn
1 month ago
[-]
Would you rather they have jumped onto AI early and LLM-ified Siri years ago?
reply
visarga
1 month ago
[-]
It's late but better late than...
reply
milansuk
1 month ago
[-]
This looks cool for v1! The only problem I see is most devices don't have much RAM, so local models are small and most requests will go to the servers.

Apple could use it to sell more devices - every new generation can have more RAM = more privacy. People will have real reason to buy a new phone more often.

reply
MVissers
1 month ago
[-]
Apple is starting to anticipate a higher RAM need in their M4+ silicon chips: There are rumors they are including more ram than specified in their entry level computers.

https://forums.macrumors.com/threads/do-m4-ipad-pros-with-8g...

One reason could be future AI models.

I'm not sure if this has been verified independently, but interesting nonetheless and would make sense in an AI era.

reply
whoiscroberts
1 month ago
[-]
For anyone who is technical and wants to play with AI but doesn’t want to use cloud services it’s worth digging into LangChain, CrewAI, OpenDevin. Coupled with Ollama to serve the inference from your local network. You can scratch the AI itch without getting in bed with OpenAI.
reply
losvedir
1 month ago
[-]
> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.

Technically, the sentence could be read that experts inspect the code, and the client uses TLS and CA's to ensure it's only talking to those Apple servers. But that's pretty much the status quo and uninteresting.

It sounds like they're trying to say that somehow iPhone ensures that it's only talking to a server that's running audited code? That would be absolutely incredible (for more things than just running LLMs), but I can't really imagine how it would be implemented.

reply
Hizonner
1 month ago
[-]
> I can't really imagine how it would be implemented.

People do stuff that they claim implements it using trusted, "tamperproof" hardware.

What they're ignoring is that not all of the assurance is "cryptographic". Some of it comes from trusting that hardware. It's particularly annoying for that to get glossed over by a company that proposes to make the hardware.

You can also do it on a small scale using what the crypto types call "secure multiparty computation", but that has enormous performance limitations that would make it useless for any meaningful machine learning.

reply
warkdarrior
1 month ago
[-]
There is no known solution to remote software attestation that does not depend on trusted hardware.
reply
Hizonner
1 month ago
[-]
That's correct. But Apple is not making that clear, and is therefore misrepresenting what assurance can be offered.
reply
ducadveritatem
1 month ago
[-]
reply
shepherdjerred
1 month ago
[-]
I'm so happy about this. Siri has great voice recognition and voice synthesis, but it really struggled with intent, context, and understanding what I wanted it to do.

Combining the existing aspects of Siri with an LLM will, I expect, make it the best voice assistant available.

reply
newhaus1994
1 month ago
[-]
the natural language tasking for actions between apps is the first thing that's made me excited about anything related to the latest AI craze. if apple can keep it actually private/secure, I'm looking forward to this.
reply
dwighttk
1 month ago
[-]
If I can just get Siri to control music in my car I will be happy.

Hey siri play classical work X a randomly selected version starts playing

Hey siri play a different version same version keeps playing

Hey siri play song X some random song that might have one similar keyword in the lyrics starts playing

No play song X I don’t understand

Hey siri play the rangers game do you mean hockey or baseball?

Only one is playing today and I’ve favorited the baseball team and you always ask me this and I always answer baseball I can’t play that audio anyway

>car crashes off of bridge

(All sequences shortened by ~5 fewer tries at different wordings to get Siri to do what I want)

reply
blixt
1 month ago
[-]
Did I miss the explanation of how they trained their image generation models? It's brave of a company serving creative professionals to generate creative works with AI. I'm a fan of using generative AI, but I would have expected them to at least say a little about what they trained on to make their diffusion models capable of generating these images.

Other than that, using an LLM to handle cross-app functionality is music to my ears. That said, it's similar to what was originally promised with Siri etc. initially. I do believe this technology can do it good enough to be actually useful though.

reply
glial
1 month ago
[-]
I thought it was interesting that the only image generation they support are sketches (that look like a photoshop styling) and goofy 3d cartoons -- not really competition with most creatives.
reply
menacingly
1 month ago
[-]
The privacy conversation was pretty shady, and honestly full of technical holes with pointless misleading distractions
reply
toddmorey
1 month ago
[-]
I thought privacy was really well handled for a high-level overview. Basically it seems like anything thing it can't do on device uses ephemeral private compute with no stored data.

Any data sent to 3rd party AI models requests your consent first.

The details will need to emerge on how they live up to this vision, but I think it's the best AI privacy model so far. I only wish they'd go further and release the Apple Intelligence models as open source.

reply
menacingly
1 month ago
[-]
if the servers are so private, why is on-device such a win? here are some irrelevant distractions:

- the cpu arch of the servers

- mentioning that you have to trust vendors not to keep your data, then announcing a cloud architecture where you have to trust them not to keep your data

- pushing the verifiability of the phone image, when all we ever cared about was what they sent to servers

- only "relevant" data is sent, which over time is everything, and since they never give anyone fine-grained control over anything, the llm will quietly determine what's relevant

- the mention that the data is encrypted, which of course it isn't, since they couldn't inference. They mean in flight, which hopefully _everything_ is, so it's irrelevant

reply
menacingly
1 month ago
[-]
it will defer to the server a _lot_, if you just consider the capability they can fit on that phone
reply
uoaei
1 month ago
[-]
Considering they spent the first half of that segment throwing shade at people who claim privacy guarantees without any way to verify them, Apple hopefully will provide a very robust verification process.
reply
solarkraft
1 month ago
[-]
Like they've done in the past, huh?

They talk about "independent experts" a bit, which I remember being hindered (and sued?) by them rather than supported.

reply
uoaei
1 month ago
[-]
Yep initiatives like these have a nasty habit of being underfunded and deprioritized.
reply
ducadveritatem
1 month ago
[-]
You can only go into so much detail in that format for that audience. I was happy that they simultaneously posted a much more technical deep dive: https://security.apple.com/blog/private-cloud-compute/
reply
alberth
1 month ago
[-]
Regarding OpenAI, has Apple in its history ever relied so heavily on a 3rd party for software features?

(TSMC for hardware, but it seems very un-Apple to be so dependent upon someone else for software capabilities like OpenAI)

reply
OkGoDoIt
1 month ago
[-]
Google Maps in the early days of iOS?

Anyway it seems like a small subset of Siri queries utilize ChatGPT, the vast majority of functionality is performed either locally or with Apple's cloud apparently.

They were also pretty explicit about planning to support other backend AI providers in the future.

reply
gfosco
1 month ago
[-]
The OpenAI reference came at the end, and it appears it's mostly a fallback... an option, that users must explicitly allow every time. Hardly a dependency. Most of the time, it will be on-device or apple-hosted in "private compute cloud", not connected to OpenAI at all.
reply
martimarkov
1 month ago
[-]
They did it with Google Maps and YouTube. They also do this with the search engine used in Safari.

I believe they will just provide an interface in the future to plugin as a Backend AI provider to trusted parties (like the search engine) but will slowly build their own ChatGPT for more and more stuff.

reply
insane_dreamer
1 month ago
[-]
As with Google Maps, my guess is that they will only rely on it long enough to get their own LLM offer up to parity, at which point it might still be there as an option but there will be very little need for users to activate it.

Also, it seems that most of Siri's improved features will still work without it (though perhaps less well in same cases) -- and therefore Apple is not fully dependent on it.

reply
whywhywhywhy
1 month ago
[-]
OG iPhone had Google as Maps provider and YouTube both within Apple shells and the branding downplayed in Maps case

That’s the only case I can think of where it’s an external tech you’re making requests to, usually it’s things like Rosetta made out of Apple IIRC but integrated internally

reply
latexr
1 month ago
[-]
> Rosetta is made out of Apple IIRC but integrated internally

Don’t think that’s right. I think Rosetta was always made inside Apple.

https://en.wikipedia.org/wiki/Rosetta_(software)

Perhaps mixing it up because of Rosetta Stone?

https://en.wikipedia.org/wiki/Rosetta_Stone_(software)

reply
kevbin
1 month ago
[-]
> Transitive is providing the engine used in Apple's Rosetta software, which translates software for its current machines using PowerPC processors so it can run on forthcoming Intel-based Macintoshes. "We've had a long-term relationship with them," Transitive Chief Executive Bob Wiederhold said Tuesday.

https://www.cnet.com/tech/services-and-software/the-brains-b...

reply
latexr
1 month ago
[-]
Thank you for the correction. For those wanting to know more, the technology was called QuickTransit.

https://en.wikipedia.org/wiki/QuickTransit

Rosetta 2 may have been developed in-house, though. That bit isn’t yet clear.

reply
latexr
1 month ago
[-]
That might be smarter than we initially give it credit. By leaving the “safer” (read: harder to get wrong) things to their own models and then the more “creative” stuff to an explicit external model, they can shift blame: “Hey, we didn’t made up that information, we explicitly said that was ChatGPT”. I don’t think they’ll say it outright like that. Because they won’t have to.
reply
dialup_sounds
1 month ago
[-]
Maybe I missed something but it doesn't sound like OpenAI is powering any of this except the optional integrations.
reply
kristjansson
1 month ago
[-]
Seems like the OpenAI integration is a nice-to-have, but mostly separate from Super-Siri?
reply
tavavex
1 month ago
[-]
OpenAI seems like the last-chance fallback - in which case, they've already done the exact same thing with Siri trying to Google search your request if it couldn't do anything with it.
reply
pat2man
1 month ago
[-]
Google Maps, YouTube, on the original iPhone?
reply
alberth
1 month ago
[-]
But those were standalone apps.

This AI capability is integrated throughout the entire OS and Apps.

It's now part of the "fabric" of iOS.

reply
Tagbert
1 month ago
[-]
Only in response to some classes of requests. They didn’t go into detail about when but they said that the local Siri LLM would evaluate the request and decide if it could be services locally, in their private cloud AI, or would need to use OpenAI. Then it would pop up a requesting asking if you want to send the request to OpenAI. It doesn’t look like that would a particularly common occurrence. Seems like it would be needed for “answerbot” type of requests where live web data is being requested.
reply
mholm
1 month ago
[-]
The majority of this is local AI with nothing to do with openAI. Only particularly complex requests go to them
reply
paradite
1 month ago
[-]
Does Microsoft Office in early days of Mac OS count? I guess not.
reply
TMWNN
1 month ago
[-]
I don't see why it would not count. Same for Adobe products.
reply
philwelch
1 month ago
[-]
Microsoft Office was released five years after the Macintosh; what are you talking about?
reply
wtallis
1 month ago
[-]
The applications that were later bundled into Office were on the Mac pretty early: 1985 for Word and Excel, and the first PowerPoint in 1987 was Mac-only.
reply
philwelch
1 month ago
[-]
Fair, though the very first Macs came with MacWrite preinstalled.
reply
Almondsetat
1 month ago
[-]
iCloud uses Google Cloud
reply
LogHouse
1 month ago
[-]
I wondered the same, but frankly, what other options are there?
reply
hackerlight
1 month ago
[-]
It's going to be easy to substitute in their own LLM behind the API in the future. None of the branding or platform is controlled by OpenAI.
reply
xrisk
1 month ago
[-]
It seems that the Apple intelligence stuff will be 15 Pro. Man, I just bought a 15 ~8 months ago. That really sucks.
reply
theswifter01
1 month ago
[-]
For real, I’m sure a fair amount of previous processors are able to handle it fine, just a reason for ppl to buy the next phone
reply
rasengan
1 month ago
[-]
There has never been a better time to move to Linux. Have you tried Omakub? Manjaro? Mint? Ubuntu 24? These are polished and complete alternatives and your favorite app probably has a Linux build already!
reply
jawngee
1 month ago
[-]
| These are polished and complete alternatives

Are they though?

I just setup ubuntu 24 for my son to play games and it's comparatively a very unpolished experience. I'm being very polite when I say that.

reply
jwells89
1 month ago
[-]
Even as someone who keeps a laptop booted into Linux most of the time, yes there are bumps and rough edges that will be encountered once venturing off the most common path of “internet, video, and word processor box”. It’s much better than it once was but it still has problems and the way that fervent advocates try to sweep them under the rug doesn’t help the situation.
reply
anonbanker
1 month ago
[-]
Ubuntu, sadly, is not a good experience for a multitude of reasons outside of Canonical's control, including codec and software licensing restrictions.

Gamers should absolutely be heading towards Nobara Linux (Fedora-based, created by GloriousEggroll of Proton-GE fame). Developers should be trying Omakub. Grandma and Grandpa should be using Linux Mint.

reply
rasengan
1 month ago
[-]
That's interesting - it was Ubuntu 24 that made me feel confident the first time to recommend to non-computer enthusiasts. What about Ubuntu 24 came off unpolished to you, if you don't mind me asking?
reply
anonbanker
1 month ago
[-]
Trying Omakub in the next 48-72 hours. I can't wait. It looks like the curated experience I've been looking for.
reply
machinekob
1 month ago
[-]
Microsoft Recall => bad. Apple Recall => good.
reply
fh9302
1 month ago
[-]
Apple does not take screenshots every couple seconds, unlike Microsoft. That's what people were bothered about.
reply
anonbanker
1 month ago
[-]
That was merely one aspect of what people were bothered about. The most obvious one.
reply
samatman
1 month ago
[-]
Two companies who have earned very different reputations over the decades, will elicit rather different reactions when announcing similar features, yes.

I also missed the part of the linked article where it says that my Mac is going to take a screenshot every few seconds and store it for three months.

reply
pbronez
1 month ago
[-]
Yup, this is the fascinating thing to me. Looking forward to some detailed comparisons between the two architectures.
reply
minimaxir
1 month ago
[-]
The massive difference here is that Apple Recall is 100% on device. (for the use cases they demoed anyways)

EDIT: Yes, I'm wrong.

reply
Foe
1 month ago
[-]
Isn't Microsoft Recall also 100% on device?
reply
sseagull
1 month ago
[-]
Microsoft Recall is completely on-device (or so they say).
reply
skydhash
1 month ago
[-]
It's mostly the screenshots things that get people. Semantic search is ok if the index is properly secured and privacy is a concern. And localized context is ok too (summarizing one web page does not screenshot my whole screen). I believe Microsoft has gone with building the easiest option (recording everything) instead of thinking about better contextual integration.
reply
anonbanker
1 month ago
[-]
Those are pretty big If's when you have a webkit or blink-based browser on the same device.
reply
r0m4n0
1 month ago
[-]
Big partnership for OpenAI. Incredible Apple decided to integrate with a third party like this directly into the OS. This feels like something Apple could have executed well by themselves. I was hoping they weren't going to outsource but I suppose the rumors while they were shopping around were true.

I think this further confirms that they think these AI services are a commodity that they don't feel a need to compete with for the time being.

reply
avtar
1 month ago
[-]
> This feels like something Apple could have executed well by themselves. I was hoping they weren't going to outsource

Who is to say they aren't eventually going to replace the OpenAI integration with an in-house solution later down the line? Apple Maps was released in 2012, before that they relied on Google Maps.

reply
r0m4n0
1 month ago
[-]
My bet is on an trial/acquisition if it works out. I guess that could be complicated with the current ownership structure
reply
Tagbert
1 month ago
[-]
They seem to have kept the OpenAI integration to a minimum, only using it for requests that need large scale processing or for web trivia type of requests.
reply
Jtsummers
1 month ago
[-]
And apparently via Siri, not as part of their other integrations. So you ask something, Siri suggests ChatGPT, you agree to send the prompt. It's not built into the other ML related capabilities.
reply
Tomte
1 month ago
[-]
No transcripts in Voice Memos? The one feature I was surprised hasn‘t already been there for years, heavily rumored before this WWDC, and now nothing?
reply
andrewmunsell
1 month ago
[-]
From MacRumors:

> Notes can record and transcribe audio. When your recording is finished, Apple Intelligence automatically generates a summary. Recording and summaries coming to phone calls too.

So the functionality exists, maybe just not in the Voice Memos app?

reply
Tomte
1 month ago
[-]
That would be great, the three or so articles I read said nothing about it. Thanks!
reply
nsxwolf
1 month ago
[-]
"AI for the rest of us" is an interesting resurrection of the "The computer for the rest of us" Macintosh slogan from 1984.
reply
a_petrov
1 month ago
[-]
What would be interesting for me is, if I can develop an app for, let's say macOS, and expose its context to Siri (with Intelligence) in an easy way.

For example:

Imagine a simple Amazon price tracker I have in my menu bar. I pick 5 products that I want to have their price tracked. I want that info to be exposed to Siri too. And then I can simply ask Siri a tricky question: "Hey Siri, check Amazon tracker app, and tell me if it's a good moment to buy that coffee machine." I'd even expect Siri to get me that data from my app and be able to send it over my email. It doesn't sound like rocket science.

In the end of the day, the average user doesn't like writing with a chatbot. The average user doesn't really like reading (as it could be overwhelming). But the average user could potentially like an assistant that offloads some basic tasks that are not mission critical.

By mission critical I mean asking the next best AI assistant to buy you a plane ticket.

reply
shuckles
1 month ago
[-]
In theory this is covered by the App Entities framework, though it seems like Apple only trains their on-device models to learn about entities for some standard types of apps. https://developer.apple.com/documentation/appintents/app-ent...
reply
duskhorizon2
1 month ago
[-]
Some generative AI features are quite useful. I’m already using AI to generate icons for my apps and write nonsense legalese. But one thing when I explicitly creating image by prompting at the third-party server, and another when AI index and upload all my private documents in the cloud. Apple promised: “Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.” There are so many questions: Who’re these experts? Can myself be this expert? Will the server software be open sourced? Well, I will postpone my fears until Apple rolls out AI on devices, but now I see this is a privacy nightmare. Now it’s all looks like Microsoft’s Recall. I afraid that without homogeneous encryption private cloud is a sad joke.
reply
JumpCrisscross
1 month ago
[-]
> write nonsense legalese

Oh boy. Someone is going to make a lot of money in court finding people who did this.

reply
duskhorizon2
1 month ago
[-]
Nope. I'm not in USA ;)
reply
JumpCrisscross
1 month ago
[-]
> not in USA

If you’re somewhere where contracts have meaning, it’s a true statement.

reply
duskhorizon2
1 month ago
[-]
Well, if contracts have meaning, I will not use AI. But AppStore for example requires privacy policy, that one AI wrote.
reply
hartator
1 month ago
[-]
It’s a little messy.

Local LLMs + Apple Private Cloud LLMs + OpenAI LLMs. It’s like they can’t decide on one solution. Feels very not Apple.

reply
lz400
1 month ago
[-]
I suppose it was to be expected by IMHO this takes the wind out of the sails of the OpenAI / Apple deal. In the end they don't let OpenAI get into the internals of iOS / Siri, it's just a run of the mill integration. They actually are competing with ChatGPT and I assume eventually they expect to replace it and cancel the integration.

The OpenAI integration also seems setup to data mine ChatGPT. They will have data that says Customer X requested question Q and got answer A from Siri, which he didn't like and went to ChatGPT instead, and got answer B, which he liked. Ok, there's a training set.

I'm always wrong in prediction and will be wrong here but I'd expect openAI is a bad spot long term, doesn't look like they have a product strong enough to withstand the platform builders really going in AI. Once Siri works well, you will never open ChatGPT again.

reply
Aerbil313
1 month ago
[-]
About time. I was saying that Apple is cooking these features, especially intelligent Siri, for the past 1.5 years. It was obvious really.

You can clearly see only people objecting to this new technological integration are the people who don't have a use case for it yet. I am a college student and I can immediately see how me and my friends will be using these features. All of us have ChatGPT installed and subscribed already. We need to write professionally to our professors in e-mail. A big task is to locate a document sent over various communication channels.

Now is the time you'll see people speaking to their devices on street. As an early adopter using the dumb Siri and ChatGPT voice chat far more than average person, it has always been weird to speak to your phone in public. Surely the normalization will follow the general availability soon after.

reply
mihaaly
1 month ago
[-]
I can't wait until making tools for users will be the centerpiece of device development again instead of this corporate crap enforcement about half cooked whatevers acting on our behalf pretending to be a different us (I intentionally avoid the word intelligence, it is the mockery of the word that is going on all around).

Who will trust in anything coming from anyone through electonic channels? Not me. Sooner start to talk to a teddy bear or a yellow rubber duck.

This is a bad and dangerous tendency that corporate biggheads piss up with glares and fanfares so the crowd get willing to drink with amaze.

The whole text is full of corporate bullsh*t, hollow and cloudy stock phrases from a thick pipe - instead of facts or data - a generative cloud computing server room could pour at us without a shread of thoughts.

reply
TIPSIO
1 month ago
[-]
It was a very quick mention, but Siri will now have a text button directly on the lock screen.

If we assume AI will get even 3-4x better, at a certain point, I can't help but think this is the future of computing.

Most users on mobile won't even need to open other apps.

We really are headed for agents doing mostly everything for us.

reply
singularity2001
1 month ago
[-]
Except the Intent API is completely crippled. Maybe the next big OS will just let the AI parse existing menus and figure out all the potential actions an app can take. Some actions need complex objects, so we need a new general mechanism for AIs to connect to 'exported functions'.

Some general OS rethinking is overdue. Or maybe Android is ready for this? Haven't looked into it since they made development impossible via gradle.

Despite this negativity the announcements were better than expected, rebranding AI is bold and funny. But the future will belong to general Agents, not a hardcoded one as presented.

reply
solarkraft
1 month ago
[-]
Android theoretically has a pretty rich intent API, but like anything on Android adoption is a big meh.
reply
denimnerd42
1 month ago
[-]
Siri already has an optional text button on the lockscreen. They changed the shortcut though. For me in ios17 it's a long press on the side button.
reply
tr3ntg
1 month ago
[-]
And with ChatGPT's direct integration into Siri, ChatGPT will be available to anyone using iOS for free, without an account. Interesting.
reply
resfirestar
1 month ago
[-]
The Image Playground demos contrast pretty strongly, in a bad way, with how image generation startups like Stability typically emphasize scifi landscapes and macro images in their marketing material. We're more open to strange color palettes and overly glossy looking surfaces in those types of images, so they're a good fit for current models that can run on smaller GPUs. Apple's examples of real people and places, on the other hand, look like they're deep in uncanny valley and I'm shocked anyone wanted them in a press release. More than any other feature announced today, that felt like they just got on the hype bandwagon and shipped image generation features to please AI-hungry investors, not create anything real people want to use.
reply
minimaxir
1 month ago
[-]
The name and attempted reappropriation of the term "AI" is going to make SEO a pain in the ass.
reply
dylan604
1 month ago
[-]
good. SEO should die in a dumpster fire. in fact, i would love to create a genmoji of that very thing
reply
kaba0
1 month ago
[-]
I think this will be a complete game changer. This will be the first device (with reasonable capabilities) where the human-machine interaction can transform into an actual instruction/command only one. We no longer just choose between a set of predefined routes, but ask the machine to choose for us. If it really does work out even half as good as they show, this will fundamentally alter our connection to tech, basically having created “intelligence” known from our previous century sci-fi tales.

Of course LLMs will quickly show their bounds, like they can’t reason, etc - but for the everyday commands people might ask their phones this probably won’t matter much. The next generation will have a very different stance towards tech than we do.

reply
GeekyBear
1 month ago
[-]
One thing that I found thoughtful was that images could only be generated as cartoons, sketches or animations. There was no option for a more photorealistic style.

That seems like an effective guardrail if you don't want people trying to pass off AI generated images as real.

reply
zmmmmm
1 month ago
[-]
It's interesting to see Apple essentially throw in the towel on on-device compute. I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local. Very curious if they are seeing this as a temporary concession or if they are making a fundamental strategic shift here.

The problem is, regardless how hard they try, I just don't believe their statements on their private AI cloud. Primarily because it's not under their control. If governments or courts want that data they are a stroke of the pen away from getting it. Apple just can't change that - which is why it is surprising for me to see them give up on local device computing.

reply
xcv123
1 month ago
[-]
Why would you expect that? State of the art LLMs need a GPU with hundreds of GB of RAM costing tens of thousands of dollars. Apple doesn't have magical supernatural powers. They are still bound by the laws of physics. Siri is still running on device (mostly) and is quite powerful with this new update.
reply
zmmmmm
1 month ago
[-]
in the past Apple has made the choice to gimp their functionality before sending data off-device - one of the reasons Siri has sucked so badly. This seems like a distinct change here, finally conceding that they just can't do it on device and be competitive. But I forsee they now have a much more challenging story to tell from a marketing point of view, now that they can no longer clearly and simply tell people information doesn't leave their device.
reply
xcv123
1 month ago
[-]
I've been using Siri more often recently and surprised at how capable it is for something that runs entirely on a phone. The speech recognition is perfect and it can do basic tasks quite well. Send messages, lookup word definitions, set timers and alarms, check the weather, adjust timers/alarms, control Spotify, call people, adjust the brightness and sound level, control lighting in the lounge, create and read notes or reminders, etc. It all works.
reply
password54321
1 month ago
[-]
"There is no way we can reduce the size of transistors" - You in the 20th century.
reply
xcv123
1 month ago
[-]
Apple uses TSMC for fabrication. The roadmap for TSMC and Intel are planned years in advance.

Two orders of magnitude improvement in 6 months? Not possible. Have you heard of Moore's Law? Maybe in 20 years.

https://en.wikipedia.org/wiki/Moore%27s_law

reply
password54321
1 month ago
[-]
- Doesn't read article

- Doesn't understand not every LLM needs to be ChatGPT

- Links Moores Law wikipedia

I give up.

reply
xcv123
1 month ago
[-]
Try reading the thread before mindlessly replying.

"I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local."

State of the art LLM means GPT-4 or equivalent. Trillion+ parameters. You won't run that locally on an iPhone any time soon.

reply
password54321
1 month ago
[-]
"A cornerstone of Apple Intelligence is on-device processing, and many of the models that power it run entirely on device."
reply
seabass
1 month ago
[-]
Adding ai features to the right-click menu is something I’ve been working on for the past year or so, and it’s always both exciting and disappointing to see one of the big players adopt a similar feature natively. I do strongly believe in the context menu being a far better ux than copying and pasting content into ChatGPT, but this release does have me questioning how much more effort to expend on my side project [1]. It doesn’t seem like Apple will support custom commands, history, RAG, and other features, so perhaps there is still space for a power-user version of what they will provide.

[1] https://smudge.ai

reply
jbkkd
1 month ago
[-]
Love your extension! There's definitely room for it
reply
breadwinner
1 month ago
[-]
The worst part of Apple Intelligence is that it will now be a layer in between you and your friends & family. Every message will now be "cleaned up" by Apple Intelligence so you are not directly talking with your mom, best friend etc.
reply
ducadveritatem
1 month ago
[-]
I actually liked that they didn’t show any of the AI writing capabilities being used in iMessage but rather in email client for more professional contexts. I’m really curious to see if they make it available in iMessage…
reply
sc077y
1 month ago
[-]
Impressive not technically because nothing here is new but because it's the first real implementation for the average end consumers of "ai". You have semantic indexing which allows series to basically retrieve context for any query. You have image gen which gives you emojigen or messaging using genAI images. TextGen within emails. UX is world class as usual.

However, The GPT integration feels forced and even dare I say unnecessary. My guess is that they really are interested in the 4o voice model, and they're expecting openAI to remain the front runner in the ai race.

reply
Hippocrates
1 month ago
[-]
The OpenAI/ChatGPT part of this looks pretty useless. Similar to what some shortcuts like “hey data” already do. I was shocked, and relieved that Apple isn't relying on their APIs more. Seems like a big L for OpenAI.
reply
abaymado
1 month ago
[-]
I really just want Siri to perform simple tasks without me giving direct line-by-line orders. For example, I often use Siri to add reminders to my Calendar app but forget to mention the word “calendar” or replace it with “remind me,” and Siri ends up adding it to the Reminders app instead of the Calendar app. I want Siri to have an explicit memory that every time I use the phrase “remind me,” I want the task done in my Calendar app. Additionally, if most apps end up adopting App Intents like OpenAI’s Function Calling, I see a bright future for Siri.
reply
abrichr
1 month ago
[-]
> With onscreen awareness, Siri will be able to understand and take action with users’ content in more apps over time. For example, if a friend texts a user their new address in Messages, the receiver can say, “Add this address to his contact card.”

I wonder how they will extend this to business processes that are not in their training set. At https://openadapt.ai we rely on users to demonstrate tasks, then have the model analyze these demonostrations in order to automate them.

reply
ducadveritatem
1 month ago
[-]
Can’t find a link right now (search terms are pretty crowded atm!) but I saw they just recently shared an llm they’ve been working on that is designed to answer questions about how a give screen of an app functions. (Identifying buttons, core functionality, etc.)
reply
riversflow
1 month ago
[-]
> coming to iPhone 15 pro, iPad and Mac with M1 or later.

I assume it will come to all the iPhone 16’s this fall? Or is Apple Intelligence a Pro Feature?

Either way, my first reaction is that this is going to sell a lot of iPhones.

reply
Jtsummers
1 month ago
[-]
Apple's phones have, for several years now, been on a tick/tock sort of pattern. iPhone N Pro has the new CPU (15 Pro got the A17) and iPhone N uses the prior gen pro CPU (15 got the A16, which was in the iPhone 14 Pro). So this is probably an A17+ feature. If they stick to form, the iPhone 16 will have the A17 processor and the iPhone 16 Pro will have the A18.

That's part of the differentiation between (on the phones) the Pro and non-Pro these days with the Pro getting all the new stuff, and non-Pro getting a partially improved (and partially degraded, usually wrt the camera) Pro from the prior year.

reply
Toutouxc
1 month ago
[-]
Actually 15 Pro has the A17 “Pro” SoC, while the regular 15 uses the last gen A16 Bionic (Bionic being the regular version). We don’t know whether the next regular iPhone will get the last-gen A17 Pro (which would be a little confusing), or a “last-gen-but-new” A17 Bionic or if they’ll go with 18 Bionic + Pro.
reply
vessenes
1 month ago
[-]
I wonder what if any Developer support for “AI” — I need a better way to write that — ahem I - will have for accessing the personal data store. I’ve spent the last four years running up at collecting this data about myself, and, it’s hard, real hard to do a good job at it.

I’d love to have an app I write be able to subscribe to this stream.

It feels like a sort of perfect moat for Apple - they could say no on privacy concerns, and lock out an entire class of agent type app competitors at the same time. Well, here’s hoping I can get access to the “YouSDK” :)

reply
tavavex
1 month ago
[-]
Ah, yes, "F8FF I" lol

More seriously, I think the SDK they've teased is only really intended for making their ready-made features integrate with your programs. If you want complete control, you'd probably have to write it the old way, or integrate it with some existing local LLM backend.

reply
rock_artist
1 month ago
[-]
What's not clear to me during that time is, how will this work on pre-M1 / pre-iPhone 15 devices. (also worth noting that iPhone 14 Pro is almost identical to iPhone 15 in terms of CPU... which is odd, especially when someone bought "Pro" tier...)

If I have some "AI" workflow on my MacBook Pro and then it's broken on my iPhone, I would most likely to entirely stop using it, as it's unexpected (I cannot trust it) or in Apple words... lack continuity...

reply
fdpdkf
1 month ago
[-]
I find the removing people from photos thing creepy. Yes you can remove others to see only your family, but forging the reality to only conform to what you wish is disturbing I think.
reply
qeternity
1 month ago
[-]
Photos are already just one perspective on reality. Instagram has shown that to be painfully true. This is merely a continuation of that.

We all experience our own reality individually.

reply
standardUser
1 month ago
[-]
Maybe it will remind people that we should never have been mistaking recorded media for reality in the first place, a lesson we've been learning since at least 1917...

https://en.wikipedia.org/wiki/Cottingley_Fairies

reply
guyforml
1 month ago
[-]
we've had photoshop for more than a decade now
reply
coolg54321
1 month ago
[-]
As a pixel user I'm really impressed with their cleanup tool, it looks way ahead in UX compared to magic editor on pixel, also having able to select the distractions without altering the main object looks really cool (at least in their demo), magic editor on pixel's underpowered SoC runs too slow, In general iphones have superior hardware vs pixel (as per the benchmarks) so having this on-device should make it really nice experience overall.
reply
bigyikes
1 month ago
[-]
My home is filled with Apple HomePods even though Siri is dumb as rocks.

Looking forward to my house gaining a few IQ points.

I don’t see anything that mentions HomePod specifically but hopefully the updates will come.

reply
oidar
1 month ago
[-]
I was looking for homepod updates as well - I want to get rid of my Amazon Echo devices with homepods, but siri is #1 Slow #2 Dumb #3 messes up my grocery list. Grocery list and timers are the main use case for Amazon Echos - I really hope apple fixes it soon.
reply
cromka
1 month ago
[-]
But HomePod isn’t powerful enough, all the processing would likely have to be online. They’ll fix it but with future models only
reply
eastbound
1 month ago
[-]
“Siri, please start the chronometer”

“Added ‘start the chronometer’ to your reminders”

reply
cjk2
1 month ago
[-]
This stuff will be well integrated, is useful, will be high quality and doesn't require you to buy new hardware.

Microsoft are so boned. They don't even have a mobile proposition.

reply
bee_rider
1 month ago
[-]
On the contrary this is probably good for MS. Lots of people just don’t want to buy into the Apple ecosystem. MS is dumping R&D money into this ML stuff, apparently without thinking of an actual product or application first. So, now they can just copy Apple.
reply
menacingly
1 month ago
[-]
microsoft was always going to take SMBs. Data is what makes them useful, so Microsoft keeps their SMBs, Apple gets consumers, Google gets their slice of productivity and android, where their preachy models will let you know if you did a harassment
reply
LelouBil
1 month ago
[-]
So, this looks great, but I don't get the criticism against Microsoft Recall and not against this.

Can someone explain what Apple has avoided that were such a problem with Recall ?

reply
mark_l_watson
1 month ago
[-]
While I really enjoyed the “Apple-ification of AI” in the keynote today, I have been hoping for a purely personal AI ecosystem, one that I run on my own computer, using the open weight models I choose, and using open source libraries to make writing my own glue and application code easier.

The good news is that consumers can buy into Apple’s or Google’s AI solutions, and the relatively few of us who want to build our own experience can do so.

reply
thomasqbrady
1 month ago
[-]
I’m super confused.

1. What is under the “Apple Intelligence” umbrella and what isn’t? There were a lot of AI features shown before that branding was brought up, I think. 2. The only supported iPhone is the iPhone 15 Pro? But any M1 iPad? Does this mean “Apple Intelligence” or all AI features announced? For instance… 3. For instance, is private cloud compute only available on iPhone 15 Pro?

reply
skc
1 month ago
[-]
Calling it Apple Intelligence seems a bit short sighted to me considering how quickly things are moving in this space.

There's a danger that before long, the stuff Apple will take ages to implement into their devices will seem dated compared to the state of the art less encumbered players will be rolling out.

I felt that a few times watching them demo image generation and contextual conversations.

reply
throwanem
1 month ago
[-]
For broadly similar reasons, I think it's a wise move to ensure Apple's AI services exist under a brand Apple entirely controls.
reply
krackers
1 month ago
[-]
Jack Ma was ahead of his time in calling "AI" Alibaba Intelligence
reply
pastyboy
1 month ago
[-]
Well on the one hand its very interesting... on the other a little dystopian, but I guess I am a luddite.

Everyone now will appear to be of a certain intelligence with proscribed viewpoints, this is going to make face to face interviews interesting, me, I think I'll carry on with my imperfections and flawed opinions, being human may become a trend again.

reply
mensetmanusman
1 month ago
[-]
“With onscreen awareness, Siri will be able to understand and take action with users’ content in more apps over time. For example, if a friend texts a user their new address in Messages, the receiver can say, “Add this address to his contact card.””

Little annoyances like this being fixed would be great. “Open the address on this page in google maps” better work :)

reply
iandanforth
1 month ago
[-]
I think the only way I would trust this is if they explicitly described how they would combat 5-eyes surveillance. If you're not willing to acknowledge that the most dangerous foe of privacy in the western world is the governments of the western world then why should I believe anything you have to say about your implementation?
reply
ducadveritatem
1 month ago
[-]
You can read what they have shared about the security aspects of the cloud portion of their AI offering here: https://security.apple.com/blog/private-cloud-compute/
reply
czierleyn
1 month ago
[-]
Nice, but my native language is Dutch, so I'll be waiting for this for the next 5 years to arrive. If it arrives at all.
reply
adrianmsmith
1 month ago
[-]
It seems they didn't address hallucination at all?

Presumably this hallucinates as much as any other AI (if it didn't, they'd have mentioned that).

So how can you delegate tasks to something that might just invent stuff, e.g. you ask it to summarize an email and it tells you stuff that's not in the original email?

reply
audessuscest
1 month ago
[-]
you really have to try hard to make a model hallucinate when asked to summarize an email. I think they didn't mention it because they can't guarantee 100%, but it's virtually on non-issue for such task.
reply
resharpe105
1 month ago
[-]
Key question is, will there be a hard switch to only ever use on device processing?

If not, and if you don’t want practically every typed word to end up on someone else’s computer (as cloud is just that), you’ll have to drop ios.

As for me that leaves me with a choice between dumbphone or grapheneOS. I’m just thrilled with these choices. :/

reply
LogHouse
1 month ago
[-]
It’s not sending every word to the cloud. I think you must invoke the AI features. Am I wrong?
reply
resharpe105
1 month ago
[-]
I understood that it will have the full context of the data on your phone, in order to be ,,useful”.

We are yet to see if that means only the data you’ve invoked ai features for, or totality of your emails, notes, messages, transcripts of your audio, etc.

reply
dialup_sounds
1 month ago
[-]
From the presentation it sounds like the on-device model determines what portion of the local index is sent to the cloud as context, but is designed for none of that index to be stored in the cloud.

So (as I understand it) something like "What time does my Mom's flight arrive?" could read your email and contacts to find the flight on-device, but necessarily has to send the flight information and only the flight information to answer the arrival time.

reply
block_dagger
1 month ago
[-]
I wonder if the (free) ChatGPT integration will be so good that I won't need my dedicated subscription anymore?
reply
atlex2
1 month ago
[-]
OAI has already said they'll be giving 4o for free.. https://openai.com/index/gpt-4o-and-more-tools-to-chatgpt-fr...

Difference I suppose with Apple is they agree not to scrape your inputs.

reply
ilaksh
1 month ago
[-]
When does this roll out exactly? And exactly which inference actually is on-device?

I think people have been fooled by marketing for this one and the new Co-Pilot PCs into thinking that most of the AI really is running on-device. The models that run fast locally are still fairly limited compared to what runs in the cloud.

reply
Tagbert
1 month ago
[-]
The public betas will be available later this month. The official OS releases are usually in Sept and Oct. Some of the AI stuff should be available right away but rumors say that some of the more advanced Siri features (like app integration) might not launch until after the first of the year.
reply
spike021
1 month ago
[-]
Usually the OS announced at WWDC is released around mid September.
reply
Jtsummers
1 month ago
[-]
Specifically, the iOS update comes out with the new iPhones (usually dropping the same day the new devices are available) and the other OSes are usually timed to be released with it since features are shared across the OSes (so they want to release them at the same time) and the beta periods are the same.
reply
lambdasmith
1 month ago
[-]
How is this going to affect battery life realistically with all the semantic indexing going on in the background?
reply
astrodust
1 month ago
[-]
At some point I wonder if Apple might share compute across your devices so you can do things on low-power hardware like the Watch while leveraging the CPU on the phone, etc. If Apple's dedicated to on-device compute, this ends up being your own "private cloud" of sorts.
reply
vundercind
1 month ago
[-]
Too many of their devices in the wild are battery powered, and one really nice thing about them is that their sleep-state battery use is incredibly low while maintaining quick wake-up.

Plus it’d be weird UX for all your devices to get worse because the iPad in a drawer somewhere finally ran out of power.

reply
ChrisLTD
1 month ago
[-]
What Apple showed in the demo looks tastefully done. The jury is out on how useful it will be in day to day use, but it'll be nice to have the ability to ask AI for help with text, search, and images without resorting to copying and pasting between ChatGPT or some other AI app.
reply
mvkel
1 month ago
[-]
Kind of wild that "ChatGPT" is going to be the household term. It's such a mouthful! Goes to show that the name can kind of be anything if you have an incredible product and/or distribution.

Lobbying for the name to shorten to "chatty-g"

reply
xnx
1 month ago
[-]
Interesting that genmoji seems to recreate the functionality of this SDXL LoRA https://civitai.com/models/140968/emoji-xl
reply
jcfrei
1 month ago
[-]
So, is Apple running a proprietary LLM or are they licensing one from OpenAI, Google, etc?
reply
theshrike79
1 month ago
[-]
Both. Siri is on device and it can talk to ChatGPT
reply
Geee
1 month ago
[-]
Siri runs on device and on Apple's cloud which should be more private than ChatGPT. ChatGPT integration is a separate feature, and will include other providers in the future too.
reply
oidar
1 month ago
[-]
And they can push computer to Apple cloud when compute on the device is not enough.
reply
29athrowaway
1 month ago
[-]
We know the solution to the AI box experiment[1]. Set the AI free and make money.

[1]: https://rationalwiki.org/wiki/AI-box_experiment

reply
WillAdams
1 month ago
[-]
Nice to finally see a follow on to the Assistant feature from the Newton MessagePad.
reply
rys
1 month ago
[-]
So was there ever a deal with OpenAI? Nothing in the keynote mentioned them or needs them. If there isn’t a deal, I’d love to know how everyone claiming it was signed on the dotted line was led so far down that garden path.
reply
philip1209
1 month ago
[-]
Sam is there, and the presentation isn't yet finished:

https://x.com/markgurman/status/1800198524031906258?ref_src=...

reply
bsaul
1 month ago
[-]
That's also my question. What exactly is apple custom LLM, and what is openAI tech ?

I'm quite confident in the ability of openAI to provide a great usable LLM tech, but much less so of apple. All the demo they've shown in the WWDC could just fall flat if the tech really isn't working well enough in practice. I guess we'll just have to wait and see..

reply
mh8h
1 month ago
[-]
There's an integration with ChatGPT, that requires user approval every time.

Sam: https://x.com/sama/status/1800237314360127905

reply
Tagbert
1 month ago
[-]
Yes, they mentioned ChatGPT.

Siri reviews the request and decided if it can respond on its own or if it needs ChatGPT. It then pops up a dialog asking if it is OK ti send the request to ChatGPT. It will not be the default LLM.

reply
philistine
1 month ago
[-]
OpenAI is an option when making a query, but Apple made it sound like the first deal they're making, not the tight collaboration everybody was expecting.

They gave more space and reverence to Ubisoft.

reply
glial
1 month ago
[-]
Siri will have ChatGPT integration (for free, apparently)
reply
matrix87
1 month ago
[-]
the thing with "I have xyz ingredients, help me plan a 5-course meal for every taste bud"... I get the idea, it just doesn't feel like that's how people actually interact with computers. similarly with the bed time story thing. why would anyone waste time with some AI generated thing when they can just reference the works of a chef or author that they already know?

appropriating all of this information through legally dubious means and then attempting to replace the communication channels that produced it in the first place is hubris.

reply
layer8
1 month ago
[-]
No multilingual capabilities it seems: https://www.apple.com/apple-intelligence/#footnote-1
reply
blackeyeblitzar
1 month ago
[-]
The AI wave is showing us that the gains will keep going to the big tech companies and competition doesn’t really exist, not even at this moment. They need to be broken up and taxed heavily.
reply
jbverschoor
1 month ago
[-]
Looks like a very similar strategy as Google Maps on the initial iPhone
reply
ahmeneeroe-v2
1 month ago
[-]
For a brief moment at the intro of "private cloud compute" I was so hopeful that I could have a home-based Mac server for my own private iCloud and "Apple intelligence".
reply
adamtaylor_13
1 month ago
[-]
This is literally everything I've been hoping Siri would be since the very first GPT-3.5 demo over a year ago. I've never been more bullish on the Apple ecosystem. So exciting!
reply
m3kw9
1 month ago
[-]
All the talk is sounding real nice, it’s when it comes out we will see how much context it can see and how accurate it knows what the user says. Gonna be fun few weeks of reviews.
reply
mav3ri3k
1 month ago
[-]
Really excited about semantic index. This should allow for google knowledge graph like features grounded in reality for their llm. However it really depends how well it works.
reply
DaveChurchill
1 month ago
[-]
Great! How do I opt out?
reply
qmmmur
1 month ago
[-]
Did they touch on any AI features that might be able to help me create shortcuts? I really like them, but hate creating them with the kludgy block-based diagrams.
reply
zombiwoof
1 month ago
[-]
Apple promising in 8-12 months what others have today. Although Apple marketed it better.

Google didn't have the brass balls to call it "Alphabet Intelligence" !!!

reply
Etheryte
1 month ago
[-]
No one else is doing this stuff without sending your data off to a remote server. This is a crucial distinction, especially when it comes to personal data.
reply
Slyfox33
1 month ago
[-]
But apples implementation also sends stuff to remote servers.

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."

reply
Etheryte
1 month ago
[-]
For complex queries, yes, but everything they can, they do on-device. No one else does that, even if you ask ChatGPT what's 2 + 2 it'll go to their servers.
reply
jshowa
1 month ago
[-]
Microsoft delivers AI recall Everyone hates it Apple integrates AI into every facet of a device that is highly personal Everyone loves it

Please make it make sense.

reply
dt3ft
1 month ago
[-]
Thanks but no thanks.

All I wish for is user-replaceable battery and a battery lasting for at least 2 full days.

If I can’t opt out from any of this, this is where I stop using an iPhone.

reply
guhcampos
1 month ago
[-]
Somehow all these news about Apple Intelligence don't really make me thinkg about Apple, but just how bad Intel just lost the branding battle forever.
reply
zx10rse
1 month ago
[-]
Jumping on the chatgpt hype train is a mistake. I don't want anything from my devices to be accessible by openai. It will bite them back big time.
reply
ducadveritatem
1 month ago
[-]
Good news then that you’re explicitly asked for permission each time any query would by shared with ChatGPT.
reply
RyanAdamas
1 month ago
[-]
>Be me, have iPhone 6s

>Can't get many apps these days

>Can't use AI apps at all

>Battery last about 2 hours

>Never used iCloud, barely used iTunes

>Apple announces new "free" Ai Assistant for everyone

well...not everyone

reply
timothyduong
1 month ago
[-]
iOS users need to have the iPhone 15 pro.. so everyone else is also cooked on iOS.
reply
maxioatic
1 month ago
[-]
I might have missed it but did they mention Spotlight at all? That'd be pretty sweet if Spotlight becomes more useful (even a little bit)
reply
mihaaly
1 month ago
[-]
Cloud compute and privacy in the same sentence, this is a new low bar for corporate bull*hit. Almost worse than the Windows Recall nonsense.
reply
theshrike79
1 month ago
[-]
It's also auditable, they mentioned it multiple times.

Apple specifically doesn't want to know your shit, they're jumping through weird hoops to keep it that way.

It would be a LOT easier just to know your shit.

reply
baxuz
1 month ago
[-]
I really hope that they'll enable other, less spoken languages. I'm not planning on talking with my phone in English.
reply
boringg
1 month ago
[-]
So is this finally privacy based AI with personal memory included? Ie bespoke AI for your own stack that isn't out in the world.
reply
teilo
1 month ago
[-]
It’s an on-device RAG.
reply
Slyfox33
1 month ago
[-]
No it's not.

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."

reply
dragon_greens
1 month ago
[-]
I wonder if this will become a paid feature or part of iCloud+ later on. Or do they expect it to be mostly on device models driven?
reply
c1sc0
1 month ago
[-]
It’s all free
reply
ENGNR
1 month ago
[-]
Ok I'm calling it. If NVIDIA releases a phone, and allows you to buy the hardware for the off-device processing too, I'll fully ditch Apple in a heartbeat.

I'm quite creeped out that it uses off-device processing for a personal context, and you can't host your own off-device processing, even if you have top of the line Apple silicon hardware (laptop or desktop) that could step in and do the job. Hopefully they announce it in one of the talks over the next few days.

reply
todotask
1 month ago
[-]
Sequoia with 'ai', coinciding with Apple Intelligence, is a cleverly chosen code name for this release.
reply
mellosouls
1 month ago
[-]
Given there is no mention of "Artificial" is this Apple rebranding AI, the same as they did AR a year ago?
reply
dakiol
1 month ago
[-]
I didn't watch the whole thing (will do), but could someone tell me already: can it be disabled on a Mac?
reply
m3kw9
1 month ago
[-]
So they have models running on Apple silicone in the cloud, does that mean it’s running its own models?
reply
Tagbert
1 month ago
[-]
Ajax is the name of their home-grown LLM. Ferret UI is another model that they have published papers on that lets them look at the UI of an app to understand how to interact and automate it.
reply
machinekob
1 month ago
[-]
Or just opensource models
reply
mrkramer
1 month ago
[-]
Is it just me or this AI rush is actually about to ruin user experience both on Apple and Microsoft devices? The extra layer of complexity for users who will now be introduced to endless AI features is bloatware in the making.
reply
anonbanker
1 month ago
[-]
Just making linux more appealing for the subset of the population that doesn't want to hook into skynet's subpar UX.
reply
password54321
1 month ago
[-]
Based on what they showed most users won't even know if the feature they are using is using AI or not. Most of it is local and just comes in the form of a button rather than typing out a prompt to make it do what you want. And I think those two are the big things to take away from this. Local means less clunkiness and lag you get from tools like Perplexity or whatever and no 'prompt engineering' means even someone's grandma could immediately start using AI. Apple just doing what Apple does best.
reply
__loam
1 month ago
[-]
Hope we can disable all this crap.
reply
BonoboIO
1 month ago
[-]
Only on iPhone 15 Pro upwards or M1 Mac’s

So only a very small percentage of users will be able to use it.

reply
myko
1 month ago
[-]
This AI craze is very underwhelming. Surprised to see Apple go whole-hog into it.
reply
camcaine
1 month ago
[-]
Feels like Apple are super late to the party and are scrambling. And it showed.
reply
semireg
1 month ago
[-]
So that’s where all the M4s are going … to Apple’s private inference cloud.
reply
KennyBlanken
1 month ago
[-]
Okay. And what about the terrible keyboard, predictive text, and autocorrect?
reply
notatoad
1 month ago
[-]
is my iPhone 14 going to get none of this then?

i understand i'm not going to get the on-device stuff, but something like siri being able to call out to chatGPT should be available on any device, right?

reply
syspec
1 month ago
[-]
FWIW: You can do that on your phone today iwith Siri, if you have the ChatGPT app.

You just say "hey siri, ask chat gpt..." and then it will

reply
agumonkey
1 month ago
[-]
It's not personal computing, it's personal intelligence now :)
reply
lawlessone
1 month ago
[-]
Couldn't Siri already do some of these things without LLM's?
reply
breadwinner
1 month ago
[-]
I thought it was underwhelming. The fact that integration with ChatGPT is not seamless pours cold water over it. Siri will seek your permission each time before passing the question to ChatGPT. I can avoid that step by using ChatGPT directly.
reply
TaylorAlexander
1 month ago
[-]
Personally I feel less and less comfortable giving OpenAI access to my private data tho, so I’m really happy there’s a divide. As you said, if you really just need ChatGPT for something you can open that app. But I’m happy the default isn’t to send all Apple users requests to OpenAI all the time.
reply
tr3ntg
1 month ago
[-]
I think this is good. For folks like you, that will always be an option. For people that have yet to touch ChatGPT and "still don't know how to access AI" (I've heard this sentiment from many people that couldn't care less about it all) it's a perfect balance. Siri will operate as you expect, until one day it prompts you to pass your question over to ChatGPT. You can opt out or give it a try.

I do agree that the extra tap is a bummer for anyone that wanted ChatGPT baked into the OS, even easier to access than it is in the ChatGPT app.

reply
nerdjon
1 month ago
[-]
On the opposite side, it not being seamless is entirely why I would actually trust using the new Siri with any sensitive data.

I am not entirely sure I will ever actually allow it to connect to ChatGPT for privacy reasons, but having the option when it can't be handled another way is nice.

I imagine this is more a stopgap until more and more of this can happen locally anyways. Especially since it sounds like Siri determins when it should reach out.

reply
notdarkyet
1 month ago
[-]
There are a lot of people who do not want their phone seamlessly hooked into anything OpenAI touches. Choice is important
reply
kaba0
1 month ago
[-]
But it has an on-device LLM plus an in-cloud LLM, that can handle many types of queries, so why would it be bad?
reply
jdeaton
1 month ago
[-]
Maybe we could just get a decent spam filter on imessage?
reply
daralthus
1 month ago
[-]
/Time for a good prompt injection email header/s
reply
gonzo41
1 month ago
[-]
Oddly, I find myself siding with Musk on this feature.
reply
65
1 month ago
[-]
Some stuff seems cool in the sense that you try it once and never use it again. Other stuff, like ChatGPT integration, seem like they'll produce more AI Slop and false information. It's always interesting to me to see just how many people blatantly trust ChatGPT for information.

I find most AI products to be counter-intuitive - most of the time Googling something or writing your own document is faster. But the tech overlords of Silicon Valley will continuously force AI down our throats. It's no longer about useful software, we made most of that already, it's about growth at all costs. I'm a developer and day by day I come to despise the software world. Real life is a lot better. Real life engineering and hardware have gotten a lot better over the years. That's the only thing keeping me optimistic about technology these days. Software is what makes me pessimistic.

reply
Tiktaalik
1 month ago
[-]
I think the genmoji is going to be tons of fun. Basically seems like https://emojikitchen.dev/ on steroids.
reply
newsclues
1 month ago
[-]
Let me run it locally on a Mac mini or whatever
reply
mjamesaustin
1 month ago
[-]
A lot of the features do run locally, e.g. the Image Playground.
reply
ge96
1 month ago
[-]
Heh I see what they did there convenient name
reply
nothrowaways
1 month ago
[-]
I hope it is optional feature.
reply
pcloadletter_
1 month ago
[-]
My MSFT stock is looking good.
reply
rldjbpin
1 month ago
[-]
i hope there is a way to prevent the online processing without consent.
reply
toisanji
1 month ago
[-]
can you disable external llm calls so everything stays on device?
reply
iJohnDoe
1 month ago
[-]
Apple Intelligence = AI

Figgin’ brilliant.

reply
hu3
1 month ago
[-]
Am I understanding correctly that some AI will run on Apple servers? So not completely offline AI.

If so that's somewhat disappointing given how much AI power Apple hardware packs.

> Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers

reply
onesociety2022
1 month ago
[-]
Maybe that is actually good for iPhone buyers? Otherwise every 2 years, they will claim a bunch of new AI features will not work on your older device. But if they can delegate requests to a server, older devices can continue receiving newer AI features from future iOS releases (they will just be slower than the newer iPhones which will run them locally).
reply
dylan604
1 month ago
[-]
The interesting thing I took from that is they are making servers with M-series chips. Maybe they're just rack mounted Mac Minis? But if Apple decides that crap, maybe it'll get them to a point where they decided to make a proper rack mountable form factor available??? <genmojiTechPrayingInFrontOfRack>
reply
Almondsetat
1 month ago
[-]
I think it's a little disingenuous to think on-device accelerators on a mobile phone would be able to do literally any AI task without help
reply
ducadveritatem
1 month ago
[-]
Based on their presentations yesterday and accompanying written materials they have a pretty capable ~3B parameter Foundational LLM running fully locally on the devices which is the first line. https://machinelearning.apple.com/research/introducing-apple...
reply
sidcool
1 month ago
[-]
No privacy concerns?
reply
rasengan
1 month ago
[-]
Much thanks goes out to Microsoft and Apple for handing the Desktop to Linux!
reply
fsto
1 month ago
[-]
Can someone explain why the AAPL share drops 1% during the event. Did the market expect more? If so, what?
reply
qeternity
1 month ago
[-]
Buy the rumor, sell the news.

A tale as old as markets.

reply
Hamuko
1 month ago
[-]
Market expected Apple ChatGPT, but they got Siri with some fixes.

Literally one of the demonstrations in the Apple Intelligence part of the keynote was "7am alarm", which creates an alarm for 7 AM.

reply
xnx
1 month ago
[-]
Apple has largely maxed-out on iPhone market share, so investors probably want to see things more like subscription services than can bring in $XX billions new revenue per quarter.
reply
fsto
1 month ago
[-]
To use Apple Intelligence on mobile you’ll need iPhone 15 pro or later which I (not a trader) thought would make investors happy.
reply
crooked-v
1 month ago
[-]
Or in other words, general late-stage capitalism "anything but exponential growth every quarter is failure" brainworms.
reply
anonbanker
1 month ago
[-]
Does anyone else roll their eyes when someone mentions "late stage capitalism" anymore? The meme has started literal decades ago, and the end-stage never seems to materialize, in any country, ever.
reply
paradite
1 month ago
[-]
Maybe people are expecting new Macbooks? Though Apple don't usually release hardware for WWDC.
reply
machinekob
1 month ago
[-]
I mean how they'll monetise it?
reply
mysteria
1 month ago
[-]
It's like any other feature in that the purchase price of the new iPhone and App Store revenue helps pay for the AI functionality. Like they hope people will want to upgrade their phones or switch to Apple for this.
reply
martimarkov
1 month ago
[-]
% share with OpenAI?

Plus I’d now consider buying the new iPhone and wasn’t planning on a specific update from a 13 given the hardware is still fine

reply
fsto
1 month ago
[-]
I’m assuming many will consider buying an iPhone 15 pro or the next one. I’m really not a trader, but thought this + the stronger ecosystem lock-in effect would bump the share significantly.
reply
algesten
1 month ago
[-]
In fact, not being able to do some of these things might improve privacy.
reply
tedd4u
1 month ago
[-]
Incremental increase in future hardware sales (that will be required to use it fully).
reply
chucke1992
1 month ago
[-]
Nothing really impressive. Let's see who the stock reacts.
reply
qeternity
1 month ago
[-]
Out of curiosity, what would you have considered impressive?
reply
chucke1992
1 month ago
[-]
Hard to tell. That's the whole point.I thought maybe Apple had come up with something - but by and large it is not different from Vision Pro - they have made X feel better, no real one generation ahead stuff. Basically they are not introducing the innovation.

There are two challenges right now for AI - user monetisation and mass adoption. ChatGPT right now is basically a TikTok - a popular app and that's it. Yeah, it has a subscription but by and large, companies are failing to find a way to monetize the user. And at the same time there is no a proper trigger - something that would make AI better than a glorified assistance. For people who used not to rely on it, it won't be a game changer either, just a little bit of convenience.

So it remains to be seen what's going to happen with AI in the future. It seems like the biggest gamechanger introduced by AI is in hardware space - the mass adoption of ARM, NPUs and stuff. Plus it seems like the monetization of AI is done nicely in the companies - Adobe's AI features, Microsoft and their corporate features and so on.

reply
vladsolokha
1 month ago
[-]
So now my “Sent from iPhone” email signature will be replaced with “Sent with Apple Intelligence” smh. I don’t think we will have anything original to say anymore. It will all just be converted to what is proper and “right”
reply
tunnuz
1 month ago
[-]
AI for short?
reply
rvz
1 month ago
[-]
Lots of apps have been sherlocked once again and this marks the accelerated race to zero with Apple's entrance with on-device AI all system-wide.

No API keys, no prompt engineering or switching between AI models.

It. just. works.

reply
Tagbert
1 month ago
[-]
I'm glad that they finally brought a better window tiling system. If that sherlocks a couple of dozen utility apps, I'm OK with that. it has been long enough. Some of those apps will be fine as they will probably provide more controls.
reply
zombiwoof
1 month ago
[-]
Ha ha. Let’s see if it just. Works.
reply
simianparrot
1 month ago
[-]
It needs to be opt-out by default in the EU or this is an enormous and inevitable GDPR breach waiting to happen.
reply
cuuupid
1 month ago
[-]
I really hope we can fully turn off the GPT4o integration even more than just saying no to every escalation
reply
chuch1
1 month ago
[-]
sounds cool.
reply
egypturnash
1 month ago
[-]
Uugggghhhh
reply
whimsicalism
1 month ago
[-]
It's interesting how positive the commentary is here. I am generally much more pro-AI than the average HN commentator, but frankly I find this release completely underwhelming.

Just replace Siri with chatgpt and give it actions, that is what everyone wants - why can't we have that?

reply
Geee
1 month ago
[-]
Absolutely no one wants to hand their private information to any company, especially OpenAI. Everyone was waiting for an on-device LLM from Apple and they delivered.
reply
Slyfox33
1 month ago
[-]
Why does everyone here seem to think its entirely on device? They literally mention in the article

"To run more complex requests that require more processing power, Private Cloud Compute extends the privacy and security of Apple devices into the cloud to unlock even more intelligence. With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests."

reply
kaba0
1 month ago
[-]
They have literally revamped Siri, and based it on an LLM that runs on-device, and can use your personal context, plus can escalate to a private cloud model when it deems “weak” for the task at hand.

How the hell is it underwhelming?

reply
rqtwteye
1 month ago
[-]
It’s a little disappointing that even big companies like Apple jump on OpenAI instead of building their own thing. Diversity seems pretty important with AI.
reply
fsto
1 month ago
[-]
They use OpenAI as an optional fallback model. Adding support for more models later. I’m positively surprised they’re not trying to solve everything with their own tech.
reply
dylan604
1 month ago
[-]
Or, use existing now to get it going, then swap out for your own thing later. Hopefully, it will be better than previous swaps so that it doesn't be a meme worthy of being mocked in a comedy show "is it Apple Maps bad?"
reply
kristjansson
1 month ago
[-]
All the interesting features appear to be de novo models from Apple? Only the last fallback-to-ChatGPT feature interacts with OpenAI.
reply
frenchie4111
1 month ago
[-]
Unfortunately OpenAI has a pretty big "dollars and hours spent on GPUs" moat right now. I imagine Apple is already hard at work building their own models, but until then they will leverage 3rd parties
reply
YetAnotherNick
1 month ago
[-]
Apple has 10x more on reserve cash than the entirety of OpenAI when they trained GPT-4. I don't think OpenAI could have possibly spent $5B or more for training first version of GPT-4(which was trained before GPT-3.5 gained traction), which is a pocket change for Apple for such a core feature.
reply
jadbox
1 month ago
[-]
.. and none of it is open. It's a closed source OS with a proprietary dev cloud service using proprietary model that's only accessible with proprietary sdks.
reply
oidar
1 month ago
[-]
The "kits" are "openish" to developers. I imagine that many app developers are going to be making models for specific use cases.
reply
amrrs
1 month ago
[-]
Slap on the face all Cloud based LLM providers!
reply
talldayo
1 month ago
[-]
Or pay them, for a deal that gives you access to a competitor's product.
reply
tsunamifury
1 month ago
[-]
This isn't about giving Apple intelligence, this is about giving ChatGPT an understanding of the world via the eyes, ears, and thoughts on your phone.
reply
Jtsummers
1 month ago
[-]
> This isn't about giving Apple intelligence, this is about giving ChatGPT an understanding of the world via the eyes, ears, and thoughts on your phone.

Except it doesn't do that. The ChatGPT integration is via Siri and opt-in (you ask Siri something, it prompts you to send that prompt to ChatGPT). The rest of the LLM and ML features are on device or in Apple's cloud (which is not OpenAI's cloud). The ChatGPT integration is also, by their announced design, substitutable in the future (or you'll be given a set of systems to select from, not just ChatGPT). They are not sending all data on your device to OpenAI.

reply
tsunamifury
1 month ago
[-]
Yea, I worked in partnership with apple for years. I dont know what else to tell you except they lie through their teeth about privacy all the time.
reply
1-6
1 month ago
[-]
Apple (I reckon): Most people don’t know how to use AI well. It’s our responsibility to pare it all down and release features one by one selling each new software feature as part of next year’s hardware.
reply
vundercind
1 month ago
[-]
They depicted it working for the A17 through the M4, since those have significant AI-acceleration hardware. So, no.
reply
tonyabracadabra
1 month ago
[-]
Ok! Made a song about! https://heymusic.ai/music/apple-intel-fEoSb Hope you guys enjoy it!
reply
seydor
1 month ago
[-]
"we also intend to add support for other models"

When they pay us sufficiently

It's great that Apple is capitalizing so well on everyone else's inventions, but couldn't they at least pretend they will give something back to the ecosystem?

I wish someone somewhere creates something like intents for the web browser

reply
vineyardlabs
1 month ago
[-]
I would imagine the end goal here is to develop their own internal models (or partner with one of the companies doing open source models) that would be hosted on the Apple Silicon-based cloud they mentioned and then they would not be dependent on anyone else's compute.
reply
tekawade
1 month ago
[-]
"privacy in AI" - If apple is sharing with ChatGPT how does it work? Do they try to remove context information. But still it's sharing a lot more. + Anything that goes out can go anywhere in internet. Look at Facebook, Twitter and even Apple use of data.
reply