The way Siri can now perform actions based on context from emails and messages like setting calendar and reservations or asking about someone’s flight is so useful (can’t tell you how many times my brother didn’t bother to check the flight code I sent him via message when he asks me when I’m landing for pickup!).
I always saw this level of personal intelligence to come about at some point, but I didn’t expect Apple to hit it out of the park so strongly. Benefit of drawing people into their ecosystem.
Nevermind all the thought put into private cloud, integration with ChatGPT, the image generation playground, and Genmoji. I can genuinely see all this being useful for “the rest of us,” to quote Craig. As someone who’s taken a pessimistic view of Apple software innovation the last several years, I’m amazed.
One caveat: the image generation of real people was super uncanny and made me uncomfortable. I would not be happy to receive one of those cold and impersonal, low-effort images as a birthday wish.
It's the benefit of how Apple does product ownership. In contrast to Google and Microsoft.
I hadn't considered it, but AI convergence is going to lay bare organizational deficiencies in a way previous revolutions didn't.
Nobody wants a GenAI feature that works in Gmail, a different one that works in Messages, etc. -- they want a platform capability that works anywhere they use text.
I'm not sure either Google or Microsoft are organizationally-capable of delivering that, at this point.
Your quote really hit me. I trust Apple to respect my privacy when doing AI, but the thought of Microsoft or Google slurping up all my data to do remote-server AI is abhorrent. I can't see how Microsoft or Google can undo the last 10 years to fix this.
I'm actually a little gobsmacked anyone on this forum can type those words without physically convulsing.
The even more terrible part is I'm sure it's common. And so via network externalities the rest of us who do NOT trust any of these companies on the basis that all of them, time and again, have shown themselves to be totally untrustworthy in all possible ways, will get locked into this lunacy. I now can't deal with the government without a smartphone controlled by either google or apple. No other choice. Because this utter insanity isn't being loudly called out, spat upon, and generally treated with the withering contempt that these companies have so richly and roundly earned this decision is being made for all society by the most naive among us.
Rather, I think they meant "trust" as in "Apple is observably predictable and rational in how they work toward their own self-interest, rarely doing things for stupid reasons. And they have chosen to center their business on a long-term revenue strategy involving selling high-margin short-lifetime hardware — a strategy that only continues to work because of an extremely high level of brand-image they've built up; and which would be ruined instantly if they broke any of the fundamental brand promises they make. These two factors together mean that Apple have every reason to be incentivized to only say things if they're going to mean them and follow through on them."
There's also the much simpler kind of "trust" in the sense of "I trust them because they don't put me in situations where I need to trust them. They actively recuse themselves from opportunities to steal my data, designing architectures to not have places where that can happen." (Of course, the ideal version of this kind of trust would be a fully-open-source-hardware-and-software, work-in-the-open, signed-public-supply-chain-ledger kind of company. You don't get that from Apple, nor from any other bigcorp. Apple's software is proprietary... but at least it's in your hand where you can reverse-engineer it! Google's software is off in a cloud somewhere where nobody can audit changes to it.)
They are anti right to repair & they have their walled garden on their mobile devices. Their vertically integrated model also leads to unusually high prices. This website in particular would also directly feel the pain of Apple killing apps to implement themselves later, the greedy apple store cut and also not allowing use of hardware features that Apples themselves can use. Consumers feel this indirectly (higher prices, less competition).
Also, don't get it twisted, Apple is still collecting all of your data, even if you ask them not to [0].
0 - https://mashable.com/article/apple-data-privacy-collection-l...
Their vertically integrated model leads to very good customer service. I don't pay extra for Apple Care and I still get treated like an adult if I show up to an Apple Store with some need.
Even when Apple makes a mistake and collects more data than they should, I don't expect that data to influence ads that I see or to be sold to the highest bidder. (As a developer myself, I find that I can be quite lenient on app internal telemetry.) I can also see that ad review is barely a small side hustle to them in their Quaterly Reports and I can also see that most of their ad revenue is from untargeted campaigns. (Microsoft is a bigger ad company than Apple. Google is an ad company deep into its DNA at this point with everything else a side hustle.)
There is a beauty to a well maintained walled garden. Royalty invested a lot of money into walled gardens and Apple maybe doesn't treat you exactly like royalty, but there's a lot of similar respect/dignity there in their treatment of customers, even if they want you to trust them not to touch the flowers or dig below the walls too much. They want you to have a good time. They want their garden to be like a Disney World, safer than the real world.
You may not appreciate those sorts of comforts and that's fine. Plenty of people prefer the beauty and comfort of a walled garden than the free-for-all of a public park (or the squatter's rights of an abandoned amusement park if you don't mind playing unpaid mechanic more often than not). There's a lot of subjective axes to evaluate all of this on.
You should temper your expectations: https://gizmodo.com/apple-iphone-france-ads-fine-illegal-dat...
France’s data protection authority, CNIL, fined Apple €8 million ... for illegally harvesting iPhone owners’ data for targeted ads without proper consent.
If ads are a small percentage of Apple's quarterly reports, that isn't sound reasoning; after all, they make a lot of money in general in a lot of areas.
Fundamentally, ads is an incredibly high margin business with lots of room for growth (particularly when you own the platform and can handicap your competitors) so over time, all tech companies will become ads companies.
I don't think this applies to their watch or tablet business where the limiting factor on lifetime in the market is security/os updates. Most alternatives in that space have significantly worse support cycles.
This used to be true of their phones as well, but the android market seems to be catching up in ways that tablets/wearables have not (see google's 7 year commitment for pixels).
Not sure if it applies to general purpose. Certainly there are non mac computers that we can throw linux on and use for 10+ years and there are examples of apple laptops getting cut off earlier than I'd like (RIP my beloved 12" macbook), but there are often some pretty serious tradeoffs to machines older than 7 years anyway. Also, I'm not sure if apple's strategy re: support lifecycles on products after the AS migration have shifted. It wouldn't surprise me if the first gen m1 products get 10 years of security updates.
Do they though? Battery performance that 'lies' to you intentionally, planned obsolescence, locked in ecosystems, overtly undercutting the alternatives, marketing that hypes up rather bland features...I admit I don't see your point.
Apple, if anything, seem about as user hostile as Microsoft is these days.
Everything is relative. Apple generally supports their devices with OS updates for longer than most Android phone makers. Their incentives here are well aligned: they get a decent profit from Apple Store no matter how long you use their phone.
I think a lot of the reporting on Apples actions is very click-baity and lack nuance. Take the case when Apple throttled CPU performance of their phones when the battery got old and degraded. It was reported as a case of planned obsolescence, but it was in fact the exact opposite: by limiting the power consumption of the CPU they avoided unexpected phone shutdown due to battery voltage going too low during bursts of high power consumption. A phone that randomly shuts down is borderline use-less. A phone that is slower can at least be used for a while longer. Apple didn't have to do this. They would have spent less R&D money, and had a much lower chance of bad PR backlash, if they just simply did nothing. Yet they did something to keep old phones useful for longer.
> locked in ecosystems
That's a fine balance. Creating a good ecosystem is part of what makes Apple so user friendly. And it's a lot harder to create open ecosystems than having closed ones. Especially when you factor in security and reliability. If Apple diverted resources to making their ecosystems more open I think their ecosystem integration would have been significantly worse, which would have made them lose the thing most users considers Apples primary advantage.
Apple is a mixed bag. They were one of the first to go all-in on USB-C. Sometimes they push aggressively for new open standards that improve user experience. Yet they held on to Lightning for far too long on their phones. But here you get back to the planned obsolescence factor: there's a HUGE amount of perfectly fine Lightning accessories out there that people/companies are using with iPhones. If they killed Lightning too fast I can guarantee you they would have gotten a lot of hate from people who couldn't use their Lightning accessory anymore. With laptops that wasn't a big issue. Adapters are significantly less convenient to use with phone accessories.
Microsoft have no consistency and Google wants you to pray at the altar of advertising.
Apple tells a pretty compelling lie here. Rather than execute logic on a server whose behavior can change moment to moment, it executes on a device you "own" with a "knowable" version of its software. And you can absolutely determine no network traffic occurs during the execution of the features from things announced this week and going back a decade.
The part that Apple also uploads your personal information to their servers on separate intervals both powering their internal analytics and providing training data is also known, and for the most part, completely lost on people.
- They upload your data to their servers. This is a requirement of iCloud and several non-iCloud systems like Maps.
- Where analytics is concerned, data is anonymized. They give examples of how they do this like by adding noise to the start and end of map routes.
- Where training is concerned, data is limited to purchased data (photos) and opted-in parties (health research).
My point is that Apple's code executing on device can be verified to execute on device. That concept does not require trust. Where servers are involved and Apple does admit their use in some cases, you trust them (as much as you trust Google) their statements are both perfectly true and ageless. Apple transitions seamlessly between two true concepts with wildly different implications.
We have different ideas. That’s all. There’s no need to look down on each other for it.
Going back to a Linux desktop would be the end of me. I know it.
The one exception I made recently was to dump Windows and move to a Fedora immutable build after seeing how capable the Steam Deck (and Linux) was for all the games I play. I’ll get shot of that if it causes me grief though - or just stop playing games on my PC.
I just don’t have the energy to mess about with it all these days and Apple is the 2nd best option in lieu of that.
I did have a ton of issues with NVidia in the same environment, but after putting a Radeon card in it has been smooth sailing. That’s to be expected I guess.
I doubt I would have tolerated this even a few years ago though and would have ended up like yourself with a dual boot setup.
...is a thing I experience on a regular basis (and that I only really gained confidence in once I actually saw the mistakes cause problems, e.g. password managers)
I do not have either Google or Apple accounts and I do not intend to ever open such accounts (despite owning some Android smartphones and having owned Apple laptops).
Because of this, I am frequently harassed by various companies or agencies, which change their interaction metods into smartphone apps and then deprecate the alternatives.
Moreover, I actually would be willing to install such apps, but only if there would be some means to download them, but most of them insist on providing the app only in the official store, from which I cannot install it, because I have no Google account.
I have been forced to close my accounts in one of the banks that I use, because after using their online banking system in the browser for more than a decade, including from my smartphone, they have decided to have a custom app.
In the beginning that did not matter, but then they have terminated their Web server for online banking and they have refused to provide directly their app, leaving the Google store as the only source of it.
I have been too busy to try to fight this legally, but I cannot believe that their methods do not break any law. I am not an US citizen, I live in the European Union, and when an European bank (a Societe Generale subsidiary) refuses to provide its services to anyone who does not enter in a contractual relationship with a foreign US company, such discrimination cannot be legal.
However, to quibble with your last analysis, you're almost certainly entering an agreement with the EU registered legal entity of a multinational company, and you almost certainly already had to do that to obtain the hardware, run the OS, use the browser, etc. The degree to which any of those contracts are enforceable is another matter.
I find unbelievable that a bank has the arrogance of conditioning their services on whether their customers accept to do business or not with some third party.
I see no difference between the condition of having a Google account and for instance a condition that I should buy my car from Audi or from any other designated company, instead of from wherever I want. It is none of my bank's business what products or services I choose to buy or use (outside of special circumstances like when receiving bank credits).
I trust Apple more than I trust Google to not share my data with a large group of corporate entities who want to sell me things I do not wish to buy.
I believe both - and if required, organizations like Mozilla, Ubuntu, Redhat/Oracle, whoever - to comply with law enforcement requests made of them to hand over any data relating to me that they might hold. I'm OK with that. I think Apple has less of that data than Google, and works actively to have less of it. Google works actively to increase the amount of data they have about me.
I think even if you had a functional device using entirely open software, that any organisation you share that data with or use to communicate with using that device - including cloud service providers, network providers, and so on - would also comply with law enforcement.
"Ah!", you say, "But I get to choose which crypto to use! I know it won't have backdoors!". To which I will reply you are unlikely to have read and truly understood the source code to the crypto software you're using, and that such software is regularly shown to have security issues. It's just not true that open source means that all bugs become shallow, and the "many eyes" you're hoping for to surface these issues are likely employed at, err, Apple, Google, Redhat, Ubuntu, Mozilla...
I look at the landscape and I conclude that true open source environments have a ton of issues, Google/Android have far more (for my taste), and that I am more confident in Apple than I am in either myself (even as an experienced tech expert), or Google, or Microsoft, to keep my data private to me to the greatest extent legally permissible.
Do I think "legally permissible" should be extended? Sure. Do I wish a multi-billionaire would throw 50% of the net worth at making open source compete on the same level? Yeah, cool. Do I think any of that is realistic in the next 5 years? No. So, I make my bets accordingly, eyes wide open, balancing the risks...
Using the undocumented but accessible control registers, all the memory protections of the Apple devices could be bypassed. Using this hardware backdoor, together with some software bugs in the Apple system libraries and applications, for many years, until the end of 2023, it has been possible to remotely take complete control of any iPhone, with access to its storage and control of the camera and microphone, in such a way that it was almost impossible for the owner to discover this (the backdoor bugs have been discovered only as a consequence of analyzing some suspicious Internet traffic of some iPhones that were monitored by external firewalls).
It is hard to explain such a trivial security error as not disabling a testing backdoor after production, for a company that has claimed publicly for so long that they take the security of their customers very seriously and that has provided a lot of security theater features, like a separate undocumented security processor, while failing to observe the most elementary security rules.
It is possible that the backdoor was intentional, either inserted with the knowledge of the management at the request of some TLA, or by a rogue Apple employee who was a mole of such a TLA, but these alternative explanations are even worse for Apple than the explanation based on negligence.
https://www.howtogeek.com/746588/apple-discusses-screeching-...
Sorry, but this seems like a very vague claim to me. Can you specifically point out a time where Apple proved itself untrustworthy in a way that impacts personal privacy?
When Apple says they treat my data in a specific way, then yes I do trust them. This promise is pretty central to my usage of them as a company. I'd change my mind if there was evidence to suggest they're lying, or have betrayed that trust, but I haven't seen any, and your post doesn't provide any either.
Here's a few pointers, to get you up to speed [1-5]. Of course there's nothing wrong with monetizing their own user base and selling ads based on their 1PD (or, in the case of Safari, monetizing the search engine placement). But I find it ironic that they make a ton of money by selling ads based on the exact same practices they demonize others for -- user behavior, contextual, location, profile.
[1] https://searchads.apple.com/
[2] Apple’s expanding ad ambitions: A closer look at its journey toward a comprehensive ad tech stack - https://digiday.com/media-buying/apples-expanding-ad-ambitio...
[3] Apple’s Ad Network Is The Biggest Beneficiary Of Apple’s New Marketing Rules: Report -- https://www.forbes.com/sites/johnkoetsier/2021/10/19/apples-...
[4] Apple Privacy Suits Claim App Changes Were Guise to Boost Ad Revenue - https://www.hollywoodreporter.com/business/business-news/app...
[5] Apple is becoming an ad company despite privacy claims - https://proton.me/blog/apple-ad-company
Advertising isn't anti-privacy. Apple's fight was with tracking by third parties without user knowledge or consent. That is independent of, but often used for, advertising purposes.
This is different from say Google determining ads on Youtube based on what you are watching on Youtube.com, and from Amazon or Apple promoting products based on your product searches solely within their respective stores.
Advertising works much better when there is no privacy.
I hope this current fad dies and people return to that older marketing "common sense". Over-targeting is bad for consumers and bad for advertisers, the only people truly benefiting seem to be Google and Meta.
His current state really has made me think about my own tech, about what should be locked down and what really should not be - things that we lock down out of habit (or by force) rather than out of necessity.
Might be interesting if companies offered the ability for someone to be a “steward” over another when it came to sensitive choices (like allowing new logins, sending money, etc). Of course that itself is a minefield of issues with family members themselves taking advantage of their elderly members. But maybe power of attorney would have to be granted?
Rather than putting all of our personal data and accesses under a thick virtual fire blanket, perhaps it is perfectly fine if some of it isn't protected at all, or is protected in ways that could be easily circumvented with just a tiny bit of finagling.
This is now how I'm approaching my own digital foot print, that some not secret things are nowadays wide open, unencrypted and you just need to know where to look to access all of it.
There is sometimes a point to inconvenience in that it requires time and assessment.
For most people the only security they need is actually access to their money, everything else is mostly irrelevant, nobody really cares about weird habits or whatever.
End to end encryption? Sure, but we’re sending your location and metadata in unencrypted packets.
Don’t want governments to surveil your images? Sure, they can’t see the images - but they’ll send us hashes of illegal images, and we’ll turn your images into hashes, check them against each other, and report you to them if we find enough.
Apple essentially sells unbreakable locked doors while being very careful to keep a few windows open. They are a key PRISM member and have obligations under U.S. law that they will fulfil. Encryption backdoors aren’t needed when the systems that they work within can be designed to provide backdoors.
I fully expect that Apple Intelligence will have similar system defects that won’t be covered properly, and will go forgotten until some dissident gets killed and we wonder why.
For a look at their PR finesse in tricking media, see this, over the CSAM fiasco that has been resolved, in Apple’s favour.
https://sneak.berlin/20230115/macos-scans-your-local-files-n...
> I fully expect that Apple Intelligence will have similar system defects
Being able to scan devices for CSAM at scale is a "defect" to you?
- it's anti-user: a device spying on you and reporting back to a centralized server is a bad look
- it's a slippery slope: talking about releasing this caused them to get requests from governments to consider including "dissident" information
- it's prone to abuse: within days, the hashing mechanism they were proposing was reverse engineered and false positives were embedded in innocent images
- it assumes guilt across the population: what happened to innocent by default?
and yes, csam is a huge problem. And btw, apple DOES currently scan for it- if you share an album (and thus decrypt it), it is scanned for CSAM.
There is just a general hypocrisy about Apple that is hilarious.
Nothing wrong there per se, its just good to realize it.
Ah yes, blame the simple-minded plebes who foolishly cast their noses up at Windows Phone. If only Ballmer were still in charge, surely he'd have saved us from this horrible future of personal, privacy-respecting AI at the edge...
Apple is very intrusive. Macos phones home all the time. ios gives you zero control (all apps have internet access by default, and you cannot stop it)
Apple uses your data. you should be able to say no.
And as for your data, they do other things too, a different way. Everything goes to icloud by default. I've gotten new devices and boom, it's uploading everything to icloud.
I've seem privacy minded parents say no, but then they get their kid an iphone and all of their stuff goes to icloud.
I think apple should allow a personal you-have-all-your-data icloud.
The platform is heavily internet-integrated, and I would expect it to periodically hit Apple servers. There are a lot of people claiming to be security researchers reporting what Little Snitch told them. There are drastically fewer who would introspect packets and look for any gathered telemetry.
I really haven't seen evidence Apple is abusing their position here.
> Everything goes to icloud by default. I've gotten new devices and boom, it's uploading everything to iCloud.
You need to enable iCloud. You are prompted.
Also, a new device should have next to nothing to upload to iCloud, as its hard disk is still in the factory configuration.
> I think apple should allow a personal you-have-all-your-data iCloud
They have desktop backup. Maybe they should allow third party backup Apps on iPhone, although I suspect data would be encrypted and blinded to prevent abuses by third parties, and recovery would be challenging because today recovery is only possible on a known-state filesystem. The recovery aspect is what really has limited it to the handful of approaches implemented directly by Apple.
I don’t think any large tech company is morally good, but I trust Apple the most out of the big ones to not do anything nefarious with my info.
Just about everyone else other than the tech companies are actually selling your data to various brokers, from the DMV to the cellphone companies.
First-hand account from me that this is not factual at all.
I worked at a major media buyer agency “big 5” in advanced analytics; we were a team of 5-10 data scientists. We got a firehose on behalf of our client, a major movie studio, of search of their titles by zip code from “G”.
On top of that we had clean roomed audience data from “F” of viewers of the ads/trailers who also viewed ads on their set top boxes.
I can go on and on, and yeah, we didn’t see “Joe Smith” level of granularity, it was at Zip code levels, but to say FAANG doesn’t sell user data is naive at best.
So you got aggregated analytics instead of data about individual users.
Meanwhile other companies are selling your name, phone number, address history, people you are affiliated with, detailed location history, etc.
Which one would you say is "selling user data"?
Their privacy stories are marketing first.
Google and Samsung do.
They went from “Don’t be evil” to a cartoonish “Doctor Evil” character in a decade.
So in other words, "companies operating within a nation are expected to abide by the laws of that nation"?
Apple structures their systems to limit the data they can turn over by request, and documents what data they do turn over. What else do you believe they should be doing?
Much like every other tech company you test the request.
Apple never does.
Citation needed?
I trust Apple about as far as I can throw them too. They are inherently anti-consumer rights everywhere in their ecosystem. The "Privacy" angle is just PR.
None of the big companies expressly sell your information. Not because they are altruistic, but because it is an asset that they want to protect so they can rent to the next person.
2) I have booted macOS VMs without iCloud. I'm not sure of the nags though. I believe signing out of iCloud will prevent iCloud from contacting Apple.
2) that is entirely NOT true. You should install little snitch and see what happens even if you NEVER sign into icloud. note that the phone home contact is not immediate, it happens in the background at random intervals from random applications.
just some random services blocked by little snitch on a mac:
accountsd, adprivacyd, airportd, AMPLibraryAgent, appstoreagent, apsd, AssetCacheLocatorService.xpc, cloudd, com.apple.geod.xpc, com.apple.Safari.SafeBrowsing.Service, commerce, configd, familycircled, mapspushd, nsurlsessiond, ocspd, rapportd, remindd, Safari, sntp, softwareupdated, Spotlight, sutdentd, syspolicyd, touristd, transparencyd, trustd, X11.bin
(never signed into an apple id)
There's also this one: https://discussions.apple.com/thread/250727947
I eventually just gave in to stop the nags.
Doesn't have to be bright red, or even there at all.
Technically you can by turning off wi-fi and disabling cellular data, bluetooth, location services, etc. for the app.
To your point though, wi-fi data should also be a per-app setting, and it is an annoying omission. macOS has outgoing firewalls, but iOS does not (though you could perhaps fake it with a VPN.)
> Apple uses your data.
> they do other things too, a different way
What specifically do you mean? Their frankly quite paranoid security and privacy white papers are pretty comprehensive and I don’t think they could afford to lie in those.
> Apple should allow a personal you-have-all-your-data iCloud
Advanced Data Protection[0] applies e2ee for basically everything, with the exception email, and doesn’t degrade the seamless multi-device experience at all. For most people this is the best privacy option by a long shot, and no other major platform can provide anything close.
They’ve hampered product experience for a long time because of their allergy against modelling their customers on the cloud. The advent of AI seems to have caught them a bit off guard but the integrated ecosystem and focus on on-device processing looks like it may pay off, and Siri won’t feel 5 years behind Google Assistant or Alexa.
A couple of years ago Apple was busted when it was discovered that most Apple first-party apps weren't getting picked up by packet sniffer or firewalls on macOS.
Apple tried deflecting for a while before finally offering up the flimsy claim that it "was necessary to make updates easier". Which isn't a really good explanation when you're wondering why TextEdit.app needs a kernel network extension.
The user-mode replacement APIs allowed by sandboxed apps had a whitelist for Apple's apps, so you couldn't install some App Store firewall app that would then disable the App Store and screw everything up.
After the outrage, in a point release a few months later, they silently emptied out the whitelist, resolving the issue.
They never issued any kind of statement.
That makes no sense.
Even if it did, the app the would need protection is the App Store, not every single Apple app. In many cases, the fix for the worst case scenario would be "remove firewall app".
Also, given that TextEdit was not an AppStore app, for but one example, but a base image app.
> They never issued any kind of statement.
Shocking. I've had at least two MBPs affected by different issues that were later subject to recall, but no statement there. radar.apple.com may well be read by someone, but is largely considered a black hole.
Depends on where you are. Apple will bend over backwards when profits are affected, as you can see in China.
Ironically, the only time a large company took a stand at the cost of profits was in 2010 when Google pulled out of China over hacking and subsequently refused to censor. Google has changed since then, but that was the high watermark for corporates putting principles over profits. Apple, no.
My impression is that they had little chance to survive in a Chinese market, competing with a severely limited product against state-sponsored search products while also being a victim of state-sponsored cyberattacks.
It was the morally correct decision, but I don't know if they were leaving any money on the table doing so. I suspect the Google of today would also decide not to shovel cash into an incinerator.
Google wants to track even my physical security key across sites to track me.
How can I trust their AI systems with my data?
So they were effectively asking for the make and model.
There are non-certified authenticators which may have unfortunate behaviors here, such as having attestations containing a hardware serial number. Some browsers maintain a list and will simply block attestations from these authenticators. Some will prompt no matter what.
There is also a bit of an 'Open Web' philosophy at play here - websites often do not have a reason to make security decisions around the make and models of keys. Having an additional prompt in a user conversion path discourages asking for information they don't need, particularly information which could be used to give some users a worse experience and some vendors a strong 'first-mover' market advantage.
In fact, the motivator for asking for this attestation is often for self-service account management. If I have two entries for registered credentials, it is nice if I have some way to differentiate them, such as knowing one of them is a Yubico 5Ci while the other is an iPhone.
Many parties (including Google) seem to have moved to using an AAGUID lookup table to populate this screen in order to avoid the attestation prompt. It also winds up being more reliable, as software authenticators typically do not provide attestations today.
Moreover, none of the service providers auto-named my keys with make/model, etc.
> If I have two entries for registered credentials, it is nice if I have some way to differentiate them, such as knowing one of them is a Yubico 5Ci while the other is an iPhone.
First, Google doesn't differentiate the security keys' name even if you allow for that data to be read, plus you can always rename your keys to anything you want, at any of the service providers I enrolled my keys, so it doesn't make sense.
Moreover, Firefox didn't warn me for any other services which I enrolled my keys, and none of them are small providers by any means.
So, it doesn't add up much.
It’s unlikely latency would permit them to proxy every request to fully mask end-user IPs (it’s unclear what “obscured” means), and they would probably include device identifiers and let Microsoft maintain your shadow profile if that could improve ChatGPT output (it may not require literally storing your every request, so denying that is weasel phrasing).
First, it takes much less compute to serve a page than to run an LLM query. LLMs are slow even if you eliminate all network.
Second, your expectations when browsing are not the same as when using a personal assistant.
Right now even when I simply ask Siri to set a timer it takes more than a couple of seconds. Add an actual GPT in the mix and it’s laughable.
In any case, even with a private relay, Apple’s phrasing does not deny sending device identifiers and allowing ClosedAI/Microsoft to build your shadow profile (without storing requests verbatim).
I first bought some devices for myself, then those devices got handed off to family when I upgraded, and now we're at a point where we still use all of the devices we bought to date - but the arbitrary obsolescence hammer came down fairly hard today with the intel cut-off and the iPhone 15+ requirement for the AI features. This isn't new for Apple, they've been aging perfectly usable devices out of support for years. We'll be fine for now, but patch support is only partial for devices on less-than-latest major releases so I likely need to replace a lot of stuff in the next couple of years and it would be way too expensive to do this whole thing again. I'll also really begrudge doing it, as the devices we have suit us just fine.
Some of it I can live without (most of the AI features they showed today), but for the parts that are sending off to the cloud anyway it just feels really hard to pretend it's anything other than trying to force upgrades people would be happy without. OCLP has done a good job for a couple of homework Macs, I might see about Windows licenses for those when they finally stop getting patches.
I'd feel worse for anyone that bought the Intel Mac Pro last year before it got taken off sale (although I'm not sure how many did). That's got to really feel like a kick in the teeth given the price of those things.
Could also be a memory problem. The A17 Pro in the iPhone 15 Pro comes with 8 GB of memory while everything before that has 6 GB or less. All machines with the M1 or newer come with at least 8 GB of memory.
PS: The people who bought the Intel Mac Pro after the M1 was released knew very well what they were getting into.
Which introduces a funny aspect, of the whole NPU/TPU thing. There's a constant stairstepping in capability; the newer models improving only obsoletes older ones faster. It's a bit of a design paradox.
This would be a lot easier to argue if they hadn't gimped their Neural Engine by only allowing it to run CoreML models. Nobody in the industry uses or cares about CoreML, even now. Back then, in 2017, it was still underpowered hardware that would obviously be outshined by a GPU compute shader.
I think Apple would be ahead of everyone else if they did the same thing Nvidia did by combining their Neural Engine and GPU, then tying it together with a composition layer. Instead they have a bunch of disconnected software and hardware libraries; you really can't blame anyone for trying to avoid iOS as an AI client.
Not all those were available from day, except FaceID.
In retrospect though, it may be best that I don't know what I missed.
The iPhone 15 Pros were the first iPhones with 8GB. All M1+ Macs/iPads have at least 8GB of ram.
LLMs are very memory hungry, so frankly I'm a little surprised they support such low memory requirements (especially knowing that the system is running other tasks, not just ML). Microsoft's Copilot+ program has a 16GB minimum.
The new AI features will be available on the iPad Air I just ordered, and on my M1 MacBook Air, and I'll be able to play with them there until I'm ready to upgrade my phone. I think these new features sound great, but I'm not in any hurry to adopt them wholesale.
And if you don’t like them you don’t have to use them. I don’t use Siri and it doesn’t bother me that Apple includes it on all their machines.
That's likely true. Unless you were careful to do a lot more than just disabling it, it does use you though, slurping up quite a bit of data.
Separately, the data Siri sends isn't held to the same differential privacy standards as some of Apple's other diagnostics. They just give you a unique ID and yolo it [1]. Unless personalized device behaviors are somehow less identifiable than all the other classes of data subject to deanonymization attacks (demographics, software/hardware version fingerprints, ...), that unique ID is just to be able to pretend to the courts that they tried (give or take Hanlon's razor).
[0] https://news.ycombinator.com/item?id=39927657
[1] https://www.apple.com/legal/privacy/data/en/ask-siri-dictati...
What is it about this release that has lost your support? Specifically gating the Apple Intelligence stuff to the most modern hardware?
My MBP hasn't hasn't been _fully_ supported for many years. The M1-specific features started rolling out in 2021 - ability to run iPad apps being the most obvious one, the ability to get the globe view in maps being the most questionable. IIRC, my MBP did not yet have a M1 Pro/Max version available for sale when they announced the M1-specific features.
The point being, having AI features unavailable doesn't make the Mac unusable any more than it makes an iPhone 15 unusable. Those parts should continue to operate the way they do today (e.g. with today's Siri).
Hardware matters again now in a way it hasn’t for a couple of decades.
The old Macs can still install Linux/Windows/ChromeOS Flex. iPads/iPhones not so much.
Apple has a clear intent that allows the subsequent groups to work towards and contribute to it. Google and Microsoft don’t. They have a vague idea, but not something tangible enough for subordinate leaders to meaningfully contribute to.
There was also likely no team on calculator at all (are there bugs that justify a maintenance team?), so it needed a big idea like 'Math Text' to be green-lit or it would simply never come. This is despite missing calculator being an obvious deficiency, and solving it via a port to be a relatively tiny lift.
[0] https://libgen.is/book/index.php?md5=60B714243482AE3D7B9A83B...
Totally agree on the AI points. Google may have incredible research, but Apple clearly is playing to their strengths here.
Like what's going on inside Google, it's getting stupidly political and infighty. If someone tries to build a comprehensive LLM that touches on gmail, youtube, docs, sheets etc, it's going to be an uphill battle.
None of them would work on-device, all would leak your data into the training set.
It showed you contextual cards based on your upcoming trips/appointments/traveling patterns. E.g. Any tube delays on your way home from work. How early you should leave to catch your flight.
This alongside Google Inbox was among the best and most "futuristic" products.
I was glad to see today Apple implementing something similar to both of these.
No wonder they killed it.
There'd be constant sabotage.
Lots of middle management power groups that would prevent a cohesive top down vision from easily being adopted.
The more I spend time in mid-large companies, the more I'm amazed that that Apple somehow managed to avoid releasing three different messaging apps that do the same thing.
Make no mistake, Google is Enterprise.
Anyway, while I see all of your points, none of the things I've read in the news make me excited. Recapping meetings or long emails or suggesting how to write are just...not major concerns to me at least.
Microsoft seems to have lost all internal cohesion and the ability to focus the entire company in one direction. It's just a collection of dozens of small fiefdoms only caring about hitting their own narrow KPIs with no overall strategic vision. Just look at the mess of competing interest that Windows 11 and Edge have turned into.
Quick, what’s “copilot”?
Visual Studio Code, appears to be a code editor with support for C, C#, C++, Fortran, Go, Java, JavaScript, Node.js, Python, Rust, and Julia
Visual Studio, appears to be a code editor with support for 36 languages including C, C++, C++/CLI, Visual Basic .NET, C#, F#, JavaScript, TypeScript, XML, XSLT, HTML, and CSS.
Visual Studio Code, appears to be liked by almost every user and the favorite in a bunch of online polls.
Visual Studio, appears to be unusable junk, widely hated in almost every survey, and unable to even display its own error messages correctly in 2022.
Visual Studio Code is supposedly a "related product" according to Wikipedia: https://en.wikipedia.org/wiki/Visual_Studio#Related_products
How are these related? They seem like Microsoft's internal fiefdoms again.
It's a traditional windows application. .net/WPF I think. Configured via XML
VSCode is free, an electron app, has a plugin store with lots of niche language plugins. Configured via json.
Surely they're using VSCode to exert influence in their Microsofty way, but it feels much less like a prison.
VSCode still feels like a bit much to me (though of a monster than Visual Studio). I'm pretty happy with helix.
https://crmtipoftheday.com/wp-content/uploads/2017/07/ms-org...
they are irrelevant on the mobile ecosystem, a place where almost all this features are most relevant and useful
As a(n amateur) fiction writer who pays too much for ProWritingAid each year, I'd love to see if this feature is any good for fiction. I take very, very few of PWA's AI-suggested rewrites, but they often help me see my own new version of a bit of prose.
This Apple AI presupposes their strong ecosystem, which no one has anything similar to. Google was in a good position years ago, but they are criminally unfocused nowadays.
But for some reason, they decided to just stick to feature tidbits here and there and chose not to roll out quality-of-life UI features to make Gemini use easier on normal apps and not just select Google apps. And then it's also limited by various factors. They were obviously testing the waters and were just as cautious, but imho it was a damn shame. Even summarization and key points would've been nice if I could invoke it on any text field.
But yeah, this is truly the ecosystem benefit in full force here for Apple, and they're making good use of it.
RCS isn't fair, Google wanted the carriers to work on that, but in a disparate ecosystem they also couldn't come to a decision
Also, apple bought up more ML startups than google or microsoft.
Neither is great, but at least the megacorp has a financial incentive to maintain some of my privacy.
Sounds like some crazy level of meta where your brilliance is applicable to any pair of mega corps...which I don't buy.
I'd _love_ to be able to pull down my email from Fastmail, Calendars from iCal, notes from Google Notes etc to a single LLM for me to ask questions from, but it would require all of the different sources to have a proper API to fetch the data from.
Apple already has all it on device and targeted by ML models since multiple iPhone versions ago. Now they just slapped a natural language model on top of it.
That's a little premature, let's try not to be so suckered by marketing.
They really hammered in the fact that every bit is going to be either fully local or publicly auditable to be private.
There's no way Google can follow, they need the data for their ad modeling. Even if they anonymise it, they still want it.
All the stuff that works on your private data is Apple models that are either on-device or in Apple's private cloud (and they are making that private cloud auditable).
The OpenAI stuff is firewalled off into a separate "ask ChatGPT to write me this thing" kind of feature.
They announced it in the same keynote where they announced the partnership with OpenAI (and stated that sharing your data with OpenAI would be opt-in, not opt-out).
Apples big thing is privacy, i doubt they'd randomly lie about that
There are people and orgs out there who (justifiably or not) are paranoid enough that they factor this into their threat model.
This is a bit academic right now, but it's also worth mentioning that in the coming years, as quantum computing becomes more and more practical, snapshots of data encrypted using quantum-unsafe cryptography, or with symmetric keys protected by quantum-unsafe crypto (like most Diffie-Hellman schemes) will be decryptable much more easily. Whether a motivated bad actor has access to the quantum infrastructure needed to do this at scale is another question, though.
As long as the data is processed externally, no software solutions make it safe, unless you yourself are in control of the premises.
That is a huge stretch and a signal as to how good Apple is with their marketing.
If they are still letting apps like GasBuddy to sell your location to insurance companies then they are no where near "100% privacy".
The default Apple apps (maps, messaging, safari) are solid from a privacy perspective, and I don't think you can say the same about the default apps on competitors phones.
But let's get back to Apple...if it was functioning at "100% user privacy" would it be able to give access to your data to law enforcement? As an example, I consider MullvadVPN to be 99% user privacy.
https://en.wikipedia.org/wiki/Apple–FBI_encryption_dispute#A...
That was concerning unlocking the phone. I’m talking about the data that they store on iCloud.
The difference between that and this is extremely clear is it not?
Imagine if we had a smart phone maker that Cared about this so we didn’t have to worry about it all the time?
Apple has done its privacy work here; now it's up to the end user to make the final choice.
That's tangibly different.
Example that should be super trivial: try to setup a sync of photos taken on your Iphone to a laptop (Mac or Windows or Linux) without going through Apple's cloud or any other cloud?
With an Android phone and Windows laptop (for example) you simply install the Syncthing app on both and you're done.
My point is not "Apple is worse", instead I'm just trying to point out that Apple definitely seems eager to have their users push a lot of what they do through their cloud. I don't see why their AI will be any different, even if their marketing now claims that it will be "offline" or whatever.
"Sync my files without using Apple's cloud" is not a user requirement. Delivering features using their cloud is a very reasonable way for Apple to provide services.
Now, "Sync my files without compromising my privacy" is a user requirement. And Apple iCloud offers a feature called 'advanced data protection" [1] that end to end encrypts your files, while still supporting photo sharing and syncing. So no, you can't opt out of using their cloud as the intermediary, but you can protect your content from being decrypted by anyone, including Apple, ooff your devices.
It has the downside that it limits your account recovery options if you lose the device where your keys are and screw up on keeping a recovery key, so it isn't turned on by default, but it's there for you to use if you prefer. For many users, the protections of Apple's standard data protection are going to be enough though.
[1] https://support.apple.com/en-us/102651#:~:text=Advanced%20Da....
Last I checked the more expensive Macbooks had three USB ports, and the cheap ones have two.
Since Macbooks no longer have ethernet ports, those USB ports are useful for plugging in the dongle when I want to connect the Macbook to an ethernet wire. Good times.
The first hit on Google makes it look trivial with iPhone too?
https://support.apple.com/guide/devices-windows/sync-photos-...
> With an Android phone and Windows laptop (for example) you simply install the Syncthing app on both and you're done.
And with iPhone you just install the "Apple Devices" app: https://apps.microsoft.com/detail/9np83lwlpz9k
Install jottacloud and enable the photos backup feature.
I also sync my photos onto my NAS via sftp, using the Photosync app.
I think the private compute stuff to be really big. Beyond the obvious use the cloud servers for heavy computing type tasks, I suspect it means we're going to get our own private code interpreter (proper scripting on iOS) and this is probably Apple's path to eventually allowing development on iPad OS.
Not only that, Apple is using its own chips for their servers. I don't think the follow on question is whether it's enough or not. The right question to ask is what are they going to do bring things up to snuff with NVDIA on both the developer end and hardware end?
There's such a huge play here and I don't think people get it yet, all because they think that Apple should be in the frontier model game. I think I now understand the headlines of Nadella being worried about Apple's partnership with OpenAI.
Are we sure there is a Siri team in Apple? What have they been doing since 2012?
Though I don’t know if I would use my iPad for programming even if it was possible, when I have a powerful Macbook Pro with a larger screen.
The most important question to me is how reliable it is. Does it work every time or is there some chance that it horribly misinterprets the content and even embarrasses the user who trusted it.
https://www.theguardian.com/us-news/2024/apr/16/house-fisa-g...
Two features I really want:
“Position the cursor at the beginning of the word ‘usability’”
“Stop auto suggesting that word. I never use it, ever”
"Can you meet tonight at 7?" Me "oh yes" Siri "No you can't, your daughter's recital is at 7"
It's these integrations which will make life easier for those who deal with multiple personas all through their day.
But why partner with an outside company ? Even though it's optional on the device etc, people are miffed about the partnership than being excited by all that Apple has to offer.
Just randomly sprinkled eyes on the sides. I wonder why they chose to showcase that.
On the side with 5, they are overlapping. On the side with 4, some of them are half missing. On the side with 3, they are arranged in triangle instead of a straight line.
Not to talk about that 2 and 5 should be on opposing sides, same with 3 and 4.
It's basically like early AI being unable to generate hands, or making 6 fingers.
Maybe, but this class of jokes/riffs is going to get old, fast.
Given that this will apparently drop... next year at the earliest?... I think it's simply quite a tease, for now.
I literally had to install a keyboard extension to my iPhone just to get Whisper speech to text, which is thousands of times better at dictation than Siri at this point, which seems about 10 years behind the curve
Yup! The hardest part of operationalizing GenAI has been, for me, dragging the "ring" of my context under the light cast by "streetlamp" of the model. Just writing this analogy out makes me think I might be putting the cart before the horse.
Apple products tend to feel thoughtful. It might not be a thought you agree with, but it's there.
With other companies I feel like im starving, and all they are serving is their version of grule... Here is your helping be sure to eat all of it.
https://assets.horsenation.com/wp-content/uploads/2014/07/dw...
No-one is hitting anything out of the park, this is just Apple the company realising that they're falling behind and trying to desperately attach themselves to the AI train. Doesn't matter if in so doing they're validating a company run by basically a swindler (I'm talking about the current OpenAI and Sam Altman), the Apple shareholders must be kept happy.
I kind of feel like their walled garden and ecosystem might just have created the perfect environment for an AI integrated directly to the platform to be really useful.
I’m encouraged, but I am already a fan of the ecosystem…
I also expect it to fail miserably on names (places, restaurants, train stations, people), people that are bilingual, non-English, people with strong accents from English not being their first language, etc.
I did not see the announcement. Can Siri also send emails? If so then won't this (like Gemini) be vulnerable to prompt injection attacks?
Edit: Supposedly Gemini does not actually send the emails; maybe Apple is doing the same thing?
We'll find out later if there's an API to do something like that at all or are external communications always behind some hard limit that requires explicit user interaction.
- Proofread button in mail.
- ChatGPT will be available in Apple’s systemwide Writing Tools in macOS
I expect once you'll get used to it, it'll be hard to go without it.
I can't think of something less exciting than a feature that Gmail has supported for a decade.
Overall there's not a single feature in the article that I find exciting (I don't use Siri at all, so maybe it's just me), but I actually see that as a good thing. The least they add GenAI the better.
Detecting an appointment from an email doesn't even require AI.
You're also over-indexing on the fact that some processing will be done on device. The rest will go to Apple's servers just the same as Google. And you will never know how much goes or doesn't.
Most of the things shown during the keynote, can already be done with older iPhones - on device, but they need to be "talked to" like a computer, not with natural language that's not completely perfect.
That's only half true. If you get a text saying "Yo let's meet tomorrow at lunch?" it will offer an option to create an event from it, so even now it's possible in non-perfect scenarios.
Now the real question is: does getting the next 5% that wasn't possible justify sending potentially all you data to Apple's servers? I think the answer is a pretty resounding "fuck no".
Overall the announcement is extremely low value proposition (does anyone really use their stupid Bitmoji thing?) but asks for a LOT from the user (a vague "hey some stuff will be sent to our servers").
Are you saying this type of scenario kills the app, or are you saying the app needs to die, replaced by an API that AIs can interact with, thus homogenizing the user experience, and avoiding the bad parts of Apps?
Better yet the system should know about all the commercial options available to you and be a partner in getting food you like, taking advantage of discounts, all of that.
Which at the backend means unifying necessary data from different product silos, into organized and usable sources.
Yeah but what about people going to the wrong airport, or getting scammed by taking fake information uncritically? "Well it worked for me and anyway AI will get better.". Amen.
I would contrast this with the trend over the last year of just adding a chatbot to every app, or Recall being just a spicy History function. It's AI without doing anything useful.
But it runs in their cloud.
If you spend an hour drawing a picture for someone for their birthday and send it to them, a great deal of the value to them is not in the quality of the picture but in the fact that you went to the effort, and that it's something unique only you could produce for them by giving your time. The work is more satisfying to the creator as well - if you've ever used something you built yourself that you're proud of vs. something you bought you must have felt this. The AI image that Tania generated in a few seconds might be fun the first time, but quickly becomes just spam filling most of a page of conversation, adding nothing.
If you make up a bedtime story for your child, starring them, with the things they're interested in, a great deal of the value to them is not in the quality of the story but... same thing as above. I don't think Apple's idea of reading an AI story off your phone instead is going to have the same impact.
In a world where you can have anything the value of everything is nothing.
We've been building this up for some time, this tiny universe is the most common thing for me to respond to "will you tell me a story?" (something that is requested sometimes several times a day) since it is so deeply ingrained in both our heads.
Yesterday, while driving to pick up burritos, I dictated a broad set of detailed points, including the complete introductory sequence to the story to gpt-4o and asked it to tell a new adventure based on all of the context.
It did an amazing job at it. I was able to see my kid's reaction in the reflection of the mirrors and it did not take away from what we already had. It actually gave me some new ideas on where I can take it when I'm doing it myself.
If people lean on gen ai with none of their own personal, creative contributions they're not going to get interesting results.
But I know you can go to the effort to create and create and create and then on top of that layer on gen AI--it can knock it out of the park.
In this way, I see gen AI capabilities as simply another tool that can be used best with practice, like a synthesizer after previously only having a piano or organ.
Maybe the fact that you did the dictation together with your child present is also notable. Even though you used the AI, you were still doing an activity together and they see you doing it for them.
I view this as just extending that to custom reactions and making them more flexible expanding their range of uses.
Context will be more important when the gift itself is easy.
ai spam, especially the custom emoji/stickers will be interesting in terms of whether they will have any reusability or will be littered like single-use plastic.
Same thing for your kid, the kid likes both stories, gives 0 shit that you used GenAI or sat up for 8 hours trying to figure out the rhyme, those things are making YOU feel better not the person receiving it.
I was brought up that the thought matters, if i think to call my mom she appreciates it i don't need to make some excess effort to show her i love her or show her more love.
You read your daughter a book off your phone you got for free, is that somehow worth less than a book you went to barnes and noble and paid full price for?
I guess in Apple's example it looks like they're writing it as a document on MacOS, so I suppose they are writing it ahead of time.
It's the difference between calling your mom and just saying "Hi mom, this is me thinking to call you. bye." vs calling her with a prepared thing to say/ask about that you had to take extra time to think about before calling. Effort went into that. You don't need to tell her "HOW" you came up with what you wanted to talk about, but there is a difference in how your call will be received as a result.
If you really believe that sending a text versus a hand written card will have no difference on how the message is interpreted, you should just know that you are in the minority.
I don't think this is true at all. Love is proportional to cost; if it costs me nothing, then the love it represents is nothing.
When we receive something from someone, we estimate what it cost them based on what we know of them. Until recently, if someone wrote a poem just for us, our estimation of that would often be pretty high because we know approximately what it costs to write a poem.
In modern times, that cost calculation is thrown off, because we don't know whether they wrote it themselves (high cost) or generated it (low/no cost).
If your calculating "cost" for if someone is showing nuts, i feel sad for you lol, if my wife buys or makes me something or just says "i love you" they are equivalent, I don't give a shit if she "does something for me that costs her something" she loves me she thought of me.
The thought is what matters, if you put extra worth on people struggling to do something meaning more love... thats... ya
> if it costs me nothing, then the love it represents is nothing.
You could read this as meaning that every action has to be super costly or else the love isn't there. I admit that it's poorly phrased and it's not what I meant.
What I should have said is that if it costs you nothing, then it doesn't in itself indicate love. It costs me nothing to say "I love you" on its own, and you wouldn't believe me if I just walked up to you in the street and said that. But your mom has spent thousands of hours and liters of blood, sweat and tears caring for you, so when she says "I love you," you have all that to back those words up. It's the cost she paid before that imbues the words with value: high cost, deep love.
Hopefully that makes more sense.
You are obviously willfully misinterpreting what the OP meant by "cost".
You say "the thought is what matters" - this is 100% true, and "the thought" has a "cost". It "costs" an hour of sitting down and thinking of what to write to express your feelings to someone. That's what he is saying is "proportional" to love.
It "costs" you mental space to love your mom, and that can definitely happen with $0 entering the equation.
And with respect to "extra worth on people struggling to do something meaning more love" - if you spend the time to sit down and write a poem, when that's something that you don't excel at, someone will think: "oh wow you clearly really love me if you spent the time to write a poem, I know this because I know it's not easy for you and so you must care a lot to have done so anyway". If you can't see that... thats... ya
So if you sit down and thinking of what to write to express your love to your mom for two hours, then you love your mom twice than the person who only sit down for one hour loves his mom?
It's what "proportional" means. Words have meanings.
You also may be shocked to learn this, but "proportional" doesn't mean 1:1. It can mean 1:2, 5:1, or x:x^e(2*pi). All of those are proportions. Words do have meanings, and you'll note that - while I didn't even misuse the word proportional - the quotations also indicate I'm using it more loosely than it's textbook definition. You know, like how a normal person might.
I'm getting the vibe from you and the other commentator that, to you, this is about comparing how much two people love their respective mothers. That's not at all what this is even about? You can't compare "how much" two people love a respective person/thing because love isn't quantifiable.
I'm really not sure what you're even taking issue with? The idea that more time and effort from the giver corresponds to the receiver feeling more appreciation? That is not exactly a hot take lmfao
Do you feel disappointed in that answer? If yes, then surely you see that appreciation of something can be relative to effort.
If love is proportional to cost, then rapists and psychos who kill their SOs are the true lovers since the cost is 20 years of jail time to life sentence. Do you want to live by this standard?
In that scenario certainly there'll be times when using the AI option will make more sense, since you usually don't have hours to spare, and you also want to make the stories that your kid likes the most, which in this scenario are the AI ones.
But even then there's still that benefit to yourself from spending time on creating things, and I'd encourage anyone to have a hobby where they get to make something just because they feel like it. Even if it's just for you. It's nice to have an outlet to express yourself.
I really enjoyed the explanation for how they planned on tackling server-enabled AI tasks while making the best possible effort to keep your requests private. Auditable server software that runs on Apple hardware is probably as good as you can get for tasks like that. Even better would be making it OSS.
There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff that I think we all imagined an AI world would look like. I'm really impressed with the vision they described and I think they honestly jumped to the lead of the pack in an important way that hasn't been well considered up until this point.
It's not just the raw AI capabilities from the models themselves, which I think many of us already get the feeling are going to be commoditized at some point in the future, but rather the hardware and system-wide integrations that make use of those models that matters starting today. Obviously how the experience will be when it's available to the public is a different story, but the vision alone was impressive to me. Basically, Apple again understands the UX.
I wish Apple the best of luck and I'm excited to see how their competitors plan on responding. The announcement today I think was actually subtle compared to what the implications are going to be. It's exciting to think that it may make computing easier for older people.
Remember this ad? https://www.youtube.com/watch?v=sw1iwC7Zh24 12 years ago, they promised a bunch of things that I still wouldn't trust Siri to pull off.
Even something as simple as setting the time, Siri will bork it at least 1 in 10 times. I know that for sure, since I worked at a friend's restaurant 2 summers ago and was heavily using Siri's timer to time french fries blanching (many batches for at least 2 hours every day or every 2 days); this dam thing would regularly use wrong time or not understand at all even though it was always the same dam time and the conditions were always similar.
On the other hand, the Google home at my cousin's place operates at my command without mistakes even though he doesn't even have the luxury of knowing my voice.
People who think Siri is good either are delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple fans friends; I will say it's the former.
I myself use iPhone/Apple Watch/Macs since forever so it's not like I'm free hating. I just goddam suck like too many Apple stuff recently...
Or maybe they just have good experiences? Why do they have to be delusional?
So, they think it's good and it's a delusion because it would not be objectively considered good if it was compared side by side with the competition.
I know that for sure because I have spent a lot of time with people like that and I used to be a bit like that. It's much easier to see the world in black and white for most, just like religions are with good/bad and people who really like Apple stuff are very often like that.
I can't but feel all of this super creepy.
I remember vividly the comment on Windows Recall that said if the same was done by Apple it would be applauded. Here we are.
Microsoft on the other hand… well, I understand they just pulled the recall feature after it was discovered the data wasn’t even encrypted at rest?!
I'm not saying it's not an awful feature, I will disable it as soon as it is installed.
The fact that it's not encrypted at rest really is the least of my concerns (though it does show the lack of care and planning). For this to be a problem, an attacker already has all the necessary accesses to your computer to either get your encryption key or do devastating damage anyway.
> At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.
"Not perfect" is enough to be concerned. I would also not be surprised that their good reputation is more due to their better ability at hiding their data collection and related scandals rather than due to any care for the user.
This Apple AI is not storing anything new, it’s just processing the data that you already stored. As long as they pay close attention to invalidation on the index when things get deleted.
The cloud processing is a little concerning but presumably you will be able to turn it off, and it doesn’t seem much different to using iCloud anyway.
The distinction is made by people who seem hell bent on trashing Microsoft for everything and glorifying everything Apple does.
Here’s an example. I always use a random password when creating accounts for (eg) databases, but not every UI supports this, so I have a little shell script that generates one. I then copy and paste it from the terminal. Once I close the terminal window and copy something else, that password is stored only once.
With recall, it’s now stored permanently. Someone who gets access to my screen history is a step closer to getting into my stuff.
Of course there are workarounds. But the expectation I have around how my screen work informs the actions I take on it.
Here’s another example. I recently clicked on a link from HN and ended up on a page containing explicit images. At work. As soon as I realised what I was looking at, I clicked away.
How long until my visual history is to be interrogated, characterised, and used to find me guilty of looking at inappropriate material in the workplace? Such a system is not going to care about my intentions. Even if I’m not disciplined, I’d certainly be embarrassed.
Some people view house keepers the same way. “I can’t let someone going through and touch all of my personal belongings. That’s just creepy.”
There’s a wide range of what people find creepy and also what people can and do get used to.
Why would this translate to everybody wanting to have one?
Also no competitor is going to be as good at integrating everything, as none of those have as integrated systems.
i might have missed it but there has not been much talk about guardrails or ethical use with their tools, and what they are doing about it in terms of potential abuse.
imo, apple will gain expertise to serve a monster level of scale for more casual users that want to generate creative or funny pictures, emojis, do some text work, and enhance quality of life. I don't think Apple will be at the forefront of new AI technology to integrate those into user facing features, but if they are to catch up, they will have to get into the forefront of the same technologies to support their unique scale.
Was a notable WWDC, was curious to see what they would do with the Mac Studio and Mac Pro, and nothing about the M3 Ultra or M4 Ultra, or the M3/M4 Extreme.
I also predicted that they would use their own M2 Ultras and whatnot to support their own compute capacity in the cloud, and interestingly enough it was mentioned. I wonder if we'll get more details on this front.
(see "Apple Foundation Model Human Evaluation" here: https://machinelearning.apple.com/research/introducing-apple...)
But you CAN ask it to show you all pictures you took of your kids during your vacation to Cabo in 2023 and it'll find them for you.
The model "underperforms", but not in the ways that matter. This is why they partnered with OpenAI, to get the generic stuff included when people need it.
We see this play out with the ChatGPT integration. Rather than hosting GPT-4o themselves, OpenAI are. Apple is providing NVIDIA powered AI models through a third party, somewhat undermining the privacy first argument.
I was personally holding out for a federated learning approach where multiple Apple devices could be used to process a request but I guess the Occam's razor prevails. I'll wait and see.
Apple also has a long track record of "you're holding it wrong". I don't expect an amazing AI assistant out of them, I expect something that sometimes does what the user meant.
And yet this was never said.
Closest was this:
> Just don't hold it that way.
Or maybe this:
> If you ever experience this on your iPhone 4, avoid gripping it in the lower left corner in a way that covers both sides of the black strip in the metal band, or simply use one of many available cases.
The fact that Apple changed their stance from “here’s a workaround” to “here’s a free bumper” is a sign they reacted to something, and that could have been anything from the conclusion of internal testing to a PR job to keep customers happy.
If they had said there was no design flaw from the start and stuck with that the whole way then I’d understand people’s reaction, but all I see is a company that said “don’t hold it that way” as a workaround then eventually issued free bumpers, thus confirming the issue. That doesn’t suggest they were blaming the user for doing something wrong. The sentiment just wasn’t there.
Everyone makes fun of Sammy batteries exploding, but forget antennagate, bendgate, software gimping of battery life, butterfly keyboards, touch disease, yellow screens (which I believe were when Apple had to split supply Samsung/LG), exploding Macbook batteries (not enough to cause a fuss tho). Etc.
Other companies can of course be ne'er-do-wells, but people actively defend Apple for the company's missteps.
> Apple don't react to anything until there's a large enough outcry about it, rather than immediately address the issue they wait to see how many people complain to decide if it's worth the negative press and consumer perception or not.
You can’t immediately address any issue. You need time to investigate issues. You might not even start investigating until you hit some sort of threshold or you’ll be chasing your tail in every bit of customer feedback. It takes time to narrow down anything to a bad component, bad batch, software bug or whatever it is.
As for weighing whether the issue is worth addressing at all - this is literally every company. If you did a recall of every bit of hardware at the slightest whiff of an issue you’d go bankrupt very quickly. There are always thresholds.
I wish we would just criticise apple in the same way we do with other companies. There is no need to invent things like “you’re holding it wrong” or intentionally misunderstanding batterygate into “they slowed down phones to sell you a new one”. They already do other crappy things, inventing fake ones isn’t necessary.
To be fair, this was just the keynote -- details will be revealed in the sessions.
They repeated this so many times they've made it true.
Giving users control works for the slim percentage of power users. Most users will end up obliterated by scammers and other unsavory characters.
Perhaps there is a way to give control to today's users (that includes my non-technical mother) and still secure them against the myriad of online threats. If anyone knows of a paper or publication that addresses this, I'd love to read it.
That’s not 100% true, and where it is, there is a good reason, and pretty much every other store does it (being able to revoke malware)
I mean they have great PR, but in terms of privacy, they extract more information from you than google does.
Google is an ad company, they have a full model of what you like and dont like at different states of your life built.
What does Apple have that's even close?
they generate between $5-10B on ads alone a year now and more importantly that is one their fastest growing revenue segment .
Add the context of declining revenue from iPhone sales. That revenue and its potential will have enormous influence on decision making .
The thesis that Apple doesn’t have ads business so there is no use to collect the data is dead for 5years now
For Google, over 80% of their revenue comes from ads.
Apple's revenue is around 380 billion, 5-10 billion in ads is in the "other" category if you draw a pie chart of it... They make 30 billion just selling iPads - their worst selling product.
Apple can lose the ad category completely and they won't even notice it. If Google's ads go away because of privacy protections, the company will die.
There is reason why NVDIA, TSLA or stocks with growth[1] potential gets the P/E multiple that their peers do not or an blue chip traditional company can only dream of. The core of Apple revenue the biggest % chunk of iPhone sales is stagnant at best falling at worst. Services is their fastest growing revenue segment and already is ~$100 B of the $380B. Ads is a key component of that, 5 years back Ads was less < $1B, that is the important part.
Also margins matter, even at Apple where there is enormous margin for hardware, gross margins for services is going to be higher, that is simple economics of any software service cost of extra user or item is marginal. The $100B is worth lot more than equivalent $100B in iPhone, iPad sales where significant chunk will go to vendors.
Executives are compensated on stock performance, stock valuation depends on expected future growth a lot. Apple's own attempts and the billions invested to get into Auto, Healthcare, or Virtual Reality are a testament to that need to find new streams of revenue.
It would be naive to assume a fast growing business unit does not get outsized influence, any middle manager in a conglomerate would know how true this is.
A Disney theme park executive doing even 5x revenue as say the Disney+ one will not get the same influence, budgets,resources or respect or career paths.
[1] Expected Growth, doesn't have to be real,when it does not materialize then market will correct as is happening to an extent with TSLA.
Thats not what I was saying. I was saying that Apple extract more information than google does. I was not saying that Apple process it to make a persona out of you. Thats not the issue here. Apple is saying that they are a "Privacy first" company. To be that, you need to not be extracting data in the first place.
Yes, they make lots of noise about how they do lots of things on device. Thats great and to be encouraged. But Apple are still extracting your friend list, precise location, financial records, various biometrics, browsing and app history. ANd for sure, they need some of that data to provide services.
But whats the data life cycle? are they deleting it on time? who has access to it, what about when a new product wants to use it? how do they stop internal bad actors?
All I want you to do is imagine that Facebook has made iOS, and the iphone, and is now rolling out these features. They are saying the same things as Apple, do you trust them?
Do you believe what they say?
I don't want Apple to fail, I just want people to think critically about a very very large for profit company. Apple is not our friend, and we shouldn't be treating them like they are.
I believe (but could be wrong) they also treat that data in a way that prevents it from being accessed by anyone besides the user (see: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
Not saying you're wrong, I'm just curious what sources or info you're using to make that claim.
o who you message, when you message.
o Your locations (find my devices)
o your voice (siri)
o the location of your items (airtags)
o what you look at (App telemetry)
o What websites you visit (Safari telemetry)
o what you buy (Apple Pay)
o Who your with (location services, again)
o your facial biometrics (apple photos tags people with similar faces, something FAcebook got fined for)
o Who emails you, who you email
With these changes, you'll need to allow apple to process the contents of the messages that you send and receive. If you read their secuirity blog it has a lot of noise about E2E security, then admit that its not practical for things other than backups and messaging.
they then say they will strive to make userdata ephemeral in the apple private cloud.
I'm not saying that they will abuse it, I'm just saying that we should give apple the same level of scrutiny that we give people like Facebook.
Infact, personally I think we should use Facebook as the shitty stick to test data use for everyone.
You should look more into their security architecture if you’re curious about stuff like this. The way Secure Enclave, E2EE (including the Advanced Data Protection feature for all iCloud data), etc. The reality is that they use a huge range of privacy enhancing approaches to minimize what data has to leave your device and how it can be used. For example the biometrics you mention are never outside the Secure Enclave in the chip on your phone and nobody except you can access them unless they have your passcode. Things like running facial recognition on your photos library is handled locally on your device with no information going up to the cloud. FindMy is also architected in a fully E2E encrypted way.
You can browse their hundreds of pages of security and privacy documentation via the table of contents here to look up any specific service or functionality you want to know more about: https://support.apple.com/guide/security/welcome/web
Moreover, because apple has great PR, you don't hear about privacy breeches. Everyone seems to forget they made a super cheap and for a long time undetectable stalking service. Despite the warnings. (AirTag)
Had that been Facebook or Google, it would have been the end of the feature. They have improved the unauthorised tracking flow, but its really quite unreliable with ios, and really bad in android still.
> You should look more into their security architecture if you’re curious about stuff like this.
I have, and its a brilliant manifesto. I especially love the documentation on PCC.
but, its crammed full of implied actions that aren't the case For example: https://support.apple.com/en-gb/108756
> If you choose to enable Advanced Data Protection, the majority of your iCloud data – including iCloud Backup, Photos, Notes and more – is protected using end-to-end encryption.
Ok good, so its not much different to normal right?
> When you turn on Advanced Data Protection, access to your iCloud data on the web at iCloud.com is disabled
Which leads me to this:
> It seems like you think this means Apple somehow has access and stores all that information in their cloud and we just have to hope/trust that they don’t decide they want to poke around in it?
You're damn right I do. Its the same with Google, and Facebook. We have no real way of verifying that trust. People trust Apple, because they are great at PR. But are they actually good at privacy? We have no real way of finding out, because they also have really reactive lawyers.
and thats my point, we are basically here: https://www.reddit.com/r/comics/comments/11gxpcu/our_little_... but with apple.
1. On-device AI
2. AI using Apple's servers
3. AI using ChatGPT/OpenAI's services (and others in the future)
Number 1 will pass to number 2 if it thinks it requires the extra processing power, but number 3 will only be invoked with explicit user permission.
[Edit: As pointed out below, other providers will be coming eventually.]
This #2, so-called "Private Cloud Compute", is not the same as iCloud. And certainly not the same as sending queries to OpenAI.
Quoting:
“With Private Cloud Compute, Apple Intelligence can flex and scale its computational capacity and draw on larger, server-based models for more complex requests. These models run on servers powered by Apple silicon, providing a foundation that allows Apple to ensure that data is never retained or exposed.“
“Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection.”
“Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.”
iOS won't send any data to a PCC that isn't running a firmware that's been made public in their transparency logs and compute nodes have no way to be debugged in a way that exposes user data[1]
And at the end of the day, this is going to give the warrant holder a handful of requests from a specific user? Why wouldn't they use that same warrant to get onto the target's device directly and get that same data plus a ton more?
0: https://help.apple.com/pdf/security/en_US/apple-platform-sec... 1: https://security.apple.com/blog/private-cloud-compute/
Warrants to hack devices are a lot less common and generally harder to obtain. That's why police will send Google warrants for "give us info on every device who has been in a radius of x between y and z time".
I'm sure Apple did their very best to protect their users, but I don't think their very best is good enough to warrant this kind of trust. A "secure cloud" solution will also tempt future projects to use the cloud over local processing more, as cloud processing is now readily available. Apple's local processing is a major advantage over the competition but I doubt that'll stay that way if their cloud solution remains this integrated.
Your example indicates a situation where law enforcement does not know which device belongs to their suspect, if they even have one. That's a very different scenario from a targeted "tell us the requests belonging to this individual".
Warrants to search a device are extremely common place, otherwise the likes of Grayshift and Cellebrite would not be around.
From a threat modeling perspective compromising PCC is high risk (Apple's not just going to comply and the fight will be very public, see the FBI San Bernardino fight) , high effort (Long protracted court case), low reward (I only see requests that are going to get shipped off to the cloud). If I were law enforcement I'd explore every other avenue available to me before I go down that particular rabbit hole which is exactly what this design is intended to achieve.
At least with gmail and chat clients etc. things are somewhat put in compartments, one of the services might screw up and do something with your emails but your Messenger or WhatsApp chats are not affected by that, or vice versa. But when you bake it into the OS (laptop or phone) you're IMHO taking a much bigger risk, no matter what the intentions are.
Whereas with apps like Gmail and WhatsApp on an iPhone, you must trust Google and Meta in addition to Apple, not in place of Apple. It doesn't distribute who you trust, it multiplies it.
In essence, what you're doing is training an assistant to learn all of your details of your life and habits and the question is if that "assistant" is really secure forever. Taken to the extreme, the assistant becomes a sort of "backup" of yourself. But yeah it's an individual decision with the pro's and con's of this.
Yeah, pretty much
Also, your grandma might not setup a VM, but it sounds like the off-device processing is essentially stateless, or at most might have a very lightweight session. It seems like the kind of thing one person could setup for their family (with the same tamper-proof signatures, plus physical security), or provide a privacy focused appliance for anyone to just plug into a wall, if they wanted to.
There have been many cases recently of compromised code being in the wild for quite some time and then only known about by accident.
Auditing helps the company writing it, the auditors are usually experts in breaking stuff in fun ways, and it's good for business - we could slap "code security audited by XXX" on the sales pitch.
You're on the precipice of discovering the problem of incentives when it comes to audits.
Audits are good, but they're inferior to source available
I have no idea what code is running on a server I can't access. I can't exactly go SSH into siri.apple.com and match checksums. Knowing Apple's control freak attitude, I very much doubt any researcher permitted to look at their servers is going to be very independent either.
Apple is just as privacy friendly as ChatGPT or Gemini. That's not necessarily a bad thing! AI requires feeding lots of data into the cloud, that's how it works. Trying to sell their service as anything more than that is disingenuous, though.
That's like... the whole point? You have some kind of hardware-based measured boot thing that can provide a cryptographic attestation that the code it's running is the same as the code that's been reviewed by an independent auditor. If the auditor confirms that the data isn't being stored, just processed and thrown away, that's almost as good as on-device compute for 99.999% of users. (On-device compute can also be backdoored, so you have to trust this even in the case that everything is local.)
The presentation was fairly detail-light so I don't know if this is actually what they're doing, but it's nice to see some effort in this direction.
E: I roughly agree with this comment (https://news.ycombinator.com/item?id=40638740) later down the thread -- what exactly the auditors are verifying is the key important bit.
Apple has developed some of the strongest anti tampering compute on existence to prevent people from running code they don't want on hardware they produce. However, that protection is pretty useless when it comes to protection from Apple. They have the means to bypass any layer of protection they've built into their hardware.
It all depends on what kind of auditing Apple will allow. If Apple allows anyone to run this stuff on any Mac, with source or at least symbols available, I'll give it the benefit of the doubt. If Apple comes up with NDAs and limited access, I won't trust them at all.
Signal has the added benefit that it doesn't need to read what's in the messages you send. It needs some very basic routing information and the rest can be encrypted end to end. With AI stuff, the contents need to be decrypted in the cloud, so the end-to-end protections don't apply.
Apple's thrown stones come back to hunt their glass ceiling.
Secure enclave stuff can be used to build a trust relationship if it's designed well, but Apple is the party hosting the service and the one burning the private keys into the chip.
Once the data is out of your possession it's out of your control.
Drow "nation state is after me" from the threat model and you'll be a lot happier.
- TLA agency deploys scarce zero days or field ops because you're particularly interesting, vs..
- TLA agency has everything about you in a dragnet, and low level cop in small town searches your data for a laugh because they know you, and leaks it back to your social circle or uses it for commercial advantage
The history of tech is the history of falling costs with mass production. Expensive TLA surveillance tech for nation states can become broadly accessible, e.g. through-wall WiFi radar sensing sold to millions via IEEE 802.11bf WiFi 7 Sensing in NPU/AI PCs [1], or USB implant cables [2] with a few zeros lopped off the TLA price.
Instead of adversary motives, threat models can be based on adversary costs.
As adversary costs fall, threat models need to evolve.
[1] https://www.technologyreview.com/2024/02/27/1088154/wifi-sen...
Actually, once your e2e key that encrypts your data is out of your possession, it's out of your control.
Over the past decade it's become commercially feasible to be NSL-proof.
But in summary 1. The servers run on Apple Silicon hardware which have fancier security features 2. Software is open source 3. iOS verifies that the server is actually running that open source software before talking to it 4. This is insane privacy for AI
The security features are meant to prevent the server operator (Apple) from being able to access data that's being processed in their farm. The idea is that with that + E2E encryption, it should be way closer to on-device processing in terms of privacy and security
Here's also a great summary from Matthew Green: https://x.com/matthew_d_green/status/1800291897245835616
Furthermore how private do you think Siri is? Their privacy policy explicitly states they send transcripts of what you say to them. That cannot be disabled.
Ten minutes ago i set up a new Apple device and it not only asked me if I wanted to enable Siri, but whether I wanted to contribute audio clips to improve it. What, exactly, cannot be disabled?
"When you use Siri, your device will indicate in Siri Settings if the things you say are processed on your device and not sent to Siri servers. Otherwise, your voice inputs are sent to and processed on Siri servers. In all cases, transcripts of your interactions will be sent to Apple to process your requests."
It's pretty clear and not in dispute that your transcripts are always sent to Apple.
Nonetheless, Siri is trivial to disable altogether.
Those that won’t use those won’t use this either.
Apple has taken a markedly different approach, and has done so for years - E2E encryption, hashing and segmenting routes on maps, Secure Enclave, etc.
While I think it’s perfectly reasonable to “trust no one”, and I fully agree that there may be things we don’t know, I don’t think there it’s reasonable to put Apple on the same (exceedingly low) level as Google.
Apples motives are different, selling premium hardware and MORE premium hardware, they wouldn't dare fuck that up, their nestegg is hardware and slowly more services tied to said hardware ecosystem (icloud subs, tv subs etc). Hence the privacy makes sense to pull people into the ecosystem.
Google... everything google does even phones, is for more data gathering for their advertising revenue.
Apple's business model is to entice people into a walled garden ecosystem where they buy lots of expensive hardware sold on high margins. They don't need user data to make this work, which is why they can more comfortably push features like end-to-end and no-knowledge encryption.
I think also a bunch of people will trust Apple’s server more (but not completely) than other third parties.
There is no way to access it without destroying the chip, and even in this scenario it will be extremely expensive and imo unlikely, certainly impossible at scale. Some scientists may be able to do it once in a lab.
Now, is any of that actually true?
I am worried about the reliability, if you are relying on it giving important information without checking the source (like a flight) than that could lead to some bad situations.
That being said, the polish and actual usefulness of these features is really interesting. It may not have some of the flashiest things being thrown around but the things shown are actually useful things.
Glad that ChatGPT is optional each time Siri thinks it would be useful.
My only big question is, can I disable any online component and what does that mean if something can't be processed locally?
I also have to wonder, given their talk about the servers running the same chips. Is it just that the models can't run locally or is it possibly context related? I am not seeing anything if it is entire features or just some requests.
I wonder if that implies that over time different hardware will run different levels of requests locally vs the cloud.
Notice what's missing? A photorealistic style.
It seems like a good move on their part. I'm not that wild about the cartoon-ification of everything with more memes and more emojis, but at least it's obviously made-up; this is oriented toward "fun" stuff. A lot of kids will like it. Adults, too.
There's still going to be controversy because people will still generate things in really poor taste, but it lowers the stakes.
I think it shows the context for the information it presents. Like the messages, events and other stuff. So you can quickly check if the answer is correct. So it's more about semantic search, but with a more flexible text describing the result.
I bet that’s going to be the case. I think they added the servers as a stop-gap out of necessity, but what they see as the ideal situation is the time when they can turn those off because all devices they sell have been able to run everything locally for X amount of time.
If you have a M6 MacBook/ipad pro it’ll run your AI queries there if you’re on the same network in two-four years.
I am worried at the infinite ability of teenagers to hack around the guardrails and generate some probably not safe for school images for the next 2 years while apple figures out how to get them under control.
This can be never. LLMs fail fast as you move away from high resourced languages.
They said the models can scale to "private cloud compute" based on Apple Silicon which will be ensured by your device to run "publicly verifiable software" in order to guarantee no misuse of your data.
I wonder if their server-side code will be open-source? That'd be positively surprising. Curious to see how this evolves.
Anyway, overall looks really really cool. If it works as marketed, then it will be an easy "shut up and take my money". Siri seems to finally be becoming what it was meant to be (I wonder if they're piggy-backing on top of the Shortcuts Actions catalogue to have a wide array of possible actions right away), and the image and emoji generation features that integrate with Apple Photos and other parts of the system look _really_ cool.
It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.
The way it works is that when the on-device model decides "this could better be answered by chatgpt" then it will ask you if it should use that. They described it in a way which seems to indicate that it will be pluggable for other models too over time. Notably, ChatGPT 4o will be available for free without creating an OpenAI account.
To me that implies 4o by default, but I guess we'll find out.
- on-device models, which will power any tasks it's able to, including summarisation and conversation with Siri
- private compute models (still controlled by apple), for when it wants to do something bigger, that requires more compute
- external LLM APIs (only chatgpt for now), for when the above decide that it would be better for the given prompt, but always asks the user for confirmation
Being best in class for on-device AI is a huge market opportunity. Trying to do it all would be dumb like launching Safari without a google search homepage partnership.
Apple can focus on what they are good at which is on device stuff and blending AI into their whole UX across the platform, without compromising privacy. And then taking advantage of a market leader for anything requiring large external server farms and data being sent across the wire for internet access, like AI search queries.
If the system doesn't say "I'm gonna phone a friend to get an answer for this", it's going to stay either 100% local or at worst 100% within Apple Intelligence, which is audited to be completely private.
So if you're asking for a recipe for banana bread, going to ChatGPT is fine. Sending more personal information might not be.
You know what it's really reminiscent of? The EU cookies legislation. Do you like clicking "Yes I accept cookies" every single time you go to a new website? It enhances your privacy, after all.
Think about it: even a government agency isn't able to produce a simple static web page without having to display that cookie banner. If their definitions of "bad cookies that require a banner" is so wide that even they can't work around it to correctly inform citizens, without collecting any private data, displaying any ad or reselling anything; maybe the definition is wrong.
For all intent and purposes, there is a cookie banner law.
Someone somewhere figured out that it might be a good idea and others just copied it.
I want to use perplexity from siri too!
More specifically "is openai seeing my personal data or questions?" A: "No, unless you say it's okay to talk to OpenAI everything happens either on your iPhone or in Private Compute"
They said the models can scale to "private cloud compute" based on Apple Silicon which will be ensured by your device to run "publicly verifiable software" in order to guarantee no misuse of your data.
I wonder if their server-side code will be open-source? That'd be positively surprising. Curious to see how this evolves.
Anyway, overall looks really really cool. If it works as marketed, then it will be an easy "shut up and take my money". Siri seems to finally be becoming what it was meant to be (I wonder if they're piggy-backing on top of the Shortcuts Actions catalogue to have a wide array of possible actions right away), and the image and emoji generation features that integrate with Apple Photos and other parts of the system look _really_ cool.
It seems like it will require M1+ on Macs/iPads, or an iPhone 15 Pro.
No, but they said it'll be available for audit by independent experts.
They specifically stated it required iPhone 15 Pro or higher and anything with a m1 or higher.
iphone 15 Pro 8 GB RAM (https://www.gsmarena.com/apple_iphone_15_pro-12557.php)
iphone 15 6 GB Ram (https://www.gsmarena.com/apple_iphone_15-12559.php)
https://en.wikipedia.org/wiki/Apple_A17
Per the comparison table on that page, the "Neural Engine" has double the performance in the A17 compared to the A16, which could be the critical differentiator.
I can follow workout instructions in english, as can my kids. But Apple has decided that Apple One is more shit over here for some reason.
And my concern isn’t from a privacy perspective, just a “I want less things cluttering my screen” perspective.
So far though it looks like it’s decent at being opt-in in nature. So that’s all good.
Siri can now be that assistant, that summarises or does things, that would instead make you go through various screens or apps. Feels like it rescues clutter, not increases it to me imo
Aside: When the presenter showed the demo of her asking Siri to figure out the airport arrival time and then gloat it "would have taken minutes" to do on her own... I sat there and just felt so so strongly that I don't want to optimize out every possible opportunity to think or work out a problem in my life for the sake of "keeping on top of my inbox".
I understand value of the tools. But I think overall nothing about them feels very worth showing even more menus for me to tick through to make the magic statistical model spit out the tone of words I want... when I could have just sat there and thought about my words and the actual, real, human person I'm talking to, and rephrase my email by hand.
Completely agree. My first thought on seeing this stuff is that it suggests we, as an industry, have failed to create software that fulfils users’ needs, given we’re effectively talking about using another computer to automate using our computers.
My second thought is that it’s only a matter of time before AI starts pushing profitable interests just like seemingly all other software does. How long before you ask some AI tool how to achieve something and it starts pitching you on CloudService+ for only 4.99 per month?
And good news! You can clear your homescreen too fully from all icons now =)
I'm sure that there will be lots of genuinely useful things that come out of this AI explosion, but can't help but be a bit saddened by what we're losing along the way.
Of course I can choose not to use all of these tools and choose to spend time with folks of a similar mindset. But in the working world it is going to be increasingly impossible to avoid entirely.
It seems like this is what Rabbit's LAM was supposed to be. It is interesting to see it work, and I wonder how it will work in practice. I'm not sold on using voice for interacting with things still.
Image Generation is gross, I really didn't want this. I am not excited to start seeing how many horrible AI images I'm going to get sent.
I like Semantic Search in my photos.
This does seem like the typical Apple polish.I think this might be the first main stream application of Gen AI that I can see catching on.
This does look like a real-world implementation of the concept promoted by Rabbit. Apple already had the App Intents API mechanism in place to give them the hooks into the apps. They have also publish articles about their Ferret UI LLM that can look at an app's UI and figure out how to interact with it, if there are no available intents. This is pretty exciting.
At https://openadapt.ai we rely on users to demonstrate tasks, then have the model analyze these demonstrations in order to automate them. The goal is similar to Rabbit's "teach mode", except it's desktop only and open source.
1. Yes, App Intents feel like the best version of a LAM we'll ever get. With each developer motivated to log their own actions for Siri to hook into, it seems like a solid experience.
2. Image Gen - yeah, they're pretty nasty, BUT their focus on a "emoji generator" is great. Whatever model they made is surprisingly good at that. It's really niche but really fun. The lifelessness of the generations doesn't matter so much here.
3. Polish - there's so much polish, I'm amazed. Across the board, they've given the "Intelligence" features a unique and impressive look.
Apple may have actually nailed this one.
Edit: except for image generation. That one sucks
Say more? it's just the media thing about intellectual property rights?
Nor was their issue that the images were in the style of Ansel Adams, but rather that they used his name. That's not a copyright issue. It's a trademark one.
1. Imitation of artists' styles (Make an image in the style of...). The restricted styles are pretty generic, so harder to pin down as being a copy of or imitation of some artist.
2. It's cartoony, which avoids photorealistic but fake images being generated of real people.
Followed by maybe search engines once it gets to a certain level of quality (which we seem to be a bit far from).
Then either desktop or home(alexa).
I'll be curious to see if Apple gets caught with some surprise or scandal from unexpected behavior in their generative models. I expect that the rollout will be smoother than Bing or Google's first attempts, but I don't think even Apple will prove capable of identifying and mitigating all possible edge cases.
I noticed during the livestream that the re-write feature generated some pretty bad text, with all the hallmarks of gen AI content. That's not a good sign for this feature, especially considering they thought it was good enough to include in the presentation.
This is the curse of small-language models. They are better suited for constrained output like categorization. Using them for email generation takes... well, that takes courage.
Thankfully, there is an option to use GPT-4o for many of the text generation tasks.
The output is almost worse, dripping in a passive aggressive tone.
- HN Guidelines.
Assume any part, and assume none of your business.
That said, Apple generally gives people very fine-grained controls over what software features they want enabled, at least compared to other closed-source software vendors.
My question "what part and why" was intended to open up a discussion about privacy in regards to Apple's AI. But if your answer is simply "none of your business", then my answer to the question "can I turn it off" is simply "nobody has any way of knowing yet." Neither of those answers are great discussion openers.
Your username seems to check out.
Period.
Reason: again, nobody's business.
If you don't get this then, a) you're not in a high-risk group for discrimination, or b) you've never been subjected systemic polices designed to keep you "in line".
And the sentiment behind your comment seems very reasonable, reading past its non-sequitur tone.
However in this case I'm also concerned about needless power consumption. Especially on battery.
And knowing Apple, the RAG-stuff will be done overnight when the phone is charging, not during use.
I'll be disabling everything I can. I don't use Siri or anything of that sort as well.
This is scary stuff that should not be happening on anything that is closed-source and unaudited publicly. The pervasiveness of surveillance it enables is astounding.
Should we start auditing wallets next? People's driver licenses are sitting insecure and unencrypted in their pockets! Anyone could grab it!
Security is important, but being alarmist toward thoughtful progress hurts everyone.
But a lot of the other features actually seem useful without feeling shoehorned in. At least so far.
I am hoping that I can turn off the ability to use a server while keeping local processing, but curious what that would actually look like. Would it just say "sorry can't do that" or something? Is it that there is too much context and it can't happen locally or entire features that only work in the cloud?
Edit:
OK how they handle the ChatGPT integration I am happy with. Asks me each time if I want to do it.
However... using recipe generation as an example... is a choice.
but my biggest concern is that I think they look tacky, and putting it right in the messaging apps is gonna be ... irritating.
Regardless, even if it wasn't ChatGPT, given the recent problems I would not have used that as one example given that regardless of who it came from.
I don't even look at this stuff any more and see the upside to any of it. AI went from, "This is kinda cool and quaint." to "You NEED this in every single aspect of your life, whether you want it or not." AI has become so pervasive and intrusive, I stopped seeing the benefits of any of this.
All of these "make your life easier" features really show that no tech is making our lives simpler. Task creation is maybe easier but task completion doesn't seem to be in the cards. "Hey siri, summarize my daughters play and let me know when it is and how to get there" shows there's something fundamentally missing in the way we're living.
- So far, the quality has been very hit or miss, versus places where I intentionally invoke generative AI.
- I'm not ready to relinquish my critical thinking to AI, both from a general perspective, and also because it's developed by big companies who may have different values and interests than me.
- It feels like they're trying to get me to "just take a taste", like a bunch of pushers.
- I just want more/better of the right type of features, not a bunch of inscrutable magic.
It's not unlike the first spreadsheets. Sure, they will some day benefit the entire finance department, but at the beginning only people who loved technology for the sake of technology learned enough about them to make them useful in daily life.
Apple has always been great at broadening the audience of who could use personal computing. We will see if it works with AI.
I think it remains to be seen how broadly useful the current gen of AI tech can be, and who it can be useful for. We are in early days, and what emerges in 5-10 years as the answer is obvious to almost no one right now.
This barely scratches the surface on how much AI integration there's going to be in the typical life of someone in the 2030s.
After a random update my bank's app has received AI assistant out of blue to supposedly help their clients.
At first I was interested how these algorithms could enhance apps and services but now, this does indeed feels like shoving AI everywhere it's possible even if it doesn't makes any sense; as if companies are trying to shake a rattle over your baby's cradle to entertain it.
Aside above, I was hoping that after this WWDC Siri would get more languages so I could finally give it instructions in my native language and make it actually more useful. But instead there are generated emoticons coming (I wonder if people even remember that word). I guess chasing the hottest trends seems more important for Apple.
I'll go back to a dumbphone before I feed the AI
it is now almost exclusively anti-AI, which funnily enough I don't mind them training on
My e-mail, my documents, my photos, my browsing, my movement. The first step for me was setting up Syncthing and it was much smoother than I initially thought. Many steps to go.
I can’t help but think it’ll get worse with AI
Mostly I see no point in things like email self hosting if half my contacts are on Gmail and the other half on Microsoft.
My suggestion (as someone that tried to escape for some time) is to build a private system for yourself (using private OS and networks) and use a common system to interface with everyone else.
This hardly increases security, and does not increase privacy at all. If anything it provides Apple with an excuse that they will throw at you when you ask "why can't I configure my iOS device to use my servers instead of yours?" , which is one of the few ways to actually increase privacy.
This type of BS should be enough to realize that all this talk of "privacy" is just for the show, but alas...
Also a good thread from Matthew Green, a privacy/cryptography wonk, on the subject: https://x.com/matthew_d_green/status/1800291897245835616?s=4...
> ChatGPT will come to iOS 18, iPadOS 18, and macOS Sequoia later this year, powered by GPT-4o.
What's a promise from Sam Altman worth, again?
So I think it's worth as much as Apple is willing to spend enforcing it, which I imagine would be quite a bit.
I also don't really care, but it's understandable why some people do.
do you have a source on this or are you just assuming?
Of course there is some agreement…
How much would you be willing to bet, on a statement like this? I love a sporting chance.
Can you match it the other way around? :)
But I guess the list of grievances could be longer:
https://garymarcus.substack.com/p/what-should-we-learn-from-...
If OpenAI actually went against that, Apple would unleash the mother of all lawsuits.
Their brand is equally about creativity as it is about privacy. They wouldn't chop off one arm to keep the other, but that's what you're suggesting they should have done.
And yes, I know generative AI could be seen specifically as anti-creativity, but I personally don't think it is. It can help one be creative.
On the contrary, I'm shocked over the last few months how "on device" on a Macbook Pro or Mac Studio competes plausibly with last year's early GPT-4, leveraging Llama 3 70b or Qwen2 72b.
There are surprisingly few things you "need" 128GB of so-called "unified RAM" for, but with M-series processors and the memory bandwidth, this is a use case that shines.
From this thread covering performance of llama.cpp on Apple Silicon M-series …
https://github.com/ggerganov/llama.cpp/discussions/4167
… "Buy as much memory as you can afford would be my bottom line!"
And whilst the LLM's running locally are cool, they're still pretty damn slow compared to Chat-GPT, or Meta's LLM.
If I want some help coding or ideas about playlists, Gemini and ChatGPT are fine.
But when I'm writing a novel about an assassin with an AI assistant and the public model keeps admonishing me that killing people is bad and he should seek help for his tendencies, it's a LOT faster to just use an uncensored local LLM.
Or when I want to create some people faces for my RPG campaign and the online generator keeps telling me my 200+ word prompt is VERBOTEN. And finally I figure out that "nude lipstick" is somehow bad.
Again, it's faster to feed all this to a local model and just get it done overnight than fight against puritanised AIs.
They will go off device without asking you, they just ask if you want to use ChatGPT.
In 2024 they don't have to wiretap anything. It's all being sent directly to the cloud. Their job has been done for them.
There are big risks to having a cloud digital footprint, yet clouds can be used “somewhat securely” with encryption depending on your personal threat model.
Also, it’s not fair to compare clouds to wiretapping. Unless you are implying that Apple’s infrastructure is backdoored without their knowledge? One does not simply walk into an Apple datacenter and retrieve user data without questions asked. Legal process is required, and Apple’s legal team has one the stronger track records of standing up against broad requests.
So by default, user data is not protected.
With ADP if your mom loses her encryption keys, it's all gone. Forever. Permanently.
And of course it's Apple's fault somehow. That's why it's not the default.
Yes, perhaps broad dragnet type of might be scoffed down by some judges (outside of Patriot act FISA judges ofc)
I would warn you about the general E2E encryption and encrypted at rest claims. They are in-fact correct, but perhaps misleading? At some point, for most, the data does get decrypted server-side - cue the famous ":-)"
Edit:
This line from the keynote is also suspect: "And just like your iPhone , independent experts can inspect the code that runs on the servers to verify this privacy promise.".
First off, do "independent experts" actually have access to closed source iOS code? If so we already have evidence that this is sufficient (https://www.macrumors.com/2024/05/15/ios-17-5-bug-deleted-ph...).
The actual standard for privacy and security is open source software, anything short of that is just marketing buzz. Every company has an incentive to not leak data, but data leaks still happen.
>> Independent experts can inspect the code that runs on Apple silicon servers to verify privacy, and Private Cloud Compute cryptographically ensures that iPhone, iPad, and Mac do not talk to a server unless its software has been publicly logged for inspection. Apple Intelligence with Private Cloud Compute sets a new standard for privacy in AI, unlocking intelligence users can trust.
If Apple says it, do they have any disincentives to deliver? Not really. Their ad business is still relatively small, and already architected around privacy.
If someone who derives most of their revenue from targeted ads says it? Yes. Implementing it directly negatively impacts their primary revenue stream.
IMHO, the strategic genius of Apple's "privacy" positioning has been that it doesn't matter to them. It might make things more inconvenient technically, but it doesn't impact their revenue model, in stark contrast to their competitors.
It could be a very large practical increase in assurance, but it's not what they're saying it is.
It's a decent minimum bar that other companies should also be aiming for.
Edit: ref https://security.apple.com/blog/private-cloud-compute/
However, to me, the off-device bit they showed today (user consent on every request) represents a strategic hedge as a $3T company.
They are likely buying time and trying to prevent people from switching to other ecosystems while their teams catch up with the tech and find a way to do this all in the “Apple Way”.
That does not necessarily mean better, just different. I reserve judgment until I see how it shakes out.
but if I don't like this feature, and can't turn it off, I guess it's sadly back to Linux on my personal laptops.
If you don't specifically activate it, it won't do shit.
Of course we've only seen examples from an overly produced hype/propaganda video, but it looks to me of yet another example of Apple taking products and making them usable to the masses
And I agree with their goal here:
Dall-e how much now? it can do what? text to image? can it do emojis? vs Genmoji: oh .. it can do emojis. nice!
same with "Rewrite" and so on.
This is a bit obsequious to Apple. I find it hard to give a cogent argument of how ChatGPT is not "usable to the masses" at this point (and being -used- by the masses).
You can't just log in to ChatGPT and ask it what was on your calendar 2 weeks ago.
Who cares how your flight information shows up at the right time in the right place? the only thing that should matter is that it does.
People already carry around a device with a GPS, camera, and microphone that has access to most of their intimate and personal communications and finances. Adding AI capabilities doesn't seem like a bridge too far, that's for sure.
Private Cloud - Isn't this what Amazon did with their tablet - Fire? What is the difference with Apple Private Cloud?
https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...
> Sherlocked as a term
> The phenomenon of Apple releasing a feature that supplants or obviates third-party software is so well known that being Sherlocked has become an accepted term used within the Mac and iOS developer community.[2][3][4]
I switched to Apple Passwords and have been using the official Chrome extension for a few months. It's not as seamless as some of the password manager extensions, but has been working well enough.
https://chromewebstore.google.com/detail/icloud-passwords/pe...
There's a term for that, it's called being Sherlocked: https://www.howtogeek.com/297651/what-does-it-mean-when-a-co...
I don't use Grammarly, really, but I think at least that one is more automatic?
Auto transcripts for calls (with permission) is another feature I really liked.
I was a little surprised to see/hear no mention of inaccuracies, but for ChatGPT they did show the "Check for important facts" notice.
The only thing Humane was trying to do was scam their investors. Let us never speak of them again.
I'm not going into cycling/running and talking .. that's just not how things work when you need to breathe.
Driving and talking to a phone to then have it recite back to you 10 minutes of details you can just glance at but would be dangerous to?
If I'm relaxing on a couch .. i'm using a device. And please don't come back with "play me a chill song" as a fancy use case.
What I'm trying to say is that voice is not it and the only other kind of interaction I'm looking forward to see evolve is neuralink-style. In the sense that it needs to be wireless / non-invasive for mass adoption. That's it.
- siri text chat now on the lock screen
- incoming a billion developer app functions/APIs
- better notifications
- can make network requests
Why even open any other app?
This was my first thought when I saw Rabbit r1 - will all of us become backend developers just glueing various API between legacy services and LLMs? Today seems like another step in that direction.
You open your phone, it just shovels content. And it does absolutely nothing but optimize on addiction.
No apps, only masters.
When I ask Siri Pro what I'm doing on the weekend, she plans a dinner with a mix of friends and compatible strangers. Any restaurant is fine: the food is going to be personalized anyway.
>> - incoming a billion developer app functions/APIs
That would be cool, but the App Intents API is severely crippled. Only a few hardcoded use cases are supported.So any _real agent_ which has full access to all Apps can still blow Siri out of the water.
Your phone won't do anything else. For 99% of people, they pick up their phone, AI will just decide what they want to see. And most will accept it.
Someday everyone in the room will all pick up their phones when they all ring at once. It will be some emotional trigger like a live feed from a school shooting. Everyone in the room will start screaming at the totally different experiences they're being presented. Evil liberals, clueless law enforcement, product placement being shown over the shooter's gun. You'll sit horrified because you returned to a dumbphone to escape.
That will be the reality if this AI assistant stuff isn't checked hard now. AI is getting better at addiction an order of magnitude faster than it's getting better at actual tasks.
(Sharing because I had trouble finding it).
I'm curious to try some of the Siri integrations - though I hope Siri retains a 'dumb mode' for simple tasks.
But I think Apple is going to limit iPhones from doing something like that to boost sales of the 15 Pro and the future gens.
Better yet, no more dealing with overpriced subscriptions or programs that do not respect user privacy.
Kudos to the Apple software team making useful stuff powered by machine learning and AI!
By the end of the year maybe 1% of the content you interact with will be human made.
Even now in HN maybe 20-30% of the comments are generated by various transformers, but it seems every input box on every OS now has a context aware 'generate' button, so I suspect it will be way more in few months.
The Eternal September is coming. (and by ironic coincidence it might actually be in September haha)
I don't want generative AI in my phone. I want someone, or something to book a meeting with my family doctor, the head of my son's future primary school, etc. I don't need AI to do that. I need the other industries (medical/government/education) to wake up and let us automate them.
Do you know that my family doctor ONLY take calls? Like in the 1970s I guess? Do you know it takes hours to reach a government office, and they work maybe 6 hours a day? The whole world is f**ing sleeping, IT people, hey guys, slow down on killing yourselves.
AI is supposed to get rid of the chores, now it leaves us with the chores and take the creative part away. I don't need such AI.
But that’s in a perfect world.
Even to this day, post ChatGPT, I still can’t imagine how I would ever use this AI stuff in a way that really makes me want to use it. Maybe I am too simple of a mind?
Maybe the problem is in the way that it is presented. Too much all at once, with too many areas of where and how it can be used. Rewriting emails or changing invitations to be “poems” instead of text is exactly the type of cringe that companies want to push but it’s really just smoke and mirrors.
Companies telling you to use features that you wouldn’t otherwise need. If you look at the email that Apple rewrote in the keynote - the rewritten version was immediately distinguishable as robotic AI slop.
I'm sure there are usecases for this and the other GenAI features, but they seem more like mildly useful novelties than anything revolutionary.
There's risk to this as well. Making it easier to produce low value slop will probably lead to more of it and could actually make communication worse overall.
My job can be largely "AIed" away if such AI gets better and the company feeds internal code to it.
The first company to offer their models for offline use, preferably delivered in shipping container you plug in, with the ability to "fine tune" (or whatever tech) with all their internal stuffs, wins the money of everyone that has security/confidentiality requirements.
> the existing cloud tos and infrastructure fulfill all the legal and practical requirements
No, because the practical requirements are set by the users, not the TOS. Some companies, for the practical purposes of confidentiality and security, DO NOT want their information on third party servers [1].
Top third party LLM are usually behind an API, with things like retention, in those third party servers, for content policy/legal reasons. On premise, while being able to maintain content policy/legal retention on premise, for any needed retrospection (say after some violation threshold), will allow a bunch of $$$ to use their services.
[1] Companies That Have Banned ChatGPT: https://jaxon.ai/list-of-companies-that-have-banned-chatgpt/
edit: whelp, or go this route, and treat the cloud as completely hostile (which it should be, of course): https://news.ycombinator.com/item?id=40639606
Somebody still needs to make those decisions that it can't make well. And some of those decisions doesn't require seniority.
What happens is if you don’t need junior people, you eliminate the junior people, and just leave the senior people. The senior people then age out, and now you have no senior people either, because you eliminated all the junior people that would normally replace them.
This is exactly what has happened in traditional manufacturing.
In my mind Google is now a second class search like Bing. Kagi has savagely pwned Google.
You misspelled "ads"
You know I hadn't considered that and I think that's very insightful. Thank you
> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes
I actually think exactly what should happen is already happening. All the low hanging fruit from software should be completed over the next decades, then the industry will undergo massive consolidation and there will be a large number of skilled people not needed in the industry anymore and they can move onto the next set of low hanging hard(er) problems for humanity.
The last time our IRS wanted sth from me, they just e-mailed me, I replied and the issue was solved in 5 minutes.
Oh, and you don’t need any paper ids within the country - driver license, car registration and official citizen id are apps on your phone, and if you don’t have your phone when say police catches you, you give them your data and they check it with their database and with your photo to confirm.
Lol, that will never happen in the USA. We have companies like Intuit actively lobbying against making things easy because their entire business is claiming to deal with the complexity for you.
On the upside, they are removing the requirements to change plates when you buy a used car, so there’s that.
In essense, we've saved 50 lives a year by avoiding certain mistakes with better record keeping and killed 5000 since the medical queues are too long due to busy doctors so people don't bother getting help in time.
Doctors have exams, residencies, and limited licenses to give out to protect their industry. Meanwhile, tech companies will give an engineering job to someone who took a 4 month bootcamp.
And despite that it's still your family doctor.
I fully agree with your vision. It's obvious once laid out in words and it was a very insightful comment. But the incentives are not there for other industries to automate themselves.
And government offices don't even care to begin with, you have no other choice.
If someone can do that more productively with Gen AI, do you care?
Google has a few different features to handle making calls on your behalf and navigating phone menus and holds.
The funny thing is, these auto-callers don't even need to be successful. They just need to become common enough for restaurants and doctors to get annoyed to the point where they finally bring their processes to the 21st century.
The only thing I'd add: I don't think the responsibility for lack of automation is solely on these other industries. To develop this kind of automation, they need funds and IT experts, but (i) they don't have funds, especially in the US, since they aren't as well funded as IT industry, (ii) for the IT industry this kind of automation is boring, they prefer working on AI.
In my view, the overall issue is that capitalism is prone to herding and hype, and resulting suboptimal collective decision-making.
Basically all your information is sucked into a semantic system, and your apps are accessible to a LLM. All with closed models and trusted auditors.
Also funny how they pretend it's a great breakthrough when Siri was stupid-Siri for so many years and only now is lately coming to the AI party.
I really hope those gen-images won't be used to ridicule and bully other people. I think it's kind of daring to use images of known people without their consent, relying on the idea that you know them.
And it's dawning on me that we are already neck-deep in AI. It's flowing through every app and private information. They obliterate any privacy in this system, for the model.
Apple could use it to sell more devices - every new generation can have more RAM = more privacy. People will have real reason to buy a new phone more often.
https://forums.macrumors.com/threads/do-m4-ipad-pros-with-8g...
One reason could be future AI models.
I'm not sure if this has been verified independently, but interesting nonetheless and would make sense in an AI era.
Technically, the sentence could be read that experts inspect the code, and the client uses TLS and CA's to ensure it's only talking to those Apple servers. But that's pretty much the status quo and uninteresting.
It sounds like they're trying to say that somehow iPhone ensures that it's only talking to a server that's running audited code? That would be absolutely incredible (for more things than just running LLMs), but I can't really imagine how it would be implemented.
People do stuff that they claim implements it using trusted, "tamperproof" hardware.
What they're ignoring is that not all of the assurance is "cryptographic". Some of it comes from trusting that hardware. It's particularly annoying for that to get glossed over by a company that proposes to make the hardware.
You can also do it on a small scale using what the crypto types call "secure multiparty computation", but that has enormous performance limitations that would make it useless for any meaningful machine learning.
Combining the existing aspects of Siri with an LLM will, I expect, make it the best voice assistant available.
Hey siri play classical work X a randomly selected version starts playing
Hey siri play a different version same version keeps playing
Hey siri play song X some random song that might have one similar keyword in the lyrics starts playing
No play song X I don’t understand
Hey siri play the rangers game do you mean hockey or baseball?
Only one is playing today and I’ve favorited the baseball team and you always ask me this and I always answer baseball I can’t play that audio anyway
>car crashes off of bridge
(All sequences shortened by ~5 fewer tries at different wordings to get Siri to do what I want)
Other than that, using an LLM to handle cross-app functionality is music to my ears. That said, it's similar to what was originally promised with Siri etc. initially. I do believe this technology can do it good enough to be actually useful though.
Any data sent to 3rd party AI models requests your consent first.
The details will need to emerge on how they live up to this vision, but I think it's the best AI privacy model so far. I only wish they'd go further and release the Apple Intelligence models as open source.
- the cpu arch of the servers
- mentioning that you have to trust vendors not to keep your data, then announcing a cloud architecture where you have to trust them not to keep your data
- pushing the verifiability of the phone image, when all we ever cared about was what they sent to servers
- only "relevant" data is sent, which over time is everything, and since they never give anyone fine-grained control over anything, the llm will quietly determine what's relevant
- the mention that the data is encrypted, which of course it isn't, since they couldn't inference. They mean in flight, which hopefully _everything_ is, so it's irrelevant
They talk about "independent experts" a bit, which I remember being hindered (and sued?) by them rather than supported.
(TSMC for hardware, but it seems very un-Apple to be so dependent upon someone else for software capabilities like OpenAI)
Anyway it seems like a small subset of Siri queries utilize ChatGPT, the vast majority of functionality is performed either locally or with Apple's cloud apparently.
They were also pretty explicit about planning to support other backend AI providers in the future.
I believe they will just provide an interface in the future to plugin as a Backend AI provider to trusted parties (like the search engine) but will slowly build their own ChatGPT for more and more stuff.
Also, it seems that most of Siri's improved features will still work without it (though perhaps less well in same cases) -- and therefore Apple is not fully dependent on it.
That’s the only case I can think of where it’s an external tech you’re making requests to, usually it’s things like Rosetta made out of Apple IIRC but integrated internally
Don’t think that’s right. I think Rosetta was always made inside Apple.
https://en.wikipedia.org/wiki/Rosetta_(software)
Perhaps mixing it up because of Rosetta Stone?
https://www.cnet.com/tech/services-and-software/the-brains-b...
https://en.wikipedia.org/wiki/QuickTransit
Rosetta 2 may have been developed in-house, though. That bit isn’t yet clear.
This AI capability is integrated throughout the entire OS and Apps.
It's now part of the "fabric" of iOS.
Are they though?
I just setup ubuntu 24 for my son to play games and it's comparatively a very unpolished experience. I'm being very polite when I say that.
Gamers should absolutely be heading towards Nobara Linux (Fedora-based, created by GloriousEggroll of Proton-GE fame). Developers should be trying Omakub. Grandma and Grandpa should be using Linux Mint.
I also missed the part of the linked article where it says that my Mac is going to take a screenshot every few seconds and store it for three months.
EDIT: Yes, I'm wrong.
I think this further confirms that they think these AI services are a commodity that they don't feel a need to compete with for the time being.
Who is to say they aren't eventually going to replace the OpenAI integration with an in-house solution later down the line? Apple Maps was released in 2012, before that they relied on Google Maps.
> Notes can record and transcribe audio. When your recording is finished, Apple Intelligence automatically generates a summary. Recording and summaries coming to phone calls too.
So the functionality exists, maybe just not in the Voice Memos app?
For example:
Imagine a simple Amazon price tracker I have in my menu bar. I pick 5 products that I want to have their price tracked. I want that info to be exposed to Siri too. And then I can simply ask Siri a tricky question: "Hey Siri, check Amazon tracker app, and tell me if it's a good moment to buy that coffee machine." I'd even expect Siri to get me that data from my app and be able to send it over my email. It doesn't sound like rocket science.
In the end of the day, the average user doesn't like writing with a chatbot. The average user doesn't really like reading (as it could be overwhelming). But the average user could potentially like an assistant that offloads some basic tasks that are not mission critical.
By mission critical I mean asking the next best AI assistant to buy you a plane ticket.
Oh boy. Someone is going to make a lot of money in court finding people who did this.
If you’re somewhere where contracts have meaning, it’s a true statement.
Local LLMs + Apple Private Cloud LLMs + OpenAI LLMs. It’s like they can’t decide on one solution. Feels very not Apple.
The OpenAI integration also seems setup to data mine ChatGPT. They will have data that says Customer X requested question Q and got answer A from Siri, which he didn't like and went to ChatGPT instead, and got answer B, which he liked. Ok, there's a training set.
I'm always wrong in prediction and will be wrong here but I'd expect openAI is a bad spot long term, doesn't look like they have a product strong enough to withstand the platform builders really going in AI. Once Siri works well, you will never open ChatGPT again.
You can clearly see only people objecting to this new technological integration are the people who don't have a use case for it yet. I am a college student and I can immediately see how me and my friends will be using these features. All of us have ChatGPT installed and subscribed already. We need to write professionally to our professors in e-mail. A big task is to locate a document sent over various communication channels.
Now is the time you'll see people speaking to their devices on street. As an early adopter using the dumb Siri and ChatGPT voice chat far more than average person, it has always been weird to speak to your phone in public. Surely the normalization will follow the general availability soon after.
Who will trust in anything coming from anyone through electonic channels? Not me. Sooner start to talk to a teddy bear or a yellow rubber duck.
This is a bad and dangerous tendency that corporate biggheads piss up with glares and fanfares so the crowd get willing to drink with amaze.
The whole text is full of corporate bullsh*t, hollow and cloudy stock phrases from a thick pipe - instead of facts or data - a generative cloud computing server room could pour at us without a shread of thoughts.
If we assume AI will get even 3-4x better, at a certain point, I can't help but think this is the future of computing.
Most users on mobile won't even need to open other apps.
We really are headed for agents doing mostly everything for us.
Some general OS rethinking is overdue. Or maybe Android is ready for this? Haven't looked into it since they made development impossible via gradle.
Despite this negativity the announcements were better than expected, rebranding AI is bold and funny. But the future will belong to general Agents, not a hardcoded one as presented.
Of course LLMs will quickly show their bounds, like they can’t reason, etc - but for the everyday commands people might ask their phones this probably won’t matter much. The next generation will have a very different stance towards tech than we do.
That seems like an effective guardrail if you don't want people trying to pass off AI generated images as real.
The problem is, regardless how hard they try, I just don't believe their statements on their private AI cloud. Primarily because it's not under their control. If governments or courts want that data they are a stroke of the pen away from getting it. Apple just can't change that - which is why it is surprising for me to see them give up on local device computing.
Two orders of magnitude improvement in 6 months? Not possible. Have you heard of Moore's Law? Maybe in 20 years.
- Doesn't understand not every LLM needs to be ChatGPT
- Links Moores Law wikipedia
I give up.
"I fully expected them to announce a stupendous custom AI processor that would do state of the art LLMs entirely local."
State of the art LLM means GPT-4 or equivalent. Trillion+ parameters. You won't run that locally on an iPhone any time soon.
However, The GPT integration feels forced and even dare I say unnecessary. My guess is that they really are interested in the 4o voice model, and they're expecting openAI to remain the front runner in the ai race.