Private Cloud Compute: A new frontier for AI privacy in the cloud
617 points
1 month ago
| 48 comments
| security.apple.com
hexage1814
1 month ago
[-]
The thing with the cloud, and with anything that connects to the internet somehow... is that, unless it's open source and the servers are decentralized, you are always trusting SOMEONE. Sure, Apple might do their best to ensure nobody – but them – has access to your data... but Apple controls all the endpoints. It controls the updates your iPhone receives, it controls the servers where this happens. Like, there are so many opportunities for them to find out what you are doing. It reminds me of this article, "Web-based cryptography is always snake oil"

https://www.devever.net/~hl/webcrypto

And to be fair, this doesn't apply only to this case. Even the data you have stored locally, Apple could access if they wanted; they surely have the power to do it if they so wish or were ordered to by the government. They might have done it already and just didn't tell anyone, for obvious reasons. So I would argue the best you could say is that it's private in the sense that only Apple knows/can know what you are doing, rather than a larger number of entities.

Which, you could argue, is a win when the alternatives will leak your data to many more parties... But it's still far from being the unbreakable cryptography it's portrayed to be.

reply
noahtallen
1 month ago
[-]
I don’t think that’s completely fair. It basically puts Apple in the same bucket as Google or OpenAI. Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.

Apple, on the other hand, has made a pretty serious effort to ensure that no employee can access your data on these AI systems. That’s hugely different! They’re going as far as to severely restrict logging and observability and even building and designing their own chips and operating systems. And ensuring that clients will refuse to talk to non-audited systems.

Yes, we can’t take Apple’s word for it. But I think the third-party audits are a huge part of how we trust, and also verify, that this system will be private. I don’t think it’s fair to claim that “Apple knows what you’re doing.” That implies that someone, at some level at Apple, can at some point access the data sent from your device to this private cloud. That does not seem to be true.

I think another facet of trust here is that a rather big part of Apple’s business model is privacy. They’ve been very successful financially by creating products that generate money in other ways, and it’s very much not necessary or even a sound business idea for them to do something else.

While I think it’s fair to be skeptical about the claims without 3rd party verification, I don’t think it’s fair to say that Apple’s approach isn’t better for your data and privacy than OpenAI's or Google's. (Which I think is the broad implication — OpenAI tracks prompts for its own model training, not to resell, so it’s also “only OpenAI knows what you're doing.”)

reply
chem83
1 month ago
[-]
What makes you think that internal access control at Apple is any better than Google's, Microsoft's or OpenAI's? Google employees have long reported that you can't access user data with standard credentials, for example.

Also, what makes you think that Apple's investments in chip design and OS are superior to Google's? Google is known for OpenTitan and other in-house silicon projects. It's also been working on secure enclave tech (https://news.ycombinator.com/item?id=20265625), which has been open-source for years.

You're making unverifiable claims about Apple's actual implementation of the technical systems and policies it is marketing. Apple also sells ads (App Store, but other surfaces as well) and you don't have evidence that your AI data is not being used to target you. Conversely, not all user data is used by Google for ad targeting.

reply
Spooky23
1 month ago
[-]
It’s not about technology. It’s about their business.

Apple generally engineers their business so that there isn’t an incentive to violate those access controls or principles. That's not where the money is for them.

Behavior is always shaped by rewards and punishments. Positive reinforcement is always stronger.

reply
whynotminot
1 month ago
[-]
One hundred percent this.

All these conversations always end up boiling down to someone thinking they’re being clever for pointing out you have to trust a company at the end of the day when it comes to security and privacy.

Yes. Valid. So if you have to trust someone, doesn’t it make sense for it to be someone who has built protecting privacy into their core value proposition, versus a company that has baked violating your privacy into their value prop?

reply
talldayo
1 month ago
[-]
It's not about being clever, it's about being perceptive. Apple's cloud commitment has a history of being sketchy, whether it's their government alliance in China, the FIVE-EYES/PRISM membership in America, or their obsession with creating "private" experiences that rely on the benefit of the doubt.

Apple doesn't care about you, the individual. Your value as a singular customer is worthless. They do care about the whole; a whole that governments can threaten to exclude them from if they don't cooperate with domestic surveillance demands. How far off do you really think American iCloud is from China? If Apple is willing to backdoor one server, what's stopping them from backdooring them all? If they're willing to lie about notification security, what's stopping them from lying about server integrity too?

And worst of all, Apple markets security. That's it; you can't go and verify their claims outside the dinky little whitepapers they publish. You can't know for sure whether they have privacy violations baked into their system because you can't actually verify anything. You simply have to guess, and the best guess you can make is based on whatever Apple markets as "true" to you. In reality, we can do better with security and should probably expect more from one of the largest consumer technology brands in the world. Simply assuming that they aren't violating user privacy is an absurd thing to gamble your security on.

reply
39896880
1 month ago
[-]
If you are the target of a nation state level actor, you are already fucked. Most of us just don’t want our behavior sold to our insurance companies or whatever. Apple doesn’t do that because it would kill their brand for very little return.
reply
whynotminot
1 month ago
[-]
This is the part that’s always so humorous to me about the super tinfoil hat security crowd. They think they’re in the plot of Mr Robot or something. When for the most part, no one actually cares about them at all.

My dad fits into this category. So worried about being “tracked by the government.” He’s not a dissident. He’s not a journalist. Not a freedom fighter. Just deeply inconveniencing his kids with some of his tech choices.

But if these people were the targets of APTs, all the massive technology lifestyle changes they’ve made to supposedly protect themselves wouldn’t really matter.

reply
snaeker58
1 month ago
[-]
I don't really bother much about security either, but I hate that any argument against people caring about privacy is along the lines of „I have nothing to hide". Especially on the note of Apple, I remember when a dad was flagged as a pedophile because Apple found photos of his kid in his iCloud and their algorithm decided to get him raided. It's about control: when you hand your data over to 3rd parties of any kind you are giving up control, and one day that will bite you in the ass in some way. I am willing to take that risk, you too, but I still think not wanting that is totally valid. A type of angst which I find much more stupid is people being scared of AI taking over the world HAL x Terminator style…
reply
talldayo
1 month ago
[-]
So this whole thing is about you being angry that your dad doesn't use iMessage?

Sounds like your dad is the cool dude, and you're the tech-obsessed weirdo. Do you visit him often?

reply
whynotminot
1 month ago
[-]
Nah he uses iMessage. He’s not that obstinate.

He’s otherwise a good dude. Just makes some tech choices here and there as if he’s a former CIA agent on the run that sort of just make you chuckle and shake your head.

reply
talldayo
1 month ago
[-]
That's the convenient line of blind apathy they rely on, to sell iPhones. If people cared, they would object to owning an iPhone just from the material and labor cost of it... but they don't. It's a running joke that nobody cares what next year's iPhone looks like as long as the trade-in value is good. Apple couldn't kill their brand if they tried, past this point. People don't pay attention anyways.

Which is why it's good for us to demand more from capable companies. Apple looks good when they're scared, and the market wins when they're forced to compete in novel and interesting ways. Success breeds complacency, the rest is distant history.

reply
blackqueeriroh
1 month ago
[-]
> And worst off, Apple markets security. That's it; you can't go verify their veracity outside the dinky little whitepapers they publish. You can't know for sure if they have privacy violation baked-in to their system because you can't actually verify anything.

Oh, boy, but this is deeply false. Apple literally provides security researchers models of their devices to verify their security claims on their most important cash cow, the iPhone.

This is just an incredibly bold and verifiably false claim.

Wow.

reply
talldayo
1 month ago
[-]
Apple has tried suing researchers, before: https://www.theverge.com/2021/8/11/22620014/apple-corellium-...

On top of that, they fail to commit to iOS security on the level of AOSP and don't let researchers create hardened variants or custom patches. With actively-distributed exploits like Pegasus still being used, that's the sort of behavior that turns your userbase into a stationary target. Giving researchers iPhones is insultingly useless.

Apple vehemently opposes the concept of anyone securing their iPhone except them. They have a well-documented habit of ignoring vulnerabilities and offering zero compensation for the discovery of zero-days. Apple's ambivalence towards the security research sector is one of the only things they're known for among hacker communities. It is "verifiably false" in the sense that Apple spends quite a lot of money marketing the opposite of what they actually do in reality (not that you should be surprised by that).

reply
saagarjha
1 month ago
[-]
Can you explain to me how I might use such a device to verify the security properties of iBoot?
reply
Spooky23
1 month ago
[-]
You lose all credibility when you start yakking about FIVE EYES, etc. If you're the target of intelligence services, the advice you need is eloquently delivered in the movie "Goodfellas". That is: "Don't talk on the fucking phone."

American companies are subject to US law, full stop. Global technology companies have to balance interests to operate globally. China requires a local partner to operate services in the PRC, thus Apple and Microsoft (and others) operate with a business partner in that market.

From a business perspective, there's little or no incentive for Apple to take measures to collect information on you systematically - they do not monetize it and won't devote resources to its collection. However, not being responsive to government requests, demands, or orders for information will result in punitive action. So they comply.

No company cares about you. They don't love or hate you. There's no moral purity - the competitive platform is owned by a company that owns the advertising market and has a long history of extracting every sinew of data to create profiles that allow for maximally efficient ad delivery. Engaging in whataboutism isn't productive.

reply
TremendousJudge
1 month ago
[-]
That's a false dichotomy. You may have to trust someone, but that someone could be something other than an opaque for-profit company.
reply
whynotminot
1 month ago
[-]
Give me some examples of benevolent non profits that provide anywhere near the level of consumer services as a company like Apple.
reply
talldayo
1 month ago
[-]
I'll do better, here's a benevolent nonprofit that goes beyond what Apple provides to ensure top-notch consumer service: https://grapheneos.org/
reply
Teever
1 month ago
[-]
They're not trying to be clever, they're trying to point out the very important philosophy of maximizing self-reliance that so many people like you eschew.

How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?

What are you going to do if a major privacy scandal comes out with Apple at the center? If you wouldn't jump ship from Apple after a major privacy scandal then why does your input on this matter at all?

Some people feel that is inevitable so it's best to just rip that bandaid off now.

reply
whynotminot
1 month ago
[-]
I'm taking aim at the Google bros who try to raise these arguments to muddy the waters into a sort of false equivalence between Apple and Google.

If you're already using a dumb phone and eschewing modern software services, then I'm not really talking to you. Roll on brother/sister, you are living your ideals.

> How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?

The business incentives. Apple's brand and market valuation to some extent depend on being the secure and privacy-oriented company you and your family can trust. While Google's valuation and profit depend almost entirely on exploiting as much of your personal data as they can possibly get away with. The business models speak for themselves.

Does this guarantee privacy and security? Does Apple have a perfect track record here? No of course not, but again if these are my two smartphone choices it seems fairly clear to me.

reply
talldayo
1 month ago
[-]
> but again if these are my two smartphone choices it seems fairly clear to me.

If you really perceive this as a binary choice, I have no idea how you could conclude that iOS is more secure than the Android Open Source Project.

...of course, it's not just a choice between a Google-spyware phone or an Apple-spyware phone. Many people like to reduce it to that so they can rationalize whichever company they pick, but in reality you have many choices, including no smartphone at all. On Android's side, the open-source images have enabled rigorous cross-referencing of OS capabilities, as well as forks that reduce the already-limited attack surface. Apple has a long track record of letting zero-days fester in their inbox and failing to communicate promptly with security researchers, even for actively-exploited vulnerabilities.

It's not a "false equivalency" to highlight how Google, Apple and Microsoft all fold over like wet paper when the intelligence agencies come around. It's not a coincidence, either; all of those companies are enrolled in the NSA's domestic warrantless surveillance program.

reply
whynotminot
1 month ago
[-]
> but in reality you have many choices including no smartphone at all.

Oh come on man. This is why these conversations often aren’t even worth having.

reply
talldayo
1 month ago
[-]
I'm sorry, hopefully you come back to reality soon. I just went 2 weeks without touching a smartphone, I'm certain you can too.
reply
whynotminot
1 month ago
[-]
I think you’re the one not living in reality.

But, hey, at least the NSA won’t get ya.

reply
amplex1337
1 month ago
[-]
If you can live without a cellphone, you're not living in reality? Interesting argument.

I wonder how all those people did it in the 90s and 00s and before the age of smartphones.

reply
talldayo
1 month ago
[-]
In those dark derelict days, before the brilliant shining light of creation endowed man with the Subway App.
reply
whynotminot
1 month ago
[-]
Simple, everyone around them also didn’t have cellphones.

Reality is based in a context.

Or are we going to go to even more “get off my lawn” kind of places and talk about how ancient man survived quite fine without the internet?

reply
Teever
1 month ago
[-]
You know this is a growing trend with teens, right?

Like to eschew smartphones and just use basic feature phones and to interact in real physical settings and not digital ones.

There's a growing and warranted push back to pervasive and addictive digital technology.

reply
talldayo
1 month ago
[-]
Alright. Take care.
reply
cromwellian
1 month ago
[-]
I worked for Google for almost 14 years. Never did any engineer or product manager I know of ever suggest snooping on cloud customer data, especially for customers using Shielded VMs and Customer Managed Encryption Keys for attached storage (https://cloud.google.com/kubernetes-engine/docs/how-to/using...). I've never seen even the slightest hint of it, and the security people at Google are incredibly anal about the design and enforcement of these things.

This stuff is all designed so that even an employee with physical access to the machine would find it very difficult to get data. It's encrypted at rest with customer keys, stored in enclaves in volatile RAM. If you detached the computer or disk, you'd lose access. You'd have to perform an attack by somehow injecting code into the running system, but Shielded VMs/GKE instances make that very hard.

I am not a Google employee anymore, but this common tactic of just throwing out "oh, their business model contains an ad model, ergo they will sell anything and everything, and violate the contracts they sign to steal private data from your private cloud" is a bridge too far.

reply
robmccoll
1 month ago
[-]
That's becoming less the case. As Apple's advertising and services revenue grows and hardware sales slow, they have increasing incentive to mine your data the same as any company does. They already use quite a bit of data on the location and content personalization front. I would argue that Apple perhaps cares about protecting your data more from malicious third parties (again, like any company should - it's never good for FAANG when data leaks or is abused), but they are better at it (and definitely better at marketing it).
reply
theshrike79
1 month ago
[-]
> What makes you think that internal access control at Apple is any better

There are multiple verified stories about the lengths Apple goes to internally to keep things secret.

I saw a talk years ago about (I think) booting up some bits of the iCloud infrastructure, which needed two different USB keys holding different key material to boot. Both keys were then destroyed so that nobody knows the encryption keys and nobody can decrypt the contents.
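
Purely as an illustration of the idea (hypothetical code, not Apple's actual ceremony), the split-key trick looks roughly like this: two shares are combined once to derive a key, and then both shares are destroyed:

    import hashlib
    import secrets

    # Two operators each bring an independently generated share
    # (on a smartcard / USB key in the stories above).
    share_a = secrets.token_bytes(32)
    share_b = secrets.token_bytes(32)

    # The bootstrap combines both shares; neither share alone
    # reveals anything about the derived key.
    combined = bytes(a ^ b for a, b in zip(share_a, share_b))
    volume_key = hashlib.sha256(combined).digest()

    # After provisioning, the shares are destroyed. The key now lives
    # only in the running system (ideally inside an HSM or secure enclave).
    del share_a, share_b, combined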

reply
p_l
1 month ago
[-]
The stories about Apple keeping things secret are usually about protecting their business secrets from normal people, up to and including probably-illegal actions.

Using deniable, one-time keys etc. is... not that unusual. In fact I'd say I'm more worried about the use of random USB keys there instead of a proper KMS.

(There are similar stories from Google about how a cold start can be difficult when you end up with a loop in your access controls: a fortunately simulated cold start showed that they couldn't physically access the necessary KMS to bootstrap the system... because the access controls depended, after many layers, on the very system being cold-started.)

reply
milkshakes
1 month ago
[-]
they used smartcards, not usb keys
reply
p_l
1 month ago
[-]
Which probably were just key transport devices from offline secured KMSes
reply
padolsey
1 month ago
[-]
What's funny is that, in all these orgs, it ends up being the low-tech vulns that compromise you in the end: physical access, social engineering, etc. That said, I'm really impressed by the technical lengths Apple goes to. The key-burning thing reminds me of ICANN's Root KSK Ceremonies.
reply
treprinum
1 month ago
[-]
Destroyed? Where? In all places where they were stored? Or just in some of them? How can you tell? You still need to trust that they didn't copy them somewhere.
reply
theshrike79
1 month ago
[-]
It's impossible to use any technology if you don't trust anyone.

Any piece of technology MAY have a backdoor or secondary function you don't know of and can't find out without breaking said device.

reply
treprinum
1 month ago
[-]
That was the point of my response. Somewhere in the chain one must trust something without any proof.
reply
cdata
1 month ago
[-]
That's not even getting to the fact that Apple is also running a display ads business: https://searchads.apple.com/
reply
woadwarrior01
1 month ago
[-]
Indeed. Apropos of this: new features[1] to insert ads into videos in native apps.

[1]: https://developer.apple.com/videos/play/wwdc2024/10114/

reply
musictubes
1 month ago
[-]
Such a lazy take. Yes, they show ads based on what you search for in the App Store. They will also show apps based on location if the customer opts in to that feature. No other data is used. No browsing history, no purchase history, nothing like what other companies are collecting.

https://searchads.apple.com/privacy

reply
mbs159
1 month ago
[-]
Glancing at your comment history I can't help but notice that most of your comments are related to defending Apple, even at points where the consensus on HN is that Apple is obviously in the wrong. I applaud you, sir.
reply
cdata
1 month ago
[-]
Eventually the addressable market for iPhones will saturate, but the growth imperative will remain.

If I were king of Apple and I truly valued user privacy, I would be careful not to tie any revenue streams to products that entail the progressive violation of user privacy.

reply
1vuio0pswjnm7
1 month ago
[-]
"I think another facet of trust here is that a rather big part of Apple's business model is privacy. They've been very successful financially by creating products that generate money in other ways, and it's very much not necessary or even a sound business idea for them to do something else."

If a third party wants that data, whether the third party is an online criminal, government law enforcement or a "business partner", this idea that Apple's "business model" will somehow negate the downsides of "cloud computing", online advertising and internet privacy is futile. Moreover, it is a myth. Apple is spending more and more on ad services; we can see this in its SEC filings. Before he died, Steve Jobs was named on an Apple patent application for showing ads during boot. The company uses "privacy" as a marketing tactic. There is no evidence of an ideological or actual effort to avoid the so-called "tech" company "business model". Apple follows what these companies do. It considers them competitors. Apple collects a motherlode of user data and metadata. A company that was serious about privacy would not do this. It's a cop-out, not a trade-off.

To truly avoid the risks of cloud computing, online advertising and associated privacy issues, choosing Apple instead of Google is a half-baked effort. Anyone who was serious about it would choose neither.

Of course, do what is necessary, trust whomever; no one is faulting anyone for making practical choices, but let's not pretend choosing Apple and trusting it solves these problems introduced by so-called "tech" company competitors. Apple pursues online advertising, cloud computing and data collection. All at the expense of privacy. With billions in cash on hand, it is one of the wealthiest companies on Earth, does it really need to do that.

In the good old days, we could call Apple a hardware company. The boundaries were clear. Those days are long gone. Connect an Apple computer to a network and watch what goes over the wire with zero user input, destined for servers controlled by the mothership. There is nothing "private" about that design.

reply
troyvit
1 month ago
[-]
> Of course, do what is necessary, trust whomever; no one is faulting anyone for making practical choices, but let's not pretend choosing Apple and trusting it solves these problems introduced by so-called "tech" company competitors. Apple pursues online advertising, cloud computing and data collection. All at the expense of privacy. With billions in cash on hand, it is one of the wealthiest companies on Earth, does it really need to do that.

Yeah. I feel like the conversation needs some guard rails like, "Within the realm of big tech, which has discovered that one of its most profitable models is to make you the product, Apple is really quite privacy friendly!"

reply
dmattia
1 month ago
[-]
Disclaimer: I used to work on Google Search Ads quality models

> Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.

This wasn't the experience I saw. Google is intentional about which data from which products go into their ads models (which are separate from their other user modeling), and you can see things like which data of yours is used in ads personalization on https://myadcenter.google.com/personalizationoff or in the "Why this ad" option on ads.

> and it’s very much not necessary or even a sound business idea for them to do something else

I agree that Apple plays into privacy with their advertising and product positioning. I think assuming all future products will be privacy-respecting because of this is over-trusting. There is _a lot_ of money in advertising / personal data

reply
Sporktacular
1 month ago
[-]
"ensuring that clients will refuse to talk to non-audited systems."

I'm trying to understand if this is really possible. I know they claim so, but is there any info on how this would prevent Apple from executing code different from what is presented for audit?

reply
brookst
1 month ago
[-]
The servers provide a hash of their environment to clients, who can compare it to the published list of audited environments.
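
As a rough, purely illustrative sketch (hypothetical digests and function names, not Apple's actual protocol), the client-side check boils down to a membership test against the published list:

    import hashlib

    def measure(image: bytes) -> str:
        # A "measurement" here is simply a digest of a released software image.
        return hashlib.sha256(image).hexdigest()

    # Hypothetical transparency log of audited releases the client already has.
    audited_releases = {measure(b"pcc-build-001"), measure(b"pcc-build-002")}

    def client_accepts(attested_measurement: str) -> bool:
        # Refuse to send data to any node whose attested environment
        # is not on the published, audited list.
        return attested_measurement in audited_releases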

So the question is: could the hash be falsified? That’s why they’re publishing the source code to firmware and bootloader, so researchers can audit the secure boot foundations.

I am sure there is some way that a completely malevolent Apple could design a weakness into this system so they could spend a fortune on the trappings while still being able to access user information they could never use without exposing the lie and being crushed under class actions and regulatory assault.

But I reject the idea that that remote possibility means the whole system offers no benefit users should consider in purchasing decisions.

reply
Sporktacular
1 month ago
[-]
I'm sure I'm missing something, but isn't that just an untrusted server self-reporting its own hash? Apple publishes the bootloader source, and we'd have to assume that's what's actually running and honestly reporting the hash of the OS it's hosting. So we need to go earlier in the chain. In the end, from afar, we don't know if we're communicating with an actual Secure Enclave/SGX/whatever or something that just acts like one.

Matt Green has posted about it, so I'm sure it's been thought out - but it's hard to understand how it doesn't just depend on employees doing the right thing, when if you could, you would need all the rigmarole.

reply
saagarjha
1 month ago
[-]
You're not missing anything.
reply
Sporktacular
1 month ago
[-]
* wouldn't need all the...
reply
p_l
1 month ago
[-]
Unless they pass all keys authorized by the system to third parties that ensure appropriate auditing, none.

And at least after my experiences with T2 chip, I consider Apple devices to be always owned by Apple first...

reply
verisimi
1 month ago
[-]
It's completely fair, because regardless of third party audits, chips, etc, there are backdoors right along the line that are going to provide Apple and the government with secret legal access to your data. They can simply go to a secret court, receive a secret judgment, and be authorised to secretly view your data. Does anyone really think this is not already the case? There is no transparency. A licensed third party auditor would not be able to tell you this. We have to operate with the awareness that all data online is already not private - no need to pretend/imagine that Apple's marketing is actually true, and that it is possible to buy online privacy utopia.
reply
theshrike79
1 month ago
[-]
The best protection against "secret orders" is to use mathematics.

Build your system so that it can't be decrypted, don't log anything etc. Mullvad has been doing this with VPNs and law enforcement has tested it - there's nothing for them to get.

Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent. Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

There's no reason why they wouldn't go to the same lengths on their private cloud compute. It's the one thing they can do that Google can't.

reply
Xelynega
1 month ago
[-]
> Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent.

I thought the outcome of that case was that no precedent was set, since the iPhone was unlocked before the FBI could test their argument in court.

> Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

Firmware signed by Apple is what runs to verify your biometrics and decide whether or not to unlock the device. At any point Apple could sign firmware with a backdoor for this processor that lets them unlock any phone. How did they prevent this in future iPhone versions?

> There's no reason why they wouldn't go to the same lengths on their private cloud compute. It's the one thing they can do that Google can't.

They did go to the same lengths: they have the ability to see your data whenever they choose to, since they own the signing keys.

reply
KaiserPro
1 month ago
[-]
> Build your system so that it can't be decrypted

Now you can't debug anything.

> Mullvad has been doing this with VPNs

Mullvad do not need to store any data at all. In fact, any data that they store is a risk. Minimising the data stored minimises their risk. The only thing they need to store is keys.

Look, if you want to ask an AI service if this photo has a dog in it, that's simple and requires no state other than the photo. If you want to ask whether it has my dog in it, that's a whole 'nother kettle of fish. How do you communicate the descriptors that describe your dog? How do you generate them? On device? That'll drain your battery in very short order.

> Apple not allowing FBI to open an iPhone, because it'd set a precedent

Because they didn't follow process.

> Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

They don't need to, just hack the iCloud backup. Plus it's not impossible, it's just difficult. If you own the key authority then it's less hard.

reply
verisimi
1 month ago
[-]
> Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent. Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.

Right, but I have no reason to think that this isn't a marketing ploy either, just another story. There is simply no way that Apple is as big as it is, without providing whatever data the government requires. Corporations and governments are not your friend.

reply
theshrike79
1 month ago
[-]
Apple will obey government orders to give data they have and can access.

No government order short of targeting a specific backdoored update to a specific person will allow them to give data they can't access.

And if you're doing something that can make a TLA force Apple to create a targeted iOS update just for you, it's not something regular people can or should worry about.

Apple keeps normal people safe from mass surveillance. Being protected from the CIA/NSA requires going Full Snowden, and that's not a technological problem - you need to change the way you live.

reply
hexage1814
1 month ago
[-]
> No government order short of targeting a specific backdoored update to a specific person

I'm failing to see what the challenge would be here. Apple can technically do that. The government can force them to do that.

reply
verisimi
1 month ago
[-]
Do you not remember Edward Snowden? Eg this sort of info:

> The scandal broke in early June 2013, when the Guardian newspaper reported that the US National Security Agency (NSA) was collecting the telephone records of tens of millions of Americans.

> The paper published the secret court order directing telecommunications company Verizon to hand over all its telephone data to the NSA on an "ongoing daily basis".

https://www.bbc.com/news/world-us-canada-23123964

You seem to think that 10 years on, under cover of secret orders, this is NOT still going on. Not Apple!

People's lovely trusting nature toward corporations and government never ceases to amaze me.

reply
theshrike79
1 month ago
[-]
"telephone data" != "contents of every phone call"
reply
digging
1 month ago
[-]
Contents of communications aren't as important as you may think; metadata is extremely dangerous.
reply
verisimi
1 month ago
[-]
You and I have no idea.
reply
dwaite
1 month ago
[-]
> Does anyone really think this is not already the case?

I don't think this is already the case, and I think the article is an example of safeguards being put into place (in this particular scenario) to prevent it.

reply
verisimi
1 month ago
[-]
On the basis of not having information, cos all this occurs out of sight, you believe this is not the case. Ok.
reply
brookst
1 month ago
[-]
If you’re presenting a conspiracy theory, you have to at least poke holes in the claims you consider false.

Under the system described in the linked paper, your scenario is not possible. In fact, the whole thing looks to be designed to prevent exactly that scenario.

Where do you see the weakness? How could a secret order result in undetectable data capture?

reply
verisimi
1 month ago
[-]
No. The information is all out there - secret courts, secret judgements, it's all been put out there. I don't need to dissect any technical information to recognise that I cannot know what I do not know.

In case anyone was uncertain about whether to trust what we are told - we heard from the Snowden revelations that the US government was tapping millions of phone records.

So, we are told there are secrets, and we are told that there are mechanisms in place to prevent this information from being made public.

You are also free to believe that the revelations are no longer relevant... I'd like to hear the reason.

IMO - the reverse is the case - in that you need to show why Apple have now become trustworthy. Why would Apple not be subject to secret judgements?

I know there is a lot of marketing spin about Apple's privacy - but do you really think that they would actually confront the government system, in a way that isn't some further publicity stunt? Can one confront the government and retain a license to operate, do you think? Is it not probable that the reality is that Apple have huge support from the government?

Perhaps this kind of idea is hard to understand - that one can make a big noise about privacy, and about how one is doing this or that to prevent access, all the while ensuring that access is provided to authorised parties. Corporations can say this sort of thing with a straight face - it's not a privacy issue to provide information - it's a (secret) legal issue!

Sorry, but secret courts and secret judgements, along with existing disclosure that millions were being spied upon, mean one needs to expect the worst.

reply
brookst
1 month ago
[-]
Fair, go ahead and expect the worst, and handwave away any attempts to mitigate.

But I'm not sure where that leaves you. Is it just a nihilistic "no security matters, it's all a show" viewpoint?

reply
verisimi
1 month ago
[-]
It is fair, I don't accept attempts to mitigate. The trust is gone, and nothing can recover it. The idea of trusting government and corporations was ridiculous in the first place as these entities are not your friends.

You wouldn't expect a repeat abuser to stop abusing just because of 'time' or a marketing campaign. And yet this is the case here. People keep looking to their tormentors for solutions.

Not expecting healing from those also inflicting the trauma, ie changing one's expectations, seems like a minimum effort/engagement in my view, but it's somehow inconceivable.

reply
wdr1
1 month ago
[-]
Apple uses your information for advertising as well.

https://www.apple.com/legal/privacy/data/en/apple-advertisin...

It also exempts itself from normal tracking opt-outs in iOS. It has _another_ set of settings you need to opt out of to disable _their_ advertising tracking.

https://support.apple.com/en-us/105131

reply
devjab
1 month ago
[-]
I think it’s pretty fair. This example isn’t about Apple but about Microsoft: we’ve had a decade-long period where Microsoft has easily been the best IT business partner for enterprise organisations. I’ve never been much of a fan of Microsoft personally, but it’s hard to deny just how good they are at building relationships with enterprise. I can’t think of any other tech company that knows enterprise the way Microsoft does, but I think you get the point… anyway, they too are beginning to “snoop” around.

Every Teams meeting we have is now transcribed by AI, and while it’s something we want, it’s also a lot of data in the hands of a company where we don’t fully know what happens with it. Maybe they keep it safe and only really share it with the NSA or whichever sneaky American agency listens in on our traffic. Which isn’t particularly tin-foil-hat: we’ve semi-recently had a spy scandal where it was revealed, somewhat unrelatedly (this wasn’t the scandal itself), that our own government basically lets the US snoop on every internet exit node our country has. It is what it is when you’re basically a form of vassal state to the US. Anyway, with the increased AI monitoring tools built directly into Microsoft products, we’re now handing over more data than ever.

To get the point, we’re currently seeing some debate on whether Chromebooks and Google education/workspaces should be allowed in schools. Which is a good debate. Or at least it would be if the alternative wasn’t Microsoft… Because does it really matter if it’s Google or Microsoft that invades your privacy?

Apple is increasingly joining this trend. Only recently it was revealed that new Apple devices have some sort of radio built into them, even though it’s not on their spec sheets. Or in other words, Apple has now joined the trend of devices that can form their own network by being near other Apple devices. Similar to how Samsung and most car manufacturers have operated for years now.

And again, it sort of leads to… does it really matter if it’s Google or Apple that intrudes on your privacy? To some degree it does, of course; I’d personally rather have Microsoft or Apple spy on me, but I would frankly prefer it if no one spied on me.

reply
TeMPOraL
1 month ago
[-]
> unless it's open source and the servers decentralized, you are always trusting SOMEONE

Specifically, open-source and self-hostable. Open source doesn't save you if people can't run their own servers, because you never know whether what's in the public repo is the exact same thing that's running on the cloud servers.

reply
dwaite
1 month ago
[-]
You can, by having an attestation of the signed software components up from the secure boot process, having the client device validate that said attestation corresponds to the known public version of each component, and randomizing client connections across the infrastructure.

Other than obvious "open source software isn't perfectly secure" attack scenarios, this would require a non-targeted hardware attack, where the entire infrastructure would need to misrepresent the software or the chain of custody.

I believe this is one of the protections Apple is attempting to implement here.
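
A rough sketch of that shape (hypothetical measurements and field names, nothing like Apple's real attestation format): each boot stage extends a running measurement, and the client only talks to a randomly chosen node whose final measurement matches a publicly known chain:

    import hashlib
    import secrets

    def extend(measurement: bytes, component: bytes) -> bytes:
        # TPM-style "extend": fold the next component's digest into the
        # running measurement so earlier stages can't be swapped out later.
        return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

    def measure_chain(components: list) -> bytes:
        m = bytes(32)
        for c in components:
            m = extend(m, c)
        return m

    # Known-good chain, computed from the published components (placeholders).
    known_good = measure_chain([b"boot-rom", b"bootloader", b"os-image", b"model-runtime"])

    def pick_node(nodes: list):
        # Randomize which node we use, then require its attested measurement
        # to match the published chain before sending it anything.
        shuffled = list(nodes)
        secrets.SystemRandom().shuffle(shuffled)
        for node in shuffled:
            if node["attested_measurement"] == known_good:
                return node
        return None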

reply
andersa
1 month ago
[-]
Usually this is done the other way around - servers verifying client devices using a chip the manufacturer put in them and fully trusts. They can trust it because it's virtually impossible for you (the user) to modify the behavior of this chip. However, you can't put something of your own in Apple's servers. So if you don't trust Apple, this improves the trust by... 0%.

Their device says it's been attested. Has it? Who knows? They control the hardware, so they can just make the server attest whatever they want, even if it's not true. It'd be trivial to just use a fake hash for the system volume data. You didn't build the attestation chip. You will never find out.

Happy to be proven wrong here, but at first glance the whole idea seems like a sham. This is security theater. It does nothing.

reply
brookst
1 month ago
[-]
If it is all a lie, Apple will lose so much money from class action lawsuits and regulatory penalties.

> It’d be trivial to just use a fake hash

You have to go deeper to support this. Apple is publishing source code to firmware and bootloader, and the software above that is available to researchers.

The volume hash is computed way up in the stack, subject to the chain of trust from these components.

Are you suggesting that Apple will actually use totally different firmware and bootloaders, just to be able to run different system images that report fake hashes, and do so perfectly so differences between actual execution environment and attested environment cannot be detected, all while none of the executives, architects, developers, or operators involved in the sham ever leaks? And the nefarious use of the data is never noticed?

At some point this crosses over into “maybe I’m just a software simulation and the entire world and everyone in it are just constructs” territory.

reply
andersa
1 month ago
[-]
I don't know if they will. It is highly unlikely. But theoretically, it is possible, and very well within their technical capabilities to do so.

It's also not as complicated as you make it sound here. Because Apple controls the hardware, and thus also the data passing into attestation, they can freely attest whatever they want - no need to truly run the whole stack.

reply
brookst
1 month ago
[-]
It is as complicated as I make it sound. Technically, it's trivial, of course.

But operationally it is incredibly complicated to deliver and operate this kind of false attestation at massive scale.

reply
p_l
1 month ago
[-]
Usually attestation systems operate on the basis that neither side has everything needed to compute a result that will match the attestation requirements, and thus require that both server-side and client-side secrets are involved in the attestation process.

The big issue with Apple is that their attestation infrastructure is wholly private to them; you can't self-host. (Android is a bit similar, in that applications using Google's attestation system have the same limitation, but you can in theory set up your own.)

reply
andersa
1 month ago
[-]
Attestation requires a root of trust, i.e. if data hashes are involved in the computation, you have to be able to trust that the hardware is actually using the real data here. Apple has this for your device, because they built it. You don't have it for their server, making the whole thing meaningless. The maximum information you can get out of this is "Apple trusts Apple".

Under the assumption that Apple is telling the truth about what the server hardware is doing, this could protect against unauthorized modifications to the server software by third parties.

If, however, we assume Apple itself is untrustworthy (say, because the US government secretly ordered them to run a different system image with spyware installed), then this will not help you detect that at all.

reply
Xelynega
1 month ago
[-]
Attestation of software signed by whom?

If Apple holds the signing keys for the servers, can they not change the code at any time?

reply
jjav
1 month ago
[-]
> exact same thing that's running on the cloud servers

What runs on the servers isn't actually very important. Why? Because even if you could somehow know with 100% certainty that what a server runs is the same code you can see, any provider is still subject to all kinds of court orders.

What matters is the client code. If you can audit the client code (or better yet, build your own compatible client based on API specs) then you know for sure what the server side sees. If everything is encrypted locally with keys only you control, it doesn't matter what runs on the server.
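
A trivial sketch of that last point (using the Python "cryptography" package; hypothetical example, not any vendor's actual client): the key is generated and kept on the client, so the server only ever stores ciphertext:

    from cryptography.fernet import Fernet

    # Key generated and kept on the client; it is never sent to the server.
    key = Fernet.generate_key()
    box = Fernet(key)

    ciphertext = box.encrypt(b"my private note")   # this is all the server ever sees
    assert box.decrypt(ciphertext) == b"my private note"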

reply
flakeoil
1 month ago
[-]
But in this use case of AI in the cloud, I suppose it's not possible to send encrypted data which only you have the keys to, as that makes the data useless and thus no AI processing can be done in the cloud. So the whole point of AI in the cloud vs. AI on device goes away.
reply
nardi
1 month ago
[-]
This is what the “attestation” bit is supposed to take care of—if it works, which I’m assuming it will, because they’re open sourcing it for security auditing.
reply
underdeserver
1 month ago
[-]
Unless you personally validate hardware designs, manufacturing processes, and all software, even when running locally you're trusting many, many people.
reply
nl
1 month ago
[-]
This isn't right.

If you trust math you can prove the software is what they say it is.

Yes it is work to do this, but this is a big step forward.

reply
ADeerAppeared
1 month ago
[-]
The only thing the math tells you is that the server software gave you a correct key.

It does not tell you how it got that key. A compromised server would send you the key all the same.

You still have to trust the security infrastructure: trust that Apple is running the hardware it says it is, and trust that Apple is running the software it says it is.

Security audits help build that trust, but it is not and never will be proof. A three-letter-agency of choice can still walk in and demand they change things without telling anyone. (And while that particular risk is irrelevant to most users, various countries are still opposed to the US having that power over such critical user data.)

reply
nl
1 month ago
[-]
No, this really isn't right.

To quote:

> verifiable transparency, goes one step further and does away with the hypothetical: security researchers must be able to verify the security and privacy guarantees of Private Cloud Compute, and they must be able to verify that the software that’s running in the PCC production environment is the same as the software they inspected when verifying the guarantees.

So how does this work?

> The PCC client on the user’s device then encrypts this request directly to the public keys of the PCC nodes that it has first confirmed are valid and cryptographically certified. This provides end-to-end encryption from the user’s device to the validated PCC nodes, ensuring the request cannot be accessed in transit by anything outside those highly protected PCC nodes

> Next, we must protect the integrity of the PCC node and prevent any tampering with the keys used by PCC to decrypt user requests. The system uses Secure Boot and Code Signing for an enforceable guarantee that only authorized and cryptographically measured code is executable on the node. All code that can run on the node must be part of a trust cache that has been signed by Apple, approved for that specific PCC node, and loaded by the Secure Enclave such that it cannot be changed or amended at runtime.

But why can't a 3-letter agency bypass this?

> We designed Private Cloud Compute to ensure that privileged access doesn’t allow anyone to bypass our stateless computation guarantees.

> We consider allowing security researchers to verify the end-to-end security and privacy guarantees of Private Cloud Compute to be a critical requirement for ongoing public trust in the system.... When we launch Private Cloud Compute, we’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software.

So your data will not be sent to nodes that are not cryptographically attested by third parties.
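
To make the client side concrete, here's a simplified sketch (generic ECDH + AEAD using the Python "cryptography" package; illustrative only, not Apple's actual wire format) of encrypting a request so that only the attested node can read it:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Stand-in for the PCC node's key pair; in reality the public key arrives
    # in the node's attestation bundle and is used only if the attested
    # measurement appears in the published transparency log.
    node_private = X25519PrivateKey.generate()
    node_public = node_private.public_key()

    def encrypt_request(plaintext: bytes):
        eph = X25519PrivateKey.generate()                 # fresh key per request
        shared = eph.exchange(node_public)
        key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"pcc-request").derive(shared)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        eph_pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return eph_pub, nonce, ciphertext                 # only the node can decrypt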

These are pretty strong guarantees, and really make it difficult for Apple to bypass.

It's like end-to-end encryption using the Signal protocol: relatively easy to verify it is doing what is claimed, and extraordinarily hard to bypass.

Specifically:

> The only thing the math tells you is that the server software gave you a correct key.

No, this is secure attestation. See for example https://courses.cs.washington.edu/courses/csep590/06wi/final... which explains it quite well.

The weakness of attestation is that you don't know what the root of trust is. But Apple strengthens this with public inspection and public transparency logs, as well as the target diffusion technique, which forces an attack to be very widespread in order to target a single user.

These aren't simple things for a 3LA to work around.

reply
ADeerAppeared
1 month ago
[-]
Next time you "um akshually", do your homework first.

> These are pretty strong guarantees, and really make it difficult for Apple to bypass.

These guarantees rely entirely on trust in the hardware but it's not your hardware.

reply
nl
1 month ago
[-]
> These guarantees rely entirely on trust in the hardware but it's not your hardware.

This exactly the problem that "trusted computing" is designed to solve.

I'd encourage you to read for example the AWS Nitro Enclave outline here: https://aws.amazon.com/blogs/security/confidential-computing....

Nitro enclaves are similar in that they are designed to stop AWS operators from having access to the compute, even though it isn't owned by you.

reply
saagarjha
1 month ago
[-]
No, it's not. This is because Apple is the one providing the enclave, so the party you have to trust is them. When a cloud vendor offers this they use trust rooted in the manufacturer of the chips they are using.
reply
mr_toad
1 month ago
[-]
The machine doing the code signing has the private keys. Extracting them from the Secure Enclave is not going to be easy, but it’s not completely impossible either. If those keys are compromised then the whole house of cards comes down.

Still, this is notably more secure than your typical cloud compute, where you have to just trust the cloud provider when they pinky swear that they won’t peek.

reply
Xelynega
1 month ago
[-]
What changes with your analysis with the understanding that apple holds the signing keys for the PCC nodes?

How does the client verify that the code running on the PCC is the code that has been audited publicly, and not a modified version (signed by Apple) that logs your data or uses it for other purposes?

reply
robmccoll
1 month ago
[-]
It's not fully homomorphic encryption. The compute is happening in the plain on the other side, and given the scale of the models they are running, it's not likely that all of the data involved in a computation stays inside a single instance of particularly secure and hardened hardware. I don't think it's reasonable for most individuals to expect to be protected from nation-state actors or something, but their claims seem a little too absolute to me.
reply
nl
1 month ago
[-]
> it's not likely that all of the data involved in a computation is happening inside a single instance of particularly secure and hardened hardware.

Actually it is. Read their docs on what they do linked below.

reply
detourdog
1 month ago
[-]
If one has to use tech one has to trust someone. Apple has focused on the individual using computers since inception. They have maintained a consistent message and have a good track record.

I will trust them because the alternatives I see are scattered and unfocused.

reply
seydor
1 month ago
[-]
> if wanted.

Or if someone compels them to

reply
mbesto
1 month ago
[-]
They already have your private pictures. What difference is it that it's now running AI?
reply
blackqueeriroh
1 month ago
[-]
Ah yes, the unhackable bastion of open-source.

…wait, haven’t there been MULTIPLE cases of open-source projects getting hacked in the last couple of years?

reply
loteck
1 month ago
[-]
Some good comments on this from cryptographer Matt Green here: https://x.com/matthew_d_green/status/1800291897245835616?t=C...

(I wonder if Matt realizes nobody can read his tweets without an X account? Use BlueSky or Masto, man)

Edit: here's his thread combined https://threadreaderapp.com/thread/1800291897245835616.html?...

reply
BenFranklin100
1 month ago
[-]
If he really wanted no one to be reading his tweets he’d be using BlueSky or Masto…
reply
theshrike79
1 month ago
[-]
https://infosec.exchange/ has a ton of infosec people, big names.

https://ioc.exchange/@matthew_d_green - And he's there BTW :)

reply
unshavedyak
1 month ago
[-]
Is there more to that thread? I can't read it if it exists; not sure if that is what the parent is talking about. But I don't have a Twitter account anymore, so maybe it's locked?
reply
capybara_2020
1 month ago
[-]
Without being logged into X, you can only see the first post in a thread.
reply
jjav
1 month ago
[-]
Not even that anymore, all links show is "Something went wrong, but don’t fret — let’s give it another shot."

Impossible to see any content.

reply
AnonC
1 month ago
[-]
That's likely due to tracking prevention or protection by your browser because X really, really wants to track you. If you disable the tracking protection and related settings, you may be able to see the single tweet.
reply
qingcharles
1 month ago
[-]
I don't know what you're seeing. It's a very long thread. Exceptionally good take on the whole thing. Apple has gone way out of their way to try and sell this thing. Above and beyond compared to how I imagine Microsoft or Google would have tackled this.
reply
zooq_ai
1 month ago
[-]
If your AI model sucks, you have to use other gimmicks to lure customers. That's marketing 101.

Create irrational fear about privacy, push privacy-focused products, and profit as the sheeple promptly fall for it.

reply
astrange
1 month ago
[-]
I've never seen someone use "sheeple" in an anti-privacy argument.
reply
zooq_ai
1 month ago
[-]
The most successful sheeple operation is the one the sheeple and the entire world are completely oblivious to.

Jokes aside, this is no different from people selling bunker beds, gold, ammunition, crypto, VPNs. It is specifically for the set of gullible people who think they and their data are so important. The reality (except for 10,000 people or so) is that most lives and their 'precious' data are worthless. (I'm not talking about SSNs and bank accounts -- those are well protected by the tech cos HN seems to hate on.)

reply
rmm
1 month ago
[-]
Ok that made me spill my coffee.
reply
Andrex
1 month ago
[-]
Or maybe (gasp!) a blog?
reply
brigandish
1 month ago
[-]
These two tweets stand out for me:

> Ok there are probably half a dozen more technical details in the blog post. It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this.

and

> And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal.

I'd prefer things stay on the device but at least this is a big commitment in the right direction - or in the wrong direction but done better than their competitors, I'm not sure which.

reply
transpute
1 month ago
[-]
Thanks for the link.

> As best I can tell, Apple does not have explicit plans to announce when your data is going off-device for to Private Compute. You won't opt into this, you won't necessarily even be told it's happening. It will just happen. Magically.

Presumably it will be possible to opt out of AI features entirely, i.e. both on-device and off-device?

Why would a device vendor not have an option for on-device AI only? iOS 17 AI features can be used today without iCloud.

Hopefully Apple uses a unique domain (e.g. *.pcc.apple.com) that can be filtered at the network level.

reply
onel
1 month ago
[-]
I think the main reason might be that the on-device AI is fairly limited feature-wise. For Apple to actually offer something useful they would need to switch between device and server constantly, and they don't want to limit the product by allowing users to disable going to a server.

With the OpenAI calls it's different, because the privacy point is stronger
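
To make the constant device/server switching concrete, here is a purely hypothetical sketch of that kind of routing decision. Apple has not published its routing logic; every name, field, and threshold below is invented for illustration only.

    # Hypothetical routing policy, not Apple's: decide whether a request stays
    # on-device or goes to a private-cloud node, based on made-up thresholds.
    from dataclasses import dataclass

    @dataclass
    class Request:
        prompt_tokens: int          # rough size of the request
        needs_long_context: bool    # e.g. summarizing a long thread
        user_allows_offload: bool   # a kill switch, if one were exposed

    ON_DEVICE_TOKEN_BUDGET = 1500   # invented capacity of a small local model

    def route(req: Request) -> str:
        """Return where this request would run under this toy policy."""
        if not req.user_allows_offload:
            return "on-device (possibly degraded)"
        if req.needs_long_context or req.prompt_tokens > ON_DEVICE_TOKEN_BUDGET:
            return "private-cloud-compute"
        return "on-device"

    print(route(Request(300, False, True)))    # on-device
    print(route(Request(4000, True, True)))    # private-cloud-compute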

reply
azinman2
1 month ago
[-]
You would have to activate a clearly LLM-powered software feature and have internet access. I don't know if settings will appear to disable this, but you could imagine it would be the case. This isn't just siphoning off all your data at random.
reply
transpute
1 month ago
[-]
Would Spotlight be considered a "clearly LLM-powered software feature"? Will there be an option for "non-AI Spotlight"? Disabling dozens of software features, or identifying all apps which might use LLM services, is a daunting proposition. It would be good to have a PCC kill switch, which makes opt-in usage meaningful, rather than forced.
reply
chefandy
1 month ago
[-]
Privacy "consent" is fundamentally broken. We've moved from "we're doing whatever the fuck we want" to "we're doing whatever the fuck we want, but on paper it's whatever the fuck you expressly asked for, whether you wanted to or not."
reply
sneak
1 month ago
[-]
Almost certainly you will be able to disable it entirely and hide the UI to re-enable it via provisioning profiles via Apple Configurator 2 or MDM.

This is actually what you have to do now if you don’t want Siri and Mail to leak your address book to Apple.

reply
transpute
1 month ago
[-]
> if you don’t want Siri and Mail to leak your address book to Apple.

By disabling Siri and iCloud, or other policies?

reply
wmf
1 month ago
[-]
If you have no threat model and want to opt out of random features just because... you probably shouldn't use Apple products at all. Or Google or Microsoft.
reply
transpute
1 month ago
[-]
For years, Apple has had a documented set of security policies to disable off-device processing (e.g. iCloud, Siri) via MDM / Apple Configurator. Apple has also published the details needed for enterprise network filtering to limit Apple telemetry, if all you want from Apple servers is software security updates and notifications.

With a hardened configuration, Apple has world-class device security. In time, remote PCC may prove as robust against real-world threats. Until then, it would be good to retain on-device security policy and choice for remote computation.

reply
sneak
1 month ago
[-]
Apple does not publish details to limit telemetry. Nowhere in MDM or in their docs do they tell you that you can safely block xp.apple.com (telemetry) but not gs.apple.com (boot ticket signing server for updates).
reply
transpute
1 month ago
[-]
Thanks, both are listed as required for software updates, https://support.apple.com/en-us/101555

Is there a good non-Apple reference for the functions performed by their servers?

reply
1vuio0pswjnm7
1 month ago
[-]
"I wonder if Matt realises nobody can read his tweets without a X account?"

https://nitter.poast.org/matthew_d_green/status/180029189724...

reply
Tepix
1 month ago
[-]
Thanks. I wonder how long that service is going to last.
reply
dmix
1 month ago
[-]
It's been around a long time
reply
vaylian
1 month ago
[-]
> (I wonder if Matt realizes nobody can read his tweets without a X account? Use BlueSky or Masto man)

He actually has an active Mastodon account, but this particular story is not on there (yet): https://ioc.exchange/@matthew_d_green

reply
Tepix
1 month ago
[-]
Inactive for 2 months
reply
vaylian
1 month ago
[-]
You were right until a couple of hours ago. Then this happened: https://ioc.exchange/@matthew_d_green/112597917470493480
reply
gvurrdon
1 month ago
[-]
reply
stavros
1 month ago
[-]
He's not wrong that, given that you want to do this, this is the best way. The alternative would be to not do it at all (though an opt-out would have been good).
reply
wslh
1 month ago
[-]
Beyond all the hardware complexity, another attack vector is the network infrastructure.
reply
astrange
1 month ago
[-]
That is covered in the article.
reply
firecall
1 month ago
[-]
Threads also is popular.

Probably the mainstream Twitter alternative at this point?

reply
jxi
1 month ago
[-]
Threads is far from mainstream and just filled with spam and OnlyFans spammers at this point.
reply
threeseed
1 month ago
[-]
By every metric Threads is mainstream:

a) Top 10 App Store charts in every country.

b) Heavily promoted through Facebook and Instagram.

c) DAUs are higher than X.

reply
JimDabell
1 month ago
[-]
That sounds far more like Twitter than Threads. I get so much spam on Twitter now that I hit rate limits reporting it all.
reply
fragmede
1 month ago
[-]
Weird, I get a bunch of music and programming stuff on my Threads feed. It's not very deep, but what's on the surface is quite nice and not a bunch of almost-porn. Twitter's become half porn though
reply
firecall
1 month ago
[-]
There is a lot of that.

But there is far more.

Kara Swisher is on Threads, for instance.

reply
ineedaj0b
1 month ago
[-]
All the good AI researchers are on X. They are not switching to Bluesky or Masto, which are, frankly, lame.

You build the machine god all day and somehow find in yourself respect for what Jack let Twitter become?

If you dream of Napoleon, an elephant tooting a horn is a signal to sell.

reply
tantalor
1 month ago
[-]
> nobody can read his tweets without a X account

False; works fine for me logged out or incognito.

reply
windexh8er
1 month ago
[-]
No, you can't see the thread. You can see the first post, but X took this away [0].

Nitter still works [1]. Also Threadreader (as can be seen linked in Green's tweet).

[0] https://tweetdelete.net/resources/view-twitter-without-accou...

[1] https://nitter.poast.org/matthew_d_green

reply
unshavedyak
1 month ago
[-]
Also can't see the thread.
reply
steg132
1 month ago
[-]
I’m on iOS. I can’t see the thread. Incognito or normally.
reply
OneLeggedCat
1 month ago
[-]
False
reply
zmmmmm
1 month ago
[-]
Read through it all, and it still comes down to "trust us". Apple can sign and authorise an update at any time that will backdoor it, and the government is a stroke of a pen away from forcing them to, all completely silently.

I get that there's benefit to what they are doing. But the problem with selling a message of trust is that you absolutely have to be 100% truthful about it, and their failing to be transparent that people's data is still subject to access like this poisons the larger message they are selling.

reply
troad
1 month ago
[-]
They already have root. Their software is closed source. There is absolutely nothing stopping them from uploading all of your data right now.

If you don't trust the people making your OS, your problems are much deeper than fretting about off-device AI processing.

reply
Vegenoid
1 month ago
[-]
While true, the gap between "we send your data to our datacenters but we don't look at it" and "we look at it a little bit without telling you" is much smaller than the gap between "we leave your data on your device alone" and "we upload data from your device", on both a technical and a policy level.

Even if the org has been trustworthy to this point, I think this step makes it more likely (maybe still unlikely, but more likely) that in the future they do look at your data, as fewer things have to change for that to happen.

reply
__MatrixMan__
1 month ago
[-]
That's true, but also it should be possible to make an OS that people can trust without trusting you, and as users we should encourage movement in that direction.
reply
troad
1 month ago
[-]
I understand the sentiment, but it's impractical to live in a trust-less society. If you've ever had dental work done, you've put an awful lot of faith in a stranger pushing a drill into your head. Ditto for riding buses and bus drivers, etc etc.

Trust can be abused, certainly, but it also allows collaboration and specialisation, and without those I doubt we'd have gotten very far.

reply
__MatrixMan__
1 month ago
[-]
I'm happy to trust many kinds of people, dentists included, just not the kind of people who find themselves at the helm of companies like Apple and Google.

Better to trust many people narrowly (e.g. I don't trust the bus driver to drill my cavities) than to trust a small handful of people broadly (e.g. like Apple expects of their users).

reply
Aerbil313
1 month ago
[-]
Any practical OS would contain an impractical amount of code to manually review and audit.

That's not to mention the argument that any software above some LOC count is impossible to audit because of complex state handling. Rice's Theorem probably applies to your brain too, to some extent. Idk about purely functional Haskell though.

reply
abtinf
1 month ago
[-]
> should be possible

What makes you think this?

reply
brookst
1 month ago
[-]
It’s especially funny because I believe it is provably impossible. You’ll have to trust me that I’ve done the proof.
reply
__MatrixMan__
1 month ago
[-]
Because there are so freaking many of us, and some of us trust each other. If we were better at coordinating about which parts of the code we trust and to what degree, we could determine which parts of it are untrustworthy and patch the problem out of it.

The GrapheneOS people are doing this, for example. It's not crazy to consider your device vendor as part of your threat model, because like it or not, they are a threat.

reply
abtinf
1 month ago
[-]
That's a good point. It would be interesting if there were a "git blame"-style command that showed a trust score for every line/block based on who has touched it.
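
As a rough sketch of what such a command could look like, assuming you keep your own mapping from author emails to trust scores (no such tool or scoring standard exists in git itself):

    # Count lines per author with `git blame --line-porcelain` and report a
    # locally maintained, entirely hypothetical trust score for each author.
    import subprocess
    from collections import Counter

    TRUST = {"alice@example.com": 0.9, "bob@example.com": 0.4}  # hypothetical

    def blame_trust(path: str) -> Counter:
        out = subprocess.run(
            ["git", "blame", "--line-porcelain", path],
            capture_output=True, text=True, check=True,
        ).stdout
        authors = Counter()
        for line in out.splitlines():
            if line.startswith("author-mail "):
                authors[line.split()[1].strip("<>")] += 1
        return authors

    def report(path: str) -> None:
        for author, lines in blame_trust(path).most_common():
            score = TRUST.get(author, 0.0)  # unknown authors default to 0
            print(f"{path}: {lines:5d} lines by {author} (trust {score:.1f})")

    # report("src/main.c")  # run inside a real repository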
reply
__MatrixMan__
1 month ago
[-]
I'm working on a system for data annotation that might support apps of that nature.

I'm focusing on simpler datasets for now, I want people to be able to annotate a paper restaurant menu with notes about allergens in a way that other people with those allergens can summon those annotations and steer clear--all without participation from the restaurant. Like an augmented reality layer for text.

I hope it grows up into something that would let you ask:

> How trusted is this line of code, and by whom?

reply
EternalFury
1 month ago
[-]
Good luck. It’s much easier to talk about it. The last open OS I have seen reach a semi-mainstream level of adoption was started in the early 90’s, more than 30 years ago, by some Linus guy.
reply
fastball
1 month ago
[-]
And (basically) nobody running linux is individually verifying the source code of every little piece of software that goes into it (maybe Linus is), so you're still trusting someone.
reply
detourdog
1 month ago
[-]
I can’t even find the appropriate documentation.
reply
dialup_sounds
1 month ago
[-]
It's all on Discord now.
reply
tsimionescu
1 month ago
[-]
How would documentation help you trust the code?
reply
detourdog
1 month ago
[-]
How does one understand what is going on without a clear set of documentation? Either one is so smart they can review every line of everything they are running, or one has to trust something. My contention is that it is hard enough to find authoritative documentation, much less develop an understanding of the code one needs or runs.

The choice is either many little trust relationships or a giant leap of faith. I feel better served by a giant leap of faith and access to all the technology I can use.

I prefer no documentation and consistent behavior to no documentation and a bunch of internet howtos on what might work.

reply
jchw
1 month ago
[-]
That's true, but if you don't update your local software and it isn't currently backdoored, then it won't magically become backdoored without some active involvement somewhere. The trouble with remotely pushing data somewhere is that you can't tell if anything has changed even if you wanted to. (Attestation only works if it's not compromised, and for obvious reasons, there's no way to know that an attestation mechanism is compromised.)

That said I really don't disagree with this point at all in terms of it being a valid problem. It's not a fixable problem either (it comes down to, again, building trustworthy computers), but it could be pushed a long way towards being solved, whereas today it is still "trust me bro". I don't think Apple will be the company to make progress towards this, though.

reply
troad
1 month ago
[-]
> if you don't update your local software and it isn't currently backdoored, then it won't magically become backdoored without some active involvement somewhere

If you don't update your local software then it will certainly become automatically backdoored by an accumulating series of security vulnerabilities over time.

> I don't think Apple will be the company to make progress towards this, though.

I agree.

reply
jchw
1 month ago
[-]
> If you don't update your local software then it will certainly become automatically backdoored by an accumulating series of security vulnerabilities over time.

Y'know though, when you put it that way, it sounds inherent that security vulnerabilities will pop up, which is kinda true, at least for the foreseeable future, but to be pedantic, the security vulnerabilities are already there, it's discovering them that's the problem. If we could make secure computers... (time to formally prove everything from the ground up I guess.)

But, that said, I wasn't overlooking this, I'm just looping "getting pwned" into "active involvement". If you have some sufficiently isolated machines, they're probably fine indefinitely. The practicality of this is limited outside of thought experiments. However, it's definitely worth noting that, unlike a compromised remote, it is at least technically feasible to work on the problem of making local compromise more evident, whereas a remote compromise is truly impossible to reliably detect from the outside.

reply
troad
1 month ago
[-]
> If you have some sufficiently isolated machines, they're probably fine indefinitely.

The eternal dream of unplugging, and living free on Amigas.

reply
buzzerbetrayed
1 month ago
[-]
Your argument is no different than what Apple could do to your iPhone. The fact that it happens on the server changes nothing. Apple could push a button and have your iPhone upload whatever they want to their servers. In other words, based on your argument, you shouldn't trust anything, including locally run AI. You're probably right, but it isn't practical.

Edit: The final couple tweets from the Matthew Green tweet thread posted in another comment sum it up well:

> Wrapping up on a more positive note: it’s worth keeping in mind that sometimes the perfect is the enemy of the really good.

> In practice the alternative to on-device is: ship private data to OpenAI or someplace sketchier, where who knows what might happen to it. And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal. In any case, this is the world we’re moving to. Your phone might seem to be in your pocket, but a part of it lives 2,000 miles away in a data center. As security folks we probably need to get used to that fact, and do the best we can to make sure all parts are secure.

reply
devjab
1 month ago
[-]
I think he has a nice pragmatic view on things. In EU enterprise we basically view things like picking cloud providers as a question of who we want to spy on us. Typically it comes down to AWS or Azure if you're picking an "everything included" service. That being said, I'm not really sure I'm on board with this part:

> As security folks we probably need to get used to that fact, and do the best we can to make sure all parts are secure.

Isn’t that sort of where the pragmatism ends? All the parts aren’t going to be secure… Unless I misunderstood his intention, I think the conclusion should be more along the lines of approaching the cloud without trust.

reply
kfreds
1 month ago
[-]
Wow! This is incredibly exciting.

Apple's Private Cloud Compute seems to be conceptually equivalent to System Transparency - an open-source software project my colleagues and I started six years ago.

I'm very much looking forward to more technical details. Should anyone at Apple see this, please feel free to reach out to me at stromberg@mullvad.net. I'd be more than happy to discuss our design, your design, and/or give you feedback.

Relevant links:

- https://mullvad.net/en/blog/system-transparency-future

- http://system-transparency.org (somewhat outdated)

- http://sigsum.org

reply
v4dok
1 month ago
[-]
https://en.m.wikipedia.org/wiki/Confidential_computing

This is what they are doing. Search implementations of this to understand more technical details.

reply
jiveturkey
1 month ago
[-]
It's not, AFAICT from the press release.

Confidential Compute involves technologies such as SGX and SEV (for which I think Asylo is an abstraction, not sure), where the operator (e.g. Azure) cannot _hardware intercept_ data. The description of what Apple is doing "just" uses their existing code signing and secure boot mechanisms to ensure that everything from the boot firmware (the computers that start before the actual computer starts) to the application is what you intended it to be. Once it lands in the PCC node it is inspectable though.

Confidential Compute goes a step further to ensure that the operator cannot observe the data being operated on, thus also defeating shared workloads that exploit speculative barriers, and hardware bus intercept devices.

Confidential Compute also allows attestation of the software being run, something Apple is not providing here. EDIT: looks like they do have attestation, however it's different from how SEV etc. attestation works. The client still has to trust that the private key isn't leaked, so this is dependent on other infrastructure working correctly. It also depends on the client getting a correct public key. There's no description of how the client attests that.

Interesting that they go through all this effort just for (let's be honest) AI marketing. All your data in the past (location, photos, contacts, Safari history) is just as sensitive and deserving of such protection. But apparently PCC will apply only to AI inference workloads. Siri was already, and continues to be, a kind of cloud AI.
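
To illustrate the dependency described above (and emphatically not Apple's actual protocol), here is a minimal sketch of the client-side check: verify a signature over a reported software measurement with a public key obtained out of band. Everything still hinges on that public key being genuine and the private key never leaking, which is exactly the gap the parent comment points at.

    # Simplified attestation check; key names and flow are illustrative only.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    # Stand-in for the node's hardware signing key; in reality the private half
    # would live in a secure element, never in client-visible code.
    node_key = Ed25519PrivateKey.generate()
    published_pub = node_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

    software_image = b"pcc-release-build-bytes"          # placeholder image
    measurement = hashlib.sha256(software_image).digest()
    attestation_sig = node_key.sign(measurement)         # sent to the client

    def client_verify(measurement: bytes, sig: bytes, pub_bytes: bytes) -> bool:
        """Accept the node only if the measurement is signed by the expected key."""
        pub = Ed25519PublicKey.from_public_bytes(pub_bytes)
        try:
            pub.verify(sig, measurement)
            return True
        except InvalidSignature:
            return False

    print(client_verify(measurement, attestation_sig, published_pub))  # True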

reply
derpsteb
1 month ago
[-]
Apple's secure enclave docs also mention memory encryption. The PCC blogpost mentions that the server hardware is built on secure enclaves. And since they are claiming that even Apple can't access it, I am currently assuming that there will be memory encryption happening on the servers. At which point you have the main ingredients of CC: memory encryption & remote attestation.

EDIT: and they mention SGX and Nitro. Other CC technologies :)

reply
jiveturkey
1 month ago
[-]
> Apple's secure enclave docs also mention memory encryption.

Yes, but that's only within the enclave. Every Mac since the T2 has had that, and we don't consider them strong enough to meet the CC bar.

As an example of the difference, CC is designed so that a compromised hypervisor cannot inspect your guest workload. Whereas in Apple's design, they attempt to prove that the hypervisor isn't compromised. Now imagine there's a bug ...

(Not that SGX hasn't had exploitable hardware flaws, but there is a difference here.)

reply
rekoil
1 month ago
[-]
This was my take from the presentation as well, immediately thought of your feature. Will be interesting to hear your take on it once the details have been made available and fully understood.
reply
ThePhysicist
1 month ago
[-]
Yeah, it seems so, though most of these systems (e.g. Intel SGX, AMD SEV, NVIDIA's new tech) use the same basic building blocks (Apple itself isn't a member of the Confidential Computing Consortium, but ARM is). For me it's the quality of the overall implementation and system that sets this apart. I'm also quite bullish about trusted computing; it seems to be gaining significant momentum. I would like some technologies to be more open and e.g. allow you to control the whole stack and install your own root certificates / keys on a hardware platform, but even so I think it can provide many benefits. With Apple pushing this further into the mainstream I expect to see more adoption.
reply
Shank
1 month ago
[-]
> In a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components.

Yes!

> Software will be published within 90 days of inclusion in the log, or after relevant software updates are available, whichever is sooner.

I think this theoretically leaves a 90-day maximum gap between publishing vulnerable software and potential-for-discovery. I sincerely hope that the actual availability of images is closer to instant than the maximum, though.
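
Put in code form, the check a researcher could run against a mirror of the transparency log looks roughly like this; the log schema and field names are invented, since Apple had not published the format at the time of this thread.

    # Flag attested measurements whose images are overdue for publication.
    from datetime import date, timedelta

    PUBLISH_DEADLINE = timedelta(days=90)

    log = [  # hypothetical transparency-log mirror
        {"measurement": "a1b2...", "logged": date(2024, 6, 10), "image_available": True},
        {"measurement": "c3d4...", "logged": date(2024, 3, 1),  "image_available": False},
    ]

    def check_node(attested: str, today: date) -> str:
        entry = next((e for e in log if e["measurement"] == attested), None)
        if entry is None:
            return "REJECT: measurement not in transparency log"
        if not entry["image_available"] and today - entry["logged"] > PUBLISH_DEADLINE:
            return "SUSPICIOUS: image overdue for publication"
        return "OK: logged" + ("" if entry["image_available"] else " (image pending)")

    print(check_node("a1b2...", date(2024, 6, 20)))
    print(check_node("c3d4...", date(2024, 6, 20)))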

reply
gigel82
1 month ago
[-]
Well, an 89-day "update-and-revert" schedule will take care of those pesky auditors asking too many questions about NSA's backdoor or CCP's backdoor and all that.
reply
gpm
1 month ago
[-]
No, because the log of what source was used will still show the backdoored version, and you can't unpublish the information that it was used. Reverting doesn't solve the problem that people will be able to say "this software was attested 90 days ago and it hasn't been released".

If you're trying to do a quiet backdoor and you have the power to compel Apple to assist, the route to take is to simply misuse the keys that are supposed to only go into hardware for attestation, and use them to forge attestations claiming to run software on hardware that you aren't actually running.

Or just find a bug in the software stack that gives you RCE and use it.

reply
brookst
1 month ago
[-]
> simply use them to forge messages attesting to be running software on hardware that you aren't

Well, your messages have to be congruent with the expected messages from the real hardware, and your fake hardware has to register with the real load balancers to receive user requests.

> RCE

That’s probably the best attack vector, and presumably why Apple is only making binary executables available. Not that that stops RCE.

But even then you can’t pick and choose the users whose data you compromise. It’s still a sev0 problem, but less exploitable for the goals of nation states so less likely to be heavily invested in for exploiting.

reply
gpm
1 month ago
[-]
> Well, your messages have to be congruent with the expected messages from the real hardware,

Yes, which is why you need the keys that are used to make real hardware. Provided you have those very secret and well-protected keys (you are Apple being compelled by the government), that's not an issue.

> and your fake hardware has to register with the real load balancers to receive user requests.

Absolutely, but we're Apple in this scenario so that's "easy".

reply
brookst
1 month ago
[-]
I think I misunderstood your point -- I took it to mean someone impersonating a server, but you're saying it's Apple. So the part you're attacking (as Apple) is:

> The process involves multiple Apple teams that cross-check data from independent sources, and the process is further monitored by a third-party observer not affiliated with Apple. At the end, a certificate is issued for keys rooted in the Secure Enclave UID for each PCC node.

So, in your scenario, the in-house certificate issuer is compelled to provide certificates for unverified hardware, which will then be loaded with a parallel software stack that is malicious but reports the attestation ID of a verified stack.

So far, so good. Seems like a lot of people involved, but probably still just tens of people, so maybe possible.

Are you envisioning this being done on every server, so there are no real ones in use? Or a subset? Just for sampling, or also with a way to circumvent user diffusion so you can target specific users?

It's an interesting thought exercise, but the odds of getting anything of real value from this without leaks or errors that expose the program seem pretty small.

reply
gpm
1 month ago
[-]
Well, the broader context of the proposal is as an alternative to the original comment in this HN thread

> Well, a 89-day "update-and-revert" schedule will take care of those pesky auditors asking too many questions about NSA's backdoor or CCP's backdoor and all that.

As a backdoor I am taking it to mean they can compel assistance from inside of Apple, it's not a hack where they have to break in and hide it from everyone (though certainly they would want to keep it to as few people as possible).

At least in the NSA's case I think it would be reasonable to imagine that they are limited to compromising a subset of the users' data. Specific users they've gotten court orders against or something... so yes, a subset of nodes and also circumventing user diffusion (which sounds like traffic analysis right up the NSA's alley, or a court order to whatever third party Apple has providing the service).

reply
brookst
1 month ago
[-]
> so yes a subset of nodes and also circumventing user diffusion (which sounds like traffic analysis right up the NSAs alley, or a court order to whatever third party Apple has providing the service).

How does traffic analysis help? The client picks the server to send the query to, and encrypts with that particular server's public key. I guess maybe you have the load balancer identify the target and only provide compromised servers to it? But then every single load balancer has to have the list of targeted individuals and compromised servers, which seems problematic for secrecy at scale.

reply
gpm
1 month ago
[-]
The load balancer is blind to which client sent a request via OHTTP. You need to do something to bypass that (traffic analysis or ordering the OHTTP provider to help).

> But then every single load balancer has to have the list of targeted individuals and compromised servers, which seems problematic for secrecy at scale.

It really doesn't. This seems well within the realms of what you could achieve with a court order without it becoming public.

reply
ein0p
1 month ago
[-]
It is not possible for this to be fully private in the United States because the government not only can force Apple to open up the kimono, it can also forbid it from talking about it. There’s not really anything Apple can do to work around this “limitation”. Thank your “representative” for extending the PATRIOT Act when you get a chance.
reply
amiantos
1 month ago
[-]
Private Cloud Compute servers have no persistent storage so there would be nothing to see upon opening the kimono. You'd need some sort of government requested live wire tap thing to harvest the data out of the incoming requests, which might be a different situation. I'm, of course, just some dude on the internet, thinking up a counter-point to this concern, who knows if I am even remotely in the right ballpark.
reply
choppaface
1 month ago
[-]
Apple already services US Gov cloud data requests, see e.g. https://www.reddit.com/r/privacy/comments/eqg5gc/apple_compl...
reply
KaiserPro
1 month ago
[-]
> Private Cloud Compute servers have no persistent storage so there would be nothing to see upon opening the kimono

It doesn't actually say there is no persistent storage; it says that the compute node will not store it for longer than the request. There's nothing to stop the data coming from a datastore outside of the "PCC" in another part of Apple's infrastructure.

reply
transpute
1 month ago
[-]
> have no persistent storage

How often do PCC servers reboot and wipe the temporary encryption key?

reply
visarga
1 month ago
[-]
mandatory 30 day retention policies or something like it
reply
theshrike79
1 month ago
[-]
You can't mandate retention on stuff you're not storing anyway - or because of encryption can't store.
reply
talldayo
1 month ago
[-]
You would think that, but cell carriers have been found to retain both plaintext and encrypted traffic for several years in some cases: https://www.vice.com/en/article/m7vqkv/how-fbi-gets-phone-da...
reply
theshrike79
1 month ago
[-]
Cell carriers aren't a bastion of end to end encryption, the tech just can't do it.

That's why you use them just as dumb pipes forwarding encrypted data traffic from one place to another.

No SMS, no phone calls if you can avoid it.

reply
talldayo
1 month ago
[-]
Speaking of tech, has anyone ever independently audited Apple's encrypted infrastructure a-la what they're promising for Private Compute? I'm unconvinced that the government couldn't crack that if they wanted.
reply
paradite
1 month ago
[-]
Slightly off-topic, "open up the kimono" sounds disturbing and creepy to me as an Asian. I suspect I'm not alone in this.
reply
dxbednarczyk
1 month ago
[-]
reply
_heimdall
1 month ago
[-]
I'm not sure how I've been in tech since before this article was written and today is the first time I've ever even seen/heard this phrase.
reply
bn-l
1 month ago
[-]
Is this phrase worth an entire long form blog post from NPR?
reply
rpastuszak
1 month ago
[-]
Honestly, why not? I love reading about etymologies and I know that many people here do as well.
reply
ein0p
1 month ago
[-]
That’s even better. I do think it’s disturbing and creepy when someone goes through my private data without my knowledge.
reply
digging
1 month ago
[-]
That's not what they meant or what the phrase means. It's mildly racist and misogynist, and the kinds of discomfort it elicits are not the kind that will make people trust you, the user of the phrase.
reply
sciolist
1 month ago
[-]
There's a difference between guaranteed privacy and certifiable privacy. Yes, the government can request one's data. However, Apple's system would reveal those intrusions to the public, even if Apple themselves couldn't say it.
reply
afh1
1 month ago
[-]
How?
reply
ls612
1 month ago
[-]
What Apple can do (and appears to be doing throughout its products) is not have the data requested. Or not have it in cleartext. NSLs can't request data that doesn't exist anymore.
reply
ein0p
1 month ago
[-]
LLMs work on clear text inputs
reply
ls612
1 month ago
[-]
But the setup is that Apple doesn't know which cleartexts currently being processed are associated with which user meaning that even live surveillance can't work without surveilling everybody, making any such program very quickly discoverable. Read the section about non-targetability in the link.

Apple deserves credit for correctly analyzing their threat model and designing their system accordingly.

reply
ein0p
1 month ago
[-]
I’m willing to concede that Apple’s system is the best designed of the bunch. I’m not willing to call it “private”, however, if it processes unencrypted inputs in the jurisdiction of a nation state with pervasive government surveillance.
reply
ls612
1 month ago
[-]
I agree from a technical perspective actually. What this system does to counteract that is use a social strategy. The government couldn't break "this specific system" without a lot of people knowing, and that makes it very likely that they would be discovered. And not in a vague way but in a very specific one.

Would this work in some authoritarian countries? No and Apple doesn't care. Would it work in a western one? Maybe.

reply
aalimov_
1 month ago
[-]
Their post sure makes it seem like it's possible. Was there something that stood out to you?
reply
quenix
1 month ago
[-]
I mean, the software running on the client (phone/mac/ipad) is closed-source and, if we assume Apple is compromised, can be made to circumvent all of these fancy protections at the push of a button.

If pressured by the government, Apple can simply change the client software to loosen the attestation requirements for private compute. And that would be the most inconspicuous choice.

reply
rvnx
1 month ago
[-]
Or target a device by IMEI or iCloud to be a candidate to receive a software update, and push an update that sends data to "dev-llm-assistant.ai.apple.com".

"Oh, it's our dev version? What's the problem? We need data access for troubleshooting."

reply
tantalor
1 month ago
[-]
Warrant canary
reply
egorfine
1 month ago
[-]
It makes zero sense for a company of this size. I bet they are served with gag orders like daily, so the warrant canary is going to expire the moment it is published.
reply
afh1
1 month ago
[-]
Apple removed theirs years ago.
reply
sneak
1 month ago
[-]
The orders in question aren’t search warrants and don’t require probable cause.

70,000+ Apple user accounts are surveilled in this manner every year.

reply
sneak
1 month ago
[-]
It seems to me that this security architecture is a direct response to the hostile regulatory environment Apple finds themselves in wrt USA PATRIOT and the CCP et al.
reply
zer00eyz
1 month ago
[-]
I have a big question here.

Who is this for? Don't get me wrong, I think it's a great effort. This is some A+ nerd stuff right here. It's speaking my language.

But I'm just going to figure out how to turn off "calls home", because I don't want it doing this at all.

Is this speaking to me so I tell others "Apple is the most secure option"? I don't want to tell others "Linux" because I don't want to do tech support for that.

At this point I feel like an old man shouting "Damn you, keep your hands off my data".

reply
al_borland
1 month ago
[-]
Apple needs to differentiate itself, and they have chosen privacy as a way to do that, which I'm all for. The headlines around Microsoft's AI efforts have largely been a nightmare, with a ton of bad press. If the press around Apple's AI is all about how over the top they went with security and privacy, that will likely make people feel a little better about using it.

I'm not a big user of OpenAI's stuff, but if I was going to use any of it, I'd rather use it through Apple's anonymizing layer than going directly to OpenAI.

reply
gpm
1 month ago
[-]
I actually thought one notable thing in the presentation was that they spent all this time talking about their new private cloud compute architecture.

And then showed that they have a prompt asking if you're OK sending the data to OpenAI. Presumably because, despite OpenAI promising not to use your data (a promise Apple relayed), OpenAI didn't buy into this new architecture.

reply
al_borland
1 month ago
[-]
Thank you for mentioning this. I thought I was going crazy, because I heard this too, but kept seeing comment after comment on other sites asking if a person could choose not to use OpenAI, or that it was happening magically in the background. The way I heard it, the user was in control.

I think this goes back to what Steve said in 2010.

https://youtube.com/watch?v=Ij-jlF98SzA

And yes, while the data might not be linked to the user and is stripped of sensitive data, I could see people not wanting very personal things to go to OpenAI, even if there should be no link. For example, I wouldn’t want any of my pictures going to OpenAI unless I specifically say it is OK for a given image.

reply
rekoil
1 month ago
[-]
I was under the impression that the OpenAI integrations were more about content generation and correction than the Apple Intelligence-driven personal stuff.
reply
manmal
1 month ago
[-]
I’m not sure but I thought I saw it mentioned that OpenAI is still allowed to train on the data received from Apple customers.
reply
FumblingBear
1 month ago
[-]
Actually the opposite. They’re explicitly not allowed to.
reply
brookst
1 month ago
[-]
Different features.

OpenAI provides the chatbot interface we all know.

The PCC cloud serves all of the other integrated AI features like notification prioritization, summarization, semantic search, etc. At least when those can’t be run on device.

reply
wmf
1 month ago
[-]
What if you can't turn it off and this extreme security is the justification for why?
reply
transpute
1 month ago
[-]
If user data disclosure is forced, would user data be limited to PCC nodes located within the same legal jurisdiction, e.g. EU, UK, US, China, etc?
reply
wmf
1 month ago
[-]
PCC is as government-proof as the iPhone itself so jurisdiction may not matter much.
reply
transpute
1 month ago
[-]
Some jurisdictions require data to be processed within the jurisdiction.
reply
m463
1 month ago
[-]
what happens when your apple id is turned off?
reply
hapticmonkey
1 month ago
[-]
It's for shareholders. Microsoft and Nvidia have a bigger market cap than Apple now, thanks to the AI investor boom. Apple need to show they can be all about AI, too. But Apple have the institutional culture to maintain privacy.
reply
written-beyond
1 month ago
[-]
This is exactly what I was thinking during the entire keynote. It was blatantly the WWSC (worldwide shareholder conference), and Hacker News commenters are eating it up.

Don't get me wrong, I've always appreciated Apple's on-device ML/AI features; those have always been powerful, interesting, and private. But these announcements feel very rushed; it's literally a few weeks after Microsoft's announcements.

They've basically done almost exactly what Microsoft announced, with a better UX and a pinky promise about privacy. How are they going to pay for all of that compute? Is this going to be baked into the price of iPhones and MacBooks, and then a subscription layer added to continue paying for it? I don't feel comfortable with the fact that my phone is basically extending its hardware to the cloud. No matter how "private" it is, it's just discomforting to know that Apple will be doing inference on things seemingly at random to "extend" compute capabilities.

Also, what on earth is Apple high on, integrating a third-party API into the OS? How does that even make sense? Google was always a separate app, or a setting in Safari; you didn't have Google integrated at an OS level. Heck, you don't even have that on Android. It feels very discomforting to know that today my phone could phone home to somewhere other than iCloud.

reply
mr_toad
1 month ago
[-]
> How are they going to pay for all of that compute?

Hardware sales. Only the latest pro/max models will run these models, everyone else is going to have to upgrade.

reply
lurking_swe
1 month ago
[-]
Me

I don't care if the government has access to the data. I just don't want "bad actors" (scammers, foreign governments, ad-tech companies, insurance companies, etc.) to have access to my private data. But I also want the power of LLMs. Does that sound so far-fetched?

I'm a realist. I already EXPECT the US govt has all my data. I don't like the status quo, but it is what it is.

reply
fundad
1 month ago
[-]
It's for their competitors, who have pushed a narrative that Apple was caught "flat-footed". Well, there is actually a full line of LLMs and cloud infrastructure at iPhone scale here. This is not merely 2 years of work. They push privacy because it's expected of them.

Gemini could claim privacy, but I think people would assume that, if true, it would make it less effective.

reply
theshrike79
1 month ago
[-]
The NSA already has all our data, and if they don't, they have direct contacts at Meta and Alphabet to get it with same-day delivery.

I'm trusting Apple more in this case, they have an incentive to keep things private and according to experts they're doing everything they can to do so.

"Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this." - Matthew D. Green

reply
_heimdall
1 month ago
[-]
The NSA partners directly with telecom companies, especially AT&T. It's easier when companies like Meta and Alphabet will play along, but that's not the only way they get access to a bunch of our data.
reply
theshrike79
1 month ago
[-]
How do telecom companies unravel public key encryption in transit?
reply
WatchDog
1 month ago
[-]
I'm interested in how this compares to AWS nitro enclaves, which they mention briefly.

The main difference seems to be verifiability down to the firmware level.

Nitro Enclaves does not provide measurements of the firmware [0] or hypervisor; furthermore, they state that the hypervisor code can be updated transparently at any time [1].

Apple is going to provide images of the secure enclave processor operating system(sepOS), as well as the bootloader.

It also sounds like they will provide the source code for these components too, although the blog post isn't clear on that.

[0]: https://docs.aws.amazon.com/enclaves/latest/user/set-up-atte....

[1]: https://docs.aws.amazon.com/pdfs/whitepapers/latest/security...

reply
yolovoe
1 month ago
[-]
Nitro does measure firmware. If any firmware is unexpected, the server will essentially stop being connected to the EC2 substrate network and/or the server will be wiped clean automatically. People will be paged automatically, security will likely be pulled in, etc.

There is no reason to measure hypervisor firmware, as it's not firmware in the case of EC2. The BIOS/UEFI firmware on the motherboard is overwritten if it's tampered with. Hypervisor code (always signed, like all code) is streamed via a verifiably secure system on the server (Nitro cards, which make use of measured boot and/or secure boot).

No idea what the customer-facing term "Nitro Enclaves" means, but EC2 engineers are literally mobilized like an army with pages when any security risk (even a minor one) is identified. Basic stuff like this is covered. We even go as far as guaranteeing that core dumps don't contain any real customer data, even encrypted.

reply
WatchDog
1 month ago
[-]
I'm glad to hear about those internal processes, but I guess the key point of difference is that in apple's case, the measurements of the firmware are provided and verifiable externally.

Although in the end, I'm not sure how much of a difference it makes, as ultimately, even with measurements of the whole stack, the platform provider, if compelled to do so, can still push out malicious firmware that fakes its measurements.

reply
ram_rattle
1 month ago
[-]
AWS had to do it this way because of their custom silicon; Intel, ARM, and AMD do provide firmware/hypervisor-level attestation.
reply
advael
1 month ago
[-]
I really want to see this OS, and have cautious optimism that this could be the first time we'll see a big tech company actually provide an auditable security guarantee!

I think depending on how this plays out, Apple might manage to earn some of the trust its users have in it, which would be pretty cool! But even cooler will be if we get full chain-of-custody audits, which I think will have to entail opening up some other bits of their stack

In particular, the cloud OS being open-source, if they make good on that commitment, will be incredibly valuable. My main concern right now is that if virtualization is employed in their actual deployment, there could be a backdoor that passes keys from secure enclaves in still-proprietary parts of the OSes running on user devices to a hypervisor we didn't audit that can access the containers. Surely people with more security expertise than me will have even better questions.

Maybe Apple will be responsive to feedback from researchers and this could lead to more of this toolchain being auditable. But even if we can't verify that their sanctioned use case is secure, the cloud OS could be a great step forward in secure inference and secure clouds, which people could independently host or build an independent derivative of

The worst case is still that they just don't actually do it, but it seems reasonably likely they'll follow through on at least that, and then the worst case becomes "Super informative open-source codebase for secure computing at scale just dropped" which is a great thing no matter how the other stuff goes

reply
ignoramous
1 month ago
[-]
> could be the first time we'll see a big tech company actually provide an auditable security guarantee

AWS Nitro Enclaves [0] come close but of course what Apple has done is productize private compute for its 1b+ macOS & iOS customers!

[0] https://docs.aws.amazon.com/enclaves/latest/user/nitro-encla...

reply
threeseed
1 month ago
[-]
You would combine that with AWS BottleRocket:

https://aws.amazon.com/bottlerocket

reply
stensonb
1 month ago
[-]
Absolutely looking forward to that possibility: https://github.com/bottlerocket-os/bottlerocket/issues/3348
reply
transpute
1 month ago
[-]
> even if we can't verify that their sanctioned use case is secure, the cloud OS could be a great step forward in secure inference and secure clouds, which people could independently host or build an independent derivative of

Yes, the tech industry loves to copy Apple :)

Asahi Linux has a good overview of on-device boot chain security, https://github.com/AsahiLinux/docs/wiki/Apple-Platform-Secur...

> My main concern right now is that if virtualization is employed in their actual deployment, there could be a backdoor that passes keys from secure enclaves in still-proprietary parts of the OSes running on user devices to a hypervisor we didn't audit that can access the containers.

  We’ll release a PCC Virtual Research Environment: a set of tools and images that simulate a PCC node on a Mac with Apple silicon, and that can boot a version of PCC software minimally modified for successful virtualization.
This seems to imply that PCC nodes are bare-metal.

Could a PCC node be simulated on iPad Pro with M4 Apple Silicon?

reply
advael
1 month ago
[-]
> Yes, the tech industry loves to copy Apple :)

Yes, most technology is built on other technology ;)

> This seems to imply that normal PCC nodes are bare-metal.

I realize that, but there's plausible deniability in it, especially since the modification could also hide the mechanism I've described in some other virtualization context that uses the unmodified image, without the statement being untrue

reply
DEADMINCE
1 month ago
[-]
> Yes, the tech industry loves to copy Apple :)

Eh, it goes both ways. Even Apple devices got widgets eventually :)

reply
yla92
1 month ago
[-]
> And finally, we used Swift on Server to build a new Machine Learning stack specifically for hosting our cloud-based foundation model.

Interesting to see Swift on Server here!

https://www.swift.org/documentation/server/

reply
nardi
1 month ago
[-]
Many people in this thread are extremely cynical and also ignorant of the actual security guarantees. If you don’t think Apple is doing what they say they’re doing, you can go audit the code and prove it doesn’t work. Apple is open sourcing all of it to prove it’s secure and private. If you don’t believe them, the code is right there.
reply
v4dok
1 month ago
[-]
This is Confidential Computing https://en.m.wikipedia.org/wiki/Confidential_computing

with another name. Intel, AMD and Nvidia have been working for years on this. OpenAI released a blog some time ago where they mentioned this as the "next step". Exciting that Apple went ahead and deployed first, it will motivate the rest as well.

reply
leboshki
1 month ago
[-]
This new DataProtector tool streamlines confidential computing software development. Imagine being able to rent access to your data so you own it but code running in a TEE can access it, securely transferring ownership of data, or offering subscription bundles for your data. With generative AI essentially turning into an echo chamber where it will train on its own content, sourcing human-derived data and content is going to be so important. This might be how an economy of that data/content gets off the ground. https://medium.com/iex-ec/introducing-the-content-creator-de...
reply
ramesh31
1 month ago
[-]
Here's the answer to the "what's taking Apple so long to get on the LLM train?" folks. Per usual, they lag a bit and then do it better than anyone else.
reply
JimDabell
1 month ago
[-]
It’s also because they have a twelve month release cycle.
reply
piccirello
1 month ago
[-]
> The Secure Enclave randomizes the data volume’s encryption keys on every reboot and does not persist these random keys, ensuring that data written to the data volume cannot be retained across reboot. In other words, there is an enforceable guarantee that the data volume is cryptographically erased every time the PCC node’s Secure Enclave Processor reboots.
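
A minimal illustration of what cryptographic erasure means here, assuming nothing about Apple's actual implementation: the volume key exists only in memory, so once the process (standing in for a node reboot) goes away, everything written with it becomes unreadable.

    # Toy "data volume" keyed by an ephemeral, in-memory-only AES-GCM key.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class EphemeralVolume:
        def __init__(self):
            self._key = AESGCM.generate_key(bit_length=256)  # never persisted
            self._aead = AESGCM(self._key)

        def write(self, plaintext: bytes) -> bytes:
            """Return the blob that would be persisted to the data volume."""
            nonce = os.urandom(12)
            return nonce + self._aead.encrypt(nonce, plaintext, None)

        def read(self, blob: bytes) -> bytes:
            return self._aead.decrypt(blob[:12], blob[12:], None)

    vol = EphemeralVolume()
    blob = vol.write(b"per-request working data")
    print(vol.read(blob))
    # After a "reboot", a fresh EphemeralVolume has a new random key and the old
    # blob can no longer be decrypted -- the data is effectively erased.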
reply
Timber-6539
1 month ago
[-]
Feels like an uptime screenshot would be appropriate here
reply
transpute
1 month ago
[-]
PCC node execution should be per-transaction, i.e. relatively short lived.
reply
wmf
1 month ago
[-]
The server can't afford to do one transaction then reboot.
reply
transpute
1 month ago
[-]
Intel and AMD server processors can use DRTM late launch for fast attested restart, https://www.semanticscholar.org/paper/An-Execution-Infrastru.... If future Apple Silicon processors can support late launch, then PCC nodes can reduce intermingling of data from multiple customer transactions.

> The server can't afford

What reboot frequency is affordable for PCC nodes?

reply
tzs
1 month ago
[-]
> The Secure Enclave randomizes the data volume’s encryption keys on every reboot and does not persist these random keys, ensuring that data written to the data volume cannot be retained across reboot. In other words, there is an enforceable guarantee that the data volume is cryptographically erased every time the PCC node’s Secure Enclave Processor reboots.

I wonder if there is anything that enforces an upper limit on the time between reboots?

Since they are building their own chips it would be interesting to include a watchdog timer that runs off an internal oscillator, cannot be disabled by software, and forces a reboot when it expires.

reply
j0e1
1 month ago
[-]
> The Apple Security Bounty will reward research findings in the entire Private Cloud Compute software stack — with especially significant payouts for any issues that undermine our privacy claims.

Let the games begin!

reply
bayareabadboy
1 month ago
[-]
What are the longer term implications that Apple is doing this on their own hardware and not Nvidia? This seems like a big thing to me, an idiot.
reply
ls612
1 month ago
[-]
Apple doesn't want to pay the Jensen Leather Jacket Fee and has $200 billion in cash it is sitting on to make it happen. If anyone can create an Nvidia substitute for AI chips, it's Apple, with their cash hoard combined with their world-class design team and exclusive access to all of the TSMC 3nm (and next year 2nm) production they could possibly want.
reply
wmf
1 month ago
[-]
If you're one of the richest companies in history you can "simply" invest 15 years into developing your own chips instead of buying Nvidia GPUs.
reply
transpute
1 month ago
[-]
> simply invest 15 years into developing your own chips instead of buying Nvidia GPUs

https://www.notebookcheck.net/Apple-and-Imagination-strike-G...

  Following the loss of Apple, easily its biggest client, Imagination was bought out by a Chinese-based investment group. Apple subsequently released its first in-house designed mobile GPU as part of the A11 Bionic SoC that powered the iPhone X.. The new “multi-year license agreement” gives Apple official access to much wider range of Imagination’s mobile GPU IP as well as its AI technologies. The A11 Bionic also included the first neural processing engine in an iPhone
https://9to5mac.com/2020/01/01/apple-imagination-agreement/

  Apple described Imagination’s characterizations as misleading while hiring Imagination employees to work for Apple’s GPU team in the same community.
reply
wmf
1 month ago
[-]
That probably has nothing to do with the Neural Engine though.
reply
transpute
1 month ago
[-]
Probably a coincidence that Apple GPU and NPU both appeared at the same time (A11).
reply
jrk
1 month ago
[-]
It is a coincidence. They are unrelated hardware blocks and very different architectures.
reply
onesociety2022
1 month ago
[-]
But this is just inference. What did they use to train their foundation models?
reply
theshrike79
1 month ago
[-]
The M-series CPUs are stupidly effective in LLM operations. Even my relatively old M1 Mac mini can do decent speeds with 7B models.

And Apple clearly has made some custom server hardware and slapped a ton of them on a board just to do the PCC stuff.
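
For anyone who wants to try the local-7B claim themselves, here is a sketch using llama-cpp-python (pip install llama-cpp-python), which offloads layers to Metal on Apple silicon when built with Metal support; the model filename is a placeholder for whatever GGUF model you have on disk.

    # Rough local-inference sketch; the model path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # substitute your model
        n_gpu_layers=-1,   # offload as much as the backend supports
        n_ctx=2048,
    )
    out = llm("Explain in one sentence why on-device inference helps privacy.",
              max_tokens=64)
    print(out["choices"][0]["text"])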

reply
dindobre
1 month ago
[-]
This feels like the biggest part of the news to me
reply
ethbr1
1 month ago
[-]
This entire platform is the first time I've strategically considered realigning the majority of my use to Apple.

Airtag anonymity was pretty cool, technically speaking, but a peripheral use case for me.

To me, PCC is a well-reasoned, surprisingly customer-centric response to the fact that due to (processing, storage, battery) limitations not all useful models can be run on-device.

And they tried to build a privacy architecture before widely deploying it, instead of post-hoc bolting it on.

>> 4. Non-targetability. An attacker should not be able to attempt to compromise personal data that belongs to specific, targeted Private Cloud Compute users without attempting a broad compromise of the entire PCC system. This must hold true even for exceptionally sophisticated attackers who can attempt physical attacks on PCC nodes in the supply chain or attempt to obtain malicious access to PCC data centers.

Oof. That's a pretty damn specific (literally) attacker, and it's impressive that made it into their threat model.

And neat use of onion-style encryption to expose the bare minimum necessary for routing, before the request reaches its target node. Also [0]

>> For example, the [PCC node OS] doesn’t even include a general-purpose logging mechanism. Instead, only pre-specified, structured, and audited logs and metrics can leave the node, and multiple independent layers of review help prevent user data from accidentally being exposed through these mechanisms.

My condolences to Apple SREs, between this and the other privacy guarantees.

>> Our commitment to verifiable transparency includes: (1) Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log. (2) Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts. (3) Publishing and maintaining an official set of tools for researchers analyzing PCC node software. (4) Rewarding important research findings through the Apple Security Bounty program.

So binary-only for majority, except the following:

>> While we’re publishing the binary images of every production PCC build, to further aid research we will periodically also publish a subset of the security-critical PCC source code.

>> In a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components.

[0] Oblivious HTTP, https://www.rfc-editor.org/rfc/rfc9458
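
On [0]: a toy sketch of the split of knowledge OHTTP provides, not RFC 9458's actual HPKE encapsulation. The relay learns which client is asking but sees only opaque bytes; the gateway can read the request but never learns which client the relay saw.

    # Toy relay/gateway separation using X25519 + HKDF + AES-GCM.
    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey,
    )
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    gateway_priv = X25519PrivateKey.generate()  # public half published to clients
    gateway_pub = gateway_priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

    def derive_key(shared: bytes) -> bytes:
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"toy-ohttp").derive(shared)

    def client_encapsulate(request: bytes):
        """Client: encrypt so only the gateway can read the request body."""
        eph = X25519PrivateKey.generate()
        key = derive_key(eph.exchange(X25519PublicKey.from_public_bytes(gateway_pub)))
        nonce = os.urandom(12)
        eph_pub = eph.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        return eph_pub, nonce, AESGCM(key).encrypt(nonce, request, None)

    def relay_forward(capsule):
        """Relay: knows the client's address, forwards opaque bytes unchanged."""
        return capsule

    def gateway_decapsulate(eph_pub, nonce, ct):
        """Gateway: recovers the request without learning the client's identity."""
        key = derive_key(gateway_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub)))
        return AESGCM(key).decrypt(nonce, ct, None)

    print(gateway_decapsulate(*relay_forward(client_encapsulate(b"summarize my notifications"))))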

reply
manquer
1 month ago
[-]
> Oof. That's a pretty damn specific (literally) attacker, and it's impressive that made it into their threat model.

How so? There are any number of state and state-sponsored attackers to whom it should apply, including China, North Korea, Russia, and Israel as nation states, and their various affiliates like NSO Group.

Even then, the NSA and its related entities are going to be notably absent. If your threat model includes unfriendly nation-state actors, then the security depends on security at the NSA and less on Apple; they have all your data anyway.

If nation-state actors are interested in you, no smartphone on the market today is worth it: nothing is fully open source on both the hardware and OS side and independently verified by multiple reviewers. Everything else is a trade-off of convenience against risk, and the degree of each is quite subjective to each individual.

For the rest of us, the threat model is advertisers, identity thieves, scammers and spammers, and now AI companies using our data for training.

Apple will protect against other advertisers only insofar as it grows their own ad platform; they already sell searches to Google for $20B/year, and there is no knowing the details of the OpenAI deal on what kind of data will be shared.

reply
transpute
1 month ago
[-]
It's very encouraging.

Another good step in this direction would be publishing a list of all on-device Apple software (including Spotlight models for image analysis) and details of any information that is sent to Apple, along with opt-out instructions via device Settings or Apple Configurator MDM profiles.

Apple does publish a list of network ports and servers so that network traffic can be permitted for specific services. The list is complicated by 3rd-party CDNs, but it can be made to work with dnsmasq and ipset: "Use Apple products on enterprise networks", https://support.apple.com/en-us/101555

reply
krosaen
1 month ago
[-]
I wonder if they will ever make this available to developers - I can think of many products where it would be nice to have at least part of the cloud infra hosted by a trusted provider like this, e.g. indoor cameras for health metrics: sounds awesome, but I would never trust a startup to handle private data this sensitive.
reply
paul2paul
1 month ago
[-]
We don't need "a new frontier". I want to be the only one who holds the private key to my encrypted data. I think it's pretty lame to sell privacy when it's not.
reply
dyauspitr
1 month ago
[-]
The problem with that is it’s not possible for you to be the only one to hold the private key and have the cloud run your data against a model.
reply
tiffanyh
1 month ago
[-]
I wonder who Apple will be colocating with for data centers.

And what will the PCC chassis look like for these compute devices (will it be a display-less iPad)?

reply
jrk
1 month ago
[-]
They have built and operated a growing number of their own data centers for years. Presumably this will go into those.
reply
jachee
1 month ago
[-]
Apple’s rich enough to build and own their own datacenters. Savvy enough, too. I’d imagine the chassis are custom Apple-NOC-specific M-chip powered servers.
reply
tiffanyh
1 month ago
[-]
So basically, a Mac mini.
reply
jachee
1 month ago
[-]
Well, I was thinking more like the equivalent of 16 or 32 mac minis in a 2U rack enclosure.
reply
jnaina
1 month ago
[-]
Starts with A and ends with S
reply
tiffanyh
1 month ago
[-]
Apple DatacenterS

Gotcha, makes sense :)

reply
jaydeegee
1 month ago
[-]
Outside of all the security aspects, which look to be handled quite well on the surface, I do enjoy that the client/mainframe architecture is still a staple of computing.
reply
vlovich123
1 month ago
[-]
What I haven’t heard from the announcement is whether the private cloud has external network access. Presumably it wouldn’t; otherwise the guarantee of your request staying within the private cloud is meaningless. Conversely, a lot of trivial network tasks can be involved (e.g. downloading the model). Does anyone know which balance Apple is choosing to strike initially?
reply
dymk
1 month ago
[-]
I would love to be able to run a PCC node locally on my M2 MacBook or similar for my iPhone to offload to, even if it’s only for doing what an iPhone 15 Pro can do on-device.

There’s precedent for this sort of thing as well, like Apple TVs or iPads acting as HomeKit hubs and processing security camera footage on-device.

Maybe they’ll open that up in the future.

reply
gigel82
1 month ago
[-]
The only way to trust this is for them to sell "cloud compute" servers that folks can deploy and monitor in their own infrastructure. Nothing else can be guaranteed not to include malicious code that exfiltrates the data.

Or better yet, make the APIs public and pluggable so that one can choose an off-device AI processor themselves if one is needed.

reply
astrange
1 month ago
[-]
Your own infrastructure is definitely less secure than this or even, say, Google. You do not have the capability or the teams of SREs to detect intrusions, and an attacker would know that your server processes your data.
reply
gigel82
1 month ago
[-]
Maybe, but I can totally firewall my own servers to my heart's desire, including completely blocking it off from the internet and only allowing connections via my own network's routes.
reply
astrange
1 month ago
[-]
Sure, but then it doesn't work when you're out of the house, which isn't a good pairing with a phone.
reply
gigel82
1 month ago
[-]
I VPN (WireGuard) into my home network to access other selfhosted services (like Immich for photos, paperless-ngx, DNS adblock, etc.) and to prevent some tracking, so it would work great for me.
reply
KETpXDDzR
1 month ago
[-]
The only way, besides trusting the cloud provider, is encryption. Homomorphic encryption allows you to run calculations on encrypted data without decrypting it. However, besides the performance penalty, it leaks information.
reply
asp_hornet
1 month ago
[-]
This thread reads like a whole bunch of sour grapes. Hopefully this challenges other companies to do better
reply
CGamesPlay
1 month ago
[-]
All of this is interesting, but how easy is this to circumvent? When Apple changes their mind for whatever reason, don't they just return a key to a fake PCC node, which would bypass all of their listed protections? Furthermore, what prevents Apple from doing this for specific users?
reply
a2128
1 month ago
[-]
According to the article, it would be difficult to tie any request to a user:

> Target diffusion starts with the request metadata, which leaves out any personally identifiable information about the source device or user, and includes only limited contextual data about the request that’s required to enable routing to the appropriate model

If this is the case, I wonder how the authentication would work. Is it a security through obscurity sort of situation? Wouldn't it be possible for someone, through extensive reverse engineering, to write a client in Python that gives you a nice free chat API and Apple would be none the wiser?

reply
filleokus
1 month ago
[-]
Don't know if they use it (or if it would somehow weaken/break the privacy claims you cited), but Apple has an SDK called DeviceCheck[0].

Essentially, your server sends a nonce which the client signs using a key pair derived from the Secure Enclave. The server can then verify the signature via an API provided by Apple's servers, which responds with whether or not it was signed by a Secure Enclave-resident key.

I'm guessing this could be helpful to make it hard(er) to write a Python client.

[0]: https://developer.apple.com/documentation/devicecheck/establ...
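
For what it's worth, a minimal sketch of that flow using the App Attest API from the DeviceCheck framework (the server-side half, where the attestation object is validated against Apple's servers, is omitted; the nonce handling shown is just illustrative):

```swift
import DeviceCheck
import CryptoKit

// Rough sketch of the nonce-signing flow described above.
func attestClient(serverNonce: Data) {
    let service = DCAppAttestService.shared
    guard service.isSupported else { return }  // e.g. not available in the Simulator

    // 1. Generate a key pair whose private half lives in the Secure Enclave.
    service.generateKey { keyId, error in
        guard let keyId, error == nil else { return }

        // 2. Ask Apple to attest the key, binding it to a hash of the server's nonce.
        let clientDataHash = Data(SHA256.hash(data: serverNonce))
        service.attestKey(keyId, clientDataHash: clientDataHash) { attestation, error in
            guard let attestation, error == nil else { return }

            // 3. Send `attestation` to your server, which forwards it to Apple's
            //    verification endpoint to confirm it came from a genuine device.
            //    Later requests can be signed with generateAssertion(_:clientDataHash:).
            print("attestation object: \(attestation.count) bytes")
        }
    }
}
```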

reply
JimDabell
1 month ago
[-]
iOS won’t send requests to it unless that node appears in the transparency log.

If it appears in the transparency log, the whole world will be able to see that a suspicious node has started serving requests.

If Apple changes iOS to remove that restriction, the whole world will be able to see that change because it’s client side.

If Apple tries to deliver a custom version of iOS to a single user, the iOS hardware will refuse to run it unless it has a valid signature.

If it has a valid signature, that copy of the firmware is irrefutable evidence that Apple is deliberately breaking its privacy promises and spying on people in a way they specifically said they wouldn’t, which would be extremely harmful to their business.

Apple seems to be going all-out in binding themselves in a way that makes it as difficult as possible to do what you are suggesting.
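
A toy illustration of that client-side rule, with hypothetical types standing in for Apple's actual client code: the device refuses to wrap its request key to any node whose attested measurement isn't in the transparency log it has verified.

```swift
import Foundation
import CryptoKit

// Hypothetical shapes, not Apple's real client code.
struct NodeAttestation {
    let measurement: SHA256Digest                     // hash of the software the node claims to run
    let publicKey: Curve25519.KeyAgreement.PublicKey  // key the request would be wrapped to
}

struct TransparencyLog {
    // In reality an append-only, cryptographically verifiable log;
    // here just the set of measurements the client accepts.
    let knownGoodMeasurements: Set<Data>

    func contains(_ measurement: SHA256Digest) -> Bool {
        knownGoodMeasurements.contains(Data(measurement))
    }
}

enum PCCClientError: Error { case unknownMeasurement }

func selectNode(_ attestation: NodeAttestation,
                log: TransparencyLog) throws -> Curve25519.KeyAgreement.PublicKey {
    // Refuse to talk to any node whose measurement is not publicly logged.
    guard log.contains(attestation.measurement) else {
        throw PCCClientError.unknownMeasurement
    }
    return attestation.publicKey  // wrap the request payload key to this key
}
```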

reply
CGamesPlay
1 month ago
[-]
Ok, I think you're referring to this:

> Specifically, the user’s device will wrap its request payload key only to the public keys of those PCC nodes whose attested measurements match a software release in the public transparency log.

But what’s stopping Apple from returning a node which lies about its “attested measurements” (possibly even to a specific user)? What’s to prevent any old machine, not running the TPM at all, from receiving a certificate?

I get that “the process is further monitored by a third-party observer not affiliated with Apple”, but I don’t know where I’d read their report, or even whether they are still paid by Apple, so this feels like a trust-based proof.

reply
thomasahle
1 month ago
[-]
Did Apple say anything about what training data they used for their generative image models?
reply
wmf
1 month ago
[-]
There's a thread about the models: https://news.ycombinator.com/item?id=40639506
reply
jml78
1 month ago
[-]
Yes, basically if you opted out of Apple scraping, your data isn’t used
reply
renegade-otter
1 month ago
[-]
I think the best AI business model is charging you to keep the data away from data collection. It's brilliant.

Or a "Professional" version of software that removes all those annoying "AI" features.

reply
cherioo
1 month ago
[-]
Can someone ELI5 how remote attestation is supposed to work? It feels like asking a remote endpoint “are you who you say you are”. What’s stopping the remote endpoint from always responding “yes”?
reply
transpute
1 month ago
[-]
> What’s stopping remote endpoint always responding “yes”

It requires a small, trusted remote observer hardware component, e.g. TCG TPM/DICE, Apple Secure Enclave, Google OpenTitan, Microsoft Pluton.

2021 literature review, https://arxiv.org/abs/2105.02466

2022 HN thread on remote attestation, https://news.ycombinator.com/item?id=32282305

reply
GrantMoyer
1 month ago
[-]
My understanding is that it's similar to TLS authentication.

The remote endpoint has special hardware which keeps secret signing keys (similar to a TLS server's signing keys). The hardware refuses to reveal the private keys, but will sign certain payloads under certain conditions. In addition, Intel or AMD or whoever also has super duper mega secret master keys (similar to a CA's signing keys), which they use to sign the device's signing keys. The certificate signing the device keys is also stored on the device.

So, each time the endpoint is asked to attest its software, it says yes and signs its response with its keys, and it also sends a certificate showing its keys are signed by the master key. That way, the client knows the special hardware really said yes and that Intel or AMD or whoever said that particular special hardware is legit.
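
A sketch of what the verifier side of that might look like in CryptoKit, assuming a hypothetical AttestationStatement shape; the X.509 chain check is stubbed out as a caller-supplied closure (real validation would go through SecTrust), but the overall shape is chain check plus signature check over a nonce-bound measurement.

```swift
import Foundation
import CryptoKit

// Illustrative only: the fields and their encoding are made up for this sketch.
struct AttestationStatement {
    let nonce: Data                       // the challenge the client sent
    let measurement: Data                 // hash of the software the device reports
    let signature: P256.Signing.ECDSASignature
    let deviceKey: P256.Signing.PublicKey // certified by the vendor's root
    let certificateChainDER: [Data]       // device cert ... vendor root
}

func verify(_ stmt: AttestationStatement,
            expectedNonce: Data,
            expectedMeasurement: Data,
            chainIsTrusted: ([Data]) -> Bool) -> Bool {
    // 1. The device key must chain up to a vendor root we trust.
    guard chainIsTrusted(stmt.certificateChainDER) else { return false }

    // 2. The signature must cover our fresh nonce (anti-replay) and the measurement.
    let signedPayload = stmt.nonce + stmt.measurement
    guard stmt.deviceKey.isValidSignature(stmt.signature, for: signedPayload) else { return false }

    // 3. The measurement must match the software we expect, and the nonce must be ours.
    return stmt.nonce == expectedNonce && stmt.measurement == expectedMeasurement
}
```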

reply
wyes
1 month ago
[-]
They rely on Trusted Execution Environments and the fact that hash functions are one-way functions.

Verifier -> requests a Prover to attest its software state

Prover -> goes into RoT, verifies authenticity of Verifier (and request), computes hash of attested memory region, sends hash digest

Verifier -> receives digest and compares to known hash

> What’s stopping the remote endpoint from always responding “yes”

The attestation code is inside of a RoT, so a bad actor shouldn't be able to call this code; it is only callable in response to a request from a Verifier.
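
A minimal illustration of that last comparison step (freshness nonces and the RoT's signature are omitted here): any modified byte in the attested region changes the digest, so the check fails.

```swift
import Foundation
import CryptoKit

// Prover side: computed inside the RoT over the requested memory region.
func proverDigest(of attestedRegion: Data) -> SHA256Digest {
    SHA256.hash(data: attestedRegion)
}

// Verifier side: compare the reported digest against the known-good hash.
func verifierAccepts(reported: SHA256Digest, knownGood: SHA256Digest) -> Bool {
    reported == knownGood
}

// A single flipped byte is enough to make the check fail:
let good = Data("expected firmware image".utf8)
var tampered = good
tampered[0] ^= 0xFF
assert(verifierAccepts(reported: proverDigest(of: good),
                       knownGood: proverDigest(of: good)))
assert(!verifierAccepts(reported: proverDigest(of: tampered),
                        knownGood: proverDigest(of: good)))
```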

reply
davidczech
1 month ago
[-]
Servers send signed statements describing its state, and clients make the yes/no determination.
reply
croes
1 month ago
[-]
Who pays for the costs of private cloud compute? Is it free of charge for the iPhone owner (at least until they turn it into a subscription)?

What about second hand iPhone users?

reply
AnonHP
1 month ago
[-]
> Who pays for the costs of private cloud compute? Is it free of charge for the iPhone owner (at least until they turn it into a subscription)?

My guess is that this is similar to how iOS upgrades are “free”, how Apple Maps is free, how iMessage is free, how iCloud Mail is free, etc. To a good extent, it’s all paid for by the price paid by the customer for the hardware.

I’d also wager that there will be a paid service/subscription that will get baked into iCloud+ at some point in time (maybe a year from now). This will offer a lot more and Apple will try to attract more customers into its paid services net.

reply
repler
1 month ago
[-]
Exactly - nothing is for free. They explicitly state that PCC data gets destroyed after a response is returned.

Are the anonymized queries (minus user data context) worth anything?

It’s gotta be some kind of subscription/per query charge model to pay for the servers, electricity, and bandwidth.

reply
EternalFury
1 month ago
[-]
Let’s not be too picky. This is a good thing.
reply
SirensOfTitan
1 month ago
[-]
What I'm most curious about here is if a state actor comes to Apple with a subpoena and compels them to release information on an individual, what would Apple be able to release?

... I suppose this is ultimately a question that will be tested sooner or later in the US.

reply
throwaway41597
1 month ago
[-]
I'm very curious as well, because my very limited understanding tells me the answer is nothing. The relay hides your identity. Your phone checks the attestations, so it won't send your data to servers that aren't running the published software, which ensures encryption keys are ephemeral. Once your session is done, the keys are deleted.

Law enforcement would need to seize the right server among millions while it's processing your request and perform an attack on it to get the keys before they're gone.

My next question is what happens if/when the attestation keys are stolen.

reply
gpm
1 month ago
[-]
Probably everything uploaded after the intercept is in place if you can convince a court to compel it.

One option is to release a malicious software update, sign it, publish the signature on the public chain, and then simply not release the binaries until after whatever associated gag orders there are (if any) expire. Apple gave themselves a 90 day timeline for this before they'd even be in violation of their promises.

Another option is to use the cryptographic keys used to make the hardware that attests to the software running on it to simply falsely attest to what software is running. Unless Apple has somehow moved those keys outside of the court's jurisdiction (which means outside of Apple's control in the case of most courts), that should be within the court's power. If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...

Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure. The fact that they are possible and within the legal system's power... well... why are we advertising this as secure again?

The main value of this whole architecture in my mind isn't actually security though, it's that it's Apple implicitly making the promise that they won't under any circumstance use the data, or let anyone else use the data, for business purposes (not even for running the service itself).

reply
aalimov_
1 month ago
[-]
> One option is to release a malicious software update, sign it, publish the signature on the public chain,

In this option it would be Apple releasing a malicious software update?

> If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...

This option reads like the keys are stored in apple-keys.txt

> Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure

They mentioned that the in-depth write-up will be shared later; might they still address this concern in writing? Your wording makes you sound so certain, but this is just a broad overview. How are you so sure?

reply
gpm
1 month ago
[-]
> In this option it would be Apple releasing a malicious software update?

Yes, compelled by something like the all writs act (if the US is the one doing the compelling).

> This option reads like the keys are stored in apple-keys.txt

They probably are. That file might live on a CD drive in a safe that requires two people to open it, but ultimately it's a short chunk of binary data that exists somewhere (until it is destroyed)...

> might they still address this concern in writing?

Can I say beyond all doubt that this won't happen? Of course not.

On the first approach I'm quite confident though, because it's both the type of attack they discuss in their initial press release, and pretty fundamental to and explicitly allowed by their model of updating the software.

On the second approach I'm reasonably confident. Like the first issue it's the type of issue that they were discussing in their initial press release. Unlike the first issue it's not something that is explicitly allowed in the model. If Apple can find a way to make the attestation keys irretrievable while still allowing themselves to manufacture hardware I believe they'd do it - I just don't see a method and think it would have warranted a mention if they had one. I tried to insert a level of uncertainty in my original writing on this one because I could be missing a way to solve it.

Ultimately I'd rather over-correct now than have people start thinking this is going to be more secure than it is and then have some fraction of them miss the extremely likely follow-up of "and we could be compelled to work around our security".

reply
jahewson
1 month ago
[-]
reply
gpm
1 month ago
[-]
I'm well aware of these. They don't solve the problem at hand. You need a way to put keys into new hardware. Thus you need a way to get keys out of wherever you've stored your cryptographic material. Thus it can't be on an HSM (or it can be if it's a master key signing child keys, but in that case the attack only needs a signed child key).
reply
lgg
1 month ago
[-]
From: https://support.apple.com/guide/security/secure-enclave-sec5...

“A randomly generated UID is fused into the SoC at manufacturing time. Starting with A9 SoCs, the UID is generated by the Secure Enclave TRNG during manufacturing and written to the fuses using a software process that runs entirely in the Secure Enclave. This process protects the UID from being visible outside the device during manufacturing and therefore isn’t available for access or storage by Apple or any of its suppliers.“

reply
gpm
1 month ago
[-]
Sure, and even Apple can't imitate a different server that they made.

They're making new servers though. Take the keys that are used to vouch for the UIDs in actual secure enclaves, and use them to vouch for the UID in your evil simulated "secure" enclave. Your simulated secure enclave doesn't present as any particular real secure enclave, it just presents as a newly made secure enclave that Apple has vouched for as being a secure enclave.

reply
riscy
1 month ago
[-]
I mean it was famously tested after the 2015 San Bernardino attack. Apple didn’t back down [1] and later sued the company that sold the zero-day to the govt to unlock the phone [2].

[1] https://en.m.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption...

[2] https://www.washingtonpost.com/technology/2021/04/14/azimuth...

reply
asadotzler
1 month ago
[-]
Also famously tested (and failed) much more recently. https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

Apple shills are the worst.

reply
astrange
1 month ago
[-]
That's an ordinary subpoena, and that data is not being specially collected and is not end-to-end encrypted. It has nothing to do with the guarantee in this article.
reply
theshrike79
1 month ago
[-]
Push notifications are on a whole different level than "full access to your phone".
reply
m463
1 month ago
[-]
The more common use case will be when you're locked out of "your" Apple ID.
reply
gpm
1 month ago
[-]
The design appears to be entirely ephemeral. There's no personal data to recover here from "your" Apple ID.
reply
ricardobeat
1 month ago
[-]
Unless their statements regarding the design of these systems are blatantly false, or they are forced to add data collectors on purpose to target individuals, the answer is close to nothing.

You can opt into full E2E encryption [1] which makes it nothing, presumably at the cost of some convenience features.

[1] https://support.apple.com/en-us/108756

reply
onesociety2022
1 month ago
[-]
PCC exists because full E2E is not feasible for these use cases. The LLM has to take your personal data (context window and prompt) to process it.
reply
clipjokingly
1 month ago
[-]
Is it possible to have zero knowledge AI?
reply
wslh
1 month ago
[-]
Yes, the issue is that they are really slow.
reply
rjeli
1 month ago
[-]
ZKML is actually not horrible, probably only 100-1000x overhead atm. Unfortunately it doesn’t solve the problem; you would need FHE, which has much higher overhead.
reply
tharant
1 month ago
[-]
FHE? I, a noob, assume that acronym maybe has something to do with homomorphic encryption?

Also, got any links for interesting ZKML papers/projects?

reply
m3kw9
1 month ago
[-]
I wonder how they will do this in China?
reply
mlindner
1 month ago
[-]
Apple already runs China-only software on their devices, I suppose it just won't run there.
reply
solarkraft
1 month ago
[-]
I was sceptical of the announcement, but this actually sounds really well thought out.

One key part though will be the remote attestation that the servers are actually running what they say they're running. Without any access to the servers, how do we do that? Am I correctly expecting that that part remains a "trust me bro" situation?

reply
wyes
1 month ago
[-]
Attestation will run on the RoT.

>While we’re publishing the binary images of every production PCC build, to further aid research we will periodically also publish a subset of the security-critical PCC source code.

I expect that they'll publish the attestation source code.

But, basically what will happen is the Verifier will request a certain memory region to be attested, then that region will be hashed and the digest will be sent back to the Verifier. If the memory is different from what is expected, the hash digest will NOT match.

reply
candiddevmike
1 month ago
[-]
Did Apple need to license the phrase Core OS like iOS?
reply
thirdhaf
1 month ago
[-]
That phrase distinguishes the internal group responsible for that part of the architecture; I don’t think it’s a marketing term.
reply
rldjbpin
1 month ago
[-]
I'm not trusting any of the privacy/security mumbo-jumbo when their iCloud free tier still allows a paltry 5 GB, while even Google offers three times as much for its public service.

I am happy for those who see the positives here, but for the skeptic, a toggle to prevent any online processing would be more satisfactory.

reply
rmbyrro
1 month ago
[-]
The most ironic thing is that they abbreviate this as "PCC" (which reads as Chinese Communist Party in many languages).

The absolute worst acronym for anything even remotely related to personal privacy.

reply
transpute
1 month ago
[-]
Inverted acronym?
reply
system7rocks
1 month ago
[-]
I trust Apple
reply
Havoc
1 month ago
[-]
Sounds good. I still won’t send anything sensitive there, but I appreciate the effort and direction, especially when the current industry trend seems to be "fuck you, we're rewriting our TOS to take your data".
reply
m-s-y
1 month ago
[-]
Apple’s doing this specifically to avoid the possibility of what you’re describing.

The transparency & architecture together are intended to be more than enough to publicly detect any major retooling of the system.

reply
nerdright
1 month ago
[-]
Even if you don't like Apple's monopolistic approaches, you have to admire how they go the extra mile to stay true to their mantra of selling privacy.

This is clearly a company with an identity, unlike Microsoft and Google who are very confused.

reply
throwaway369
1 month ago
[-]
[Deleted]
reply
zie
1 month ago
[-]
Well, the options from China's perspective are: come to the table and meet some/all of our demands, or stop doing business here.

Since Apple devices are now on the Chinese government's poopy list, I assume Apple is only meeting some, not all, of China's demands. I assume if Apple did everything the Chinese govt wanted, they wouldn't be on the poopy list. Personally I see being on the Chinese govt poopy list as an endorsement that it's probably a net positive for privacy and security compared to those not on the list. :)

Around WhatsApp, it's probably part of the whole compromise mess above. WhatsApp now does E2E, and that's something China is not a fan of, so it's probably China's doing that it's not in the App Store in China any more. Apple is just following the laws China forces them to follow.

It should be noted I've never been to China (yet) and have zero first-hand knowledge.

reply
astrange
1 month ago
[-]
It's a silly oversimplification to say that nothing in China is ever allowed to have privacy. China has privacy/data protection laws just like other countries do. Even an authoritarian government doesn't want other random private actors getting to see everything.
reply
zie
1 month ago
[-]
I agree, but I was talking specifically about the govt.

The govt basically requires total access, doesn't it? I mean, every govt basically wants it, and the US has tried many times but so far hasn't quite gotten complete access everywhere.

reply
nisten
1 month ago
[-]
Complete horseshit marketing speak.

Was the cloud non-private before? Was it not secure in the first place? Do my Siri searches no longer end up as Google ads metadata now? Are the feds no longer able to get rubber-stamp access to my i C L O U D now?

You are a naive idiot for believing that this is anything but security theater to address the emotional needs of AI anxiety in and outside the company.

Just my opinion.

reply
goupil
1 month ago
[-]
It's sad to see so much discussion of security and so little of privacy. How about solutions that could combine both, such as homomorphic encryption for AI?
reply
wmf
1 month ago
[-]
Privacy is the entire point of this discussion?

Homomorphic encryption is mostly a fantasy at this point.

reply
dymk
1 month ago
[-]
There’s not a single homomorphic encryption implementation out there which can do real world quantities of computation
reply
whatever1
1 month ago
[-]
FYI, this is the same company that has been accused of accidentally showing people's photos and videos on strangers' devices.

https://discussions.apple.com/thread/252459254?sortBy=best

reply
blackqueeriroh
1 month ago
[-]
You mean the _bug_ Apple fixed? These things are not related.
reply
jeffbee
1 month ago
[-]
A lot of this sounds like Apple has been 10-20 years behind the state of the art and now wants to tell you that they partially caught up. Verifiable hardware roots of trust and end-to-end software supply chain integrity are things that have existed for a while. The interesting part doesn't come until the end where they promise to publish system images for inspection.
reply
7e
1 month ago
[-]
This is not true at all. Apple is the first to roll out end to end remote attestation of an enclave that includes an ML accelerator in the root of trust, with public verifiability of the entire stack. They are way ahead.
reply
ethbr1
1 month ago
[-]
Do you have analogous search terms for Microsoft, Alphabet, Google, and Amazon's approaches?

Your comment makes me curious how a guarantee-to-guarantee comparison looks (and the associated architectures).

reply
wmf
1 month ago
[-]
reply
bowmessage
1 month ago
[-]
reply
threeseed
1 month ago
[-]
Apple's system goes further by having the client choose and verify a server and then encrypt the request using the public key of the node, to prevent MITM attacks.

And a one-time credential to prevent replay attacks.

As well as minor things like obfuscating IP addresses, metadata, etc.
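
A loose sketch of those two mechanisms together, with illustrative names only: the client seals each request to the public key of a node it has already verified, and the node accepts a given one-time credential at most once.

```swift
import Foundation
import CryptoKit

struct Request {
    let oneTimeCredential: UUID   // stands in for the single-use token
    let ciphertext: Data          // payload sealed to the chosen node
}

// Client side: verify/choose a node out of band, then seal the payload to it.
func makeRequest(_ payload: Data, to nodePublicKey: Curve25519.KeyAgreement.PublicKey,
                 clientKey: Curve25519.KeyAgreement.PrivateKey) throws -> Request {
    let shared = try clientKey.sharedSecretFromKeyAgreement(with: nodePublicKey)
    let key = shared.hkdfDerivedSymmetricKey(using: SHA256.self, salt: Data(),
                                             sharedInfo: Data(), outputByteCount: 32)
    let sealed = try AES.GCM.seal(payload, using: key).combined!
    return Request(oneTimeCredential: UUID(), ciphertext: sealed)
}

final class Node {
    private let privateKey = Curve25519.KeyAgreement.PrivateKey()
    private var seenCredentials = Set<UUID>()
    var publicKey: Curve25519.KeyAgreement.PublicKey { privateKey.publicKey }

    // Accepts a request at most once per credential; replays are dropped.
    func handle(_ request: Request,
                senderPublicKey: Curve25519.KeyAgreement.PublicKey) throws -> Data? {
        guard seenCredentials.insert(request.oneTimeCredential).inserted else {
            return nil  // replayed credential
        }
        let shared = try privateKey.sharedSecretFromKeyAgreement(with: senderPublicKey)
        let key = shared.hkdfDerivedSymmetricKey(using: SHA256.self, salt: Data(),
                                                 sharedInfo: Data(), outputByteCount: 32)
        return try AES.GCM.open(AES.GCM.SealedBox(combined: request.ciphertext), using: key)
    }
}
```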

reply
aaomidi
1 month ago
[-]
Apple's system also covers the entire pipeline. Borg SREs can still change behavior here. It's a lot better than what most places have, but it does not go far enough.
reply
sodality2
1 month ago
[-]
None for consumer-facing products, though
reply
candiddevmike
1 month ago
[-]
Most of the stuff in the blog post reads like common security precautions: don't run as root, stateless immutable nodes, use secure boot, etc. All wrapped up in some Apple marketing pizzazz.
reply
ignoramous
1 month ago
[-]
> common security precautions ... marketing pizzazz.

If it were this common, Meta, Google, and others would have announced or launched something similar for their consumer apps/services; I can't seem to recall anything of note.

reply
saagarjha
1 month ago
[-]
Perhaps one of these days we'll get a 'jeffbee that realizes that Google is not actually ahead of everyone in everything all the time. But not today, I guess.
reply