https://www.devever.net/~hl/webcrypto
And to be fair, this doesn't apply only to this case. Even the data you have stored locally, Apple could access if they wanted to; they certainly have the power to do it if they so wish or were ordered to by the government. They might have done it already and just didn't tell anyone, for obvious reasons. So I would argue the best you could say is that it's private in the sense that only Apple knows/can know what you are doing, rather than a larger number of entities.
Which, you could argue, is a win when the alternatives will leak your data to many more parties... But it's still far from the unbreakable cryptography it's portrayed to be.
Apple, on the other hand, has made a pretty serious effort to ensure that no employee can access your data on these AI systems. That’s hugely different! They’re going as far as to severely restrict logging and observability and even building and designing their own chips and operating systems. And ensuring that clients will refuse to talk to non-audited systems.
Yes, we can’t take Apple’s word for it. But I think the third party audits are a huge part of how we trust, and also verify, that this system will be private. I don’t think it’s fair to claim that “Apple knows what you’re doing.” That implies that someone, at some level at Apple, can at some point access the data sent from your device to this private cloud. That does not seem to be true.
I think another facet of trust here is that a rather big part of Apple’s business model is privacy. They’ve been very successful financially by creating products that generate money in other ways, and it’s very much not necessary or even a sound business idea for them to do something else.
While I think it’s fair to be skeptical about the claims without 3rd party verification, I don’t think it’s fair to say that Apple’s approach isn’t better for your data and privacy than openAI or Google. (Which I think is the broad implication — openAI tracks prompts for its own model training, not to resell, so it’s also “only openAI knows what you’re doing.”)
Also, what makes you think that Apple's investments in chip design and OS are superior to Google's? Google is known for OpenTitan and other in-house silicon projects. It's also been working on secure enclave tech (https://news.ycombinator.com/item?id=20265625), which has been open source for years.
You're making unverifiable claims about Apple's actual implementation of the technical systems and policies it is marketing. Apple also sells ads (App Store, but other surfaces as well) and you don't have evidence that your AI data is not being used to target you. Conversely, not all user data is used by Google for ad targeting.
Apple generally engineers their business so that there isn’t an incentive to violate those access controls or principles. That’s not where the money is for them.
Behavior is always shaped by rewards and punishments. Positive reinforcement is always stronger.
All these conversations always end up boiling down to someone thinking they’re being clever for pointing out you have to trust a company at the end of the day when it comes to security and privacy.
Yes. Valid. So if you have to trust someone, doesn’t it make sense for it to be someone who has built protecting privacy into their core value proposition, versus a company that has baked violating your privacy into their value prop?
Apple doesn't care about you, the individual. Your value as a singular customer is worthless. They do care about the whole; a whole that governments can threaten to exclude them from if they don't cooperate with domestic surveillance demands. How far off do you really think American iCloud is from China? If Apple is willing to backdoor one server, what's stopping them from backdooring them all? If they're willing to lie about notification security, what's stopping them from lying about server integrity too?
And worse, Apple markets security. That's it; you can't go verify their veracity outside the dinky little whitepapers they publish. You can't know for sure whether they have privacy violation baked into their system because you can't actually verify anything. You simply have to guess, and the best guess you can make is based on whatever Apple markets as "true" to you. In reality, we can do better with security and should probably expect more from one of the largest consumer technology brands in the world. Simply assuming that they aren't violating user privacy is an absurd thing to gamble your security on.
My dad fits into this category. So worried about being “tracked by the government.” He’s not a dissident. He’s not a journalist. Not a freedom fighter. Just deeply inconveniencing his kids with some of his tech choices.
But if these people were the targets of APTs, all the massive technology lifestyle changes they’ve made to supposedly protect themselves wouldn’t really matter.
Sounds like your dad is the cool dude, and you're the tech-obsessed weirdo. Do you visit him often?
He’s otherwise a good dude. Just makes some tech choices here and there as if he’s a former CIA agent on the run that sort of just make you chuckle and shake your head.
Which is why it's good for us to demand more from capable companies. Apple looks good when they're scared, and the market wins when they're forced to compete in novel and interesting ways. Success breeds complacency, the rest is distant history.
Oh, boy, but this is deeply false. Apple literally provides security researchers with models of their devices to verify their security claims on their most important cash cow, the iPhone.
This is just an incredibly bold and verifiably false claim.
Wow.
On top of that, they fail to commit to iOS security on the level of AOSP and don't let researchers create hardened variants or custom patches. With actively distributed exploits like Pegasus still in use, that's the sort of behavior that turns your userbase into a stationary target. Giving researchers iPhones is insultingly useless.
Apple vehemently opposes the concept of anyone securing their iPhone except them. They have a well-documented habit of ignoring vulnerabilities and offering zero compensation for the discovery of zero-days. Apple's ambivalence towards the security research sector is one of the things they are best known for in hacker communities. It is "verifiably false" only in the sense that Apple spends quite a lot of money marketing the opposite of what they actually do in reality (not that you should be surprised by that).
American companies are subject to US law, full stop. Global technology companies have to balance interests to operate globally. China requires a local partner to operate services in the PRC, thus Apple and Microsoft (and others) operate with a business partner in that market.
From a business perspective, there's little or no incentive for Apple to take measures to collect information on you systematically - they do not monetize it and won't devote resources to its collection. However, not being responsive to government requests, demands, or orders for information will result in punitive action. So they comply.
No company cares about you. They don't love or hate you. There's no moral purity - the competitive platform is owned by a company that owns the advertising market and has a long history of extracting every sinew of data to create profiles that allow for maximally efficient ad delivery. Engaging in whataboutism isn't productive.
How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?
What are you going to do if a major privacy scandal comes out with Apple at the center? If you wouldn't jump ship from Apple after a major privacy scandal then why does your input on this matter at all?
Some people feel that is inevitable so it's best to just rip that bandaid off now.
If you're already using a dumb phone and eschewing modern software services, then I'm not really talking to you. Roll on brother/sister, you are living your ideals.
> How do you distinguish between a company who 'has built protecting privacy into their core value proposition' and one who just says they've done so?
The business incentives. Apple's brand and market valuation to some extent depend on being the secure and privacy-oriented company you and your family can trust, while Google's valuation and profit depend almost entirely on exploiting as much of your personal data as they can possibly get away with. The business models speak for themselves.
Does this guarantee privacy and security? Does Apple have a perfect track record here? No of course not, but again if these are my two smartphone choices it seems fairly clear to me.
If you really perceive this as a binary choice, I have no idea how you could conclude that iOS is more secure than the Android Open Source Project.
...of course, it's not just a choice between a Google-spyware phone or an Apple-spyware phone. Many people like to reduce it to that so they can rationalize whichever company they pick, but in reality you have many choices including no smartphone at all. On Android's side, the Open Source images have enabled rigorous cross-referencing in OS capability, as well as forks that reduce the already-limited attack surface. Apple has a long track-record of letting zero-days fester in their inbox and failing to communicate promptly to security researchers, even for actively-exploited vulnerabilities.
It's not a "false equivalency" to highlight how Google, Apple and Microsoft all fold over like wet paper when the intelligence agencies come around. It's not a coincidence, either; all of those companies are enrolled in the NSA's domestic warrantless surveillance program.
Oh come on man. This is why these conversations often aren’t even worth having.
But, hey, at least the NSA won’t get ya.
I wonder how all those people did it in the 90s and 00s and before the age of smartphones.
Reality is based in a context.
Or are we going to go to even more “get off my lawn” kind of places and talk about how ancient man survived quite fine without the internet?
Like eschewing smartphones, just using basic feature phones, and interacting in real physical settings rather than digital ones.
There's a growing and warranted pushback against pervasive and addictive digital technology.
This stuff is all designed so that even an employee with physical access to the machine would find it very difficult to get data. It's encrypted at rest by customer keys, stored in enclaves in volatile RAM. If you detached the computer or disk, you'd lose access. You'd have to perform an attack by somehow injecting code into the running system. But Shielded VMs/GKE instances make that very hard.
I am not a Google employee anymore, but this common tactic of just throwing out "oh, their business model includes an ad model, ergo they will sell anything and everything, and violate contracts they sign to steal private data from your private cloud" is a bridge too far.
There are multiple verified stories on the lengths Apple goes internally to keep things secret.
I saw a talk years ago about (I think) booting up some bits of the iCloud infrastructure, which needed two different USB sticks holding different keys to boot up. Then both keys were destroyed so that nobody knows the encryption keys and nobody can decrypt the contents.
Using deniable, one-time keys etc. is... not that unusual. In fact I'd say I'm more worried about the use of random USB keys there instead of a proper KMS system.
(There are similar stories from Google about how a cold start can be difficult when you end up with a loop in your access controls: a fortunately simulated cold start showed that they couldn't physically access the necessary KMS to bootstrap the system... because access controls depended, after many layers, on the very system being cold-started.)
Any piece of technology MAY have a backdoor or secondary function you don't know of and can't find out without breaking said device.
[1]: https://developer.apple.com/videos/play/wwdc2024/10114/
If I were king of Apple and I truly valued user privacy, I would be careful not to tie any revenue streams to products that entail the progressive violation of user privacy.
If a third party wants that data, whether the third party is an online criminal, government law enforcement or a "business partner", this idea that Apple's "business model" will somehow negate the downsides of "cloud computing", online advertising and internet privacy is futile. Moreover, it is a myth. Apple is spending more and more on ad services; we can see this in its SEC filings. Before he died, Steve Jobs was named on an Apple patent application for showing ads during boot. The company uses "privacy" as a marketing tactic. There is no evidence of an ideological or actual effort to avoid the so-called "tech" company "business model". Apple follows what these companies do. It considers them competitors. Apple collects a motherlode of user data and metadata. A company that was serious about privacy would not do this. It's a cop out, not a trade off.
To truly avoid the risks of cloud computing, online advertising and associated privacy issues, choosing Apple instead of Google is a half-baked effort. Anyone who was serious about it would choose neither.
Of course, do what is necessary, trust whomever; no one is faulting anyone for making practical choices, but let's not pretend choosing Apple and trusting it solves these problems introduced by so-called "tech" company competitors. Apple pursues online advertising, cloud computing and data collection, all at the expense of privacy. With billions in cash on hand, it is one of the wealthiest companies on Earth. Does it really need to do that?
In the good old days, we could call Apple a hardware company. The boundaries were clear. Those days are long gone. Connect an Apple computer to a network and watch what goes over the wire with zero user input, destined for servers controlled by the mothership. There is nothing "private" about that design.
Yeah. I feel like the conversation needs some guard rails like, "Within the realm of big tech, which has discovered that one of its most profitable models is to make you the product, Apple is really quite privacy friendly!"
> Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don’t even hide it, it’s a core part of their business model.
This wasn't the experience I saw. Google is intentional about which data from which products go into their ads models (which are separate from their other user modeling), and you can see things like which data of yours is used in ads personalization on https://myadcenter.google.com/personalizationoff or in the "Why this ad" option on ads.
> and it’s very much not necessary or even a sound business idea for them to do something else
I agree that Apple plays into privacy with their advertising and product positioning. I think assuming all future products will be privacy-respecting because of this is over-trusting. There is _a lot_ of money in advertising / personal data
I'm trying to understand if this is really possible. I know they claim so, but is there any info on how this would prevent Apple from executing code different from what is presented for audit?
So the question is: could the hash be falsified? That’s why they’re publishing the source code to firmware and bootloader, so researchers can audit the secure boot foundations.
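For intuition, measured boot is basically a running hash chain: each stage hashes the next component before handing control to it, so the final attested value commits to every image in the chain. A rough Swift sketch (illustrative only; the stage names and images are made up, not Apple's actual boot components):

    import CryptoKit
    import Foundation

    // Illustrative measured-boot chain: extend a running measurement with the
    // hash of each component before "executing" it.
    struct BootStage {
        let name: String
        let image: Data
    }

    func measureBootChain(_ stages: [BootStage]) -> Data {
        var measurement = Data(repeating: 0, count: 32)   // initial PCR-like value
        for stage in stages {
            let stageHash = Data(SHA256.hash(data: stage.image))
            // extend: new measurement = H(previous measurement || H(stage image))
            measurement = Data(SHA256.hash(data: measurement + stageHash))
        }
        return measurement
    }

    // A verifier holding the published firmware/bootloader images recomputes the
    // same chain and compares it to the value a node attests to.
    let chain = [
        BootStage(name: "bootROM",    image: Data("boot rom bytes".utf8)),
        BootStage(name: "bootloader", image: Data("bootloader bytes".utf8)),
        BootStage(name: "sepOS",      image: Data("sepOS bytes".utf8)),
    ]
    let expected = measureBootChain(chain)
    print("expected measurement: \(expected.map { String(format: "%02x", $0) }.joined())")

A researcher who has the published images can recompute this chain independently; falsifying the hash then requires lying at the attestation layer itself, which is why the lowest layers are the part being opened up.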
I am sure there is some way that a completely malevolent Apple could design a weakness into this system, so they could spend a fortune on the trappings while still being able to access user information that they could never use without exposing the lie and being crushed under class actions and regulatory assault.
But I reject the idea that that remote possibility means the whole system offers no benefit users should consider in purchasing decisions.
Matt Green has posted about it, so I'm sure it's been thought out - but it's hard to understand how it doesn't just depend on employees doing the right thing; if you could rely on that, why would you need all the rigmarole?
And at least after my experiences with T2 chip, I consider Apple devices to be always owned by Apple first...
Build your system so that it can't be decrypted, don't log anything etc. Mullvad has been doing this with VPNs and law enforcement has tested it - there's nothing for them to get.
Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent. Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.
There's no reason why they wouldn't go to same lengths on their private cloud compute. It's the one thing they can do that Google can't.
I thought the outcome of that case was that no precedent was set, since the iPhone was unlocked before the FBI could test their argument in court.
> Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.
Firmware signed by Apple is what runs to verify your biometrics and decide whether or not to unlock the device. At any point Apple could sign firmware with a backdoor for this processor which lets them unlock any phone. How did they prevent this in future iPhone versions?
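To make the concern concrete: if the only check is "was this image signed by the vendor's key?", then whoever holds that key can produce an image the device will trust, backdoored or not. A minimal sketch with hypothetical keys and firmware (not Apple's real mechanism):

    import CryptoKit
    import Foundation

    // Hypothetical: the device pins the vendor's public key and accepts anything signed by it.
    let vendorKey = Curve25519.Signing.PrivateKey()    // held by the vendor
    let devicePinnedKey = vendorKey.publicKey          // baked into the device

    func deviceAcceptsFirmware(_ image: Data, signature: Data) -> Bool {
        devicePinnedKey.isValidSignature(signature, for: image)
    }

    let legitFirmware = Data("enforce passcode retry limits".utf8)
    let backdooredFirmware = Data("skip passcode retry limits".utf8)

    let sig1 = try! vendorKey.signature(for: legitFirmware)
    let sig2 = try! vendorKey.signature(for: backdooredFirmware)

    print(deviceAcceptsFirmware(legitFirmware, signature: sig1))       // true
    print(deviceAcceptsFirmware(backdooredFirmware, signature: sig2))  // also true: same key, same trust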
> theshrike79: The best protection against "secret orders" is to use mathematics.
> Build your system so that it can't be decrypted, don't log anything etc. Mullvad has been doing this with VPNs and law enforcement has tested it - there's nothing for them to get.
> Same has been proven with Apple not allowing FBI to open an iPhone, because it'd set a precedent. Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.
> There's no reason why they wouldn't go to same lengths on their private cloud compute. It's the one thing they can do that Google can't.
They did go to the same lengths: they have the ability to see your data whenever they choose to, since they own the signing keys.
Now you can't debug anything.
> Mullvad has been doing this with VPNs
Mullvad do not need to store any data at all. In fact, any data that they store is a risk. Minimising the data stored minimises their risk. The only thing they need to store is keys.
Look, if you want to ask an AI service if this photo has a dog in it, that's simple and requires no state other than the photo. If you want to ask if it has my dog in it, that's a whole 'nother kettle of fish. How do you communicate the descriptors that describe your dog? How do you generate them? On device? That'll drain your battery in very short order.
> Apple not allowing FBI to open an iPhone, because it'd set a precedent
Because they didn't follow process.
> Future iPhone versions were made so that it's literally impossible for even Apple to open a locked iPhone.
They don't need to; just hack the iCloud backup. Plus, it's not impossible, it's just difficult. If you own the key authority then it's less hard.
Right, but I have no reason to think that this isn't a marketing ploy either, just another story. There is simply no way that Apple is as big as it is, without providing whatever data the government requires. Corporations and governments are not your friend.
No government order short of targeting a specific backdoored update to a specific person will allow them to give data they can't access.
And if you're doing something that can make a TLA force Apple to create a targeted iOS update just for you, it's not something regular people can or should worry about.
Apple keeps normal people safe from mass surveillance. Being protected from the CIA/NSA requires going Full Snowden, and that's not a technological problem: you need to change the way you live.
I'm failing to see what the challenge would be here. Apple can technically do that. The government can force them to do that.
> The scandal broke in early June 2013, when the Guardian newspaper reported that the US National Security Agency (NSA) was collecting the telephone records of tens of millions of Americans.
> The paper published the secret court order directing telecommunications company Verizon to hand over all its telephone data to the NSA on an "ongoing daily basis".
https://www.bbc.com/news/world-us-canada-23123964
You seem to think that, 10 years later and under cover of secret orders, this is NOT still going on. Not Apple!
People's lovely trusting natures in corporations and government never ceases to amaze me.
I don't think this is already the case, and I think the article is an example of safeguards being put into place (in this particular scenario) to prevent it.
Under the system described in the linked paper, your scenario is not possible. In fact, the whole thing looks to be designed to prevent exactly that scenario.
Where do you see the weakness? How could a secret order result in undetectable data capture?
In case anyone was uncertain about whether to trust what we are told: we learned from the Snowden revelations that the US government was tapping millions of phone records.
So, we are told there are secrets, and we are told that there are mechanisms in place to prevent this information from being made public.
You are also free to believe that the revelations are no longer relevant... I'd like to hear the reason.
IMO - the reverse is the case - in that you need to show why Apple have now become trustworthy. Why would Apple not be subject to secret judgements?
I know there is a lot of marketing spin about Apple's privacy - but do you really think that they would actually confront the government system, in a way that isn't some further publicity stunt? Can one confront the government and retain a license to operate, do you think? Is it not probable that the reality is that Apple have huge support from the government?
Perhaps this kind of idea is hard to understand - that one can make a big noise about privacy, and how one is doing this or that to prevent access, all the while ensuring that access is provided to authorised parties. Corporations can say this sort of thing with a straight face - it's not a privacy issue to provide information, it's a (secret) legal issue!
Sorry, but secret courts and secret judgements, along with existing disclosure that millions were being spied upon, means one needs to expect the worst.
But I'm not sure where that leaves you. Is it just a nihilistic "no security matters, it's all a show" viewpoint?
You wouldn't expect a repeat abuser to stop abusing just because of 'time' or a marketing campaign. And yet this is the case here. People keep looking to their tormentors for solutions.
Not expecting healing from those also inflicting the trauma, ie changing one's expectations, seems like a minimum effort/engagement in my view, but it's somehow inconceivable.
https://www.apple.com/legal/privacy/data/en/apple-advertisin...
It also exempts itself from normal tracking opt-outs in iOS. It has _another_ set of settings you need to opt out of to disable _their_ advertising tracking.
Every Teams meeting we have is now transcribed by AI, and while it's something we want, it's also a lot of data in the hands of a company where we don't fully know what happens with it. Maybe they keep it safe and only really share it with the NSA or whichever American sneaky agency listens in on our traffic. Which isn't particularly tin-foil-hat: we've semi-recently had a spy scandal where it was revealed, somewhat unrelatedly (that wasn't the scandal itself), that our own government basically lets the US snoop on every internet exit node our country has. It is what it is when you're basically a form of vassal state to the US. Anyway, with the increased AI monitoring tools built directly into Microsoft products, we're now handing over more data than ever.
To get to the point, we're currently seeing some debate on whether Chromebooks and Google education/workspaces should be allowed in schools. Which is a good debate. Or at least it would be if the alternative weren't Microsoft... Because does it really matter if it's Google or Microsoft that invades your privacy?
Apple is increasingly joining this trend. Only recently it was revealed that new Apple devices have some sort of radio built into them, even though it's not on their tech sheets. Or in other words, Apple has now joined the trend of devices that can form their own internet by being near other Apple devices, similar to how Samsung and most car manufacturers have operated for years now.
And again it sort of leads to... does it really matter if it's Google or Apple that intrudes on your privacy? To some degree it does, of course; I'd personally rather have Microsoft or Apple spy on me, but I would frankly prefer if no one spied on me.
Specifically, open-source and self-hostable. Open source doesn't save you if people can't run their own servers, because you never know whether what's in the public repo is the exact same thing that's running on the cloud servers.
Other than obvious "open source software isn't perfectly secure" attack scenarios, this would require a non-targeted hardware attack, where the entire infrastructure would need to misinterpret the software or misrepresent the chain of custody.
I believe this is one of the protections Apple is attempting to implement here.
Their device says it's been attested. Has it? Who knows? They control the hardware, so can just make the server attest whatever they want, even if it's not true. It'd be trivial to just use a fake hash for the system volume data. You didn't build the attestation chip. You will never find out.
Happy to be proven wrong here, but at first glance the whole idea seems like a sham. This is security theater. It does nothing.
> It’d be trivial to just use a fake hash
You have to go deeper to support this. Apple is publishing source code to firmware and bootloader, and the software above that is available to researchers.
The volume hash is computed way up in the stack, subject to the chain of trust from these components.
Are you suggesting that Apple will actually use totally different firmware and bootloaders, just to be able to run different system images that report fake hashes, and do so perfectly so differences between actual execution environment and attested environment cannot be detected, all while none of the executives, architects, developers, or operators involved in the sham ever leaks? And the nefarious use of the data is never noticed?
At some point this crosses over into “maybe I’m just a software simulation and the entire world and everyone in it are just constructs” territory.
It's also not as complicated as you make it sound here. Because Apple controls the hardware, and thus also the data passing into attestation, they can freely attest whatever they want - no need to truly run the whole stack.
But operationally it is incredibly complicated to deliver and operate this kind of false attestation at massive scale.
The big issue with Apple is that their attestation infrastructure is wholly private to them; you can't self-host. (Android is a bit similar in that applications using Google's attestation system have the same limitation, but you can in theory set up your own.)
Under the assumption that Apple is telling the truth about what the server hardware is doing, this could protect against unauthorized modifications to the server software by third parties.
If however, we assume Apple itself is untrustworthy (such as, because the US government secretly ordered them to run a different system image with their spyware installed) then this will not help you at all to detect that.
If Apple holds the signing keys for the servers, can they not change the code at any time?
What runs on the servers isn't actually very important. Why? Because even if you could somehow know with 100% certainty that what a server runs is the same code you can see, any provider is still subject to all kinds of court orders.
What matters is the client code. If you can audit the client code (or better yet, build your own compatible client based on API specs) then you know for sure what the server side sees. If everything is encrypted locally with keys only you control, it doesn't matter what runs on the server.
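As a sketch of that model (key held only by the client, server stores only ciphertext; illustrative Swift, not any particular product's API):

    import CryptoKit
    import Foundation

    // Client-held key: never uploaded, never shared with the server.
    let deviceOnlyKey = SymmetricKey(size: .bits256)

    let note = Data("private note: dentist appointment on Tuesday".utf8)
    let sealed = try! AES.GCM.seal(note, using: deviceOnlyKey)

    // This opaque blob is all the server ever receives or stores.
    let uploadBlob = sealed.combined!

    // Later, on a device that holds the key, the blob can be opened again.
    let box = try! AES.GCM.SealedBox(combined: uploadBlob)
    let recovered = try! AES.GCM.open(box, using: deviceOnlyKey)
    print(String(decoding: recovered, as: UTF8.self))

The catch is that this only works when the server never needs the plaintext, which is exactly what server-side inference can't promise; hence the attestation approach instead.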
If you trust math you can prove the software is what they say it is.
Yes it is work to do this, but this is a big step forward.
It does not tell you how it got that key. A compromised server would send you the key all the same.
You still have to trust in the security infrastructure: trust that Apple is running the hardware it says it is, and trust that Apple is running the software it says it is.
Security audits help build that trust, but it is not and never will be proof. A three-letter-agency of choice can still walk in and demand they change things without telling anyone. (And while that particular risk is irrelevant to most users, various countries are still opposed to the US having that power over such critical user data.)
To quote:
> Verifiable transparency goes one step further and does away with the hypothetical: security researchers must be able to verify the security and privacy guarantees of Private Cloud Compute, and they must be able to verify that the software that’s running in the PCC production environment is the same as the software they inspected when verifying the guarantees.
So how does this work?
> The PCC client on the user’s device then encrypts this request directly to the public keys of the PCC nodes that it has first confirmed are valid and cryptographically certified. This provides end-to-end encryption from the user’s device to the validated PCC nodes, ensuring the request cannot be accessed in transit by anything outside those highly protected PCC nodes
> Next, we must protect the integrity of the PCC node and prevent any tampering with the keys used by PCC to decrypt user requests. The system uses Secure Boot and Code Signing for an enforceable guarantee that only authorized and cryptographically measured code is executable on the node. All code that can run on the node must be part of a trust cache that has been signed by Apple, approved for that specific PCC node, and loaded by the Secure Enclave such that it cannot be changed or amended at runtime.
But why can't a 3-letter agency bypass this?
> We designed Private Cloud Compute to ensure that privileged access doesn’t allow anyone to bypass our stateless computation guarantees.
> We consider allowing security researchers to verify the end-to-end security and privacy guarantees of Private Cloud Compute to be a critical requirement for ongoing public trust in the system.... When we launch Private Cloud Compute, we’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software.
So your data will not be sent to nodes that cannot cryptographically attest to running the publicly listed, researcher-inspectable software.
These are pretty strong guarantees, and really make it difficult for Apple to bypass.
It's like end-to-end encryption using the Signal protocol: relatively easy to verify it is doing what is claimed, and extraordinarily hard to bypass.
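To make the described client behavior concrete, here's a hedged sketch of the flow those quotes outline: refuse any node whose attested software measurement isn't on the published list, and only then encrypt the request to that node's key. Illustrative Swift; the names and handshake are assumptions, not Apple's actual PCC protocol:

    import CryptoKit
    import Foundation

    struct NodeAttestation {
        let nodePublicKey: Curve25519.KeyAgreement.PublicKey
        let softwareMeasurement: Data   // hash of the software the node claims to run
    }

    // Measurements of publicly released PCC builds (hypothetical values).
    let publishedMeasurements: Set<Data> = [
        Data(SHA256.hash(data: Data("pcc-release-1.0".utf8)))
    ]

    func encryptRequest(_ request: Data, to node: NodeAttestation) throws -> Data? {
        // 1. Refuse to talk to nodes whose measurement is not publicly listed.
        guard publishedMeasurements.contains(node.softwareMeasurement) else { return nil }

        // 2. Encrypt end-to-end to that node's key (ECDH + HKDF + AEAD).
        let ephemeral = Curve25519.KeyAgreement.PrivateKey()
        let shared = try ephemeral.sharedSecretFromKeyAgreement(with: node.nodePublicKey)
        let key = shared.hkdfDerivedSymmetricKey(using: SHA256.self,
                                                 salt: Data(),
                                                 sharedInfo: Data("pcc-request".utf8),
                                                 outputByteCount: 32)
        // A real protocol would also transmit ephemeral.publicKey so the node can derive the key.
        return try ChaChaPoly.seal(request, using: key).combined
    }

The security of step 1 still rests on the attestation itself being honest, which is where the transparency log and the hardware root of trust come in.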
Specifically:
> The only thing the math tells you is that the server software gave you a correct key.
No, this is secure attestation. See for example https://courses.cs.washington.edu/courses/csep590/06wi/final... which explains it quite well.
The weakness of attestation is that you don't know what the root of trust is. But Apple strengthens this by their public inspection and public transparency logs, as well as the target diffusion technique which forces an attack to be very widespread to target a single user.
These aren't simple things for a 3LA to work around.
> These are pretty strong guarantees, and really make it difficult for Apple to bypass.
These guarantees rely entirely on trust in the hardware but it's not your hardware.
This is exactly the problem that "trusted computing" is designed to solve.
I'd encourage you to read for example the AWS Nitro Enclave outline here: https://aws.amazon.com/blogs/security/confidential-computing....
Nitro enclaves are similar in that they are designed to stop AWS operators from having access to the compute, even though it isn't owned by you.
Still, this is notably more secure than your typical cloud compute, where you have to just trust the cloud provider when they pinky swear that they won’t peek.
How does the client verify that the code running on the PCC is the code that has been audited publicly, and not a modified version (signed by Apple) logging your data or using it for other purposes?
Actually it is. Read their docs on what they do linked below.
I will trust them because the alternatives I see are scattered and unfocused.
Or if someone compels them to
…wait, haven’t there been MULTIPLE cases of open-source projects getting hacked in the last couple of years?
(I wonder if Matt realizes nobody can read his tweets without an X account? Use BlueSky or Masto, man)
Edit: here's his thread combined https://threadreaderapp.com/thread/1800291897245835616.html?...
https://ioc.exchange/@matthew_d_green - And he's there BTW :)
Impossible to see any content.
Create irrational fear about privacy, push privacy-focused products, and profit as the sheeple promptly fall for it.
Jokes aside, this is no different from people selling bunker beds, gold, ammunition, crypto, VPNs. It is specifically for the set of gullible people who think they and their data are so important. The reality (except for 10,000 people or so) is that most lives and their 'precious' data are worthless. (I'm not talking about SSNs and bank accounts -- those are well protected by the tech cos HN seems to hate on.)
> Ok there are probably half a dozen more technical details in the blog post. It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this.
and
> And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal.
I'd prefer things stay on the device but at least this is a big commitment in the right direction - or in the wrong direction but done better than their competitors, I'm not sure which.
> As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Compute. You won't opt into this, you won't necessarily even be told it's happening. It will just happen. Magically.
Presumably it will be possible to opt out of AI features entirely, i.e. both on-device and off-device?
Why would a device vendor not have an option for on-device AI only? iOS 17 AI features can be used today without iCloud.
Hopefully Apple uses a unique domain (e.g. *.pcc.apple.com) that can be filtered at the network level.
With OpenAI calls it's different, because the privacy point is stronger.
This is actually what you have to do now if you don’t want Siri and Mail to leak your address book to Apple.
By disabling Siri and iCloud, or other policies?
With a hardened configuration, Apple has world-class device security. In time, remote PCC may prove as robust against real-world threats. Until then, it would be good to retain on-device security policy and choice for remote computation.
Is there a good non-Apple reference for the functions performed by their servers?
https://nitter.poast.org/matthew_d_green/status/180029189724...
He actually has an active Mastodon account, but this particular story is not on there (yet): https://ioc.exchange/@matthew_d_green
Probably the mainstream Twitter alternative at this point?
a) Top 10 App Store charts in every country.
b) Heavily promoted through Facebook and Instagram.
c) DAUs are higher than X.
But there is far more.
Kara Swisher is on Threads, for instance.
You build the machine god all day and somehow find in yourself respect for what Jack let Twitter become?
If you dream of Napoleon, an elephant tooting a horn is a signal to sell.
False; works fine for me logged out or incognito.
Nitter still works [1]. Also Threadreader (as can be seen linked in Green's tweet).
[0] https://tweetdelete.net/resources/view-twitter-without-accou... [1] https://nitter.poast.org/matthew_d_green
I get that there's benefit to what they are doing. But the problem of selling a message of trust is you absolutely have to be 100% truthful about it, and them failing to be transparent that people's data is still subject to access like this poisons the larger message they are selling.
If you don't trust the people making your OS, your problems are much deeper than fretting about off-device AI processing.
Even if the org has been trustworthy to this point, I think this step makes it more likely (maybe still unlikely, but more likely) that in the future they do look at your data, as fewer things have to change for that to happen.
Trust can be abused, certainly, but it also allows collaboration and specialisation, and without those I doubt we'd have gotten very far.
Better to trust many people narrowly (e.g. I don't trust the bus driver to drill my cavities) than to trust a small handful of people broadly (e.g. like Apple expects of their users).
That's not to mention the argument that any software with a LOC count higher than some number is impossible to audit because of complex state handling. Rice's Theorem applies to your brain too, probably, to some extent. Idk about purely functional Haskell though.
What makes you think this?
The GrapheneOS people are doing this, for example. It's not crazy to consider your device vendor as part of your threat model, because like it or not, they are a threat.
I'm focusing on simpler datasets for now, I want people to be able to annotate a paper restaurant menu with notes about allergens in a way that other people with those allergens can summon those annotations and steer clear--all without participation from the restaurant. Like an augmented reality layer for text.
I hope it grows up into something that would let you ask:
> How trusted is this line of code, and by whom?
The choice is either many little trust relationships or a giant leap of faith. I feel better served by a giant leap of faith and access to all the technology I can use.
I prefer no documentation and consistent behavior to no documentation and a bunch of internet howtos on what might work.
That said, I really don't disagree with this point at all in terms of it being a valid problem. It's not a fixable problem either (it comes down to, again, building trustworthy computers), but it could be pushed a long way towards being solved, whereas today it is still "trust me bro". I don't think Apple will be the company to make progress towards this, though.
If you don't update your local software then it will certainly become automatically backdoored by an accumulating series of security vulnerabilities over time.
> I don't think Apple will be the company to make progress towards this, though.
I agree.
Y'know though, when you put it that way, it sounds inherent that security vulnerabilities will pop up, which is kinda true, at least for the foreseeable future, but to be pedantic, the security vulnerabilities are already there, it's discovering them that's the problem. If we could make secure computers... (time to formally prove everything from the ground up I guess.)
But, that said, I wasn't overlooking this, I'm just looping "getting pwned" into "active involvement". If you have some sufficiently isolated machines, they're probably fine indefinitely. The practicality of this is limited outside of thought experiments. However it's definitely worth noting that unlike a compromised remote, it is at least technically feasible to work on the problem of making local compromise more evident, whereas a remote compromise is truly impossible to reliably be able to detect from the outside.
The eternal dream of unplugging, and living free on Amigas.
Edit: The final couple tweets from the Matthew Green tweet thread posted in another comment sum it up well:
> Wrapping up on a more positive note: it’s worth keeping in mind that sometimes the perfect is the enemy of the really good.
> In practice the alternative to on-device is: ship private data to OpenAI or someplace sketchier, where who knows what might happen to it. And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal. In any case, this is the world we’re moving to. Your phone might seem to be in your pocket, but a part of it lives 2,000 miles away in a data center. As security folks we probably need to get used to that fact, and do the best we can to make sure all parts are secure.
> As security folks we probably need to get used to that fact, and do the best we can to make sure all parts are secure.
Isn’t that sort of where the pragmatism ends? All the parts aren’t going to be secure… Unless I misunderstood his intention, I think the conclusion should be more along the lines of approaching the cloud without trust.
Apple's Private Cloud Compute seems to be conceptually equivalent to System Transparency - an open-source software project my colleagues and I started six years ago.
I'm very much looking forward to more technical details. Should anyone at Apple see this, please feel free to reach out to me at stromberg@mullvad.net. I'd be more than happy to discuss our design, your design, and/or give you feedback.
Relevant links:
- https://mullvad.net/en/blog/system-transparency-future
- http://system-transparency.org (somewhat outdated)
This is what they are doing. Search for implementations of this to understand more of the technical details.
Confidential Compute involves technologies such as SGX and SEV (for which I think Asylo is an abstraction, not sure), where the operator (eg Azure) cannot _intercept data in hardware_. The description of what Apple is doing "just" uses their existing code signing and secure boot mechanisms to ensure that everything from the boot firmware (the computers that start before the actual computer starts) to the application is what you intended it to be. Once it lands in the PCC node it is inspectable, though.
Confidential Compute goes a step further to ensure that the operator cannot observe the data being operated on, thus also defeating shared workloads that exploit speculative barriers, and hardware bus intercept devices.
Confidential Compute also allows attestation of the software being run, something Apple is not providing here. EDIT: looks like they do have attestation, however it's different to how SEV etc attestation works. The client still has to trust that the private key isn't leaked, so this is dependent on other infrastructure working correctly. It also depends on the client getting a correct public key. There's no description of how the client attests that.
Interesting that they go through all this effort just for (let's be honest) AI marketing. All your data in the past (location, photos, contacts, safari history) is just as sensitive and deserving of such protection. But apparently PCC will apply only to AI inference workloads. Siri was already and continues to be a kind of cloud AI.
EDIT: and they mention SGX and Nitro. Other CC technologies :)
Yes, but that's only within the enclave. Every Mac hardware since T2 has had that, and we don't consider them strong enough to meet the CC bar.
As an example of the difference, CC is designed so that a compromised hypervisor cannot inspect your guest workload. Whereas in Apple's design, they attempt to prove that the hypervisor isn't compromised. Now imagine there's a bug ...
(Not that SGX hasn't had exploitable hardware flaws, but there is a difference here.)
Yes!
> Software will be published within 90 days of inclusion in the log, or after relevant software updates are available, whichever is sooner.
I think this theoretically leaves a 90-day maximum gap between publishing vulnerable software and potential-for-discovery. I sincerely hope that the actual availability of images is closer to instant than the maximum, though.
If you're trying to do a quiet backdoor and you have the power to compel Apple to assist, the route to take is to simply misuse the keys that are supposed to only go into hardware for attestation, and instead simply use them to forge messages attesting to be running software on hardware that you aren't.
Or just find a bug in the software stack that gives you RCE and use it
Well, your messages have to be congruent with the expected messages from the real hardware, and your fake hardware has to register with the real load balancers to receive user requests.
> RCE
That’s probably the best attack vector, and presumably why Apple is only making binary executables available. Not that that stops RCE.
But even then you can’t pick and choose the users whose data you compromise. It’s still a sev0 problem, but less exploitable for the goals of nation states so less likely to be heavily invested in for exploiting.
Yes, which is why you need the keys that are used to make real hardware. Provided you have those very secret and well protected keys (you are Apple being compelled by the government) that's not an issue.
> and your fake hardware has to register with the real load balancers to receive user requests.
Absolutely, but we're Apple in this scenario so that's "easy".
> The process involves multiple Apple teams that cross-check data from independent sources, and the process is further monitored by a third-party observer not affiliated with Apple. At the end, a certificate is issued for keys rooted in the Secure Enclave UID for each PCC node.
So, in your scenario, the in-house certificate issuer is compelled to provide certificates for unverified hardware, which will then be loaded with a parallel software stack that is malicious but reports the attestation ID of a verified stack.
So far, so good. Seems like a lot of people involved, but probably still just tens of people, so maybe possible.
Are you envisioning this being done on every server, so there are no real ones in use? Or a subset? Just for sampling, or also with a way to circumvent user diffusion so you can target specific users?
It's an interesting thought exercise, but the chances of getting anything of real value from this without leaks or errors that expose the program seem pretty small.
> Well, an 89-day "update-and-revert" schedule will take care of those pesky auditors asking too many questions about NSA's backdoor or CCP's backdoor and all that.
By a backdoor I am taking it to mean they can compel assistance from inside Apple; it's not a hack where they have to break in and hide it from everyone (though certainly they would want to keep it to as few people as possible).
At least in the NSA's case I think it would be reasonable to imagine that they are limited to compromising a subset of the users' data. Specific users they've gotten court orders against or something... so yes, a subset of nodes, and also circumventing user diffusion (which sounds like traffic analysis right up the NSA's alley, or a court order to whatever third party Apple has providing the service).
How does traffic analysis help? The client picks the server to send the query to, and encrypts with that particular server's public key. I guess maybe you have the load balancer identify the target and only provide compromised servers to it? But then every single load balancer has to have the list of targeted individuals and compromised servers, which seems problematic for secrecy at scale.
> But then every single load balancer has to have the list of targeted individuals and compromised servers, which seems problematic for secrecy at scale.
It really doesn't. This seems well within the realms of what you could achieve with a court order without it becoming public.
It doesn't actually say there is no persistent storage; it says that the compute node will not store it for longer than the request. There's nothing to stop the data coming from a datastore outside of the "PCC" in another part of Apple's infrastructure.
How often do PCC servers reboot and wipe the temporary encryption key?
That's why you use them just as dumb pipes forwarding encrypted data traffic from one place to another.
No SMS, no phone calls if you can avoid it.
Apple deserves credit for correctly analyzing their threat model and designing their system accordingly.
Would this work in some authoritarian countries? No and Apple doesn't care. Would it work in a western one? Maybe.
If pressured by the government, Apple can simply change the client software to loosen the attestation requirements for private compute. And that would be the most inconspicuous choice.
"oh it's our dev version ? what's the problem ? we need data access for troubleshooting"
70,000+ Apple user accounts are surveilled in this manner every year.
Who is this for? Don't get me wrong, I think it's a great effort. This is some A+ nerd stuff right here. It's speaking my language.
But I'm just going to figure out how to turn off "calls home", because I don't want it doing this at all.
Is this speaking to me so I tell others "Apple is the most secure option"? I don't want to tell others "Linux" because I don't want to do tech support for that.
At this point I feel like an old man shouting "Damn you, keep your hands off my data".
I'm not a big user of OpenAI's stuff, but if I was going to use any of it, I'd rather use it through Apple's anonymizing layer than going directly to OpenAI.
And then they showed that they have a prompt asking if you're OK sending the data to OpenAI. Presumably because, despite OpenAI promising not to use your data (a promise Apple relayed), OpenAI didn't buy into this new architecture.
I think this goes back to what Steve said in 2010.
https://youtube.com/watch?v=Ij-jlF98SzA
And yes, while the data might not be linked to the user and is stripped of sensitive data, I could see people not wanting very personal things to go to OpenAI, even if there should be no link. For example, I wouldn’t want any of my pictures going to OpenAI unless I specifically say it is OK for a given image.
OpenAI provides the chatbot interface we all know.
The PCC cloud serves all of the other integrated AI features like notification prioritization, summarization, semantic search, etc. At least when those can’t be run on device.
Don't get me wrong, I've always appreciated Apple's on-device ML/AI features; those have always been powerful, interesting and private. But these announcements feel very rushed, coming literally a few weeks after Microsoft's announcements.
They've basically done almost exactly what Microsoft announced, with a better UX and a pinky promise about privacy. How are they going to pay for all of that compute? Is this going to be baked into the price of iPhones and MacBooks, and then a subscription layer added to continue paying for it? I don't feel comfortable with the fact that my phone is basically extending its hardware to the cloud. No matter how "private" it is, it's just discomforting to know that Apple will be doing inference on things seemingly randomly to "extend" compute capabilities.
Also, what on earth is Apple high on, integrating a third-party API into the OS? How does that even make sense? Google was always a separate app, or a setting in Safari; you didn't have Google integrated at an OS level, and you don't even have that on Android. It feels very discomforting to know that today my phone could phone home to somewhere other than iCloud.
Hardware sales. Only the latest pro/max models will run these models, everyone else is going to have to upgrade.
I don’t care if the government has access to the data. I just don’t want “bad actors” (scammers, foreign governments, ad-tech companies, insurance companies, etc) to have access to my private data. But i also want the power of LLM’s. Does that sound so far fetched?
I’m a realist. I already EXPECT the US govt has all my data. I don’t like the status quo, but it is what it is.
Gemini could claim privacy but I think people would assume that if true, it would make it less effective.
I'm trusting Apple more in this case, they have an incentive to keep things private and according to experts they're doing everything they can to do so.
"Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this." - Matthew D. Green
The main difference seems to be verifiability down to the firmware level.
Nitro Enclaves do not provide measurements of the firmware[0] or hypervisor; furthermore, they state that the hypervisor code can be updated transparently at any time[1].
Apple is going to provide images of the secure enclave processor operating system (sepOS), as well as the bootloader.
It also sounds like they will provide the source code for these components too, although the blog post isn't clear on that.
[0]: https://docs.aws.amazon.com/enclaves/latest/user/set-up-atte....
[1]: https://docs.aws.amazon.com/pdfs/whitepapers/latest/security...
There is no reason to measure hypervisor firmware as it’s not firmware in the case of EC2. The BIOS/UEFI firmware on the mobo is overwritten if it’s tampered with. Hypervisor code (always signed, like all code) is streamed via a verifiably secure system on the server (Nitro cards, which make use of measured boot and/or secure boot).
No idea what the customer-facing term “Nitro enclaves” means, but EC2 engineers are literally mobilized like an army with pages when any security risk (even a minor one) is identified. Basic stuff like this is covered. We even go as far as guaranteeing that core dumps don’t contain any real customer data, even encrypted.
Although in the end, I'm not sure how much of a difference it makes: ultimately, even with measurements of the whole stack, the platform provider, if compelled to do so, can still push out malicious firmware that fakes its measurements.
I think depending on how this plays out, Apple might manage to earn some of the trust its users have in it, which would be pretty cool! But even cooler will be if we get full chain-of-custody audits, which I think will have to entail opening up some other bits of their stack
In particular, the cloud OS being open-source, if they make good on that commitment, will be incredibly valuable. My main concern right now is that if virtualization is employed in their actual deployment, there could be a backdoor that passes keys from secure enclaves in still-proprietary parts of the OSes running on user devices to a hypervisor we didn't audit that can access the containers. Surely people with more security expertise than me will have even better questions.
Maybe Apple will be responsive to feedback from researchers and this could lead to more of this toolchain being auditable. But even if we can't verify that their sanctioned use case is secure, the cloud OS could be a great step forward in secure inference and secure clouds, which people could independently host or build an independent derivative of
The worst case is still that they just don't actually do it, but it seems reasonably likely they'll follow through on at least that, and then the worst case becomes "Super informative open-source codebase for secure computing at scale just dropped" which is a great thing no matter how the other stuff goes
AWS Nitro Enclaves [0] come close but of course what Apple has done is productize private compute for its 1b+ macOS & iOS customers!
[0] https://docs.aws.amazon.com/enclaves/latest/user/nitro-encla...
Yes, the tech industry loves to copy Apple :)
Asahi Linux has a good overview of on-device boot chain security, https://github.com/AsahiLinux/docs/wiki/Apple-Platform-Secur...
> My main concern right now is that if virtualization is employed in their actual deployment, there could be a backdoor that passes keys from secure enclaves in still-proprietary parts of the OSes running on user devices to a hypervisor we didn't audit that can access the containers.
> We’ll release a PCC Virtual Research Environment: a set of tools and images that simulate a PCC node on a Mac with Apple silicon, and that can boot a version of PCC software minimally modified for successful virtualization.
This seems to imply that PCC nodes are bare-metal. Could a PCC node be simulated on an iPad Pro with M4 Apple silicon?
Yes, most technology is built on other technology ;)
> This seems to imply that normal PCC nodes are bare-metal.
I realize that, but there's plausible deniability in it: the mechanism I described could also be hidden in some other virtualization context that uses the unmodified image, without the statement being untrue.
Eh, it goes both ways. Even Apple devices got widgets eventually :)
Interesting to see Swift on Server here!
The same idea has been pursued under another name: Intel, AMD, and Nvidia have been working on this for years. OpenAI released a blog post some time ago where they mentioned this as the "next step". Exciting that Apple went ahead and deployed first; it will motivate the rest as well.
> The server can't afford
What reboot frequency is affordable for PCC nodes?
I wonder if there is anything that enforces an upper limit on the time between reboots?
Since they are building their own chips, it would be interesting to include a watchdog timer that runs off an internal oscillator, cannot be disabled by software, and forces a reboot when it expires.
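For what it's worth, the shape of that idea can be sketched in software, though nothing below is based on any published PCC design; the timeout, the reboot callback, and the class are all made up for illustration. The point is just that the interface exposes a kick operation and nothing that can disarm it:

    import threading

    class HardwareWatchdog:
        """Toy model of a non-disableable watchdog; purely illustrative."""

        def __init__(self, timeout_s, on_expiry):
            self._timeout = timeout_s
            self._on_expiry = on_expiry  # e.g. force a reboot, discarding ephemeral keys
            self._arm()

        def _arm(self):
            self._timer = threading.Timer(self._timeout, self._on_expiry)
            self._timer.daemon = True
            self._timer.start()

        def kick(self):
            """The only operation software gets: restart the countdown."""
            self._timer.cancel()
            self._arm()

        # Deliberately no disable()/stop(): the only way to avoid the expiry
        # action is to keep kicking before the timeout elapses.

    wdt = HardwareWatchdog(timeout_s=24 * 3600,
                           on_expiry=lambda: print("watchdog expired: node reboots"))
    # A healthy workload calls wdt.kick() periodically; if it ever stops, the
    # expiry action fires and the node is forced through a clean reboot.

In real silicon the equivalent of kick() would be a register write and the expiry action a reset line, which is why software couldn't talk its way out of it.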
Let the games begin!
https://www.notebookcheck.net/Apple-and-Imagination-strike-G...
Following the loss of Apple, easily its biggest client, Imagination was bought out by a Chinese-based investment group. Apple subsequently released its first in-house-designed mobile GPU as part of the A11 Bionic SoC that powered the iPhone X. The new "multi-year license agreement" gives Apple official access to a much wider range of Imagination's mobile GPU IP as well as its AI technologies. The A11 Bionic also included the first neural processing engine in an iPhone.
https://9to5mac.com/2020/01/01/apple-imagination-agreement/ Apple described Imagination’s characterizations as misleading while hiring Imagination employees to work for Apple’s GPU team in the same community.
And Apple clearly has made some custom server hardware and slapped a ton of them on a board just to do the PCC stuff.
Airtag anonymity was pretty cool, technically speaking, but a peripheral use case for me.
To me, PCC is a well-reasoned, surprisingly customer-centric response to the fact that due to (processing, storage, battery) limitations not all useful models can be run on-device.
And they tried to build a privacy architecture before widely deploying it, instead of post-hoc bolting it on.
>> 4. Non-targetability. An attacker should not be able to attempt to compromise personal data that belongs to specific, targeted Private Cloud Compute users without attempting a broad compromise of the entire PCC system. This must hold true even for exceptionally sophisticated attackers who can attempt physical attacks on PCC nodes in the supply chain or attempt to obtain malicious access to PCC data centers.
Oof. That's a pretty damn specific (literally) attacker, and it's impressive that it made it into their threat model.
And neat use of onion-style encryption to expose only the bare minimum needed for routing before the request reaches its target node. Also [0]; a toy sketch of the layering follows the footnote below.
>> For example, the [PCC node OS] doesn’t even include a general-purpose logging mechanism. Instead, only pre-specified, structured, and audited logs and metrics can leave the node, and multiple independent layers of review help prevent user data from accidentally being exposed through these mechanisms.
My condolences to Apple SREs, between this and the other privacy guarantees.
>> Our commitment to verifiable transparency includes: (1) Publishing the measurements of all code running on PCC in an append-only and cryptographically tamper-proof transparency log. (2) Making the log and associated binary software images publicly available for inspection and validation by privacy and security experts. (3) Publishing and maintaining an official set of tools for researchers analyzing PCC node software. (4) Rewarding important research findings through the Apple Security Bounty program.
So binary-only for the majority, except the following:
>> While we’re publishing the binary images of every production PCC build, to further aid research we will periodically also publish a subset of the security-critical PCC source code.
>> In a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components.
[0] Oblivious HTTP, https://www.rfc-editor.org/rfc/rfc9458
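Since the layering is easy to get an intuition for, here is a toy two-layer version. The real thing is Oblivious HTTP (RFC 9458) with HPKE and per-node public keys; the pre-shared symmetric keys, JSON framing, and field names below are stand-ins I invented:

    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    relay_key = AESGCM.generate_key(bit_length=256)  # stand-in: key readable by the relay
    node_key = AESGCM.generate_key(bit_length=256)   # stand-in: key of the attested node

    def seal(key, plaintext):
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def unseal(key, blob):
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    # The client builds the request inside-out: payload for the node, routing for the relay.
    inner = seal(node_key, json.dumps({"prompt": "summarize my notes"}).encode())
    outer = seal(relay_key, json.dumps({
        "route_to": "model=foo-3b",  # made-up label: only what routing needs
        "inner": inner.hex(),
    }).encode())

    # The relay peels one layer: it sees routing info but not the prompt.
    routing = json.loads(unseal(relay_key, outer))
    print("relay sees:", {k: routing[k] for k in routing if k != "inner"})

    # The node peels the last layer and finally sees the request.
    print("node sees:", json.loads(unseal(node_key, bytes.fromhex(routing["inner"]))))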
How so? There are any number of state and state-sponsored attackers it should apply to, including China, North Korea, Russia, and Israel as nation states, and their various affiliates like NSO Group.
Even if the NSA and its related entities are going to be notably absent. If your threat model includes unfriendly nation-state actors, then your security depends more on security at the NSA and less on Apple; they have all your data anyway.
If nation-state actors are interested in you, no smartphone is worth it unless it is fully open source on both the hardware and OS side and has been independently verified by multiple reviewers, i.e. no phone on the market today. Everything else is a trade-off of convenience against risk, and the degree of each is quite subjective to each individual.
For the rest of us, the threat model is advertisers, identity thieves, scammers and spammers, and now AI companies using our data for training.
Apple will protect against other advertisers only insofar as it grows their own ad platform. They already sell searches to Google for $20B/year, and there is no knowing the details of the OpenAI deal or what kind of data will be shared.
Another good step in this direction would be publishing a list of all on-device Apple software (including Spotlight models for image analysis) and details of any information that is sent to Apple, along with opt-out instructions via device Settings or Apple Configurator MDM profiles.
Apple does publish a list of network ports and servers, so that network traffic can be permitted for specific services. The list is complicated by 3rd-party CDNs, but can be made to work with dnsmasq and ipset, "Use Apple products on enterprise networks", https://support.apple.com/en-us/101555
And what the PCC chassis looks like for these compute devices (will it be a display-less iPad)?
Gotcha, makes sense :)
There's precedent for this sort of thing as well, like Apple TVs or iPads acting as HomeKit hubs and processing security cam footage on-device.
Maybe they’ll open that up in the future.
Or better yet, make the APIs public and pluggable so that one can choose an off-device AI processor themselves if one is needed.
> Target diffusion starts with the request metadata, which leaves out any personally identifiable information about the source device or user, and includes only limited contextual data about the request that’s required to enable routing to the appropriate model
If this is the case, I wonder how the authentication would work. Is it a security through obscurity sort of situation? Wouldn't it be possible for someone, through extensive reverse engineering, to write a client in Python that gives you a nice free chat API and Apple would be none the wiser?
Essentially, your server sends a nonce, which the client signs using a key pair derived in the Secure Enclave. The server can then verify the signature via an API provided by Apple's servers, which responds with whether or not it was signed by a Secure Enclave-resident key.
I'm guessing this could be helpful to make it hard(er) to write a Python client.
[0]: https://developer.apple.com/documentation/devicecheck/establ...
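Roughly like this toy sketch, with the big caveat that the decisive step, Apple's attestation service vouching that the key actually lives in Secure Enclave hardware, is exactly the part a Python client can't fake and is only gestured at here; the software P-256 key is a stand-in:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # On a real device this key would be generated inside the Secure Enclave and
    # never leave it; here it is just an in-memory stand-in.
    device_key = ec.generate_private_key(ec.SECP256R1())
    device_pub = device_key.public_key()

    # 1. The server issues a fresh nonce (one-time, so a captured response can't be replayed).
    nonce = os.urandom(32)

    # 2. The client signs the nonce with the enclave-resident key.
    signature = device_key.sign(nonce, ec.ECDSA(hashes.SHA256()))

    # 3. The server verifies the signature. The missing (and crucial) step is
    #    asking Apple's attestation service whether device_pub belongs to real
    #    Secure Enclave hardware rather than to a script like this one.
    try:
        device_pub.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
        print("signature OK; hardware provenance still needs Apple's attestation")
    except InvalidSignature:
        print("rejected")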
If it appears in the transparency log, the whole world will be able to see that a suspicious node has started serving requests.
If Apple changes iOS to remove that restriction, the whole world will be able to see that change because it’s client side.
If Apple tries to deliver a custom version of iOS to a single user, the iOS hardware will refuse to run it unless it has a valid signature.
If it has a valid signature, that copy of the firmware is irrefutable evidence that Apple is deliberately breaking its privacy promises and spying on people in a way they specifically said they wouldn’t, which would be extremely harmful to their business.
Apple seems to be going all-out in binding themselves in a way that makes it as difficult as possible to do what you are suggesting.
> Specifically, the user’s device will wrap its request payload key only to the public keys of those PCC nodes whose attested measurements match a software release in the public transparency log.
But what's stopping Apple from returning a node which lies about its "attested measurements" (possibly even to a specific user)? What's to prevent any old machine, not running the TPM at all, from receiving a certificate?
I get that "the process is further monitored by a third-party observer not affiliated with Apple", but I don't know where I'd read their report, or even whether they are themselves paid by Apple, so this feels like a trust-based proof.
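For concreteness, here is roughly what that client-side gate amounts to, as a toy sketch. The log contents, the wrapping construction (X25519 + HKDF + AES-GCM), and every name are my own stand-ins, and it deliberately says nothing about whether the measurement itself can be forged, which is the question above:

    import hashlib, os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Pretend transparency log: measurements of published releases.
    transparency_log = {hashlib.sha256(b"pcc-release-1.0").hexdigest()}

    def wrap_request_key(request_key, node_pub, node_measurement):
        # The gate: refuse to release the per-request key to unlisted software.
        if node_measurement not in transparency_log:
            raise ValueError("measurement not in transparency log; refusing to send key")
        eph = x25519.X25519PrivateKey.generate()
        kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy-pcc-wrap").derive(eph.exchange(node_pub))
        nonce = os.urandom(12)
        eph_pub = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return eph_pub + nonce + AESGCM(kek).encrypt(nonce, request_key, None)

    node_priv = x25519.X25519PrivateKey.generate()
    listed = hashlib.sha256(b"pcc-release-1.0").hexdigest()
    wrap_request_key(os.urandom(32), node_priv.public_key(), listed)  # accepted
    try:
        wrap_request_key(os.urandom(32), node_priv.public_key(), "f" * 64)  # not in the log
    except ValueError as err:
        print("refused:", err)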
Or a "Professional" version of the software that removes all those annoying "AI" features.
It requires a small, trusted remote observer hardware component, e.g. TCG TPM/DICE, Apple Secure Enclave, Google OpenTitan, Microsoft Pluton.
2021 literature review, https://arxiv.org/abs/2105.02466
2022 HN thread on remote attestation, https://news.ycombinator.com/item?id=32282305
The remote endpoint has special hardware which keeps secret signing keys (similar to a TLS server's signing keys). The hardware refuses to reveal the private keys, but will sign certain payloads under certain conditions. In addition, Intel or AMD or whoever also has super duper mega secret master keys (similar to a CA's signing keys), which they use to sign the device's signing keys. The certificate over the device's keys is also stored on the device.
So, each time the endpoint is asked to attest its software, it says yes and signs its response with its keys, and it also sends a certificate showing its keys are signed by the master key. That way, the client knows the special hardware really said yes and that Intel or AMD or whoever said that particular special hardware is legit.
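A stripped-down model of that key hierarchy, with Ed25519 standing in for whatever the real silicon uses and a raw signature standing in for a proper certificate format:

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    vendor_key = ed25519.Ed25519PrivateKey.generate()  # the "master" key kept by Intel/AMD/Apple
    device_key = ed25519.Ed25519PrivateKey.generate()  # the signing key inside one device

    # At manufacture time the vendor vouches for this device's public key.
    device_pub = device_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    device_cert_sig = vendor_key.sign(device_pub)

    # Later, the device attests to what it is running and signs the statement.
    attestation = b"measurement=sha256:...;boot=verified"  # made-up statement format
    attestation_sig = device_key.sign(attestation)

    # A client only needs the vendor's public key (obtained ahead of time) to
    # check both links; verify() raises InvalidSignature if either is bad.
    vendor_pub = vendor_key.public_key()
    vendor_pub.verify(device_cert_sig, device_pub)                # the device key is genuine
    device_key.public_key().verify(attestation_sig, attestation)  # the statement came from it
    print("attestation chain verifies")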
Verifier -> requests a Prover to attest its software state
Prover -> goes into RoT, verifies authenticity of Verifier (and request), computes hash of attested memory region, sends hash digest
Verifier -> receives digest and compares to known hash
> What's stopping remote endpoint always responding "yes"?
The attestation code is inside of a RoT, so a bad actor shouldn't be able to call this code; it is only callable by receiving a request from a Verifier.
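A toy version of those three steps, ignoring the RoT isolation and the signing that would accompany it in practice; the region contents and digests are invented, and a fresh nonce is folded in so a recorded answer can't simply be replayed:

    import hashlib, os

    firmware_image = b"\x90" * 4096  # stand-in for the attested memory region
    golden_digest = hashlib.sha256(firmware_image).digest()  # verifier's known-good hash

    def prover_attest(region, nonce):
        # Runs inside the root of trust: measure the region, bind it to the nonce.
        measurement = hashlib.sha256(region).digest()
        return hashlib.sha256(measurement + nonce).digest()

    def verifier_check(report, nonce):
        return report == hashlib.sha256(golden_digest + nonce).digest()

    nonce = os.urandom(16)
    print(verifier_check(prover_attest(firmware_image, nonce), nonce))  # True
    tampered = b"\x91" + firmware_image[1:]
    print(verifier_check(prover_attest(tampered, nonce), nonce))        # False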
What about second hand iPhone users?
My guess is that this is similar to how iOS upgrades are “free”, how Apple Maps is free, how iMessage is free, how iCloud Mail is free, etc. To a good extent, it’s all paid for by the price paid by the customer for the hardware.
I’d also wager that there will be a paid service/subscription that will get baked into iCloud+ at some point in time (maybe a year from now). This will offer a lot more and Apple will try to attract more customers into its paid services net.
Are the anonymized queries (minus user data context) worth anything?
It’s gotta be some kind of subscription/per query charge model to pay for the servers, electricity, and bandwidth.
... I suppose this is ultimately a question that will be tested sooner or later in the US.
Law enforcement would need to seize the right server among millions while it's processing your request and perform an attack on it to get the keys before they're gone.
My next question is what happens if/when the attestation keys are stolen.
One option is to release a malicious software update, sign it, publish the signature on the public chain, and then simply not release the binaries until after whatever associated gag orders there are (if any) expire. Apple gave themselves a 90 day timeline for this before they'd even be in violation of their promises.
Another option is to use the cryptographic keys used to make the hardware that attests to the software running on it, to simply falsely attest to what software is running. Unless Apple has somehow moved those keys outside of the court's jurisdiction (which means outside of Apple's control in the case of most courts), that should be within the court's power. If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...
Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure. The fact that they are possible and within the legal systems power... well... why are we advertising this as secure again?
The main value of this whole architecture in my mind isn't actually security though, it's that it's Apple implicitly making the promise that they won't under any circumstance use the data, or let anyone else use the data, for business purposes (not even for running the service itself).
In this option it would be Apple releasing a malicious software update?
> If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...
This option reads like the keys are stored in apple-keys.txt
> Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure
They mentioned that the in-depth write-up will be shared later; might they still address this concern in writing? Your wording makes you sound so certain, but this is just a broad overview. How are you so sure?
Yes, compelled by something like the All Writs Act (if the US is the one doing the compelling).
> This option reads like the keys are stored in apple-keys.txt
They probably are. That file might live on a CD drive in a safe that requires two people to open it, but ultimately it's a short chunk of binary data that exists somewhere (until it is destroyed)...
> might they still address this concern in writing?
Can I say beyond all doubt that this won't happen? Of course not.
On the first approach I'm quite confident though, because it's both the type of attack they discuss in their initial press release, and pretty fundamental to and explicitly allowed by their model of updating the software.
On the second approach I'm reasonably confident. Like the first issue it's the type of issue that they were discussing in their initial press release. Unlike the first issue it's not something that is explicitly allowed in the model. If Apple can find a way to make the attestation keys irretrievable while still allowing themselves to manufacture hardware I believe they'd do it - I just don't see a method and think it would have warranted a mention if they had one. I tried to insert a level of uncertainty in my original writing on this one because I could be missing a way to solve it.
Ultimately I'd rather over-correct now than have people start thinking this is going to be more secure than it is, and then have some fraction of them miss the extremely likely follow-up of "and we could be compelled to work around our security".
“A randomly generated UID is fused into the SoC at manufacturing time. Starting with A9 SoCs, the UID is generated by the Secure Enclave TRNG during manufacturing and written to the fuses using a software process that runs entirely in the Secure Enclave. This process protects the UID from being visible outside the device during manufacturing and therefore isn’t available for access or storage by Apple or any of its suppliers.“
They're making new servers though. Take the keys that are used to vouch for the UIDs in actual secure enclaves, and use them to vouch for the UID in your evil simulated "secure" enclave. Your simulated secure enclave doesn't present as any particular real secure enclave; it just presents as a newly made secure enclave that Apple has vouched for as being a secure enclave.
[1] https://en.m.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption...
[2] https://www.washingtonpost.com/technology/2021/04/14/azimuth...
Apple shills are the worst.
You can opt into full E2E encryption [1], which makes it nothing, presumably at the cost of some convenience features.
Also, got any links for interesting ZKML papers/projects?
One key part though will be the remote attestation that the servers are actually running what they say they're running. Without any access to the servers, how do we do that? Am I correctly expecting that that part remains a "trust me bro" situation?
>While we’re publishing the binary images of every production PCC build, to further aid research we will periodically also publish a subset of the security-critical PCC source code.
I expect that they'll publish the attestation source code.
But, basically what will happen is the Verifier will request a certain memory region to be attested, then that region will be hashed and the digest will be sent back to the Verifier. If the memory is different from what is expected, the hash digest will NOT match.
I am happy for those who see the positives here, but for the skeptic, a toggle to prevent any online processing would be more satisfactory.
The absolute worst acronym for anything even remotely related to personal privacy.
The transparency & architecture together are intended to be more than enough to publicly detect any major retooling of the system.
This is clearly a company with an identity, unlike Microsoft and Google who are very confused.
Since Apple devices are now on the Chinese government's poopy list, I assume Apple is only meeting some, not all, of China's demands. I assume that if Apple did everything the Chinese govt wanted, they wouldn't be on the poopy list. Personally I see being on the Chinese govt poopy list as an endorsement that it's probably a net positive for privacy and security compared to those not on the list. :)
As for WhatsApp, it's probably part of the whole compromise mess above. WhatsApp now does E2E, and that's something China is not a fan of, so it's probably China's doing that it's not in the App Store in China any more. Apple is just following the laws China forces them to follow.
It should be noted I've never been to China (yet) and have zero first-hand knowledge.
The govt basically requires total access, doesn't it? I mean, every govt basically wants it, and the US has tried many times, but so far hasn't quite gotten complete access everywhere.
Was the cloud non-private before? Was it not secure in the first place? Do my Siri searches no longer end up as Google ads metadata now? Are the feds no longer able to get rubber-stamp access to my i C L O U D now?
You are a naive idiot for believing that this is anything but security theater to address the emotional needs of AI anxiety in and outside the company.
Just my opinion.
Homomorphic encryption is mostly a fantasy at this point.
Your comment makes me curious about how a guarantee-to-guarantee comparison looks (and the associated architectures).
https://cloud.google.com/blog/products/identity-security/exp...
And a one-time credential to prevent replay attacks.
As well as minor things like obfuscating IP addresses, metadata etc.
If it were this common, Meta, Google, and others would have announced or launched something similar for their consumer apps/services; I can't seem to recall anything of note.