Reverse Engineering iOS 18 Inactivity Reboot
517 points | 6 days ago | 19 comments | naehrdine.blogspot.com
happytoexplain
5 days ago
[-]
>In the After First Unlock (AFU) state, user data is decrypted

Note that this is a slight simplification - I assume because the full reality is irrelevant to understanding the topic:

There are a few different keys [0] that can be chosen at this level of the encryption pipeline. The default one makes data available after first unlock, as described. But, as the developer, you can choose a key that, for example, makes your app's data unavailable any time the device is locked. Apple uses that one for the user's health data, and maybe other extra-sensitive stuff.

[0]: https://support.apple.com/guide/security/data-protection-cla...
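
For concreteness, a minimal Swift sketch of opting into the strictest class (Complete Protection, i.e. Class A) when writing a file - the option name is from Foundation's documented writing options, and the file path is just illustrative:

    import Foundation

    // Write a file under NSFileProtectionComplete (Class A). Its per-file
    // key is evicted shortly after the device locks, so the file becomes
    // unreadable until the next unlock.
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    let url = docs.appendingPathComponent("sensitive.json") // hypothetical file
    try Data("secret".utf8).write(to: url, options: .completeFileProtection)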

reply
wepple
5 days ago
[-]
How useful do you think this is in practice? Wouldn’t it rely on app-level memory scrubbing and page clearing and such as well, if you wanted to truly make sure it’s unavailable? Do Apple offer APIs to assist there?
reply
axoltl
5 days ago
[-]
There's a decent amount of data protected by Class A keys (which are only available when a device is 'actively unlocked') and some amount of data protected by Class B keys (asymmetric keys that allow data to be encrypted while the device is locked but only decrypted when the device is unlocked, by way of a private key encrypted with a Class A key). The security guide[0] isn't super obvious about what data is protected with which keys:

> The Mail app database (including attachments), managed books, Safari bookmarks, app launch images, and location data are also stored through encryption, with keys protected by the user’s passcode on their device.

> Calendar (excluding attachments), Contacts, Reminders, Notes, Messages, and Photos implement the Data Protection entitlement Protected Until First User Authentication.

I can confirm that when they say "keys protected by the user's passcode" they mean "protected with class A or B". The most shameful omissions there in my opinion are Messages and Photos, but location data is (from a law enforcement perspective) obviously a big one.
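
As a sketch of how those classes surface in the file-writing API (assuming .completeFileProtectionUnlessOpen is the Foundation spelling of Class B, per the guide's "Protected Unless Open" class):

    import Foundation

    // Class B (NSFileProtectionCompleteUnlessOpen): a file can be created
    // and written while the device is locked (via the class public key),
    // but opening it fresh requires the unlocked-only private key.
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("download.bin") // hypothetical file
    try Data("fetched while locked".utf8)
        .write(to: url, options: .completeFileProtectionUnlessOpen)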

0: https://help.apple.com/pdf/security/en_US/apple-platform-sec...

Edit: Additionally, as to your API question, the system provides notifications for when content is about to become unavailable allowing for an app developer to flush data to disk:

https://developer.apple.com/documentation/uikit/uiapplicatio...
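
A minimal sketch of hooking that notification (the observer API is standard UIKit; the save helper is hypothetical):

    import UIKit

    // Fires shortly before the Class A keys are discarded (device locking):
    // last chance to flush state you'll want readable later.
    let token = NotificationCenter.default.addObserver(
        forName: UIApplication.protectedDataWillBecomeUnavailableNotification,
        object: nil,
        queue: .main
    ) { _ in
        flushPendingWritesToDisk() // hypothetical helper
    }
    // Keep the token alive; remove the observer when no longer needed.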

reply
myflash13
5 days ago
[-]
> The class key is protected with a key derived from the user passcode or password and the device UID. Shortly after the user locks a device (10 seconds, if the Require Password setting is Immediately), the decrypted class key is discarded, rendering all data in this class inaccessible until the user enters the passcode again or unlocks (logs in to) the device using Face ID or Touch ID.
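
As a sketch, an app can observe exactly this transition through UIKit's isProtectedDataAvailable flag:

    import UIKit

    // False once the decrypted Class A keys have been discarded
    // (roughly 10 seconds after lock, per the quote above).
    if UIApplication.shared.isProtectedDataAvailable {
        // Complete Protection (Class A) files are currently readable.
    } else {
        // Reads of Class A files will fail until the next unlock.
    }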
reply
happytoexplain
5 days ago
[-]
This means it can't be read from storage, but AFAIK anything you've read into your app's memory sandbox is still sitting there decrypted until your app releases it or is closed or has its memory wiped by system housekeeping.
reply
happytoexplain
5 days ago
[-]
It's a good point - I am not an expert, but I think this feature just doesn't protect memory (tying one of the keys to rebooting helps, but the Data Protection feature itself doesn't seem to protect memory). However, that doesn't moot in-storage protection. There are other features protecting memory (and other features protecting data in storage - there are tons of security features).

I am not aware of APIs for securely clearing your app's memory (aside from lower level, more manual APIs). This may be one of those cases that relies mostly on sandboxing for protection. I also imagine it's hard to circumvent sandboxing without rebooting. But I'm making a lot of guesses here.
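
For the "lower level, more manual" route, a sketch of explicitly zeroing a sensitive buffer - this assumes memset_s is importable from Darwin in Swift; unlike plain memset, it is specified (C11 Annex K) not to be optimized away:

    import Foundation

    // Zero key material in place before the buffer is released.
    var secret: [UInt8] = Array("hunter2".utf8) // hypothetical key material
    secret.withUnsafeMutableBytes { buf in
        _ = memset_s(buf.baseAddress, buf.count, 0, buf.count)
    }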

reply
Shank
5 days ago
[-]
To me the biggest takeaway is that Apple is sufficiently paranoid to add this feature. Some people (like John Gruber) advocate for activating bio lockout at the border by squeezing the volume and power buttons. I would say if you’re the type of person who would do this, you should go one step further and power off.

Similarly, if you’re in a situation where you cannot guarantee your phone’s security because it’s leaving your possession, and you’re sufficiently worried, again, power off fully.

reply
wang_li
5 days ago
[-]
> advocate for activating bio lockout at the border

This is a terrible idea. When you're crossing a border you have to submit to the rules of entry. If one of those rules is that you let them create an image of your phone with all of its contents, that's the rule. If you say no, then, if you're lucky, you get to turn around and return to where you came from. If you're not lucky, then you get to go to jail.

What needs doing is the ability to make a backup, and then a way to reconcile the backup at a later date with the contents of a device. That is, I should be able to back up my phone to my home computer (or the cloud, I guess) and then wipe my phone or selectively delete contents. Then I travel abroad, take photos and movies, exchange messages with people, and so on. Then when I get home I should be able to restore the contents of my phone that were deleted without having to wipe all the new stuff from the trip.

reply
phinnaeus
5 days ago
[-]
What do you do if you’re at the border and they demand both the physical device and the password?

Let’s assume “get back on the plane and leave” is not a viable option.

reply
cherryteastain
5 days ago
[-]
GrapheneOS duress password [1] and user profiles [2] are quite solid solutions for this scenario

[1] https://grapheneos.org/features#duress

[2] https://grapheneos.org/features#improved-user-profiles

reply
andyjohnson0
5 days ago
[-]
From the link:

> GrapheneOS provides users with the ability to set a duress PIN/Password that will irreversibly wipe the device (along with any installed eSIMs) once entered anywhere where the device credentials are requested (on the lockscreen, along with any such prompt in the OS).

In a border interrogation scenario, isn't that just likely to get you arrested for destroying evidence?

reply
verandaguy
5 days ago
[-]
Depends on the border. In most democracies, at most borders, and in most LE cases, there is a line between “destruction of my own property/data” and “destruction of evidence,” where the latter usually requires a court document notifying the subject of their obligation to preserve evidence (for example, a subpoena or, in some cases, a direct request to avoid spoliation).
reply
myflash13
5 days ago
[-]
Theory. This is not how things work in practice, even in "democracies". Speaking as a person who has been harassed at the US border from Canada many times, I've learned it depends more on how the border agent "feels" about you. These people are often uneducated bullies who don't know or don't care about the law anyway. And if you start objecting on some legal basis, they can legally make things a LOT harder for you, including simply denying entry for no reason (yes, they have such a right). Better to cooperate rather than give the appearance of "destroying evidence" (even if completely legal) or you're in for a world of hurt if you got the wrong guy.
reply
darkwater
5 days ago
[-]
Well, if you are a "normal person" with actually nothing to hide, yes, cooperating as much as you can is probably the best thing to do. But if you are some "special person" (activist, journalist, diplomat, etc.) wiping out everything might be your best option.
reply
F7F7F7
4 days ago
[-]
With all due respect, I used to think that only Boomers and anonymous YouTube edge lords repeated the "if you have nothing to worry about, comply!" nonsense.

You surprised me today.

reply
darkwater
4 days ago
[-]
I didn't say that at all. What I mean is that if you are, let's say, on a leisure trip or visiting family, the last thing you want is to be sent back where you came from or put into custody for two days because you valued the privacy of your phone's contents more.

Now, if you do it, hats off, and even more so if you can hire a lawyer and get justice done, but in that case you definitely are not "a normal person".

reply
seanw444
5 days ago
[-]
I have a solution to that problem that works 100% of the time:

I don't leave the US.

reply
iAMkenough
5 days ago
[-]
2 out of 3 people in the US live within U.S. Customs and Border Protection jurisdiction, where border agents can search without warrant if they determine they have "reasonable suspicion."

Additionally, SCOTUS ruled in 2022 (Egbert v. Boule) that someone who has had their Fourth Amendment rights violated by CBP agents is not entitled to any damages unless Congress clearly defines a punishment for the violation by a federal agent.

reply
seanw444
5 days ago
[-]
True, that's ridiculous. But luckily I am one of the 1 out of 3.
reply
wepple
5 days ago
[-]
That’s a significantly higher bar. It’s not foolproof though.

I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.

In your situation, the best idea is to simply have a wiped device. A Chromebook, for example, allows you to log in with whatever credentials you choose, including a near-empty profile.

reply
bananapub
5 days ago
[-]
> I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.

this isn't a very useful way to think about it.

they can definitely search your luggage, obviously, but the border guards/immigration officials/random law enforcement people hanging around/etc can also just deny non-citizens entry to a country, usually for any or no reason.

there are documented cases of Australia[0] demanding to search phones of even citizens entering the country, and the US CBP explicitly states they may deny entry to non-citizens if you don't give them the password; while they can't deny entry to citizens, they state they may seize the device and then do whatever they want to it[1].

0: https://www.theguardian.com/world/2022/jan/18/returning-trav...

1: https://www.cbp.gov/travel/cbp-search-authority/border-searc...

reply
golergka
4 days ago
[-]
> I believe in most countries, customs can inspect your luggage. They can’t force you to reveal information that they’re not even certain you have.

They can. And if you refuse, they can do a lot of very unpleasant things to you. It might be against local law, but that wouldn't really matter in a lot of countries.

reply
wutwutwat
5 days ago
[-]
you can be forced to place your thumb on a sensor, or have the device held to your face.

you can't be forced to remember a password you "forgot"...

biometric authentication is not always your friend

reply
kevincox
5 days ago
[-]
> you can't be forced to remember a password you "forgot"...

No, but the border agents also aren't required to let you into the country. (Generally unless you are a citizen.)

So border agents are very different from the general laws of the country: while there may be legal protections around what they can force you to do, there are far fewer protections around whether you have the right to cross the border (other than entering countries where you are a citizen).

reply
projektfu
5 days ago
[-]
I don't think there is a technological solution for this unless you have some sort of sleight-of-hand. Typically, border agents of countries with lots of transit do not stop people for very long. Some other countries (North Korea, perhaps) might put everyone through the wringer because they do not have a lot of crossings. If a border agent of a relatively free country is stopping you, they probably have some suspicion, in which case it is best to not be holding evidence in your hand.

There are steganographic methods to hide your stuff. You can also use burners on either side of the border crossing and keep your main line clean. But bringing a device full of encrypted data (even if it's just your regular photo collection) that you refuse to unlock will probably be suspicious.

I know that there are times when there are no reasons for suspicion and people get stopped anyway. The border agent didn't like your look, or racism, or an order came down from on high to stop everyone from a particular country and annoy them. If that's the case, it's probably still best to not have a lot of incriminating evidence on your person, encrypted or not.

reply
wutwutwat
5 days ago
[-]
I never said anything about crossing a border. I said nobody can force you to remember something, for any reason, border crossing or otherwise
reply
ThePowerOfFuet
5 days ago
[-]
You say no.

Or, with GrapheneOS, you give them the duress password, on the understanding that you will have to set the device up from scratch IF you ever see it again.

reply
thesuitonym
5 days ago
[-]
If that's in your threat profile, you should not be traveling with a phone. If this is a real threat for you, no amount of hardware/software security will beat a wrench: https://xkcd.com/538/
reply
ReptileMan
5 days ago
[-]
Don't carry a phone with you. You can always buy one after the airport.
reply
mzhaase
5 days ago
[-]
Burner phone
reply
maccard
5 days ago
[-]
> I would say if you’re the type of person who would do this, you should go one step further and power off.

I'd travel with a different device, honestly. I can get a new-in-box android device for under £60 from a shop, travel with that, set it up properly on the other side, and then either leave it behind or wipe it again.

reply
wutwutwat
5 days ago
[-]
The £60 burner sounds like a leader on the device security front. No way it could possibly be running an ancient version of Android that's no longer getting security patches, or be hacked up to shit by the device manufacturer to reskin it and install their vulnerable suite of bloatware, or be built off a base OS and firmware flocked to by folks for the ease of gaining root access and running whatever you want at the kernel level.
reply
maccard
5 days ago
[-]
There’s no guarantee your $1000 flagship isn’t doing that either.

I chose it because it's a mainstream provider (Nokia) readily available running a supported version of Android (12).

If you want to install a custom ROM, you can get an older flagship (Galaxy S9) and flash it for about the same price.

My point is if your threat model is devices seized at border, then a burner phone is far more suitable for you than a reboot.

reply
wutwutwat
5 days ago
[-]
levels of trust. I have more trust in the largest, most heavily scrutinized device manufacturer making an attempt at security than I do in a rando burner-device reseller. To be clear, I don't trust either fully, but one gets way less trust than the other
reply
avianlyric
5 days ago
[-]
The whole point of a burner is that you don’t trust it. You only store what you absolutely need to store on there, if anything, and basically assume it’s compromised the second it leaves your sight.

The advantage of a burner phone is that it can’t contain anything important, because you’ve never put anything important on it, or connected it to any system that contains important data. So it doesn’t really matter if it’s compromised, because the whole point of a burner, is that it’s so unimportant you can burn it the moment it so much as looks at you funny.

reply
wutwutwat
5 days ago
[-]
Something a lot of people don't really consider is that people doing things that could get them unwanted attention wouldn't have incriminating evidence on any device, burner or otherwise. So the theoretical ways around getting busted, like using a burner, are for movie villains and Bond-type secret agents. Real criminals (smart ones anyway) aren't conducting anything important over any network, be it IP, telephony, Morse code, smoke signal, or otherwise, regardless of the burn-ability of the device they would be using to do so
reply
maccard
4 days ago
[-]
That’s why I chose a low end mass market smartphone as my example.

My wife works for the government in a low-level role that involves some amount of travel to local authorities (other major areas in Scotland). She has a phone, and strict instructions to never carry it across the border of many countries (as a blanket policy). They're told they'll be provided a device for travelling and not to install any work apps on it. It's basic security - don't travel with information that you can lose control over.

reply
maccard
4 days ago
[-]
The £60 burner isn't from a rando reseller of shitty no-name phones; it's one of the largest phone retailers in the UK selling a low-end Android device that's fully supported. So you have that option. If you want a brand new different device you have that option too, but it'll cost you more.

If your threat model is “I think my provider and a nation state are colluding to target me” you probably wouldn’t be posting on HN about it.

reply
kshacker
5 days ago
[-]
It could be doing all that, actually, but you are not obliged to install all your apps on the burner, just the bare minimum.
reply
wutwutwat
5 days ago
[-]
You're still walking around with a microphone and GPS tracker connected to a cellular network even if the only thing you do is power it on
reply
brewdad
5 days ago
[-]
If that's your threat model, don't carry ANY phone. Probably best not to carry any modern electronic device at all.
reply
wutwutwat
5 days ago
[-]
Real criminals who don't want to be caught don't carry phones for this exact reason.
reply
colimbarna
5 days ago
[-]
Sometimes the alternative blows up in your face though.
reply
kshacker
2 days ago
[-]
Too soon
reply
maccard
4 days ago
[-]
You’re worried about this with your original phone too, right? That has nothing to do with being a burner.
reply
vsl
5 days ago
[-]
Doesn't the volume+power gesture transition into BFU, i.e. isn't it equivalent to power-cycling?
reply
jonpalmisc
5 days ago
[-]
No. This is a myth, and while it does force you to enter your password instead of using biometrics on the next unlock, it is not the same as returning to BFU.
reply
486sx33
5 days ago
[-]
If I had to guess, there must have been an exploit in the wild that took advantage of this. It sounds like it eliminates the oldest tools in one swoop. Which is pretty sweet
reply
gruez
5 days ago
[-]
Even without an exploit in the wild, having such a feature is critical for security. Otherwise any device that's seized by police can be kept powered on indefinitely, until firms like Cellebrite can find an exploit.
reply
mptest
5 days ago
[-]
Also, Lockdown Mode and pair locking your device. Pair locking, IIRC, is how you protect against Cellebrite-type attacks
reply
mjlee
5 days ago
[-]
I had to look up what SRD meant. It's a Security Research Device - "a specially fused iPhone that allows you to perform iOS security research without having to bypass its security features."

https://security.apple.com/research-device/

reply
alwayslikethis
5 days ago
[-]
Great writeup, but I wonder why so much emphasis is put on the 'not connected to network' part. It seems like a timed inactivity reboot is a simpler idea than any type of inter-device communication scheme. It's not new either; GrapheneOS has had this for a while now, and the default is 18 hours (you can set it to 10 minutes), which would be a lot more effective as a countermeasure against data exfiltration tools.
reply
nneonneo
5 days ago
[-]
This is because earlier reports coming out of law enforcement agencies suggested that the network was involved in making even older devices reboot. This blog post is an effort to debunk that claim.
reply
lathiat
5 days ago
[-]
If you're targeting these evidence-grabbing/device-exploiting mobs, generally the phones get locked into a Faraday cage to drop the mobile network so that they can't receive a remote wipe request from iCloud.
reply
thrdbndndn
5 days ago
[-]
Two questions:

1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?

2. If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?

Bonus question: my Android phone would ask for my passcode (can't unlock with fingerprint or face) if it thinks it might be left unattended (a few hours without moving etc.), just like after rebooting. Is it different from "Before First Unlock" state? (I understand Android's "Before First Unlock" state could be fundamentally different from iPhone's to begin with).

reply
diggan
5 days ago
[-]
> it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?

I think the reason is to make sure anything in RAM is wiped completely clean. Things like the password should be stored in the Secure Enclave (from which the encryption keys held in RAM are derived), but a reboot would wipe those keys too, plus any other sensitive data that might still be in memory.

As an extra bonus, I suppose iOS does integrity checks on boot too, so this could be a way to trigger those as well. Seems to me like a reboot is a "better safe than sorry" approach, which isn't a bad one.

reply
gizmo686
5 days ago
[-]
Reboots don't typically wipe RAM. Although wiping RAM is relatively easy if you are early enough in the boot process (or late enough in the shutdown process).
reply
johncolanduoni
5 days ago
[-]
I'd expect that the RAM encryption key is regenerated each boot, so the RAM should be effectively wiped when the key from the previous boot is deleted from the memory controller.
reply
bayindirh
5 days ago
[-]
With ASLR and tons of activity happening during the boot process, it's almost guaranteed that you'll damage the keys you need. Plus, we don't know how shutdown processes are done. It might be wiping the keys clean before resetting the processor.
reply
diggan
5 days ago
[-]
> Reboots don't typically wipe RAM.

Typically yeah, I think you're right. But I seem to recall reading that iOS does some special stuff when shutting down/booting related to RAM but of course now I cannot find any source backing this up :/

reply
spijdar
5 days ago
[-]
The short answer to your last two questions is that “before first unlock” is a different state from requiring the PIN/passcode. On boot, the decryption keys for user profile data are not in memory, and aren’t available until they’re accessed from the security coprocessor via user input. The specifics depend on the device, but for Pixel devices running GrapheneOS you can get the gist of it here: https://grapheneos.org/faq#encryption

The important distinction is that, before you unlock your phone for the first time, there are no processes with access to your data. Afterwards, there are, even if you’re prompted for the full credentials to unlock, so an exploit could still shell the OS and, with privilege escalation, access your data.

Before first unlock, even a full device compromise does nothing, since all the keys are on the <flavor of security chip> and inaccessible without the PIN.

reply
dwaite
5 days ago
[-]
> Why can't it just go into this state without rebooting?

Because the state of the phone isn't clean - there is information in RAM, including executing programs that will be sad if the disk volume their open files are stored on goes away.

If your goal is to get to the same secure state the phone is in when it first starts, why not just soft reboot?

reply
TimeBearingDown
5 days ago
[-]
This also clears out deeper OS rootkits that could not achieve reboot persistence, which is not uncommon.
reply
bonyt
5 days ago
[-]
> 1. surely unconditionally rebooting locked iPhones every 3 days would cause issues in certain legit use cases?

I wonder if this explains why the older iPhone I keep mounted to my monitor to use as a webcam keeps refusing to be a webcam so often lately and needing me to unlock it with my password...

reply
athrun
5 days ago
[-]
I have the same setup and what works for me is putting the phone into Supervised mode using the Apple Configurator.

From there, you can enable single app mode to lock it into the app you're using for the webcam (I use Camo).

reply
Someone
5 days ago
[-]
> If I read the article correctly, it reboots to re-enter "Before First Unlock" state for security. Why can't it just go into this state without rebooting?

1. Getting there reliably can be hard (see the age-old discussions about zero-downtime OS updates vs rebooting), even more so if you must assume malware may be present on the system (how can you know that all that’s running is what you want to be running if you cannot trust the OS to tell you what processes are running?)

2. It may be faster to just reboot than to carefully bring back stuff.

reply
Kwpolska
5 days ago
[-]
What legit use case involves not touching your phone at all for 3 days?
reply
layer8
5 days ago
[-]
It means that in the future you can’t use old iPhone hardware to run an unattended server or similar anymore (unless you simulate user activity by adding some hardware that taps on the display every three minutes, or something). This is why I don’t like that it’s a hardcoded non-configurable setting. It cripples potential use cases for the hardware.
reply
adastra22
5 days ago
[-]
Not a phone, but at my old apartment I used to have an iPad mounted on the wall. It was a dynamic weather display, Ring doorbell answerer, multimedia control, etc. Would suck if every 3 days I had to enter my passcode again.
reply
Shank
5 days ago
[-]
I haven't tested this, but I assume this wouldn't occur if the device is fully unlocked and powered on. Most kiosk-adjacent deployments are set up so that they never turn the screen off and remain unlocked.
reply
grishka
5 days ago
[-]
Looks like something that doesn't need to have a passcode on it in the first place.
reply
layer8
5 days ago
[-]
I have something like this as well, connected to my Apple account for calendar and reminder access etc. I wouldn’t want every random guest to have access to that.
reply
myflash13
5 days ago
[-]
iPad has a kiosk mode for these use cases.
reply
Hackbraten
5 days ago
[-]
Maybe you want people to be able to reach you on a secondary, inbound-only phone number.

I’ve also heard people re-purpose old phones (with their batteries disconnected, hopefully) as tiny home servers or informational displays.

reply
neop1x
3 days ago
[-]
The article says phone reception works even in BFU (just without contact info)
reply
YoumuChan
5 days ago
[-]
I connect an iPhone 12 to my vehicle's CarPlay all the time. Recently I've often found the startup unreliable, which defeats the whole purpose.
reply
oneplane
5 days ago
[-]
It is very different, as the cryptography systems can only assure a secure state via a known root-of-trust path to the state the device is in.

The big issue with most platforms out there (x86, multi-vendor, IBVs, etc.) is that you can't actually trust what your partners deliver. So the guarantees around what's in your TEE/SGX are a lot messier than when you're Apple and you have the SoC, SEP, iBoot stages, and kernel all measured and assured to levels only a vertical manufacturer could know.

Most devices/companies/bundles just assume it kinda sucks and give up (TCG Opal, TPM, BitLocker: looking at you!) and make the most actually-secure methods optional so the bottom line doesn't get hit.

That means (for Android phones) your baseband and application processor, boot rom and boot loader might all be from different vendors with different levels of quality and maturity, and for most product lifecycles and brand reputation/trust/confidence, it mostly just needs to not get breached in the first year it's on the market and look somewhat good on the surface for the remaining 1 to 2 years while it's supported.

Google is of course trying hard to make the ecosystem hardened, secure and maintainable (it has been feasible to get a lot of patches in without having to wait for manufacturers or telcos for extended periods of time), including some standards for FDE and in-AOSP security options, but in almost all retail cases it is ultimately an individual manufacturer of the SoC and of the integrated device to make it actually secure, and most don't since there is not a lot of ROI for them. Even Intel's SGX is somewhat of a clown show... Samsung does try to implement their own for example, I think KNOX is both the brand name for the software side as well as the hardware side, but I don't remember if that was strictly Exynos-only. The supply chain for UEFI Secure Boot has similar problems, especially with the PKI and rather large supply chain attack surface. But even if that wasn't such an issue, we still get "TEST BIOS DO NOT USE" firmware on production mainboards in retail. Security (and cryptography) is hard.

As for the difference between BFU/AFU etc., imagine it like this: essentially, some cryptographic material is no longer available to the live OS. Instead of hoping it gets cleared from all memory, it is a lot safer to assume it might be messed with by an attacker, drop all keys, and reboot the device to a known disabled state. That way, without a user present, the SEP will not decrypt anything (and it would take a SEPROM exploit to start breaking into the thing - nothing the OS could do about it, nor someone attacking the OS).

There is a compartmentalisation where some keys and keybags are dropped when locked, hard locked, and BFU locked; the main difference between all of them is the amount of stuff that is still operational. It would suck if your phone stopped working as soon as you locked it (no more notifications, background tasks like email and messaging, no more music, etc).

On the other hand, it might be fine if everything that was running at the time of the lock-to-lockscreen keeps running, but no new crypto is allowed during the locked period. That means everything keeps working, but if an attacker were to try to access the container of an app that isn't open, it wouldn't work - not because of some permissions, but because the keys aren't available and the means to get the keys is cryptographically locked.

That is where the main difference lies with more modern security: keys (or mostly KEKs - key encryption keys) are a pretty strong guarantee that someone can only perform some action if they have the keys to do it. There are no permissions to bypass, no logic bugs to exploit, no 'service mode' that bypasses security. The bugs that remain would all be HSM-type bugs, but SEP edition (if that makes sense).
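
To make the KEK idea concrete, a toy sketch using CryptoKit's key wrapping - purely illustrative, since on a real device the class KEK lives behind the SEP rather than in app memory:

    import CryptoKit
    import Foundation

    // A per-file key wrapped by a class KEK. Discard the KEK and the
    // wrapped key (and hence the file content) becomes unrecoverable;
    // there is no permission check to bypass.
    let classKEK = SymmetricKey(size: .bits256) // stands in for a SEP-held key
    let fileKey  = SymmetricKey(size: .bits256)

    let wrapped = try AES.KeyWrap.wrap(fileKey, using: classKEK)
    // Unwrapping is only possible while the KEK is still available:
    let recovered = try AES.KeyWrap.unwrap(wrapped, using: classKEK)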

Apple has some sort of flowchart to see what possible states a device and the cryptographic systems can be in, and how the assurance for those states work. I don't have it bookmarked but IIRC it was presented at Black Hat a year or so ago, and it is published in the platform security guide.

reply
pnw
5 days ago
[-]
Great writeup! And it's good to see Apple pushing the envelope on device security.
reply
Etheryte
5 days ago
[-]
Wouldn't really say Apple is pushing the envelope here; as covered in previous threads on this topic, a number of Android flavors did this long ago.
reply
dewey
5 days ago
[-]
The power of defaults is not to be underestimated. Yes, you probably can do it with some Android distribution but the amount of people using that would be microscopic.
reply
bananapub
5 days ago
[-]
> Wouldn't really say Apple is pushing the envelope here

come on dude. they're doing it by default, for > billion people, with their army of lawyers sitting around waiting to defend lawsuits from shitty governments around the world.

reply
F7F7F7
4 days ago
[-]
The fragmentation inherent in Android ALONE makes it an insecure device for the vast majority of its normie users. You need damn near a decoder ring just to figure out where you stand.
reply
dblitt
5 days ago
[-]
Does anyone have insight into why Apple encrypts SEP firmware? Clearly it’s not critical to their security model so maybe just for IP protection?
reply
jonpalmisc
5 days ago
[-]
They have a long history of encrypting firmware. iBoot only recently started shipping decrypted, with the launch of PCC, and prior to iOS 10 the kernel was encrypted too.

The operating theory is that higher management at Apple sees this as a layer of protection. However, word on the street is that members of actual security teams at Apple want it to be unencrypted for the sake of research/openness.

reply
saagarjha
5 days ago
[-]
Someone high up is an idiot presumably
reply
jesprenj
5 days ago
[-]
> In law enforcement scenarios, a lot of the forensically relevant data is available in the AFU state. Law enforcement takes advantage of this and often keeps seized iPhones powered on, but isolated from the Internet, until they can extract data.

In Slovenia, devices have to be turned off the moment they are seized from their owner, rather than being put into airplane mode.

reply
Razengan
5 days ago
[-]
Also when thieves or muggers rob someone, the first thing they do is turn on Airplane Mode or force power-off.

WHY the hell don't those actions require a passcode or bio authentication??

reply
saagarjha
5 days ago
[-]
They could just put it in a foil-lined pocket instead.
reply
Razengan
4 days ago
[-]
"Why bother deterring the more-common low-effort risks if there are higher effort ways to thwart the deterrence?"

How often do muggers carry foil pockets even in first world countries? Certainly not in places where there's a mugging almost every week. Some way to track the device on its way to wherever they strip them off for parts would be helpful than not being to track it at all.

reply
saagarjha
4 days ago
[-]
They don’t because they don’t have to. Make it necessary and they’ll switch within the week.
reply
Razengan
8 hours ago
[-]
That’s still only the preplanned muggings. Not the opportunists who spot an unattended phone somewhere by chance.
reply
miki123211
5 days ago
[-]
I don't think people would be fine with being unable to power any electronic device down at need, even if they're not the owner.

It feels like something that needs to be as easy as possible, for safety reasons if not anything else.

Now what I'd like to see is an extension of their protocol that is used to locate iPhones that would also let them accept a "remote wipe" command, even when powered down.

reply
Razengan
4 days ago
[-]
Then obviously it should be an option for when you're traveling in places with a high risk of mugging.
reply
Wowfunhappy
5 days ago
[-]
You need to be able to forcibly power off the phone when it's frozen.
reply
newZWhoDis
5 days ago
[-]
FYI: Everyone should disable Control Center on the Lock Screen to prevent that attack (airplane mode activation while locked).

iPhones are still trackable while powered off, at least for a while.

reply
mccraveiro
5 days ago
[-]
You can definitely block Airplane Mode from being toggled without a passcode on iOS. I disabled access to Control Center when the iPhone is locked, so thieves won't be able to do so.
reply
zarzavat
5 days ago
[-]
This doesn't work if they steal it out your hand while it's unlocked.
reply
mavhc
5 days ago
[-]
I assume Apple has something similar to https://support.google.com/android/answer/15146908

> Theft Detection Lock uses AI, your device's motion sensors, Wi-Fi and Bluetooth to detect if someone unexpectedly takes your device and runs away. If Theft Detection Lock detects that your device is taken from you, it automatically locks your device's screen to protect its content.

reply
4lun
5 days ago
[-]
A slight mitigation: you can add an automation via the Shortcuts app triggered when airplane mode is enabled, and set the actions to immediately lock your device and disable airplane mode.

Downside is that you need to manually disable the automation if you actually wish to use airplane mode (and also remember to re-enable it when done)

reply
NamTaf
5 days ago
[-]
I've set two automations: 1) When airplane mode is activated, lock the screen. 2) When airplane mode is activated, turn it back off. That'll give me the most opportunity to either track it and/or lock it down remotely.

I can remember to disable the shortcut whenever I fly and need to enable it.

If they pop my SIM (my provider doesn't use eSIMs...) then there's a PIN on it to prevent use in another device.

reply
486sx33
5 days ago
[-]
Nice work, and appreciate the time you spent!

“I also downloaded an older kernel where Apple accidentally included symbols and manually diffed these versions with a focus on the code related to inactivity reboot. The kernel has three strings relating to the feature:” sounds like a little luck there, for sure!

reply
jjallen
5 days ago
[-]
If this is such a security benefit why not do it after 24 hours instead? How many people go that long without using their phones?

How many people are using their phones for some other purpose for which they want their phones to never reboot? And what are they actually doing with their phones?

reply
saagarjha
5 days ago
[-]
Because it harms the user experience.
reply
Wowfunhappy
5 days ago
[-]
I'm sure this is why but I had the same thought as GP. Under what circumstances would 24 hours be disruptive, but three days would be okay?

If you're using the iPhone as some type of IoT appliance, either time limit would be disruptive. But if you e.g. enable Guided Access, the phone will stay unlocked and so shouldn't reboot.

If you're using the iPhone as a phone, who the heck doesn't touch their phone in 24 hours? Maybe if you're on some phone-free camping trip and you just need the iPhone with you as an emergency backup—but in that case, I don't think Inactivity Reboot would be particularly disruptive.

Maybe Apple will lower the window over time?

reply
jjallen
5 days ago
[-]
How though? Users haven't used their phone in a day or more? How would they notice, except for having to re-enter their passcode, which takes two seconds?
reply
Shank
5 days ago
[-]
Not being able to glance at any push notifications or get incoming caller ID would be pretty disruptive.
reply
layer8
5 days ago
[-]
That’s not the case if you also have other Apple devices on the same account.
reply
IshKebab
5 days ago
[-]
Read the introduction.
reply
layer8
5 days ago
[-]
> How many people go that long without using their phones?

For people who don’t leave the house that often and have other Apple devices, this suddenly becomes much more frequent.

reply
archeantus
5 days ago
[-]
Great post. They talked about the possibility of iOS 18 wirelessly telling other phones to reboot, but then AFAIK didn't address that again. Maybe they did and I missed it?
reply
C4K3
5 days ago
[-]
They conclude that there's no wireless component to the feature.

> This feature is not at all related to wireless activity. The law enforcement document's conclusion that the reboot is due to phones wirelessly communicating with each other is implausible. The older iPhones before iOS 18 likely rebooted due to another reason, such as a software bug.

reply
zarzavat
5 days ago
[-]
If you think about it, an attacker sophisticated enough to break the phone within a 72-hour window is definitely sophisticated enough to use a Faraday container. So communication between phones wouldn't help very much.

Moreover, you'd have to have some inhibitory signal to prevent everybody's phones restarting in a crowded environment, but any such signal could be spoofed.

reply
threeseed
5 days ago
[-]
I suspected this was being managed in the Secure Enclave.

That means it's going to be extremely difficult to disable this even if iOS is fully compromised.

reply
karlgkk
5 days ago
[-]
If I’m reading this right:

Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.

However, considering Apple's excellent track record on these kinds of security measures, I would not at all be surprised to find out that a next-generation iPhone will have the SEP forcing a reboot without the kernel's involvement.

What this does is reduce the window of time (to three days) between when an iOS device is captured and when a usable* kernel exploit is developed.

* there is almost certainly a known kernel exploit out in the wild, but the agencies that have one generally reserve using it until they really need to - or until it's patched. If you have a captured phone used in, for example, a low-stakes insurance fraud case, it's not at all worth revealing your ownership of a kernel exploit.

Once an exploit is "burned", it gets distributed out to agencies and all affected devices are unlocked at once. This now means that kernel exploits must be deployed within three days, which is going to preserve the privacy of a lot of people.

reply
toomuchtodo
5 days ago
[-]
Would be nice if Apple would expose an option to set the timer to a shorter window, but still great work.
reply
alwayslikethis
5 days ago
[-]
In GrapheneOS, you can set it to as little as 10 minutes, with the default being 18 hours. That would be a lot more effective for this type of data exfiltration scenario.
reply
technics256
5 days ago
[-]
You can do this yourself with the Shortcuts app.

Create a timed automation that runs a shutdown on the interval you choose, and change the shutdown action to "restart".

reply
KennyBlanken
5 days ago
[-]
You clearly haven't tried it or even googled it - because it's impossible to do it unattended. A dialog pops up (and only when unlocked) asking you to confirm the reboot. It's probably because they were worried users might end up in a constant reboot/shutdown cycle, though presumably they could just implement a "if rebooted in the last hour by a script, don't allow it again" rule.
reply
jojobas
5 days ago
[-]
Or to disable it entirely. Someone could set up an iPad to do something while always plugged in; it would be bloody annoying to have it locked cold every three days.
reply
stephen_g
5 days ago
[-]
I'm not sure, but I wouldn't expect the inactivity timeout to trigger if the device was already in an unlocked state (if I understand the feature correctly), so in kiosk mode, or with auto screen lock turned off and an app open, I wouldn't expect it to happen.
reply
jojobas
5 days ago
[-]
Maybe you want it locked and only showing notification headers.
reply
stephen_g
5 days ago
[-]
Having to put your passcode in every three days is not the end of the world. It would also make sense that if you turned off the passcode entirely, it wouldn't restart.
reply
mjevans
5 days ago
[-]
I'd rather have a dedicated Kiosk mode that has a profile of allow-listed applications and one or more that are auto-started.
reply
aspenmayer
5 days ago
[-]
Maybe one or two of these will do what you want?

https://support.apple.com/en-us/105121

> With Screen Time, you can turn on Content & Privacy Restrictions to manage content, apps, and settings on your child's device. You can also restrict explicit content, purchases and downloads, and changes to privacy settings.

https://support.apple.com/en-us/111795

> Guided Access limits your device to a single app and lets you control which features are available.

reply
duskwuff
5 days ago
[-]
Or "single-app mode", which is a more tightly focused kiosk mode:

https://support.apple.com/guide/apple-configurator-mac/start...

reply
grahamj
5 days ago
[-]
Conspiracy theory time! Apple puts this out there to break iPad-based DIY home control panels because they're about to release a product that would compete with them.
reply
duskwuff
5 days ago
[-]
> Apple puts this out there to break iPad-based DIY home control panels

If you were using an iPad as a home control panel, you'd probably disable the passcode on it entirely - and I believe that'd disable the inactivity reboot as well.

reply
grahamj
5 days ago
[-]
I dunno, does this SEP check only happen when the device is locked? I don’t recall that being mentioned.
reply
duskwuff
4 days ago
[-]
I can't imagine how "time since last unlock" would even work for a device with no passcode, since the user never explicitly unlocks the device. Besides, a reboot wouldn't do anything useful in that configuration; with no passcode, user data is always accessible.
reply
aspenmayer
5 days ago
[-]
You could also set the auto-lock in display settings to never.
reply
aspenmayer
5 days ago
[-]
It’s more likely than you think!

> Apple's Next Device Is an AI Wall Tablet for Home Control, Siri and Video Calls

https://news.ycombinator.com/item?id=42119559

via

> Apple's Tim Cook Has Ways to Cope with the Looming Trump Tariffs

https://news.ycombinator.com/item?id=42168808

reply
KennyBlanken
5 days ago
[-]
> * there is almost certainly a known kernel exploit out in the wild, but the agencies that have it generally reserve using them until they really need to - or they’re patched.

There are literally emails from police investigators spreading word about the reboots, which state that the device goes from AFU, where they are able to extract data, to a BFU state where they can't get anything out of it.

It's a bit pointless, IMHO. All cops will do is make sure they have a search warrant lined up to start AFU extraction right away, or submit warrant requests with urgent/emergency status.

reply
karlgkk
5 days ago
[-]
I sort of addressed this in my original comment, but local police likely do not have access to an AFU vuln and generally get one only after it's been patched. Then they go on an unlocking spree. This prevents that.
reply
grahamj
5 days ago
[-]
> Reboot is not enforced by the SEP, though, only requested. It’s a kernel module, which means if a kernel exploit is found, this could be stopped.

True. I wonder if they've considered the SEP taking a more active role in filesystem decryption. If the kernel had to be reauthenticated periodically (think OAuth's refresh token), maybe the SEP could stop data exfiltration after the expiry even without a reboot.

Maybe it would be too much of a bottleneck; interesting to think about though.

reply
karlgkk
5 days ago
[-]
> If the kernel had to be reauthenticated periodically (think oauth's refresh token)

If the kernel is compromised, this is pointless I think. You could just "fake it".

SEP is already very active in filesystem encryption. The real important thing is evicting all sensitive information from memory. Reboot is the simplest and most effective, and the end result is the same.

reply
grahamj
5 days ago
[-]
It's involved in handling the keys, but I don't think disk I/O is processed by the SEP. If it were, the SEP could simply stop providing access.
reply
op00to
5 days ago
[-]
If the reboot doesn't happen, the kernel panics - at least that's what the article says.
reply
aaronmdjones
5 days ago
[-]
That's only because the kernel tells userland to reboot. If the kernel is compromised, they can stop it from telling userland to reboot and stop the kernel from panicking.
reply
markasoftware
5 days ago
[-]
Kernel exploits would let someone bypass the lockscreen and access all the data they want immediately, unless I'm missing something. Why would you even need to disable the reboot timer in this case?
reply
karlgkk
5 days ago
[-]
Hypothetically, I suppose there's value in disabling the timer if you're, for example, waiting for a SEP exploit that only works in an AFU state?

But I don't know where the idea of disabling a reboot timer came in. I'm simply saying that now you have to have a kernel exploit on hand, or expect to have one within three days - a very tall order indeed.

reply
dmitrygr
5 days ago
[-]
> Reboot is not enforced by the SEP, though, only requested

We (the public) do not know if SEP can control nRST of the main cores, but there is no reason to suspect that it cannot.

reply
karlgkk
5 days ago
[-]
We actually do know: it cannot, directly*. What it could do is functionally disable RAM, but that would basically cause the phone to hard-lock and could even cause data corruption in some limited cases.

This is still being actively researched. I have no evidence, but would not be surprised to find out that a SEP update has been pushed that causes it to pull RAM keys after the kernel panic window has closed.

* This may have been changed since the last major writeup came out for the iPhone 11.

reply
ghssds
5 days ago
[-]
My question is: why three days specifically instead of a user-configurable delay?
reply
Slartie
5 days ago
[-]
Because this way, the delay is parameterized within the Secure Enclave firmware by hard-coding it, which is a thing that only Apple can do.

If you were to allow a user to change it, you'd have to safeguard against malicious use of the channel by which the user's desired delay gets pushed into the SE, which is inherently hard because that channel must be writable by the user. It therefore opens up another attack surface by which the inactivity reboot feature itself might be attacked: if the thief could use an AFU exploit to tell the SE to only trigger the reboot after 300 days, the entire feature becomes useless.

It's not impossible to secure this - after all, changing the login credentials is such a critical channel as well - but it increases the cost to implement this feature significantly, and I can totally see the discussions around this feature coming to the conclusion that a sane, unchangeable default would be the better trade-off here.

reply
axxto
5 days ago
[-]
> if the thief could use an AFU exploit to tell the SE to only trigger the reboot after 300 days, the entire feature becomes useless

Then why not simply hardcode some fixed modes of operation? Just as an example, a forced choice between 12, 24, 48, or a maximum of 72 hours. You can't cheat your way into convincing the SE to set an unlimited reset timer. I'm sure there must be a better reason.

reply
F7F7F7
4 days ago
[-]
Any "choice" suffers from the same user exploit you responded to. The attack surface remains.

Plus, vulnerability often follows complexity. Whether it's human written validation logic being attacked for 6 months in a lab somewhere in Israel or the overly complex UX exposed to some soccer Mom in Minneapolis.

Save money. Save headaches. K.I.S.S.

reply
Etheryte
5 days ago
[-]
Apple's whole thing is offering whatever they think is a good default over configuration. I can't even begin to count all the things I wish were configurable on iOS and macOS, but aren't. Makes for a smooth user experience, sure, but is also frustrating if you're a power user.
reply
chews
5 days ago
[-]
thank you for such a great writeup, this is an excellent breakdown!
reply
sunnybeetroot
5 days ago
[-]
This may explain why, since iOS 18, my device randomly reboots (albeit it only takes five seconds max). I am a daily user, so perhaps the reboot I experience is a bug.
reply
echoangle
5 days ago
[-]
If it takes only 5 seconds, it doesn’t sound like a reboot. Does it show a black screen and the apple logo during this event?
reply
sunnybeetroot
5 days ago
[-]
No Apple logo, just black screen with loading spinner followed by requiring passcode to unlock
reply
future10se
5 days ago
[-]
That might be what's informally called a "respring", where the SpringBoard process is restarted.

SpringBoard is the process that shows the home screen, and does part of the lifecycle management for regular user apps. (i.e. if you tap an icon, it launches the app, if you swipe it away in the app switcher, it closes the app)

It is restarted to make certain changes take effect, like the system language. In the jailbreaking days, it was also restarted to make certain tweaks take effect. Of course, it can also just crash for some reason (which is likely what is happening to you)

reply
kaba0
5 days ago
[-]
Hi, is there some further info on iOS "internals" like this? I was always interested in how it works, but I've found much less information compared to Android (which obviously makes sense given one is more or less open source), even though these details probably don't fall into the secret category.
reply
sss111
5 days ago
[-]
mine used to do that when the battery needed replacement
reply
h1fra
5 days ago
[-]
I always assumed that it was memory reaching capacity or routine cleanup more than a reboot. This often happened to me after intensive use
reply
sroussey
5 days ago
[-]
Yes, lots of complaints on forums about this bug. Saw it happen to my phone today.
reply
pushupentry1219
5 days ago
[-]
I haven't read the whole thing, but from skimming the beginning, this is pretty similar to how AOSP's BFU vs AFU unlock works.
reply
lofaszvanitt
5 days ago
[-]
More security theatre.
reply
bayindirh
5 days ago
[-]
Elaborate.
reply
alphan0n
5 days ago
[-]
If I were looking for low-hanging fruit, I suspect it wouldn't reboot if you were to replicate the user's home Wi-Fi environment in the Faraday cage, sans internet connection of course. Or keep repeatedly initializing the camera from the lock screen.
reply
Syonyk
5 days ago
[-]
From the article:

> Turns out, the inactivity reboot triggers exactly after 3 days (72 hours). The iPhone would do so despite being connected to Wi-Fi. This confirms my suspicion that this feature had nothing to do with wireless connectivity.

reply
abhishekjha
5 days ago
[-]
How do these things work with devices inside a NAT gateway? Most of our devices are inside a LAN. Even if a server gets started, it won't be visible to the outside world unless we play with the modem settings.

Now, a hacker/state who has penetrated a device can upload data from the local device to a C&C (command-and-control) server.

But that seems risky, as you need to do it again and again. Or do they just get into your device once and upload everything to the C&C?

reply
aspenmayer
5 days ago
[-]
This particular feature doesn’t rely on network connectivity or lack thereof.

Here’s some info about how some spyware works:

https://www.kaspersky.com/blog/commercial-spyware/50813/

reply
meindnoch
5 days ago
[-]
What are you even talking about?
reply