"Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user"
Haven't we learned our lesson on this? Don't read and act on my SMS messages without me asking you to!
What is the purported lesson we should have learned? Users choose phones with rich messaging features. This was a major selling point first for iPhone with iMessage, and later for Android, until iOS caught up with RCS.
It seems like the lesson is that you shouldn't be processing data sent to the device by random strangers without the user explicitly choosing to open the file or follow the link.
Not to automatically execute things within data that we have been sent.
Somewhere there's an NSA agent reading this and laughing like a gin addict on payday.
Unfortunately, since we don't live in that world, we need to not open links, emails, text messages, etc, if they are sketchy.
A better solution may someday exist, but as of yet has not been found.
Corporate Security will tell you that it's OK to click links to the payroll system or HR or Vanta or Sage or the 'secure email service' or Jira or GitHub or DocuSign or the Microsoft document that a partner company sent you, but not OK to click links in the phishing email that looks like one of those that they sent you.
It's not possible to tell whether a message giving you a link to something is 'sketchy' or not before clicking the link.
This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
https://docs.qualcomm.com/securitybulletin/may-2026-bulletin...
Since a lot of vendors are months or even years behind, their phones are full of known holes.
When it comes to security, basically: GrapheneOS > iOS > PixelOS >> Samsung OneUI >>>>>>>> everybody else.
Sadly, Samsung lets anyone who pays enough push bloatware and analytics onto their phones. E.g. AppCloud from an Israeli company, Meta services that stay even when you remove the Meta apps (only removable with ADB/UAD), etc. So there are only three somewhat serious options (and for two of them, you still give a lot of analytics to Apple or Google).
I've heard they've cleaned up their program recently and respond much quicker nowadays.
If you’re not, then this seems quite paranoid, bordering on LARPing.
We have seen multiple software hacks resulting in >$10 million payouts. Apple's bug bounty program pays out at most $4 million (double the $2 million for a non-Lockdown compromise) for a zero-click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously. Even at the low end of that cyberattack payout range, that is still a >2x ROI if your successful cyberattack depends on an iPhone zero-click, with many publicly known attacks being in the 10x ROI range. Lockdown mode, at best, raises the bar slightly for commercial profit-motivated attackers and reduces their profit margin from wildly profitable to slightly less, but still wildly profitable.
And of course I am using the Apple bug bounty program merely as an available metric with at least some semblance of objective support. There are zero certifications, audits, or analyses that Apple has even attempted that would confirm any claim of protection against state-level actors.
This sets a nice price bar for exploitation. Is someone willing to pay 10+ million dollars to get access to your phone?
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode.
> zero-click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously
Where is the profit motive in doing this? Possibility is one thing, but a realistic threat is another.
Usually not yours specifically, but there is a lot of money in a general tool that law enforcement can use to read out phones. Of course, most of them focus on physical access. In the few Cellebrite reports/presentations that have leaked, iPhones would fall after a relatively short time (IIRC a few months), but did better than most Android phones (except GrapheneOS).
Also, sometimes you do not need the $10M exploit; you can buy many cheaper exploits and make a chain yourself.
> The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode
If they hit you with a metal pipe, it's likely that you won't survive even if you give up your passcode. So most likely you are protecting something or someone else. Set up a duress PIN so that you have options in that case.
As an example of how they might be used in that fashion for profit, NSO Group had a revenue of $240 million in 2020. Many of their customers were governments who wanted to spy on activists and journalists. NSO Group was in the business of economies of scale, democratizing access to journalists' devices by reusing a small stockpile of exploits across many targets, with enough revenue to assure a steady stream of new exploits as fast as they were burned.
That is still quite a small pool, and there are other network effects preventing any Joe Bloggs with that much capital from launching an exploitation campaign.
To, once again, use the same example of NSO Group, as it is infamous and well-documented [1]: in 2016 it was $500,000 upfront and $650,000/year for 10 devices. That article claims Saudi Arabia was monitoring 15,000 phones at an average cost of $10,000/phone. In [2] it was $7 million for 15 devices, but the upfront versus marginal cost per device is not broken down. And this was a relatively "above-board" company in the sense that they were a legitimate business entity with government deals, which commands a premium relative to a random unknown blackhat organization with no reputation.
And again, my original comment was discussing commercial profit-motivated attackers, for whom $1 million is easily within reach and just a cost of doing business to unlock greater profits. That is less than the cost of setting up a McDonald's. There is a vast, vast gap spanning factors of millions between Joe Schmo and commercial actors, and an even vaster gap to state actors. There is no evidence that Lockdown mode is adequate against even commercial actors, let alone the vastly more capable state actors.
[1] https://prodefence.io/news/pegasus-spyware-operating-costs-c...
[2] https://www.reuters.com/business/media-telecom/meta-suit-aga...
The economics of the device exploitation industry are completely orthogonal to bug bounty payouts; the markets only overlap at the _extreme_ fringes. Trying to use one as a proxy for the other is meaningless.
There are sooooooo many other situations where such device lockdown is warranted. Government intrusion, sensitive industry, journalism, anything ITAR/EAR covered, and more. Your reduction to a single issue is absurd.
It’s undeniable that the proverbial guns for hire make it easy (if not cheap) to target basically anyone — but just because the vibes are bad doesn’t mean we can just say “it’s common knowledge that …”
The fact is mitigations are costly in terms of convenience and ease of use. Helping people make informed choices about whether to enable mitigations and bear that cost requires more than platitudes imo
I consider Anthropic's Mythros security bug finder mostly marketing, but other things make me worry that there might be a global hack contagion: for example, a few months ago I saw in the news that an executive at a US security company was caught selling information to a hacking group.
Except for disabled JavaScript compilation possibly slowing down web sites, not getting some attachments in messages, and some graphics not showing up on some sites, having Lockdown mode set doesn't seem to affect anything I do. For dev I use VPSs, with SSH configured to ensure agent forwarding is strictly disabled, as are reverse tunnels.
It seems like doing little things like this make sense because it is such a tiny hassle to be a little safer.
Search "android phone" on aliexpress and there's top selling phones on the first page running android 8, android 10, etc. They're not getting security updates of any sort, let alone driver updates.
Smaller brands often ship budget Android devices and never update them.
Feels like there’s something new every other day - Linux, Windows, mobile, various commonplace tools used by everybody; the list goes on.
Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4 1/2 years. Since then it’s approximately two years.
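Rough sanity check on what those doubling intervals imply (my arithmetic, not part of the original claim): a doubling time $T_d$ corresponds to an annual growth rate of

```
r = 2^{1/T_d} - 1,\qquad
T_d \approx 4.5\ \text{yr} \Rightarrow r \approx 17\%/\text{yr},\qquad
T_d \approx 2\ \text{yr} \Rightarrow r \approx 41\%/\text{yr}
```

so the claimed shift is from roughly 17% to roughly 41% more published CVEs per year.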
There has definitely been a rapid uptick.
It's easier to find a needle in the haystack if the haystack is 50% needles.
just doubled the value and use cases of your AI solution!
There probably are more vulnerabilities being found, but the number of CVEs is not a good metric.
https://projectzero.google/2026/01/pixel-0-click-part-1.html
So AI usage increases bugs and humans have to weed them out!
This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.
I've seen quite a few reports saying they were inspired by the previous one, which is presented as "the model pointed us to it", and you get FOMO if you don't snatch bugs now as well.
OpenBSD fixed this back in 2017.
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
    unsigned long pfn;
    struct vpu_core *core = container_of(fp->f_inode->i_cdev,
                                         struct vpu_core, cdev);

    vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);

    /* This is a CSRs mapping, use pgprot_device */
    vm->vm_page_prot = pgprot_device(vm->vm_page_prot);

    pfn = core->paddr >> PAGE_SHIFT;

    return remap_pfn_range(vm, vm->vm_start, pfn,
                           vm->vm_end - vm->vm_start,
                           vm->vm_page_prot) ? -EAGAIN : 0;
}
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then.... I wonder how this got out in the first place. I know I'm using a toy example but would love to learn more!
This is quite impressive considering I’m just a dumbass with a Claude subscription.
> Observations & Potential Issues
>
> A few things worth flagging:
>
> 1. No bounds checking on the mapping size. Userspace controls vm_end - vm_start and vm->vm_pgoff. Here vm_pgoff is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at core->paddr — potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
---
70-day release cycles are very quickly not going to be fast enough to stop widespread use of exploits when you have bots able to scan every PR on every open source project as it comes out.
I agree that e.g. working on an OS should require guild-type credentials. But I don't know if most SWEs understand the professional-standards requirements such organisations are empowered to enforce on their members.
It really just introduces a legal burden to prove competence and work in good-faith, and nets immense power to throw out ridiculous deadlines. Your managers are legally responsible too, and if they push beyond what's reasonable you have just cause to bring them to court in a way that you currently don't. To re-emphasize, I don't think this is a better world, but it's not unlivable.
If I was personally liable for damages, and there was an insurance program of some sort - similar to how doctors & dentists practice - sure, I'd probably still write code, very carefully. But if there was a decent chance of me spending the rest of my life in prison because of something I wrote on a Friday at 4pm under some amount of stress? No thanks. I can re-train as a plumber, and stand knee-deep in shit all day.
Take my friend who is a property lawyer. The firm she works for buys her insurance, because it would be insane to operate without insurance, but the only available insurance is personal insurance, it insures a specific person to do property law. So, although her day job is helping that $100Bn farm equipment company buy a $10M new factory from a $100Bn construction firm, at the weekend she is covered by that same insurance when she represents her friend buying a $500k cottage. AIUI this is a completely normal arrangement.
If that was the situation for programming, the company is going to buy your $100M exploit insurance because they need a programmer, but it's personal insurance so you could work on your Game jam game using the same insurance, and it'd be crazy to just "Go commando" if you don't have employment and thus insurance, in case somehow your "Galaga but also Blue Prince and somehow a visual novel" Game jam entry causes a $10M damages payment.
Insurance companies are very, very good at figuring out how to identify and price risk, once motivated to do so.
Also from what I've seen there are way too many GA accidents involving airline pilots for the insurers to eat that loss. They almost invariably have superior skills, but some of them more than compensate with risk taking.
But if they noticed that they were paying out more than expected on these $500k deals, the insurance would change quite quickly.
The same thing happened with GA insurance - there was an assumption that airline pilots would be safer but it didn't really turn out as expected, because a 747 has a heck of a lot more "keep you safe" doohickeys and doesn't fly low to the ground much.
Someone T-bones you in a parking lot, chef causes food poisoning, plumber's leak floods your bathroom, personal trainer pushes to injury, mislabeled allergen on food, movers break your armoire, roofer leaves a leak -- I bet we'd see a lot less of all that if a $1MM fine + life in jail loomed over everyone.
Nobody would want to do business, but boy would we be in a golden age.
Yes, they certainly would. You wouldn't have smartphones, for instance.
I can't tell if this is satirical or not. But there are so many takes like this recently (hold the website liable for user content, hold the corporate developer liable for zero days in a project they happened to touch) that would all result in the same outcome (no more product at all) that I can't help but wonder if there's some luddite psy-op trying desperately to bring us back to a pre-Internet era in any way they can...
Pssst - hey, kid, want some GNU?
It does make me scared for what other dangers lurk since this was a really bad one and it was so little work to find.
Also of note: so many of the security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable, the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
Here's a cool project that inventories all your KASLR info leaks: https://github.com/bcoles/kasld
Now imagine the dark horrors hiding in the BSPs of other Android devices... or embedded devices in general.
Frankly, it should be a requirement of Google's certification process that everything regarding drivers gets upstreamed into the Linux kernel. Yes, even if this adds quite a time delay to the usual hardware development process.
By definition in Rust it's incorrect to overflow the non-overflowing integer types, so if you intend, say, wrapping, you should use the explicit wrapping operations such as wrapping_add, or the Wrapping<T> types in which the default operators do wrap. But if you turn off checks, it's still safe to be wrong, just as if you'd called the wrapping operations by hand instead of using the non-wrapping operations.
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even if the checking was off - but I wasn't there. However, the reason it's on Project Zero is that it resulted in a bounds miss, and that Rust would have prevented anyway.
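To make that concrete, here's a minimal sketch (mine, not from the thread) of how the same addition behaves under each spelling:

```
fn main() {
    let a: u8 = 250;
    let b: u8 = 10;

    // Explicit wrapping: always 4 (260 mod 256), regardless of build settings.
    assert_eq!(a.wrapping_add(b), 4);

    // Checked: returns None on overflow instead of a value.
    assert_eq!(a.checked_add(b), None);

    // Plain `+` panics with overflow-checks enabled (the default in debug
    // builds); with checks off it wraps to 4 -- wrong by the language's
    // definition, but still memory-safe, i.e. "safe to be wrong".
    // let c = a + b;
}
```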
That is not a solution, because it means the code can behave differently and expose a vulnerability if the wrong compilation settings are chosen.
Functions like "wrapping_add" have such long names that nobody wants to use them, and they make the code ugly. Instead, "+" should be used for addition with exceptions, and something like "wrap+" or "<+>" or "[+]" for wrapping addition.
That's how people work: they will choose the laziest path (the simplest function name), and this is why you should use "+" for the safer, non-wrapping addition and make the symbol for wrapping addition long and unattractive. Make writing unsafe code harder. This is just basic psychology.
C has the same problem: it has functions that check for overflow, but they too have long and ugly names that discourage their use.
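For what it's worth, Rust does already give wrapping addition an operator spelling, just with the defaults the other way around from this proposal: bare `+` is the "with exceptions" form (when checks are on), and wrapping the operands in Wrapping<T> makes the ordinary operators wrap. A small sketch (mine):

```
use std::num::Wrapping;

fn main() {
    // Inside Wrapping<T>, ordinary `+` wraps silently: 250 + 10 = 4.
    let sum = Wrapping(250u8) + Wrapping(10u8);
    assert_eq!(sum.0, 4);

    // Bare integers keep `+` as the checked form (panics under
    // overflow-checks); the explicit method spells out the intent.
    assert_eq!(250u8.checked_add(10), None);
}
```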
> modern hardware will just wrap if you don't check and that's cheaper
So you suggest that because x86 is a poorly designed architecture, we should adapt programming languages to its poor design? x86 will be gone sooner or later anyway.
Also, there are languages like JS, Python, Swift which chose the right path, it is only C and Rust developers who seem to be backwards.
This is a very C-flavoured "solution". For those who haven't seen it: this involves a pointer (!), and we're going to compute the addition, write the result to the pointed-at integer, and then return true if that didn't fit and so overflowed, otherwise false.
The closest Rust analogy would be T::carrying_add which returns a pair to achieve a similar result.
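A sketch of that analogy (mine; note carrying_add was still nightly-only last I checked, but the stable overflowing_add likewise returns the pair directly instead of writing through a pointer):

```
fn main() {
    // Mirrors C's __builtin_add_overflow: you get the wrapped result and
    // a flag saying whether it overflowed -- as a returned pair, not via
    // a pointer out-parameter.
    let (sum, overflowed) = 250u8.overflowing_add(10);
    assert_eq!(sum, 4);
    assert!(overflowed);
}
```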
And yeah, checking is "basically free" unless it isn't, that's not different. If you haven't measured you don't know, same in every programming language.
It's never been true that you can't write correct software in C or C++; the problem is that in practice you won't do so.
We've moved slightly closer to this, but in a world where we're still arguing over memory safety being necessary we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
That said you can enable overflow checks in Rust's release mode. It's literally two lines:
[profile.release]
overflow-checks = true
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available.

MIPS does (did?). And VAX, IBM/360, ....
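Re the check-afterwards rationale: in Rust terms that software sequence is roughly what checked_add compiles to (the add plus a branch on overflow). A sketch (mine) of the software-side equivalent of a trapping add:

```
fn add_or_trap(a: i32, b: i32) -> i32 {
    // checked_add yields None on overflow; panicking here stands in for a
    // hypothetical trapping add instruction, and the generated code is the
    // add plus the "couple more instructions" the RISC-V rationale refers to.
    a.checked_add(b).expect("integer overflow")
}

fn main() {
    assert_eq!(add_or_trap(2, 3), 5);
}
```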