This attack only works because every app, even an "offline game", has those implicit permissions by default. Many apps should at most have "Only while using the app" permission to access the Internet. That would not be complete protection -- there's always the risk you misclick on a now-malicious app that you never use -- but it would make the attack far less effective.
Mildly off-topic: do you know of any good studies on the dangerous-defect rate of auto-updating vs. never/manually updating in a semi-sandboxed environment like Android?
But I've seen fewer stories of that sort of thing with Android apps. Maybe the app store review process is able to catch it? But it seems just as likely to me that it's simply harder to discover that a mobile app is now maliciously sending data somewhere.
https://old.reddit.com/r/androiddev/comments/ci4tdq/were_on_...
I don't know about "running in the background", but Android works using "intents", which means an app can be woken up effectively at any time, so "don't allow app to run in the background" may not do what you expect.
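To make that concrete, here's a minimal Kotlin sketch (the receiver name is made up) of how a component gets woken by an intent. Anything declared like this in the manifest can run without the user ever launching the app:

    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent

    // Hypothetical receiver; it would be declared in AndroidManifest.xml with an
    // <intent-filter> for ACTION_BOOT_COMPLETED (plus the RECEIVE_BOOT_COMPLETED
    // permission). The system starts the app's process to deliver the intent,
    // regardless of any user-facing "background" toggle.
    class WakeUpReceiver : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (intent.action == Intent.ACTION_BOOT_COMPLETED) {
                // The app is now executing code without the user opening it.
            }
        }
    }

(Since Android 8 most implicit broadcasts are restricted, but boot-completed and explicit intents still work this way.)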
I think there are ways to manage the communication with users around which cases it is surprising/suspicious for the app to require that functionality. Personally, I don't love the model where apps ask for certain permissions but aren't required to explain, in a way that can be verified by app store reviewers, what they need those permissions for.
And even if one doesn't want every consumer to have to explicitly consent to the permission, it seems to me like you could still have an opt-out mechanism, so that the paranoid among us can implement a more restrictive policy, rather than giving up on the idea of having such a permission entirely.
IDK, I think there are obvious low-hanging mitigations [0], such as: don't display secret codes in a stable position on screen? Hide them when the app is in the background? Move them around to make timing attacks difficult? Change colours and contrast over time? Add static noise around them? Don't show the whole code at any one time (not necessarily in a way the user would even notice: just blink parts of it in and out, maybe)? Admittedly, all of this will harm UX more or less, but in naïve theory it should significantly raise demands on the attacker (rough sketch after the footnote).
[0] Provided the target of the secret stealing is not in fact some static raster snapshot of the screen containing the secret, cached by the system for the task switcher or something like that.
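For what it's worth, a naïve Kotlin sketch of the "move it around / blink parts of it" idea, assuming a plain TextView named otpView showing the code (all names are made up, and this is a demonstration of the idea, not a vetted defense):

    import android.os.Handler
    import android.os.Looper
    import android.widget.TextView
    import kotlin.random.Random

    // Every 100 ms, jitter the code's position by a few pixels and hide a random
    // subset of digits, so no single digit sits stable on screen for long.
    fun startOtpJitter(otpView: TextView, code: String) {
        val handler = Handler(Looper.getMainLooper())
        val tick = object : Runnable {
            override fun run() {
                otpView.translationX = Random.nextInt(-8, 9).toFloat()
                otpView.translationY = Random.nextInt(-8, 9).toFloat()
                otpView.text = code
                    .map { c -> if (Random.nextBoolean()) c else '\u2022' }
                    .joinToString("")
                handler.postDelayed(this, 100L)
            }
        }
        handler.post(tick)
    }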
I wonder if they were aware of this flaw, and were mitigating the risk.
A relevant threat scenario is using your phone in a public place. Modern cameras are good enough to read your phone screen from a distance, and it seems totally realistic that a hacked airport camera could capture email/password/2FA combinations when people log into sites from the airport.
Ideally, you want the workflow to be that you can copy the secret code and paste it, without the code as a whole ever appearing on your screen.
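Android actually has a building block for part of this: a clip can be marked as sensitive so the clipboard UI doesn't preview it (honored on Android 13+, ignored on older versions). A sketch:

    import android.content.ClipData
    import android.content.ClipDescription
    import android.content.ClipboardManager
    import android.content.Context
    import android.os.PersistableBundle

    // Copy a secret to the clipboard without it being echoed in the clipboard
    // preview overlay. This doesn't by itself keep the code off the screen of
    // the app that generated it -- that part is up to the app's UI.
    fun copySecret(context: Context, secret: String) {
        val clipboard = context.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
        val clip = ClipData.newPlainText("otp", secret)
        clip.description.extras = PersistableBundle().apply {
            putBoolean(ClipDescription.EXTRA_IS_SENSITIVE, true)
        }
        clipboard.setPrimaryClip(clip)
    }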
> The attacks described in Section 5 take hours to steal sensitive screen regions—placing certain categories of ephemeral secrets out of reach for the attacker app. Consider for example 2FA codes. By default, these 6-digit codes are refreshed every 30 seconds [38]. This imposes a strict time limit on the attack: if the attacker cannot leak the 6 digits within 30 seconds, they disappear from the screen
> Instead, assuming the font is known to the attacker, each secret digit can be differentiated by leaking just a few carefully chosen pixels
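To illustrate why a few pixels suffice: with the font known, the attacker can precompute, per digit position, a small set of pixel coordinates that distinguishes all ten glyphs. A toy greedy version in Kotlin (the glyph bitmaps are stand-ins for rendered font data; this is my reconstruction of the idea, not the paper's code):

    // Given ten known glyph bitmaps (flattened to BooleanArray, one per digit),
    // greedily pick pixel indices until every pair of distinct glyphs differs at
    // some chosen pixel. Leaking only those pixels then identifies the digit.
    fun choosePixels(glyphs: List<BooleanArray>): List<Int> {
        val pixels = glyphs[0].indices
        val chosen = mutableListOf<Int>()
        // Pairs of digits still indistinguishable on the chosen pixels.
        var unresolved = glyphs.indices.flatMap { a ->
            (a + 1 until glyphs.size).map { b -> a to b }
        }
        while (unresolved.isNotEmpty()) {
            // Pick the pixel that separates the most still-ambiguous pairs.
            // (Assumes all glyphs are distinct, so progress is always possible.)
            val best = pixels.maxByOrNull { p ->
                unresolved.count { (a, b) -> glyphs[a][p] != glyphs[b][p] }
            }!!
            chosen += best
            unresolved = unresolved.filter { (a, b) -> glyphs[a][best] == glyphs[b][best] }
        }
        return chosen
    }

For well-separated digit glyphs this typically converges after a handful of pixels, which is what lets the attack fit inside the 30-second window.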
Throw a privacy notice at the users: "This app will take periodic screenshots of your phone." You'd be amazed how many people will accept it.
> Did you release the source code of Pixnapping?

> We will release the source code at this link once patches become available: https://github.com/TAC-UCB/pixnapping
It's not exactly impossible to reverse-engineer what's happening here. You could have waited until it was patched, but it sounds like you wanted to get yourself attention as soon as possible.
The researchers aren't releasing their code because they found a workaround for the patch.
Then there's a bunch of "no GPU vendor has committed to patching GPU.zip" and "Google has not committed to patching our app list bypass vulnerability. They resolved our report as “Won’t fix (Infeasible)”."
And their original disclosure was on February 24, 2025, so I don't think you can accuse them of being too impatient.
As for "This app will take periodic screenshots of your phone", you still need an exploit to screenshot things that are explicitly excluded from screenshots (even if the user really wants to screenshot them).
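For context, "explicitly excluded from screenshots" means the window sets FLAG_SECURE. The notable thing about Pixnapping is that it recovers pixels through a GPU side channel rather than any screenshot API, which is why flag-based protections don't obviously help. A minimal sketch (the activity name is made up):

    import android.app.Activity
    import android.os.Bundle
    import android.view.WindowManager

    class SecretActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Excludes this window from screenshots, screen recording, and
            // non-secure displays -- even when the user themselves tries.
            window.setFlags(
                WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE
            )
        }
    }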
P.S.: where did you see this discussion?
We have this tendency to add more and more "features", more and more functionality, 85% of which nobody asked for or has any use for.
I believe there will be a market for a small, bare-bones secure OS in the future, akin to how FreeBSD is run.
https://www.bunniestudios.com/blog/2020/introducing-precurso... (currently down, might be up later)
There are complaints on this story, and on the recent one about the FSF phone project, about how inconvenient it is not to be able to access banking apps on your mobile phone: "I can't be bothered to enter my banking password every 30 minutes on my desktop! What, I'm supposed to have two phones?"
The first thing someone is going to do when they steal your phone (after they've watched you enter your password in public) is open your banking and money apps and exfiltrate as much as they can from your accounts. This happens every single day. None of those apps should be installed or logged in on your phone. Same goes for 2FA apps. That's like traveling with Louis Vuitton luggage, which is basically a "steal me" sign.
That's the most basic stuff for people who aren't a CEO of a company that is in the crosshairs of state sponsored espionage attacks.
The problems with a "bare bones secure OS" device remain the same from a physical-access standpoint: social engineering, someone seeing your password, theft of the device. But otherwise, yes, the devices you install a bunch of spyware/adware games on and take to bars should never be the ones you do your banking, 2FA, work, etc. on.
But in this write-up they say the patch doesn't fully work.
While these blurs make the side channel easier to use, since they provide a clear signal, considering you can predict the exact contents of the screen, I feel like you could get away with just a mask.
Given the rise of well-defined templates for email phishing (for example, accurately vibe-coded designs mimicking GitHub notification emails), I have literally stopped clicking links in email, and now I have stopped launching apps directly from intents (say, "open with"). Better to open things manually and perform the operation there, and to remove useless apps. People underestimate the attack surface (it can come in through SDKs or web-page intents).
If you use the same password on two websites, either of the two websites can use it to log you in to the other one (if it doesn't have an extra layer of security).
On paper the security is pretty weak, yet in practice these attacks are not very common or easy to pull off.
On desktop, apps aren't sandboxed. On mobile, they are. Breaking out of the sandbox is a security breach.
On desktop, people don't install an app for every fast food chain. On mobile, they do.
>A secure window should draw 100% as requested or not at all.
Take, for example, "night mode", which adds an orange tint to everything. If secure windows don't get that orange tint, they will look out of place. Being able to apply post-processing effects to secure windows is desirable, so as I said there is a trade-off here in figuring out what should be allowed.
That seems well worth the trade to me.
Either the screen reader is built into the OS as signed and trusted (and locks out competition in this space), or it's a pluggable interface, which opens an attack surface for reading secure parts of the screen.
Either that, or Meta's ability to track/influence emotional state from behaviour is so good that they can advertise to me things I've only thought of and never uttered or searched for anywhere.
Do not install apps. Use websites.
Apps have way too many permissions, even when they have "no permissions".
Similarly, software vendors want you to use apps for the same reason you don't want to use them. They'll rely on dark patterns to herd you to their native apps.
These two desires influence whether it's viable to use the web instead of apps. I think we need legislation in this area, apps should be secondary to the web services they rely on, and companies should not be allowed to purposely make their websites worse in order to get you on their apps.
I don't own or carry a smart phone. I'm still able to get by without one, but just barely.
I only used it once, in February, so hopefully they haven't broken it since then.
>"It looks like the IT security world has hit a new low," Torvalds begins. "If you work in security, and think you have some morals, I think you might want to add the tag-line: "No, really, I'm not a whore. Pinky promise" to your business card. Because I thought the whole industry was corrupt before, but it's getting ridiculous," he continues. "At what point will security people admit they have an attention-whoring problem?"
https://www.techpowerup.com/242340/linus-torvalds-slams-secu...