This is really poor. And why is a Virgin Media address the closest thing to a security contact here? https://www.o2.co.uk/.well-known/security.txt should 200, not 404.
To be clear, I have no problem with disclosure in these circumstances given the inaction, but I'm left wondering if this is the sort of thing that NCSC would pick up under some circumstances (and may have better luck communicating with the org)?
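A quick way to check the security.txt point above (my sketch, not from the original comment; the function names are mine). RFC 9116 expects a security contact at `/.well-known/security.txt` returning 200:

```python
import urllib.request
import urllib.error

WELL_KNOWN = "/.well-known/security.txt"

def security_txt_url(domain: str) -> str:
    """Build the RFC 9116 well-known URL for a domain."""
    return f"https://{domain}{WELL_KNOWN}"

def security_txt_status(domain: str, timeout: float = 10.0) -> int:
    """Fetch a domain's security.txt and return the final HTTP status code."""
    req = urllib.request.Request(
        security_txt_url(domain),
        headers={"User-Agent": "security-txt-check/0.1"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

# e.g. security_txt_status("www.o2.co.uk")
# -> should be 200 per RFC 9116; was reportedly 404 at the time of the comment
```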
I'll update it with a correction.
> is within LAC 0x1003 (decimal: 4009)
It should be decimal 4099.
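The correction is easy to verify (my addition, just spelling out the arithmetic):

```python
# 0x1003 in hex is 1*16**3 + 0*16**2 + 0*16 + 3 = 4096 + 3 = 4099, not 4009.
lac = 0x1003
assert lac == 1 * 16**3 + 0 * 16**2 + 0 * 16 + 3
assert int("1003", 16) == 4099
print(lac)  # 4099
```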
When I worked there (many years ago) the security team was excellent. When I emailed them about an issue last year, they were all gone.
At no point is any system tricked into revealing personal data, which is often illegal even if the hack is trivial. Even appending something like "&reveal_private_data=true" to a URL might be considered illegal, because there is clear intent to access data you shouldn't be allowed to access. In this case none of that is done.
You clearly aren’t familiar with how broad the Computer Misuse Act is
No, I'm not familiar with it at all. But usually illegal hacking requires accessing devices in a way you aren't allowed to. As long as making the phone call itself is not an issue, it should be fine. Dumping data from the memory of your own phone can't be unauthorized.
It would probably become an issue if you make unusual phone calls: harassing people by calling constantly, or calling just to grab the location data and immediately hanging up. But just dumping the diagnostics for regular phone calls should be fine (I'm not a lawyer).
> just dumping the diagnostics for regular phone calls should be fine
IANAL, but computer hacking laws like the CMA in the UK and CFAA in the US are written in a manner so vague that even pressing F12 to view the source of a web page could be a violation [0]. From O2's perspective, they could argue that the OP has accessed their internal diagnostic data in an unauthorized manner. What we (technical people) think is irrelevant.
[0]: In the US, the DOJ has revised its policy to not prosecute defendants pursuing "good faith security research," which you may trust at your own risk: https://www.justice.gov/archives/opa/pr/department-justice-a...
"good faith security research" is a different ballpark though. Some laws catch all unauthorized access, even if the intent is not in bad faith (which is probably a very bad idea, but that's how it is). But it also makes sense to a point: if your neighbor has a really bad lock that can be opened just by hitting the door frame a few times, you're still not allowed to break in just to disclose their bad security.
Usually some deliberate action needs to be taken that qualifies as unauthorized access. Something like adding a malformed header to an HTTP request could be enough. Or logging in with credentials that are clearly not yours (even if it's just admin/admin). But logging the traffic of regular, authorized usage patterns shouldn't be enough.
Famously, in Germany, it's illegal to be carrying a laptop with nmap installed on it. Everyone (who has a laptop and knows how to use nmap) still does it. It's one of those crimes they can get you for if they don't like you but you haven't committed any actual crime.
Do you just sit on the info, hoping no one else sees it and exploits it?
Or do you try and get them to fix it somehow?
To be honest, I personally would be scared to report such vulnerabilities with my real identity to begin with. With big tech companies, no matter how poorly their bug bounty programs are run, I still have this naive expectation that they won't shoot the messenger. At worst they could ban my accounts and maybe send threatening letters, but they probably won't ruin my life as long as I abide by the norms (agreed by technical people).
However, I do not feel the same naive optimism towards "legacy" institutions like telecoms and public services. At best it's thankless work; at worst I get sued [0] or become a scapegoat so some official can score political points [1]. It's unfortunate - I am acutely aware that this is the chilling effect at work, and our systems are collectively less secure because of it.
[0]: https://www.cnbc.com/2024/09/15/dark-web-expert-warned-us-ho... [1]: https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-...
It's as simple as using Scat (https://github.com/fgsect/scat) with the modem diag port enabled to view all signalling traffic to/from the network.
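For context (my addition; details from memory, so double-check against the scat README): scat decodes the baseband diag traffic and replays it as GSMTAP packets over UDP to localhost, by default on port 4729, which Wireshark also treats as the GSMTAP port. A minimal sketch of parsing the fixed GSMTAP v2 header from such a packet:

```python
import struct
from collections import namedtuple

# GSMTAP v2 fixed header (16 bytes), per Osmocom's gsmtap.h.
# hdr_len is measured in 32-bit words, so payload starts at hdr_len * 4.
GSMTAP_FMT = ">BBBBHbbIBBBB"
GsmtapHeader = namedtuple(
    "GsmtapHeader",
    "version hdr_len type timeslot arfcn signal_dbm snr_db "
    "frame_number sub_type antenna_nr sub_slot res",
)

def parse_gsmtap(packet: bytes):
    """Split a raw GSMTAP packet into (header, payload)."""
    hdr = GsmtapHeader._make(struct.unpack(GSMTAP_FMT, packet[:16]))
    return hdr, packet[hdr.hdr_len * 4:]

# Synthetic example packet: version 2, 4-word header, 3 payload bytes.
# (The type value here is arbitrary, just for illustration.)
pkt = struct.pack(GSMTAP_FMT, 2, 4, 0x0D, 0, 0, -90, 10, 12345, 0, 0, 0, 0) + b"\x01\x02\x03"
hdr, payload = parse_gsmtap(pkt)
print(hdr.version, hdr.frame_number, payload)  # 2 12345 b'\x01\x02\x03'
```

In practice you'd bind a UDP socket to 127.0.0.1:4729 and feed each datagram through `parse_gsmtap`, or just open the stream in Wireshark.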
At least the free version of the app doesn't seem to "decrypt" anything, but it has root access and access to the modem, so it can read these logs. It can also disable bands and try to lock to a specific mast (like dedicated 4G/5G routers can), which is useful if you're trying to use mobile data as your main internet connection.
>O2 reached out to me via email to confirm that this issue has been resolved. I have validated this information myself, and can confirm that the vulnerability does appear to be resolved.
It's disappointing that they didn't reply, but I'm not surprised. O2 seems to be a mess internally. Anything that can't be fixed by someone at a store takes ages to fix (e.g. a bad number port). Their systems seem to be outdated, part of their user base still can't use VoLTE, their new 5G SA doesn't support voice and seems to over-rely on n28 (making it slow for many), their CTO blogs about leaving "vanity metrics behind" [0] even though they are usually the worst network for data, etc.
[0] https://news.virginmediao2.co.uk/leaving-the-vanity-metrics-...
So it seems like that won't do anything.
*yes I’m aware that means people you know who have your number could also exploit this
If you're quick enough (or automate this with dedicated software, like an attacker might actually do), it won't even need to ring out. It's really not good.
Now that the UK has left the EU, the GDPR no longer applies. But it is my understanding that they have not changed any fundamental principles in whatever applies now?