The level of persistence these guys went through to phish at scale is astounding—which is how they gained most of their access. They’d otherwise look up API endpoints on GitHub and see if there were any leaked keys (he wasn’t fond of GitHub's automated scanner).
https://www.justice.gov/usao-wdwa/pr/member-notorious-intern...
They themselves are likely, to some extent, victims of social engineering as well. After all, who benefits from creating exploits for online games and getting children to become script kiddies? It's easier (and probably safer) to make money off of cybercrime if your role isn't committing the crimes yourself. It isn't illegal to create premium software that could in theory be used for crime if you don't market it that way.
To gift to a 529 regardless of the financial institution, you go to some random ugift529.com site and put in a code plus all your financial info. This is considered the gold standard.
To get a payout from a class-action lawsuit that leaked your data, you must go to some other random site (usually some random domain name loosely related to the settlement, recently registered by Kroll) and enter basically more PII than was leaked in the first place.
To pay your fed taxes with a credit card, you must verify your identity with some 3rd party site, then go to yet another 3rd party site to enter your CC info.
This is insane and forces/trains people to perform actions that in many other scenarios lead to a phishing attack.
Yes, we've (the software industry) been training people to practice poor OpSec for a very long time, so it's not surprising at all that corporate cybersecurity training is largely ineffective. We violate our own rules all the time
1. It said "Dear User" instead of a name/username;
2. It talked about how they were upgrading their forum software and as such would require me to re-login;
3. It gave me a link to click in the email without any stated alternative;
4. It warned me that if I didn't do this, I would no longer be able to access the forum;
5. The domain on the From line of the email was not microsoft.com, but a different domain that had "microsoft" in it.
It was a textbook example of how a phishing email would look, and yet it was actually a legitimate email from Microsoft!
I haven't had any others like it since, but that was an eye-opener for sure.
[0] https://reddit.com/r/facepalm/comments/32ou4z/microsoft_what...
Maybe if you expected everyone to copy-paste the info into the form? That might work
I mean, what's the point of their SSO if you're just going to need to verify it with an email code anyways?
This is how I found out quite a few scams (apart from the obvious ones with improper wording or visual formatting; those are bad on purpose, so they catch only the most unskilled or gullible, i.e. your grandma).
Phone/laptop based biometrics?
You really do fully own and control your identity, and if you botch it and lose your top level keys, no one else can give you a "forgot password" recovery.
If this level of unforgiveness were dropped onto everyone overnight, it would mean infinite lost life savings and houses and just mass chaos.
Still, I think it would be the better world if that were somehow actually adopted. The responsibility problem would be no problem if it were simply the understood norm all along that you have this super important thing and here is how you handle it so you don't lose your house and life savings, etc.
If you grew up with this fact of life and so did everyone else, it would be no problem at all. If it had been developed and adopted at the dawn of computers, so that you learned this right along with learning what a computer was in the first place, no problem. It's only a problem now that there are already 8 billion people all using computer-backed services without ever having to worry about anything before.
The real reason it's never gonna happen is exactly because it delivers on the most important promise of end user ultimate agency and actual security.
No company can own it, or own end users' use of it. It cannot be used for vendor lock-in or data collection or profiling or government back doors or censorship or discrimination or any of the things that holding someone's password, or the entire auth technology, can be used for to gain control over users.
No (large) company nor any government has any interest in that, and it's way too technical for 99.99% of people to understand the problems with all the other popular auth systems so there will be no overwhelming popular uprising forcing the issue, and so it will never happen.
A method already exists (I think), that solves the hard problems and delivers the thing everyone says they want, and everything else claims to be groping for, but we will never get to use it.
If I want to use a passkey on my phone, I have to bio authenticate into it. Similarly, with Windows Hello as a passkey provider, via my camera scanner. It works well and is pretty seamless, all things considered. I prefer it to the email/code/magic link method.
Public/private keys with a second factor (like biometrics) as identity I think is a good option. A way to announce who you are, without actually revealing your identity (or your email address).
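To sketch the idea (this is not how any particular passkey provider implements it, just the bare challenge-response core): the service only ever stores your public key plus a random challenge, and your device proves who you are by signing the challenge with a private key that never leaves it, unlocked locally by the biometric.

    # Minimal sketch of challenge-response auth with a key pair (illustrative only;
    # real passkeys/WebAuthn add origin binding, counters, attestation, etc.).
    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Enrollment: the device generates a key pair; only the public key goes to the server.
    device_private_key = ed25519.Ed25519PrivateKey.generate()   # stays on the device
    server_stored_public_key = device_private_key.public_key()  # what the service keeps

    # Login: the server issues a random challenge...
    challenge = os.urandom(32)

    # ...the device signs it after a local biometric unlock (not modeled here)...
    signature = device_private_key.sign(challenge)

    # ...and the server verifies the signature against the stored public key.
    try:
        server_stored_public_key.verify(signature, challenge)
        print("authenticated: signature matches the registered public key")
    except InvalidSignature:
        print("rejected")

The service learns nothing beyond "the holder of this key is back", which is what makes the pseudonymous-verification angle below possible.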
Tbh that's how all the age verification crap should work too for the countries that want to go down that road instead of having people upload a copy of their actual ID to some random service that is 100% guaranteed going to get breached and leaked.
We need pseudonymous verification.
well, no wonder they’re after you as a demographic.
But he shrugged it off.
I bet there are quite a few shops online that may sell gift cards that are used in money laundering schemes. Bonus points if they accept bitcoin.
But those are all quite implicitly used by cybercrime. I can imagine there are quite a few tools at their disposal that are much more explicit.
I was involved in probably 15 operations with them while I was there. They would usually get C&C within six hours, every single time it was phishing lol.
But if we're holding users accountable because 1 out of every 100 clicks a link in a phishing email like clockwork, we're bad at both statistics and security.
Who is making money off of selling premium software, that's not marketed as for cybercrime, to non-governmental attackers? Wouldn't the attackers just pirate it?
> Wouldn't the attackers just pirate it?
Sometimes the software is SaaS (yes, even crimeware is SaaS now). In other cases, it has heavy DRM. Besides that, attackers often want regular updates to avoid things like antivirus detections.
Did you have bulletproof hosting and they caught you through other means, like going after your payment providers, or did you make opsec mistakes, or how exactly?
Was it a website like Sportsurge that simply linked to streams, or did it actually host the streams?
Do you mean they thought the scanner was effective and weren't fond of it because it disrupted their business? Or do you mean they had a low opinion of the scanner because it was ineffective?
explain
> We are sorry. We regret that this incident has caused worry for our partners and people. We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators. We are fully committed to maintaining your trust.
I know there will be a bunch of cynics who say that an LLM or a PR crisis team wrote this post... but if they did, hats off. It is powerful and moving. This guy really falls on his sword / takes it on the chin.

> Like, how many other deprecated third party systems were identified handling a significant portion of your customer data after this hack?
The problem with that is that you'll never know. Because you'd have to audit each and every service provider and I think only Ebay does that. And they're not exactly a paragon of virtue either.
> Who declined to allocate the necessary budget to keep systems updated?
See: prevention paradox. Until this sinks in it will happen over and over again.
> But mere words like these are absolutely meaningless in today's world. People are right to dismiss them.
Again, yes, but: they are at least attempting to use the right words. Now they need to follow them up with the right actions.
Right! But wouldn't a more appropriate approach be to mitigate the damage from being hacked as much as possible in the first place? Perhaps this starts by simplifying bloated systems, reducing data collection to only the data that is absolutely legally necessary for KYC and financial transactions in whatever respective country(ies) the service operates in, hammer-testing databases for old tricks that seem to have been forgotten about in a landscape of hacks of ever-increasing complexity, etc.
Maybe it's the dad in me, years of telling my son not to apologize, but to avoid the behavior that causes the problem in the first place. Bad things happen, and we all screw up from time to time, that is a fact of life, but a little forethought and consideration about the best or safest way to do a thing is a great way to shrink the blast area of any surprise bombs that go off.
As a controls tech, I provide a lot of documentation and teach our customers how to deploy, operate and maintain a machine for the best possible results with the lowest risk to production or human safety. Some clients follow my instruction, some do not. Guess which ones end up getting billed most for my time after they've implemented a product we make.
Too often, we want to just do without thinking. This often causes us to overlook critical points of failure.
Even so, we still need to keep an eye out. A couple of days ago, an old account (not quite a year old) started spewing connection requests to all the app users. It had been a legit account, so I have to assume it was pwned. We deleted it quickly.
A lot of our monitoring is done manually, and carefully. We have extremely strict privacy rules, and that actually makes security monitoring a bit more difficult.
Such data is a liability, not an asset, and if you dispose of it as soon as you reasonably can, that's good. If this is a communications service, consider saving a hash of the ID and refusing new sign-ups with that same ID, because if the data gets deleted then someone could re-sign-up with someone else's old account. But if you keep a copy of the hash around, you can check whether an account has ever existed and refuse registration if that's the case.
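A minimal sketch of that tombstone idea, assuming a keyed hash (HMAC with a server-side pepper rather than a bare hash, so a leaked tombstone list can't be brute-forced against candidate IDs); all names here are illustrative:

    # Keep only a keyed hash of deleted-account IDs so re-registration with a
    # previously used ID can be refused without retaining the ID itself.
    import hashlib
    import hmac

    PEPPER = b"server-side-secret-kept-outside-the-database"  # illustrative
    former_account_hashes: set[str] = set()  # persisted tombstones, hashes only

    def _tombstone(user_id: str) -> str:
        normalized = user_id.strip().lower()
        return hmac.new(PEPPER, normalized.encode(), hashlib.sha256).hexdigest()

    def delete_account(user_id: str) -> None:
        # ...delete all the real account data elsewhere...
        former_account_hashes.add(_tombstone(user_id))

    def can_register(user_id: str) -> bool:
        # Refuse IDs that have ever existed, even though we no longer store them.
        return _tombstone(user_id) not in former_account_hashes

    delete_account("alice@example.com")
    print(can_register("alice@example.com"))  # False: this ID existed before
    print(can_register("bob@example.com"))    # True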
It's important that "delete all my information" also deletes everything after the user logs in for the first time.
Also, I'm not sure that Apple would allow it. They insist that deletion remove all traces of the user. As far as I know, there's no legal mandate to retain anything, and the nature of our demographic means that folks could be hurt badly by leaks.
So we retain as little information as possible (even if that makes it more difficult for us to administer), and destroy everything when we delete.
The risk you have here is one of account re-use, and the method I'm suggesting allows you to close that hole in your armor which could in turn be used to impersonate people whose accounts have been removed at their request. This is comparable to not being able to re-use a phone number once it is returned to the pool (and these are usually re-allocated after a while because they are a scarce resource, which ordinary user ids are not).
Nah, but I understand the error. Not a big deal.
We. Just. Plain. Don't. Keep. Any. Data. Not. Immediately. Relevant. To. The. App.
Any bad actor can easily register a throwaway, and there's no way to prevent that, without storing some seriously dangerous data, so we don't even try.
It hasn't been an issue. The incident that I mentioned, is the only one we've ever had, and I nuked it in five minutes. Even if a baddie gets in, they won't be able to do much, because we store so little data. This person would have found all those connections to be next to useless, even if I hadn't stopped them.
I'm a really cynical bastard, and I have spent my entire adult life, rubbing elbows with some of the nastiest folks on Earth. I have a fairly good handle on "thinking like a baddie."
It's very important that people who may even be somewhat inimical to our community, be allowed to register accounts. It's a way of accessing extremely important resources.
> Some clients follow my instruction, some do not.
So you’re telling me you design a non-foolproof system?!? Why isn’t it fully automated to prevent any potential pitfalls?
What an odd thing to teach a child. If you've wronged someone, avoiding the behavior in future is something that'll help you, but does sweet fuck all for the person you just wronged. They still deserve an apology.
But yes, even if you try to strike a healthy balance, there are still plenty of times when an apology is appropriate and will go a long way, for the giver and receiver, in my opinion anyway.
I did not mean to come off as teaching my kid to never apologize.
It’s five-whys-style root cause analysis, which will build a person who causes less harm to others.
I am willing to believe that the same parent also teaches when and why it is sometimes right to apologize.
But of course, apologizing when you have definitely wronged a person is important, too. I didn't mean to come off as teaching my kid to never apologize, just think before you act. But you get the idea.
I don’t think I agree with this at all. Screwing up is, by far, the most impactful thing that can minimize the future blast radius.
Common sense, wisdom, and pain cannot be communicated very well. Much more effective if experienced. Like trying to explain “white as snow” to someone who’s never seen snow. You might say “white as coconut” but that doesn’t help them know about snow. Understanding this opens up a lot more grace and patience with kids.
Most often when we tell our kids, ”you know better”, it’s not true. We know better, only because we screwed it up 100 times before and felt the pain.
No amount of “think about the consequences of your actions” is going to prevent them from slipping on the ice, when they’ve never walked on the ice before.
<rolls eyes>
I feel like most of these people will never be senior managers at a tech company because they will "go broke" trying to prevent every last mistake, instead of creating a beautiful product that customers are desperate to buy! My father once said to me as a young person: "Don't insure yourself 'to death' (bankruptcy)." To say: You need to take some risk in life as a person, especially in business. To be clear: I am not advocating that business people be lazy about computer security. Rather, there is a reasonable limit to their efforts.
You wrote:
> Everybody gets hacked, sooner or later.
I mostly agree. However, I do not understand how GMail is not hacked more often. Literally, I have not changed my Google password in ~10 years, and my GMail is still untouched. (Falls on sword...) How do they do it? Honestly: No trolling with my question! Does Google get hacked but they keep it a secret? They must be the target of near-constant "nation state"-level hacking programmes.

The flip side of this is how many people are wrongly locked out of their gmail. I bet there's quite a few of them that failed to satisfy whatever filters Google put in place.
To begin with, they have a culture of not following "industry standards".
(For the reason that the industry never had this scale yet)
But in the real world, you have words, i.e. commitment, before actions and a conclusion.
Best of luck to them.
Name five.
Having a minimal attack surface and not being actively targeted is a meaningful advantage here.
And there's also a decent chance they have. Did we not just have a years long spate of ransomware targeting small businesses?
- Amazon
- Meta
That said...
We do our very best. But I don't know anyone here who would say "it can never happen". Security is never an absolute. The best processes and technology will lower the likelihood and impact towards 0, but never to 0. Viewed from that angle, it's not if Amazon will be hacked, it's when and to what extent. It is my sincere hope that if we have an incident, we rise up to the moment with transparency and humility. I believe that's what most of us are looking for during and after an incident has occurred.
To our customers: Do your best, but have a plan for what you're going to do when it happens. Incidents like this one here from checkout.com can show examples of some positive actions that can be taken.
Exactly. I think it is great for people like you to inject some more realistic expectations into discussions like these.
An entity like Amazon is not - in the longer term - going to escape fate, but they have more budget and (usually) much better internal practices which rule out the kind of thing that would bring down a lesser org. But in the end it is all about the budget, as long as Amazon's budget is significantly larger than the attackers they will probably manage to stay ahead. But if they ever get complacent or start economizing on security then the odds change very rapidly. Your very realistic stance is one of the reasons it hasn't happened yet, you are acutely aware you are in spite of all of your efforts still at risk.
Blast radius reduction by removing data you no longer need (and that includes the marketing department, who more often than not are the real culprit) is a good first step towards more realistic expectations for any org.
https://www.reuters.com/article/technology/exclusive-apple-m...
Facebook was also hacked in 2018. A vulnerability in the website allowed attackers to steal the API keys for 50 million accounts:
The Chinese got into gmail (Google) essentially on a whim to get David Petraeus' emails to his mistress. Ended his career, basically.
I'd bet my hat that all 3 are definitely penetrated and have been off and on for a while -- they just don't disclose it.
source: in security at big orgs
Disclosure: I work at Google but have no internal knowledge about whether Petraeus was related to Operation Aurora.
Considering the number of Chinese nationals who work for them at various levels... of course they're all penetrated. How could that possibly fail to be true?
This is what incident handling by a trustworthy provider looks like.
https://cloud.google.com/blog/topics/threat-intelligence/voi...
https://www.forbes.com/sites/daveywinder/2025/08/09/google-c...
The hackers called employees/contractors at Google (& lots of other large companies) with user access to the company's Salesforce instance and tricked them into authorizing API access for the hackers' machine.
It's the same as loading Apple TV on your Roku despite not having a subscription and then calling your neighbor who does have an account and tricking them into entering the 5 digit code at link.apple.com
Continuing with your analogy, they didn't break into the off-site storage unit so much as they tricked someone into giving them a key.
There's no security vulnerability in Google/Salesforce or your apartment/storage per se, but a lapse in security training for employees/contractors can be the functional equivalent to a zero-day vulnerability.
Disclosure: I work at Google, but don't have much knowledge about this case.
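To make the mechanics concrete, here is a toy, self-contained simulation of the general "enter this short code to approve a device" pattern being abused; it is not the actual Salesforce or Apple flow, and all names are made up:

    # Toy device-code style authorization flow, showing why "just enter this code"
    # can silently grant API access to someone else's machine.
    import secrets

    class AuthServer:
        def __init__(self):
            self.pending = {}  # user_code -> pending device authorization

        def start_device_flow(self):
            # The attacker's machine asks for a code...
            user_code = f"{secrets.randbelow(100000):05d}"
            self.pending[user_code] = {"approved": False, "token": None}
            return user_code

        def approve(self, user_code, victim_account):
            # ...and the victim, talked through it over the phone, "verifies" the
            # code while logged in as themselves, approving the waiting device.
            entry = self.pending[user_code]
            entry["approved"] = True
            entry["token"] = f"api-token-for-{victim_account}-{secrets.token_hex(8)}"

        def poll(self, user_code):
            entry = self.pending[user_code]
            return entry["token"] if entry["approved"] else None

    auth = AuthServer()
    code = auth.start_device_flow()           # attacker: "please read me the code you see"
    print("victim is asked to enter:", code)
    auth.approve(code, "victim@example.com")  # victim enters/approves the code
    print("attacker now holds:", auth.poll(code))  # API access under the victim's account

Nothing in that exchange is a software vulnerability; the victim performed a perfectly legitimate approval, just on behalf of the wrong party.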
All of these companies have been hacked by nation states like Russia and China.
They too will get hacked, if it hasn't happened already.
We also have to remember that we have collectively decided to use Windows and AD, QA-tested software, etc. (some examples) over correct software, hardened-by-default settings, etc.
Here, Checkout has been the victim of a crime, just as much as their impacted customers. It’s a loss for everyone involved except the perpetrators. Using words like “betrayed”, as if Checkout wilfully misled its customers, is a heavy accusation to level.
At a point, all you can do is apologise, offer compensation if possible, and plot out how you’re going to prevent it going forward.
I totally agree – You've covered the 3 most important things to do here: Apologize; make it right; sufficiently explain in detail to customers how you'll prevent recurrences.
After reading the post, I see the 1st of 3. To their credit, most companies don't get that far, so thanks, Checkout.com. Now keep going, 2 tasks left to do and be totally transparent about.
Every additional nine of not getting hacked takes effort. Getting to 100% takes infinite effort i.e. is impossible. Trying to achieve the impossible will make you spin on the spot chasing ever more obscure solutions.
As soon as you understand a potential solution enough to implement it you also understand that it cannot achieve the impossible. If you keep insisting on achieving the impossible you have to abandon this potential solution and pin your hope on something you don't understand yet. And so the cycle repeats.
It is good to hold people accountable but only demand the impossible from those you want to go crazy.
As AI tools accelerate hacking capabilities, at what point do we seriously start going after the attackers across borders and stop blaming the victimized businesses?
We solved this in the past. Let’s say you ran a brick-and-mortar business, and even though you secured your sensitive customer paperwork in a locked safe (which most probably didn’t), someone broke into the building and cracked the safe with industrial-grade drilling equipment.
You would rightly focus your ire and efforts on the perpetrators, and not say ”gahhh what an evil dumb business, you didn’t think to install a safe of at least 1 meter thick titanium to protect against industrial grade drilling!????”
If we want to have nice things going forward, the solution is going to have to involve much more aggressive cybercrime enforcement globally. If 100,000 North Koreans landed on the shores of Los Angeles and began looting en masse, the solution would not be to have everybody build medieval stone fortresses around their homes.
Indeed, an apology is bad and no apology is also bad. In fact, all things are bad. Haha! Absolutely prime.
In terms of "downplaying" it seems like they are pretty concrete in sharing the blast radius. If less than 25% of users were affected, how else should they phrase this? They do say that this was data used for onboarding merchants that was on a system that was used in the past and is no longer used.
I am as annoyed by companies sugar coating responses, but here the response sounds refreshingly concrete and more genuine than most.
We are truly sorry for the impact this has no doubt caused on our customers and partners businesses. This clearly should never have happened, and we take full responsibility.
Whilst we can never put into words how deeply sorry we are, we will work tirelessly to make this right with each and every one of you, starting with a full account of what transpired, and the steps we are going to be taking immediately to ensure nothing like this can ever happen again.
We want to work directly with you to help minimise the impact on you, and will be reaching out to every customer directly to understand their immediate needs. If that means helping you migrate away to another platform, then so be it - we will assist in any way we can. Trust should be earned, and we completely understand that in this instance your trust in us has understandably been shaken.
> Whilst we can never put into words how deeply sorry we are
To my European ears that comes across as hyperbolic and insincere but maybe it’s fine for an American audience. These things are very culture-dependent.
"A quarter of user accounts were affected. We have calculated that to be 7% of our customers."
"We regret that we neglected our security to such degree that it has caused this incident."
It's very simple. Don't be sorry I feel bad, be sorry you did bad.
> This was our mistake, and we take full responsibility.
I wonder how much of the negative sentiment about this is from a knee jerk reaction and careless reading vs. thoughtful commentary.
In my country, this debate is being held WRT the atrocities my country committed in its (former) colonies, and towards enslaved humans¹. Our king and prime minister never truly "apologized". Because, I kid you not, the government fears that this opens up possibilities for financial reparation or compensation and the government doesn't want to pay this. They basically searched for the words that sound as close to apologies as possible, but aren't words that require one to act on the apologies.
¹ I'm talking about The Netherlands. Where such atrocities were committed as close as one and a half generations ago still (1949) (https://www.maastrichtuniversity.nl/blog/2022/10/how-do-dutc...) but mostly during what is still called "The Golden Age".
Letting business concerns trump human empathy is exactly the damn problem and exactly why these companies still deserve immense ire no matter how they word their "We don't want to admit fault but we want you to think we care" press release. This is also true of something like the Dutch crown, or the USA having tons of people being extremely upset at the suggestion of teaching kids what the US has actually done in its history.
That preceding line makes it, to me, a real apology. They admit fault.
Because these things take time, while you need to disclose that something happened as fast as possible to your customers (in the EU, you are mandated by the GDPR, for instance).
We are fully committed to rebuilding your trust.
"We will pay $500,000 to anyone who can provide information leading to the arrest and conviction of the perpetrators. If the perpetrators can be clearly identified but are not in a country which extradites to or from the United States, we will pay $500,000 for their heads."
Your recourse within US law is to petition the government to do something about it. Negotiate extradition. Go to war. Etc.
Hey Donnie, these guys are "Venezuelan drug traffickers"
One places the company at the center as the important point of reference, avoiding some responsibility. The other places the customer at the center, taking responsibility.
- timely response
- initial disclosure by company and not third party
- actual expression of shame and remorse
- a decent explanation of target/scope
I could imagine being cynical about the statement, but look at other companies who have gotten breached in the past. Very few of them do well on all points.
For that level of breach their response seems about right to me, especially waving the money in ShinyHunters' face before giving it away to their enemies.
Timely in what way? Seems they didn't discover the hack themselves, didn't discover it until the hackers themselves reached out last week, and today we're seeing them acknowledging it. I'm not sure anything here could be described as "timely".
If I build a house of cards in a week, that's way longer than the average house of cards takes, and it would not be fair to call it "timely".
In a world where most companies report breaches months after the fact, yes, I think "last week we found out about it and we're now confirming it" is fair. You need to work with Law Enforcement, you need to confirm the validity of the data and the hackers' claims, and that the data they are ransoming is all they actually took. You need to check the severity of the data they took. Was it user/passes? Was there any trademarked process, IP, or sensitive info? You need to ensure the threat actor is removed from your environment, and that the hole they got in with is closed.
If you choose to pay the ransom, you may need to work even closer with LE to ensure you don't get flagged for aiding and funding criminals.
With them choosing not to pay, I'm sure they still need to clear that with legal. Finance needs to be on board. Can you actually call it a charitable donation for a tax write-off if it's under this sort of duress? (And I'd assume there are other sorts of questions a SysAdmin can't be expected to come up with examples for.)
While ALL of this is happening, you can't announce your actions. You can't put out a PR until you know for sure you were compromised, what the scope was, and that any persistence has been removed.
To borrow from a different context: if eating meat every day is being an evil animal abuser, and being vegetarian but liking cheese sauce on your pasta is also being an evil animal abuser, why should anyone consider eating less meat?
Warning: not very well thought-out generalisation ahead
We need to be able to express nuance, otherwise everything turns into a shitshow like, for example, the current state of political and social discourse. Americans will vote for privatisation because public healthcare is "literally communism" and "communism is the devil". Twitter users will vote for white supremacists because they get called "literal nazis" for the big nose jokes they occasionally make.
From customer perspective “in an effort to reduce the likelihood of this data becoming widely available, we’ve paid the ransom” is probably better, even if some people will not like it.
Also to really be transparent it’d be good to post a detailed postmortem along with audit results detailing other problems they (most likely) discovered.
It’s a sliding scale, where payment firmly pushes you in the more comfortable direction.
Also, the uncomfortable truth is that ransomware payments are very common. Not paying will make essentially no difference, the business would probably still be incredibly lucrative even if payment rates dropped to 5% of what they are now.
If there was global co-operation to outlaw ransom payments, that’d be great. Until then, individual companies refusing to pay is largely pointless.
No, it pushes you in a more comfortable direction, and I'm not you.
If your company gets hit by one of these groups and you want to protect your customers, paying is almost always the most effective way to do that. Someone who isn’t particularly interested in protecting their customers probably wouldn’t pay if the damage from not paying would be lower than the cost of paying.
A third possibility is that you simply feel uncomfortable about paying, which is fine, but it isn’t a particularly rational basis for the decision.
I think we can also fairly assume that the vast majority of people have no strong feelings about ransomware, so there’s likely going to be no meaningful reputational damage caused by paying.
The extortionist knows they cannot prove they destroyed the data, so they will eventually sell it anyway.
They will maybe hold off for a bit to prove their "reputation" or "legitimacy". Just don't pay.
The ransom payments tend to be so big anyway that selling the data and associated reputational damage is most likely not worth the hassle.
Basic game theory shows that the best course of action for any ransomware group with multiple victims is to act honestly. You can never be sure, but the incentives are there and they’re pretty obvious.
The big groups are making in the neighbourhood of $billions, earning extra millions by sabotaging their main source of revenue seems ridiculous.
However they don’t really need to because there are plenty of documented cases, and the incident response company you hire will almost certainly have prior knowledge of the group you’re forced to deal with.
If they had a history of fucking over their “customers”, the IR team you hired would know and presumably advise against paying.
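Back-of-the-envelope, the incentive argument looks something like this (all numbers are hypothetical, chosen only to show the shape of the trade-off, not real figures for any group):

    # Hypothetical numbers: why a one-off data sale is dwarfed by the reputational
    # cost of future victims refusing to pay.
    paying_victims_per_year = 200
    avg_ransom = 2_000_000          # USD
    one_off_data_sale = 100_000     # what a single stolen database might fetch

    annual_ransom_revenue = paying_victims_per_year * avg_ransom

    # If word gets out that paying doesn't stop leaks, some fraction of future
    # victims stop paying. Even a small drop outweighs the one-off sale.
    for payment_rate_drop in (0.01, 0.05, 0.10):
        lost = annual_ransom_revenue * payment_rate_drop
        print(f"{payment_rate_drop:.0%} fewer payers -> "
              f"${lost:,.0f} lost vs ${one_off_data_sale:,.0f} gained")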
Whoa. You're a crime organization. The data may as well "leak" the same way it leaked out of your victim's "reputable" system.
Yes, the data might still leak. It’s absurd to suggest that it’s not less likely to leak if you pay.
There’s a reason why businesses very frequently arrive at the conclusion that it’s better to pay, and it’s not because they’re stupid or malicious. They actually have money on the line too, unlike almost everyone who would criticise them for paying.
Until there is legislation to stop these payments, there will be countless situations where paying is simply the best option.
Paying the ransom is not exactly legal, is it? Surely the attackers don't provide you with a legitimate invoice for your accounting. As a company you cannot just buy a large amount of crypto and randomly send it to someone.
> As a company you cannot just buy a large amount of crypto and randomly send it to someone.
You can totally do that, why wouldn’t you be able to?
Sure, in the US, you want to have those things to prove your expenses to the IRS, but it’s all pretty freeform. You could just document the ransomware payment process with screenshots, for example.
Besides, if you ask, I’m sure the ransomware group will send you a very professional-looking invoice and receipt.
Normally, you’d be going through an IR company anyway, who would invoice you and handle the payment process on your behalf.
They hire a third party, sometimes their cyber insurance provider, to "cleanup" the ransomware. That third party then pays another third party who is often located in a region of the world with lax laws to perform the negotiations.
At the end of the day nobody breaks any laws and the criminals get paid.
And selling the data from companies like Checkout.com is generally still worth a decent amount, even if nowhere close to the bigger ransom payments.
It’s not great, but it’s the least shitty option.
This is like falling victim to a scam and paying more on top of it because the scammers promised to return the money if you pay a bit more.
I see no likelihood game to be played there because you can't trust criminals by default. Thinking otherwise is just naive and wishful. Your data is out in the wild, nothing you can do about that. As soon as you accept that the better are your chances to do damage reduction.
Picking up hundreds of thousands at best (very few databases would be worth so much) when your main business pays millions or tens of millions per victim simply isn’t worth it, selling the data would jeopardise their main business which is orders of magnitude more profitable.
Absolutely no IR company will advise their clients to pay if the particular ransomware group is known to renege on their promises.
Still, it's illegal or quite bureaucratic in some places to pay up.
And idk... It still feels like these ransom groups could well sit on the data a while, collect data from other attacks, shuffle, shard and share these databases, and then sell the data in a way that is hard to trace back to the particular incident and to a particular group, so they get away with getting the ransom money and then selling the db later.
It's also not granted that even with the decrypt tools you'd be able to easily recover data at scale given how janky these tools are.
I don't know. I am less sure now than I was before about this, but I feel like it's the correct move not to pay up and fund the group that struck you, only so it can strike others, and also risk legal litigations.
I can’t think of anywhere it would be illegal, but the bureaucracy is usually handled by the incident response company who are experts at managing these processes.
> It's also not granted that even with the decrypt tools you'd be able to easily recover data at scale given how janky these tools are
Most IR companies have their own decryption tools for this exact purpose; they’ve reversed the ransomware groups’ decryptors and plugged the relevant algos into their own, much less janky tools.
> And idk... It still feels like these ransom groups could well sit on the data a while, collect data from other attacks, shuffle, shard and share these databases, and then sell the data in a way that is hard to trace back to the particular incident and to a particular group, so they get away with getting the ransom money and then selling the db later
Very few databases will be worth even $100k; ransoms tend to run in the millions and sometimes tens of millions. There have been individual payments of over $30M. Selling the data just isn’t worth it, even if you could get away with it without sabotaging your main business. It’d be like getting a second job as a gas station attendant while working for big tech in SF: possible, but ridiculous.
> I don't know. I am less sure now than I was before about this, but I feel like it's the correct move not to pay up and fund the group that struck you, only so it can strike others, and also risk legal litigations.
The UK government even has a website where they basically say “yeah we understand you might need to make a payment to a sanctioned ransomware group, it’s totally fine if you tell us”. The governments accept that these payments are necessary, to the point that they’ll promise non-enforcement of sanctions. I can’t think of anywhere you’d really be risking legal repercussions if you have some reasonable IR company guiding you through the process.
I totally get the concern about funding these groups, but unfortunately the payments are so common at this point (the governments even publish guidelines! That common) that it simply doesn’t make a difference if a few companies refuse to pay.
The cost of an attack like this is in the thousands of dollars at most, the ransom payments tend to be in the millions. The economics of not paying just don’t add up in the current situation.
You could very well be making a payment to a sanctioned individual or country, or a terrorist organization etc.
For example the UK government publishes guidelines on how to do this and which mitigating circumstances they consider if you do end up making a payment to a sanctioned entity anyway https://www.gov.uk/government/publications/financial-sanctio...
They directly state as follows:
> An investigation by the NCA is very unlikely to be commenced into a ransomware victim, or those involved in the facilitation of the victim’s payment, who have proactively engaged with the relevant bodies as set out in the mitigating factors above
i.e you’re not even going to be investigated unless you try to cover things up.
This is a solved problem, big companies with big legal departments make large ransomware payments every day. Big incident response companies have teams of negotiators to work through the process of paying, and to get the best possible price.
The problem cannot be helped by research against cybercrime. Proper practices for protection are well established and known; they just need to be implemented.
The amount donated should rather have been invested into better protections / hiring a responsible person at the company.
(Context: the hack happened on a not properly decommissioned legacy system.)
I see it more as a middle finger to the perps: “look, we can afford to pay, here, see us pay that amount elsewhere, but you aren't getting it”. It isn't signalling virtue as much as it is signalling “fuck you and your ransom demands” in the hope that this will mark them as not an easy target for that sort of thing in future.
For customers it signals sincerity and may help dampen outrage in their follow up dealings.
It's also a term you can use against political opponents because it's much easier to speak well than to actually do good.
Refusing to negotiate with criminals and helping fund security seems like the proper long-term reaction for everyone.
Yes, there are negative externalities in funding ransomware operations, but not paying is still much more likely to hurt your customers than paying.
Besides, if they were genuinely interested in positive externalities they would be spending the money lobbying for a ransomware payments ban and not donating to universities.
You send them the payment, they tell you they deleted the data, but they also sell the data to 10 other customers over the dark-web.
Why would you ever trust people who are inherently untrustworthy and who are trying to screw you? While also encouraging further ransomware crimes in the future.
If you don’t pay, the odds they will publish your data are closer to 100%. If you do pay, the odds have historically been much closer to 0% than 100%
You aren’t paying to be sure, but to improve your chances.
Making it illegal to pay ransom is likely a much easier to implement and more effective solution.
And this isn’t virtue signaling - they literally did the virtuous thing that is better for society at the expense of their bottom line. That is just virtue.
The point here is that this is an expensive virtue signal. Although, it would be more effective if we knew how expensive it was.
Endpoint security is a well-known open problem for which no sufficient practices and protections exist.
In French we call that a "pied de nez" (thumbing your nose at someone). "Turning the tables" / "Poetic justice" / "Adding insult to injury" would all be more correct than "virtue signalling".
If there was no attacker and the company gave half a mil out of nowhere to a security company (or a charity) and boasted publicly about it, that would be virtue signalling.
But refusing to pay the ransom and giving the exact same amount to security researchers is just a big, giant, middle finger.
And a middle finger ain't no virtue signalling.
Or just properly follow best practice, and their own procedures, internally.⁰
That was the failing here, which in an unusual act of honesty they are taking responsibility for in this matter.
--------
[0] That might be considered paying for security, indirectly, as it means having the resources available to make sure these things are done, and tracked so it can be proven they are done, making slips difficult to happen and easy to track & hopefully rectify when they inevitably still do.
I think the answer is ok but the "third-party" bit reads like trying to deflect part of the blame on the cloud storage provider.
Oftentimes it would have been easier to rebuild the whole project than to try to upgrade 5-6 year old dependencies.
Ultimately the companies do not care about these kinds of incidents. They say sorry, everyone laughs at them for a week, and then it's business as usual, with that one thing fixed and still rolling legacy stuff for everything else.
All work created by a company decays, it's legacy code within months.
Sure buddy, sure
They failed so damn bad and it's hilariously bad and I feel awful for the somewhat competent coworker who was stuck on that team and dealt with how awful it was.
Then we fired most of that team like 3 times because of how value negative they have been.
Then my coworker and I rebuilt it in java in 2 months. It is 100x faster, has almost no bugs, accidentally avoided tons of data management bugs that plague the python version (because java can't have those problems the way we wrote it) and I built us tooling to achieve bug for bug compatibility (using trivial to patch out helpers), and it is trivially scalable but doesn't need to because it's so much faster and uses way less memory.
If the people in charge of a project are fucking incompetent, yeah, nothing good will ever happen. But if you have even semi-competent people under reasonable management (neither of us is even close to a rockstar) and the system you are trying to rewrite has obvious known flaws, plenty of times you will build a better system.
it was the ORM and the queries themselves
I can imagine that in a team that might be harder, but these are glorified todo apps. I am well aware that complete rebuilds rarely work out.
To me it seems most likely that this is data collected during the KYC process during onboarding, meaning company documents, director passport or ID card scans, those kind of things. So the risk here for at least a few more years until all identity documents have expired is identity theft possibilities (e.g. fraudsters registering their company with another PSP using the stolen documents and then processing fraudulent payments until they get shut down, or signing up for bank accounts using their info and tax id).
Essentially nobody checks the validity of document numbers, there’s rarely any automated mechanism to do this. You could just photoshop the expiry dates on the documents and use them for years and years, even if document designs changed you could just transplant the info from the old document into a new template.
So no, documents expiring does mostly nothing to alleviate identity theft risks in most of the world.
And anyway, targeted phishing attacks are of much much higher severity than identity theft. From this data you can probably gather everything you’d need to perform rather high quality phishing attacks against the bank accounts of checkout.com clients, easily causing tens or hundreds of millions of losses that would never be recovered.
If you read between the lines of the verbiage here, it looks like a general archived dropbox of stuff like PDF documents which the onboarding team used.
Since GDPR etc., items like passports, driving license data, etc., have been kept in far more secure areas that low-level staff (e.g. people doing merchant onboarding) won't have easy access to.
I could be wrong but I would be fairly surprised if JPGs of passports were kept alongside docx files of merchant onboarding questionnaires.
How do you qualify this statement? Did you mean “should never”? Even then, you’re likely overstating things. Nothing prevents co-locating KYC/KYB information. On the contrary, most businesses conducting KYB are required to conduct UBO and they’re trained to combine them both. Register as a director/officer with any FSI in North America and you’ll see.
Couple of years ago I accidentally stumbled upon an open folder a fairly big Scandinavian bank was using to store tens of thousands of passport/id scans
Why would merchants fill out docx files? They would submit an online form with their business, director and UBO details, that data would be stored in the Checkout.com merchants database, and any supporting documents like passport scans would be stored in a cloud storage system, just like the one that got hacked.
If it was just some internal PDFs used by the onboarding team, probably they wouldn't make such a big announcement.
Every country you operate in has different rules and regulations and you have to integrate with many third party systems as well as governmental entities etc, and sometimes you have to do really really technically backwards things.
Some integrations I remember were stuff like cron jobs sending CSV files via FTP which were automatically picked up.
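For the uninitiated, a minimal sketch of what that kind of integration tends to look like; the host, credentials, schedule and columns here are made-up placeholders:

    # "CSV over FTP on a cron schedule" style integration (illustrative only).
    # Typically run from cron, e.g.:  0 2 * * * /usr/bin/python3 upload_settlements.py
    import csv
    import io
    from datetime import date
    from ftplib import FTP

    settlements = [
        {"merchant_id": "M-1001", "amount": "125.40", "currency": "EUR"},
        {"merchant_id": "M-1002", "amount": "980.00", "currency": "EUR"},
    ]

    # Build the CSV in memory in the exact column order the counterparty expects.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["merchant_id", "amount", "currency"])
    writer.writeheader()
    writer.writerows(settlements)

    # Drop it into the partner's FTP folder, where their side polls for new files.
    filename = f"settlements_{date.today():%Y%m%d}.csv"
    with FTP("ftp.partner.example") as ftp:          # placeholder host
        ftp.login(user="acme", passwd="change-me")   # placeholder credentials
        ftp.cwd("/incoming")
        ftp.storbinary(f"STOR {filename}", io.BytesIO(buf.getvalue().encode()))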
The sheer amount of effectively useless bingo sheets with highly detailed business (and process) information boggles the mind.
Some time ago I alluded to existence and proliferation of these questionnaires in another context: https://bostik.iki.fi/aivoituksia/random/crowdstrike-outage-...
I can't quite work out who they donated to - it seems there are a number of Oxford Uni cybersec/infosec units. Any idea which one?
"Cyber Security Oxford is a community of researchers and experts working under the umbrella of the University of Oxford’s Academic Centre of Excellence in Cyber Security Research (ACE-CSR)."
I don't think it's https://www.infosec.ox.ac.uk/
There's also this AI security research lab, https://lasr.plexal.com/
It looks like Oxford are quite busy in this space.
This sort of data is generally treated very differently to the actual PANs and payment information (which are highly encrypted using HSMs).
So it's obviously shitty to get hacked, but if it was just KYB (or KYC) type information, it's not harming any individuals. A lot of KYB information is public (depending on country).
Fair play on them for being open about this.
(If not, why not?)
(Imho, it would make sense if only the state can pay ransoms)
Why not? Legislators haven’t caught up yet, and banning ransom payments would likely cause some very uncomfortable situations.
This of course raises some pretty uncomfortable questions, should ransom payments in kidnapping cases be banned too? That would presumably cost actual human lives.
A more pressing issue is that banning ransom payments might dissuade ransomware, but wouldn’t affect the main problem of financially motivated hacking. The costs of these attacks are so low that a ransomware payments ban would probably not have stopped checkout.com from being hacked and having their customer data stolen, the criminals will still do crime even if they have to do slightly different crime that pays less.
The group responsible in this case was just selling data stolen from their victims for a long time before they pivoted to much more profitable ransom operations.
Instead, you would pay (exorbitant) consulting fees to a foreign-based "offensive security" entity, and most of the time get some sort of security report that says if you'd simply plug this and that hole, your systems would now be reasonably safe.
Lots of US based incident response companies handling ransomware payments, this isn’t the domain of some sketchy foreign offsec joints.
Yes, that's why cryptocurrencies are a gift from heaven for these hacker groups.
Therefore, even if paying ransom money (somehow) must be legal, maybe it should be illegal to use crypto for it. You don't want to make it too easy to run this type of criminal business.
You go on some Russian crime forum and find plenty of people offering to process bank transfers like these for some percentage of the money. As these particular payments would be somewhat consensual, you wouldn’t even have to worry about the funds getting frozen on the way.
> Jimmy, where did the cookies go?
> Something that was on the counter is gone! I don't know how! It might not even be my fault! But I'm sorry!
What kind of an apology is that? It's not. It's marketing for the public while they contact the "less than 25% of [their] current merchant base" whose (presumably sensitive) information was somehow in "internal operational documents".
Oh, but they also took some of what they charge their customers and gave that (undisclosed?) sum away to a university. They must be really sorry.
IMO, these aren’t safe to use anymore.
Probably someone was phished and they still had access to an old shared drive which still had this data. Total guess but reading between the lines it could be something like this.
Reading between the lines reveals the severity they're obfuscating, with contradictions:
> This incident has not impacted our payment processing platform. The threat actors do not have, and never had, access to merchant funds or card numbers.
> The system was used for internal operational documents and merchant onboarding materials at that time.
> We have begun the process to identify and contact those impacted and are working closely with law enforcement and the relevant regulators
They stress that "merchant funds or card numbers" weren't accessed, yet acknowledge contacting "impacted" users, which raises the question: how can users be meaningfully "impacted" by mere onboarding paperwork?
> The system was used for internal operational documents and merchant onboarding materials at that time.
Ah so just all of your KYC for founders, key personnel, and the corporation to impersonate business accounts
> We estimate that this would affect less than 25% of our current merchant base.
Yikes, this affects 25% of their current merchant base.
This submission's edited title reads like the "target headline" from The Office (US):
> Scranton Area Paper Company - Dunder Mifflin - Apologizes - to Valued Client - Some Companies - Still Know - How - Business - is - Done
In most cases they can get away with "We are sorry" and "Trust me, bro" attitude.
US indicts two rogue cybersecurity employees for ransomware attacks