> The attackers continued their search and eventually discovered a mounted NFS share on the web server. This file share contained notes and configuration files used by Equifax engineers, in which they found many database credentials.
Seriously, WTF? I get paranoid all the time worrying about my application security - it often feels like there is always some potential issue around the corner you don't know about.
But then I read about how lots of these kinds of breaches occur (storing prod DB credentials in plaintext on an NFS share, reusing passwords and not using 2FA, leaving your server password as "solarwinds123", etc.) and I think maybe I'm not so bad after all.
I've seen it, in pretty much every large business I've worked in.
This goes back to the saying: "you should never hire someone less good than yourself".
Sadly when the people hiring literally come from sales or airline customer service, your company is boned. It's only a matter of time.
One example: years ago I started work at a tech company (a fintech no less), and shortly after starting I asked the head of customer service how I could get an account to access an internal admin portal (I was an engineer and needed to understand some of the ops processes). "Oh, you just log in with my account, and the password is <CompanyName><Year> - all the reps just use that shared account." I got an immediate sinking, sinking feeling of despair.
Meaning, a strong security culture means you do appropriate secrets management, and, importantly, everyone understands how secrets management should be done. That way, if you have the occasional breach in your automated, enforced guarantees (e.g. the article talks about how Equifax missed one of their vulnerable systems when patching), people who see a problem will speak up.
That is, I agree with enforcing guarantees as much as possible, but any engineer on that team who came across an NFS file with DB credentials should have spoken up loudly about "Why TF are these DB credentials present on a network drive?"
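To make the contrast concrete, here's a minimal sketch of what I mean by appropriate secrets management (the env-var names are made up for illustration, and obviously this isn't Equifax's actual setup): a vault or orchestrator injects credentials into the process environment at deploy time, nothing secret ever sits at rest on a share, and a missing secret is a hard failure:

```python
import os

def get_db_config() -> dict:
    """Read DB credentials injected into the environment at deploy time.

    A missing secret is a hard failure -- the service refuses to start
    rather than falling back to some file on a share.
    """
    required = ["DB_HOST", "DB_NAME", "DB_USER", "DB_PASSWORD"]
    missing = [key for key in required if key not in os.environ]
    if missing:
        raise RuntimeError(f"missing secrets: {', '.join(missing)}")
    return {
        "host": os.environ["DB_HOST"],
        "dbname": os.environ["DB_NAME"],
        "user": os.environ["DB_USER"],
        "password": os.environ["DB_PASSWORD"],
    }
```

The specific mechanism matters less than the property: nothing secret is ever written to a network drive, and a misconfigured deployment fails loudly instead of inviting someone to improvise.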
> any engineer on that team who came across an NFS file with DB credentials should have spoken up loudly about "Why TF are these DB credentials present on a network drive?"
This requires empowering your employees, and lowercase-a agile with its cross-functional teams, which most managers hate.
I'm skeptical you can even fix this without a culture change, but you definitely can't do it just by taking things away.
Ironically, user accounts are in one sense more secure (than a system account with a shared password) because they can use 2FA (and there's no inherent need to distribute the password).
After the Equifax breach, I just assume now that if an identity thief gives a half-assed effort, he can pull up the PII for any American resident. How Experian/Equifax/TransUnion can honestly say they have accurate data without physically verifying driver's licenses, identity cards, or passports is beyond me.
It's a good idea, but now your product is twice as expensive as the other guy's, and you're not going to win any bids that way, and now you're out of business.
I did something very similar for a publishing system where, well, long story short, the equivalent of local variables[1] was verboten in the production repo. On the other hand, the writers were constantly asking me to override the precommits, so, well, there's the muppet argument.
[1] https://docs.asciidoctor.org/asciidoc/latest/attributes/cust... Basically, declaring an attribute in an include target. Variable scope is one of those things getting debated in the Asciidoc world these days. Ha ha ha welcome to the 1997 SGML technical steering group, suckas. You're discovering why HTML doesn't have transclusion.
While it is the responsibility of devs, some system needs to be put in place to actually enforce it. Like, don't have an NFS shared volume at all, or incentivize people to report these things and reward them for it. Otherwise, "just be very careful" advice slows development to a halt.
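As a sketch of what "actually enforce it" can look like, here's a toy pre-commit hook that rejects staged files containing obvious credential patterns. The patterns below are assumptions for illustration; real scanners like gitleaks or detect-secrets do this properly:

```python
#!/usr/bin/env python3
"""Toy pre-commit check: block commits that contain obvious secrets.

Illustrative sketch only -- real scanners (gitleaks, detect-secrets)
cover far more patterns and handle false positives better.
"""
import re
import subprocess
import sys

# Deliberately simple patterns -- an assumption, not an exhaustive list.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"(?i)aws_secret_access_key\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def staged_files() -> list[str]:
    """List files staged for commit (added/copied/modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                text = fh.read()
        except OSError:
            continue  # unreadable or deleted path: skip it
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"possible secret in {path} (matched {pat})", file=sys.stderr)
    return 1 if findings else 0  # nonzero exit aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```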
Or you might not have shared storage, and then they'll just put the creds in a Google spreadsheet like I've seen very recently.
Once I was in, it didn't take long, rummaging around in the files, to first find the database credentials in a config file, then eventually the root password to their servers, which was in fact simply "internet" o_O
I was a nice guy so I sent them an email with their passwords and told them they might want to upgrade their forum software.
Once you've got a whole company that relies on dumping stuff on a network drive you're pretty much fucked, it's very difficult to get non-technical users to switch to SFTP. It's like pulling teeth.
https://www.justice.gov/opa/pr/chinese-military-personnel-ch...
As far as I am aware the data has never been seen on the open market, so there's a whole other National Security story around whether the information was used to compromise individuals with credit issues for commercial and military espionage purposes. It would seem that this was known very early on and possibly factored into the settlement.
0-days are one thing. But leaving systems unpatched for months, because your stack is too old, is a common but inexcusable theme.
This is why it is vital to use libraries and frameworks with a stable, unchanging LTS branch. Failing to do so means a security update that needs to be applied instantly can't be applied without extensive app changes.
New shiny is fine. But it must never, ever override basic security concerns.
Security comes first. Not last. Always.
It's also another reason why it's important to provide such things.
It's amazing to me how many people seriously argue that it's fine to aggressively drop support for old versions and old features to focus on the newest stuff (and that it's totally fine for the table stakes of "having software" to be engineers continuously working to keep up with changing dependencies).
The reality is the cheapest thing for society is to offer very long term support for old versions, even if it's just security patches, or well-tested backwards-compatible features in newer versions. It's not sexy work, but it's important.
But such things do exist; you just have to vet things first.
For example, stick to a non-rolling distro, such as Debian stable. Everything there will have around 3 years of support, with all the security updates done for you.
Debian backports almost all security patches, or sticks with an LTS variant of something (like php) for its lifetime.
No surprise API changes, no sudden need for code changes.
So many people use the latest shiny, and literally only because they're told to. Many need nothing from that bleeding edge version.
When it comes to frameworks, some have LTS versions; stick with those.
And things like node? Heh.
But in this case, no LTS would have covered it, since the system was decades old.
The issue was that they had a poorly maintained service, hugely outdated, which is hard to secure, mingled with their main up-to-date stack.
Lesson: isolate the bad lemons from the good ones.
That needs to 100% end. There are also cases where companies think, "Well, it will take us 3 weeks to update this stack, so we'll leave the old, vulnerable code online for those 3 weeks, plus testing, plus (of course) the push, so 2 months - even though this is a very easily exploitable, high-profile CVE."
That too needs to end.
The only way that can end is if fines are WELL beyond any possible savings, including being 100s of times more than those savings, so that companies will TREMBLE IN FEAR at the very idea of leaving unpatched servers online. Your stack will take 2 months + testing to upgrade?
Because you chose a stack without an easy way to upgrade instantly?
Then you take your stack offline, and tough if it bankrupts you.
Because otherwise, we'll fine the company dry, and its directors, and jail the CTO, and the employees who knew.
https://www.hsgac.senate.gov/wp-content/uploads/imo/media/do...
I wonder how they managed to figure that out. Did they have to look into each of the servers?
How did they get the names?
It should be noted, the "official" report is what investigators have been told, not what really happened behind the scenes. Naturally Equifax and its employees tried to play the poor, innocent, helpless corp, with those dastardly hackers almost mysteriously getting in.
The whole attack/investigation is super fascinating.
I also found this funny:
> “Today, we hold PLA hackers accountable for their criminal actions, and we remind the Chinese government that we have the capability to remove the Internet’s cloak of anonymity and find the hackers that nation repeatedly deploys against us.
And then:
> The details contained in the charging document are allegations.
> The defendants are presumed innocent until proven guilty beyond a reasonable doubt in a court of law.
Edit - sorry, a better list of breaches is at https://haveibeenpwned.com/PwnedWebsites
So, what would be the motivation to avoid future things like this happening again?
The CIO got a $3M bonus, too. Odd thing is that she had a music degree and little experience in IT, but was an old friend of the board members.
The reason I'm so salty about your response is that when the breach happened, there were tons of news reports denigrating the CISO because she had a music degree. There may be a ton of reasons she wasn't good at her job (though it's hard to say, as CISO is often a "sacrificial lamb" job anyway), and I'm certainly not defending Equifax, but I take major issue with the implication that a music degree makes someone unqualified for a tech job.
First, as she was CISO, she was presumably done with college many, many years ago. Lots of people have college degrees that aren't necessarily directed to the career they end up in. More importantly, though, I've found that there is a direct correlation between highly trained musicians and great software engineers. I don't know if it's a "same part of the brain" thing or whatever, but I'm actually astounded at the sheer number of "best of the best" software engineers I've worked with that are classically trained musicians. It's to the point that when hiring I give "extra points", if you will, to musicians because, in my experience at least, the correlation is so strong.
So, frankly, you can take your "she had a music degree" shade and shove it.
The music degree scrutiny is unnecessarily derogatory and borderline misogynistic. She was a fine executive and predictably the first one thrown under the bus. I can't say she revolutionized anything, but I had no complaints about her competence. (By comparison, the male C-levels in the company I currently work under have relevant degrees from impressive institutions. I see them watching porn, engaging in insider trading and doing God knows what on Tor...while our latest two product launches failed.)
Equifax's fate was sealed by the CEO himself. We had highly-competent security teams that kept up with CVEs, ran CABs, everything a "secure" org should do...but there was always a top-down culture of "I'm not saying don't patch systems, but don't impact production" at every level. This sort of event was inevitable under Smith's leadership.
Of course lots of people get into tech from non tech but it's not a reason to go off on the commenter with an angry screed.
Also, she was CSO, not CISO or CIO - not that there's much of a difference between those titles in practice anyway.
People who like computers do it as a hobby. They learn programming at a young age, they get a CS degree or a hard science degree, and then they spend their spare time on tech forums like HN.
People who don't like computers play music, learn painting, or do something else. They get degrees in the arts or humanities. They spend their spare time playing music at the local pub, or whatever.
PS: One of the worst programmers I have ever met is also one of the best musicians I have ever met.
Well that's me and trust me, you don't want me in charge of any IT department. Maybe it's cause I also like music.
I've observed an inverse relationship between technical skill and career progression in all technical industries.
It's always the pimply junior contractor tech who is the Global Administrator doing the actual work, and the "very high level people" struggle with copy-paste from one email to another.
OP never said 'a tech job'; the implication is that it would make someone unqualified for a CISO job, though. And as a general rule, I tend to agree.
Sundar Pichai majored in metallurgy engineering. How much of his college coursework do you think he uses day-to-day?
Should it happen again, you would very likely hear calls for the Gov to step in and take direct control of the firm.
We heard calls for similar last time, but I don't think anybody expected the legal/regulatory response to be anything resembling an existential threat to Equifax, and it wasn't. I don't see why the second time would be any different—we are surrounded by examples of how our dogshit government is utterly derelict in its duty to protect workers and consumers, and arguably complicit across the vast scope of corporate abuse of the same.
I don't think "customers" is the right term, considering I never wanted them collecting data about me.
If it is information collected as part of doing business, then yes; they don't care. A good reason to question any Gov attempt to implement centralisation of data like identity or medical records.
But do these breaches affect their monopoly? My thinking is:
1. B2B customers won't go on darknet to source illegal data dumps.
2. This data, even if it doesn't quickly become effectively stale, would be considered stale by businesses very quickly if it's not connected to the continuous data ingestion pipeline.
2) This is not specific to the data that underlies consumer credit scoring; a broker could be selling products derived from data on historical house prices or car sales, for example. A competitor might use it to compare and validate their own dataset, or simply to have a look. Third-party investigators, journalists, etc., though, could have a field day fact-checking it.
Could you elaborate on how Equifax would have gone out of business if all their data had been stolen?
The reality is that despite Equifax showing a blatant disregard for the security of the data they have on people, the repercussions of this breach were trivial to them and their senior people.
So yes, I do agree that there is at least one company out there, Equifax, who does not care about PII leaking from their systems.
Why did these things happen (or not happen)? Insufficient training? Insufficient processes? Were changes being reviewed and accepted by people who didn't really understand them, for expediency's sake? Were there alerts, but they were lost in the noise of thousands of bogus alerts people had learned to ignore? Was the lack of segmentation a known issue, but allowed because it made some things easier? Were the credentials stored on NFS because they simply hadn't set up a more appropriate system yet and that was considered low-priority? Were business priorities getting in the way of technical priorities, such that known issues were backlogged?
It's fairly easy to make a bullet list of things that should (or shouldn't) be done. It's a bit more difficult to figure out why, in a specific organization, those things aren't (or are) being done. Even if/when people might know that they should/shouldn't.
The surface level mistakes are interesting. The deeper organizational causes of those mistakes would be interesting. Solving those things at a higher systemic/organizational level can reduce the whack-a-mole nature of individual mistakes.
Equifax had no working firewall / intrusion detection for almost a year, because they did not update their snakeoil MITM certificate and forgot about it.
Remind me again, how did Equifax get SOC 1&2, and ISO27001 certified?
Oh yeah, they probably have a checklist for that, so they must be secure. /s
> Remind me again, how did Equifax get SOC 1&2, and ISO27001 certified?
You probably already know that these are compliance CYA, focused on process, not an actual measure of how secure the system is (if there could be such a thing).
Yes, you're correct: ISO standards are very focused on paperwork.
One of the fears some C++ people have is that today ISO 26262 (safety for road vehicles) says they can write car software in C++ because hey, there's an ISO standard for C++ so that's paperwork we can point to - But, wait, why is that enough? C++ is laughably unsuited to this work. Maybe 26262 should be revised to not say that C++ is suitable.
Ever since ISO 21434 got rolled out, all the Tiers are panicking because they need to introduce modern CI/CD pipelines that work with source verification. Simple things like generating an SBOM become impossible, because even the Tiers that sold you their software don't have the source code themselves and just redistribute binaries from another Tier down the line.
I am somewhat of a strong opponent of using C for these kinds of areas, because in the automotive industry I learned the hard way that these firmwares are pretty much the definition of unmaintainable.
Sometimes the Tiers cannot even compile their own software anymore, because they lack licenses for old Vector DaVinci versions, and they literally have deals with Vector where they send zip files and an Excel spreadsheet that reflects the dependencies of kernel modules, and a _person_, not a program, sends back the compiled firmware.
Trying to defend a broken process isn't what this is about; my criticism was that there was an audit a decade ago, and that the auditors did not verify any of the claims or processes in place. Certifications and audits without any verification of claims are not valid certifications.
SOC2 and ISO27001 also include _mandatory_ pentests, which obviously didn't happen that year. Either that, or the pentesting agency wasn't actually doing more than a Metasploit run ;)
It defines how you structure and operate a risk-based security management system, that's all. It's perfectly valid to say "I should be doing pen testing, but my risk appetite is high enough for me not to care", and still get a 27001 certification.
I would agree with you if Equifax weren't part of critical infrastructure.
It's completely unlike SOC in that regard.
Likewise with the certificate: if there had been documentation indicating when that cert expires (or monitoring to alert a few weeks in advance), they would have had a functioning IDS, and these web shells would have been found immediately.
Unfortunately, out of the half a dozen Fortune 500 companies I've worked for, perhaps 2 had documentation practices good enough to prevent this.
You also highlight a very good point. Things like security software should "break loudly", i.e. beyond just sending alerts (which can be ignored), there should be some explicitly "painful" steps that occur if the security system is in a broken state for long.
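For instance, a dumb scheduled check like the following sketch would have caught the expiring certificate weeks in advance (the host name and 30-day threshold are made-up assumptions; for an internal CA you'd load the trust root with ctx.load_verify_locations()), and its nonzero exit is exactly the kind of "break loudly" signal a cron job or pipeline can refuse to ignore:

```python
import socket
import ssl
import sys
import time

# Hypothetical target -- point this at whatever terminates your TLS.
HOST, PORT = "ids.example.internal", 443
WARN_DAYS = 30

def days_until_expiry(host: str, port: int) -> int:
    """Return the number of days until the server's certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter is e.g. 'Jun  1 12:00:00 2025 GMT'
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

if __name__ == "__main__":
    remaining = days_until_expiry(HOST, PORT)
    if remaining < WARN_DAYS:
        print(f"cert for {HOST} expires in {remaining} days", file=sys.stderr)
        sys.exit(1)  # fail loudly so the cron/CI job pages someone
    print(f"OK: {remaining} days remaining")
```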
Maybe security certification should be more of a requirement in hiring software engineers; I don’t recall it ever being mentioned in job listings.
Anyway, it got me wondering: how did devs get away with storing database credentials in a file on an NFS share? That's sheer recklessness. As a regular procedure, an audit should include scanning all files for passwords; for example, run find-grep-dired or similar on every mount, every disk, every cloud instance, etc. And, obviously, require regular password changes.
It should be assumed that the entire system is vulnerable, and hardening should be done regularly and rigorously. A company as big as Equifax (or Target) should have a dedicated team whose job it is to constantly probe and audit - since, after all, the black hats are constantly probing, too.
Please continue taking the security course. Scanning all files for passwords is madness. How do you differentiate "thisissupersecret" and "123fqfqlfni34235r4" and "git@somegitrepo.com" as passwords? You can't, they're all valid passwords for a majority of services.
At some point, you need to trust developers to do the right thing, which is impossible.
One approach would be to have passwords of a known format, that are rotated frequently, and to verify that you're not finding any strings matching those patterns saved to disk or in log files, etc.
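Here's a minimal sketch of that idea: mint secrets with a fixed, greppable prefix (the "acme_sk_" scheme below is invented for illustration, the way GitHub or Stripe tokens carry a recognizable prefix), and then an audit can sweep any mount for leaked copies with essentially zero false positives:

```python
import os
import re
import sys

# Hypothetical scheme: all secrets are issued as acme_sk_<32 alnum chars>,
# so a leaked copy anywhere on disk stands out unambiguously.
SECRET_RE = re.compile(rb"acme_sk_[A-Za-z0-9]{32}")

def scan_tree(root: str) -> list[tuple[str, int]]:
    """Walk a mount and report (path, match count) for files holding secrets."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    count = len(SECRET_RE.findall(fh.read()))
            except OSError:
                continue  # unreadable file: skip it
            if count:
                hits.append((path, count))
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, count in scan_tree(root):
        print(f"{count:4d} secret-shaped string(s) in {path}")
```

Combined with frequent rotation, even a copy the scan misses goes stale quickly.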
I wonder how they realized their NIDS certificate had expired.
https://www.aon.com/cyber-solutions/aon_cyber_labs/an-analys...
Crazy that the user that ACIS was running as had enough permissions to access NFS mounts to begin with.
It’s also crazy that the attackers even found ACIS.
This was an insanely dedicated attack.
Hopefully their security posture has improved since.
They should not exist, or if they must exist they should be not for profit. It's a total scam.
That was hilarious to read about...
God forbid our executives be trained in creativity.
In places like China, there's personal accountability at the highest level of an org for major screw ups - sometimes even capital punishment.
If we put such options on the table here, perhaps corporations would be a little less callous with people's private data, and a little less eager to collect it.