The Privacy Theater of Hashed PII (matthodges.com)
27 points | 8 days ago | 8 comments
blitzar
2 days ago
This is not for privacy. It is done for the sellers and buyers of PII: buyers do not want to pay for data they already own, and sellers don't want to disclose data before they sell it.

There is no honour amongst data thieves.

ozim
2 days ago
Yeah, if you want to check whether a user is in someone else's database, you ask the user whether the check can be performed; with consent, the check is then legitimate. If the user doesn't agree, then even if they are in the other database, it is not for you to make that check.
fmajid
1 day ago
Serious private set intersection uses fully homomorphic encryption (FHE) or equivalent mechanisms. Microsoft Edge's compromised-password detection uses FHE, for instance:

https://www.microsoft.com/en-us/research/blog/password-monit...

If anything, this article understates the problem. A single Nvidia RTX 4090 running hashcat can calculate 164 billion MD5 hashes per second:

https://gist.github.com/Chick3nman/32e662a5bb63bc4f51b847bb4...

That said, surprisingly few people are aware of this fact, even senior technical leadership at Big Tech companies, so I'm not surprised dodgy ad-tech companies aren't aware either. It might be an illustration of Hanlon's razor: do not ascribe to malice what is better explained by incompetence (even if ad-tech companies long ago forfeited the benefit of the doubt).
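
For scale, here's the back-of-envelope arithmetic in Python, using the ~164 GH/s figure from that gist (ballpark numbers, and assuming the roughly 10^10 ten-digit NANP number space):

    # Time for one RTX 4090 to hash every ten-digit North American
    # phone number at hashcat's quoted MD5 rate (ballpark figures).
    nanp_space = 10**10        # upper bound: all ten-digit numbers
    rate = 164e9               # MD5 hashes per second
    print(f"{nanp_space / rate:.3f} seconds")  # ~0.061 s for the whole space

What the article prices at four hours on a MacBook Air is a fraction of a second on one gaming GPU.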

nevon
8 days ago
The company I work for has a similar, yet even worse, instance of this. The employee satisfaction survey was advertised as anonymous, but when I looked into the implementation, they were just hashing the email address, of which there were only a few thousand. A more conspiratorial mind would conclude that it is so they can easily find out who a particular piece of feedback came from, but in this case I legitimately think it's just incompetence, and not being able to figure out a better way of ensuring each employee can only submit the survey once.

This year it's advertised as confidential, rather than anonymous, so I suppose that is an improvement.

rented_mule
2 days ago
Not calling it anonymous is an improvement. Before I retired, I read many "anonymous" surveys taken by my reports. Any free-form text in the survey that went beyond a sentence fragment usually made it obvious who wrote it. At least on my teams, writing styles tended to be pretty distinct, as were the things each person cared about enough to write about at any length. I tried to ignore the clues, but it was usually so obvious that it jumped out at me. The people administering such things insisted that anonymous meant their name wasn't on it, so it was fair to call it that.
chii
2 days ago
A lot of people simply imagine that anonymity means un-identifiability. It's far from true, but I think some are honestly making the mistake rather than being nefarious.
rdtsc
2 days ago
It is mostly performative. They do it so nobody can point fingers and accuse them of not doing it.
panstromek
2 days ago
Yea, this is pretty annoying, and it's not the only problem in this field. There's a bunch of theater and misunderstanding in the marketing space. I feel like marketing people just don't get it. They seem to be hopelessly incapable of accepting that matching people in whatever way possible is the exact practice that laws like the GDPR are trying to target. You cannot get around it by hashing, fingerprinting, ad IDs, cookieless matching, or whatever.
iamacyborg
2 days ago
They’re heavily incentivised not to get it, both internally, with company KPIs that haven’t kept pace with the reality of the GDPR, and externally, through ad platforms that continue to demand excessive amounts of data without providing suitable alternatives.
panstromek
2 days ago
Yea, companies are probably abusing this (as I noted in the sibling comment), but I think marketers themselves truly don't get it. I've been on the implementation side of this and it's always a frustrating debate. It's pretty clear that they think this is just about picking a different vendor with "GDPR" on its list of features, not realizing that the law fundamentally targets the metrics they want to use, and that they simply cannot do it "the old way" they are used to.
iamacyborg
2 days ago
To be fair, I don’t think this is a problem limited to marketers. How many developers are still building all these data collection and delivery pipelines? They should know better too, no?
panstromek
2 days ago
I also think that many vendors in this space are abusing the fact that marketers are not technical people, so they just wave around some "we're GDPR ready" and "anonymized data" slogans, such that marketers feel they can tick the "GDPR" box and still get all the metrics they are used to.

Marketers, of course, don't realise that GDPR compliance is partially on them, and that some of those metrics are literally impossible to implement without crossing into GDPR territory. Any company saying that it is "fully GDPR compliant" while also giving you retention and attribution metrics by default is probably misleading you in this way.

FooBarBizBazz
8 days ago
Isn't this solved with salt?
bob1029
2 days ago
This is how I did it. You generate a salt per logging context and combine it with the base value into a SHA-2 hash. The idea is that you ruin the ability to correlate PII across multiple instances in different isolated activities. For example, if John Doe opened a new account and then added a co-owner after the fact, it wouldn't be possible for my team to determine that it was the same person from the perspective of our logs.
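
A minimal sketch of what I mean (names and details invented, not the actual production code):

    import hashlib, secrets

    def new_context_salt() -> bytes:
        # One random salt per logging context/activity.
        return secrets.token_bytes(32)

    def log_token(pii: str, context_salt: bytes) -> str:
        # Same PII under different context salts yields uncorrelatable tokens.
        return hashlib.sha256(context_salt + pii.encode()).hexdigest()

    account_opening = new_context_salt()
    co_owner_change = new_context_salt()
    print(log_token("john.doe@example.com", account_opening))
    print(log_token("john.doe@example.com", co_owner_change))  # different token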

This isn't perfect, but there hasn't been a single customer (bank) that pushed back against it yet.

Salting does mostly solve the problem from an information theory standpoint. Correlation analysis is a borderline paranoia thing if you are practicing reasonable hygiene elsewhere.

hlieberman
8 days ago
If it's salted, you can't share it with a third party and determine who your customers in common are. (That's the point of the salt: to ensure that my_hash(X) != your_hash(X).)
OutOfHere
21 hours ago
You actually can join it when the salt provider is a dedicated shared entity. The entity rehashes the data of both organizations to use a shared salt. That is how different organizations join hashed data.
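
In sketch form, assuming the intermediary receives the identifiers and holds the shared salt itself (all names invented):

    import hashlib

    # Held only by the dedicated intermediary, never by either org.
    SHARED_SALT = b"long-random-secret-held-by-intermediary"

    def rehash(identifiers):
        # Rehash each party's identifiers under the shared salt
        # so the outputs become joinable.
        return {hashlib.sha256(SHARED_SALT + i.encode()).hexdigest()
                for i in identifiers}

    org_a = rehash(["alice@example.com", "bob@example.com"])
    org_b = rehash(["bob@example.com", "carol@example.com"])
    print(org_a & org_b)  # hash of the one customer in common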
jstanley
2 days ago
> A 2020 MacBook Air can hash every North American phone number in four hours

If you added a salt, this would still allow you to reverse some particular hashed phone number in about four hours; it just wouldn't allow you to do all of them at the same time.
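
Concretely, something like this sketch (assuming the salt is stored next to the hash, as is conventional):

    import hashlib

    def crack_phone(target_hash: str, salt: bytes):
        # With the salt known, reversing one number is a loop over
        # the ~10^10 ten-digit NANP space: hours, not centuries.
        for n in range(10**10):
            candidate = f"{n:010d}".encode()
            if hashlib.sha256(salt + candidate).hexdigest() == target_hash:
                return candidate.decode()
        return None

The salt defeats precomputed rainbow tables, but not a targeted brute force over a tiny input space.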

OutOfHere
21 hours ago
I do not agree. How will you reverse a salt with sufficient entropy? Imagine the salt is a 512-bit hex string, the data is a ten-digit phone number, and the generated hash is 512 bits, of which the first 160 bits are used as the value. Now exactly how will you get the phone number back? Do you really think you can iterate over half of the possibilities of 512 bits in four hours?
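
Something like this sketch of the construction I mean:

    import hashlib, secrets

    SECRET_SALT = secrets.token_bytes(64)  # 512 bits, never exported

    def token(phone: str) -> str:
        digest = hashlib.sha512(SECRET_SALT + phone.encode()).digest()
        return digest[:20].hex()  # keep only the first 160 bits

Without SECRET_SALT, an attacker faces the 512-bit key space rather than the ten-digit phone space.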
jstanley
11 hours ago
You know the salt because it's stored alongside the hash. You're only iterating over the space of phone numbers.

If it's not stored alongside the hash it's not a salt, it's something else.

https://en.wikipedia.org/wiki/Salt_(cryptography)

OutOfHere
7 hours ago
> If it's not stored alongside the hash it's not a salt, it's something else.

That is not even true. The definition in the article does not substantiate it. There is no requirement for the salt to be stored alongside the hash.

The definition in the article is sufficiently clear. This is all that a salt is:

> a salt is random data fed as an additional input to a one-way function that hashes data

With regard to effective anonymization, the salt is stored by the generator, but not in the exported dataset.

jstanley
15 minutes ago
If the "salt" is kept secret then I agree you can't brute force all the phone numbers so easily. But I don't agree that "salt" is the correct term for that technique.
chrisandchris
2 days ago
A salt is very good if the input varies widely. If the input stays within a small, pre-defined range (e.g. phone numbers), a salt does not work very well.
OutOfHere
21 hours ago
I do not agree that it doesn't work very well. How will you reverse a salt with sufficient entropy? Imagine the salt is a 512-bit hex string, the data is a nine-digit SSN, and the generated hash is 512 bits, of which the first 160 bits are used as the value. Now exactly how is the salt not good enough?
ozim
2 days ago
To me it seems like cracking hashes is irrelevant in the grand scheme of things.

The laws were passed so that companies don't compare their customer lists without asking the customer first.

I hope some government agency picks that up and strikes such BS with might.

If you are a BambooHR customer with people in your HR system, you have to ask each person whether you can check if they are in BambooHR. And guess what: whether they say no or yes, you already have half of the job done.

Putting it into a hash and seeing if you have it in your database is still sharing, and that requires consent. Fuckers.

meindnoch
2 days ago
"Can we just hash the IP addresses?"
Nextgrid
2 days ago
That's the GDPR "compliance" approach of a lot of companies. Because of near-nonexistent enforcement, they get away with it.