> please use the original title, unless it is misleading or linkbait; don't editorialize.
Title should be: AI and Trust
Policy wonks are often systemizers who think of society as a machine. That's why they take an intuitive concept that scarcely even needs explaining, informal everyday rituals like queueing, and repackage it as yesteryear's buzzword "trust". We don't need extrinsic rewards to queue politely. Amazing?
A computer guy is gonna take that and explain to us, of course, that society is like a machine. Running on trust. That's the oil or whatever. Because there aren't enough formal transactions to explain all the minute well-behavedness.
Then condescend about how we think of (especially) corporations as friends. Sigh.
What policy wonks are intentionally blind to are all the people who "trust" by not making a fuss. By just going along with it. Apathy and being resigned to your fate look the same as trust from an affluent picket-fence distance. Or like being a naive friend to corporations.
The conclusion is as exciting as the thesis: the status quo, with bad, bad corporations, but the government must regulate them.
I’m sure I’ve commented on this before. But anyway. Another round.
Scratch any surface and the gilt flakes off: almost nothing can be trusted anymore. The last 30-40 years consolidated a whole lot of number-go-up, profit-at-any-cost, ruthless exploitation. Nearly every market, business, and product in the US has been converted into some pitiful, profit-optimal caricature of what quality should look like.
AI is just the latest on a long, long list of things that you shouldn't trust, by default, unless you have explicit control and do it yourself. Everywhere else, everything that matters will be useful to you iff there's no cost or leverage lost to the provider.
AI, crypto, etc. feel like potentially new meta opportunities, and it is eerie how similar the mania is to whenever a new major patch for a game is released. Everyone immediately starts exploring how to exploit and min-max the new niche. Everyone wants to be the first to "discover" a viable meta.
Competition nowadays is so intense and fine-grained. Every new innovation or exploration is eventually folded into the existing exploits, especially in monopolistic markets. Pricing models don't change, nor do revenue streams; the consumer rarely benefits from these optimisation efforts, and it all leads to greater profit margins by any means.
"We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants."
I hadn't thought about it like that, but I think it's a great way to legislate.
Not a popular take, especially within the HN crowd.
That said, it needs to be scaled. As he indicated, only certain professions need fiduciaries.
Anyone who remembers working in an ISO9001 environment can understand how incredibly bad it can get.