>[B]ureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear. We cannot dismiss the possibility that, if Adolf Eichmann had been able to say that it was not he but a battery of computers that directed the Jews to the appropriate crematoria, he might never have been asked to answer for his actions.
How does one counteract that self-serving incentive? It doesn't seem like we've found a good way, considering we seem to be barreling straight into techno-feudalism.
The EU is trying right now. It's being discussed over here:
https://news.ycombinator.com/item?id=42916849
See: https://artificialintelligenceact.eu/chapter/1/ and https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
AI systems are defined[0] in the Act in such a way as to capture the kind of "hands-off" decision-making systems where everyone involved could plead ignorance and put the blame on the system working in mysterious ways. The Act then proceeds to straight up ban a whole class of such systems, and classifies some of the rest as "high risk", subject to extra limitations and oversight.
This is nowhere near 100% of a solution, but at least in those limited areas it sets the right tone: it's unacceptable to have automated systems observing and passing judgement on people based on mysterious criteria that "emerged" in training. Whatever automation is permitted in these contexts is basically forced to be straightforward enough that you can trace back from the system's recommendation to the specific rules that were executed and to the people who put them in.
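To make that traceability concrete, here's a minimal sketch of what a rule-traceable decision record could look like (the rule names, authors, and fields are all hypothetical, not anything specified by the Act):

```python
# Minimal sketch (hypothetical rules and fields): every decision carries the
# exact rules that fired and who authored them, so an outcome can be traced
# back to specific human choices instead of "the model decided".
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    author: str                      # the person accountable for this rule
    rejects: Callable[[dict], bool]  # True if the rule rejects the application

RULES = [
    Rule("income_below_minimum", "j.doe", lambda app: app["income"] < 20_000),
    Rule("missing_id_document", "a.smith", lambda app: not app["has_id"]),
]

def decide(application: dict) -> dict:
    fired = [r for r in RULES if r.rejects(application)]
    return {
        "approved": not fired,
        "fired_rules": [(r.name, r.author) for r in fired],  # the audit trail
        "inputs": application,
    }

print(decide({"income": 15_000, "has_id": True}))
# {'approved': False, 'fired_rules': [('income_below_minimum', 'j.doe')], ...}
```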
--
Like every other form of tech regulation done by the EU, it's all bark and no bite, because politicians of individual countries have more power than MEPs.
What's to stop Fidesz, PiS (if they return to power), etc from carving out broad exceptions for their own Interior Ministries? They've already done this with spyware like Pegasus.
Instead of techno-feudalism, it's basically techno-paternalism which is essentially the same thing. In both cases, individual agency is being limited by someone else.
The issue is a lot of EU legislation becomes powerless, as it's up to individual nations to implement rulings or define exceptions.
And in plenty of cases, these exceptions tend to be broad and just as rife with lobbying as those you'd see in the US.
And there is A LOT of lobbying happening at the nation state level (the level that actually matters) to carve out exceptions.
I already pointed this out with spyware, and there's no reason to presume the exact same thing won't happen with weaponizing AI/ML tooling.
> A half assed effort is worse than no effort, because a half assed attempt can easily be weaponized
This part I'm not following. I don't see the EU effort as half-assed so much as just lacking teeth due to realpolitik. That's unfortunate but still a step in the right direction. What would you propose as an alternative? If you're just saying "we're fucked" that's a point of view, but it doesn't explain how the current attempts will be weaponized. All else being equal, I'd argue that having these issues in the global discussion is better than not. If they're not then how do they get addressed?
Personal ethics aren't fully personal in nature - they're just as much a function of social environment as of one's "heart of hearts". Ethical behavior being expected and respected makes it easier for any individual to stick to their principles in the face of conflicting factors, like desires or basic self-preservation instinct.
Individual ethics in a social context is a powerful organizer, and can keep small groups of people (30-50) stable and working together for a common goal. Humans evolved to cooperate at that size - such groups can self-govern and self-police just through everyone's sense of right and wrong. But it quickly breaks down past that size. Letting some individuals assume a leadership role and direct others only stabilizes the group up to ~150 people (the Dunbar number); past that, the group loses cohesion and ends up splitting up.
Long story short: larger groups have a survival advantage, both against the environment and over smaller groups. Because of that, we eventually learned how to scale human societies to arbitrary sizes. The trick to doing it is hierarchical governance to overcome the Dunbar limit, and explicit rules to substitute for intimate relationships. By stacking layer upon layer, we grew from tribes of 150 governed by the spoken words of their chiefs, through kings managing networks of nobles via mutual obligation, to modern nation states that encompass millions of citizens in a hierarchy that's 4+ levels deep (central government -> provinces -> regions -> towns), slowly building up more levels as nation states group into blocs. With each layer we added, the complexity of explicit governance grew, giving rise to what we now call bureaucracy.
The modern bureaucracy isn't some malignant growth or unfortunate side effect - it's the very glue that allows human societies to scale up to millions of people and more. It's the network of veins and nerves of a modern society - it's what allows me, one tiny cell, to benefit from the contribution of other cells and contribute on my own, and it's what keeps all the cells working as a whole. This means that, yes, as the society grows and faces new challenges, the solution usually really is more bureaucracy.
Admittedly this may result in some strange results if followed to its logical conclusion, like a product manager at a self-driving car company being the recipient of ten thousand traffic tickets.
Be careful how hard you push for this - this is how the prosecutors in the Post Office Horizon fiasco drove subpostmasters out of business and a few to suicide.
In the case of self-driving cars, the company itself could be held liable. Everyone invested in a company that puts a bad product on the market should be financially impacted. The oligarchy we are heading for wants no accountability or oversight -- all profit, no penalty.
Most legal principles are designed to reduce liability. That's the whole point of incorporation, for example.
If a company is found to have committed a tort against a party, they pay damages.
There are exceptions (the Volkswagen diesel scandal comes to mind), but generally both punishments entail paying out a monetary amount that is often lower than the profit generated by the crime, whether because of tort reform or because fine amounts are out of date with current corporate revenues.
On that note, after some time working in cybersec and GRC fields, I realized that cybersecurity is best understood in terms of liability management. This is what all the security framework certification and auditing is about, and this is a big reason security today is more about buying services from the right vendors and less about the hard tech stuff. Preventing a hack is hard. Making it so you aren't liable for the consequences is easier - and it looks like a network of companies interlinked with contracts that shift liability around. It's a kind of distributed meta-insurance (that also involves actual insurance, too).
My eyes were opened to this when management talked about deleting unneeded private data not just as the right thing to do, but specifically as a way to reduce our insurance premiums.
The core insight that made me start to understand how companies see the world was reading about ocean freight shipping. Specifically, what happens when bad weather, a dangerous malfunction, or some other unexpected event throws containers off the ship or forces the sailors to intentionally dump them, leaving millions of dollars' worth of cargo afloat in the middle of the ocean.
What happens is, a whole lot of nothing. No one will actually bother to try and recover it. The cargo operators, ship owners, and owners of the actual cargo will all file claims with their respective maritime insurance providers and call it a day.
The same principle applies everywhere, including in cybersecurity. Past some point, trying to reduce the risk or shift the liability to somewhere else becomes more expensive than just insuring against the expected loss.
Which required more tech, not less.
Tbh I’m in favor of holding C-suite responsible for the actions of their company, unless the company has extremely clear bylaws regarding accountability.
If, say, a health insurance provider was using an entirely automated claim review process that falsely denies claims, I think the C-level people should be responsible.
Before technology there was "McKinsey told me to do this". Abrogation of liability is a tale as old as time.
A computer cannot ever be made to atone for its misdeeds
Humans can
In situations where accountability is absolutely required, you will find that people are held accountable
Often they are scapegoats, but that is a different problem
There should be laws stating who has skin in the game, maybe by stating that if you take responsibility for the profit by having a high salary, you also take responsibility for the damage, up to and including prison.
Everyone thinks the same until they're screwed over and then they want someone to do something about that. The big misunderstanding is that "regulation" is just the stuff you don't like. In reality it's everywhere the state gets involved. Every rule that the state ever put in place is regulation. Even the little ones. Even the ones that you like.
Computers cannot be held accountable any more than a car, or a gun, or an automated assembly line can. That's why you have a human there no matter what, being legally accountable for everything. The human's rank and power define how much of the risk they are allowed to, or must, take.
Regulations, on a large scale, were pioneered by America as a response to the Great Depression. For a long time Europe was behind the US on this front.
Regulations actually worked miracles for the US. But two things happened: early success that prevented further improvements (medical care), and mechanistic misapplication of the practice (over-regulating businesses like hairdressing, etc.). Blinded by the latter, a lot of Americans believe that regulations in general are bad. Well, now we see that a small group of people who stand to gain a lot from deregulating many aspects of American life is about to rob blind the remaining, very large group of American people :|
Seems like a similar legal framework could require that decision makers are held accountable for any decisions made by an AI under their control.
If you delegate a decision to an AI, you are simply making the decision while trusting the AI, and should not be any less responsible than if you made the decision by any other means.
(If you are directed by a superior authority to allow an AI to make a decision but are assigned nominal responsibility anyway, though, that superior authority is, in fact, making the decision by delegating to the AI and bypassing you; that's to anticipate the obvious “install a human scapegoat organizationally between the people actually making the decision to use the AI and the AI itself” response.)
The solution is that the company won't hire you unless you are willing to take the blame and rubber-stamp the AI's decisions. Unemployed people are not in a position to protest.
Also, the point of AI is that the decisions are too complex to justify. They are grey and iffy. We don’t usually hold people accountable for anything that nebulous. Wrongly deny someone insurance coverage and they die? No consequences even without AI
Sadly, at the scale of the world, we take shortcuts and need to be efficient. As anyone with a rare disease will tell you, doctors do exactly that for ages until the patient gets a proper diagnosis.
Well that's easy enough, put in a safety net so access to basic sustenance can't be used as a weapon to compel people.
Unless of course, you need to compel people to work for some reason
Isn't the bureaucrat responsible for how they use a computer system and for whatever output it produces?
"GPS said I should drive off of a cliff" doesn't seem like a very potent argument to dismiss decisional responsibility. The driver is still responsible for where the car goes.
The only case where the responsibility would shift to the computer - or rather to the humans who made the computerized thingy - would be a Pentium FDIV-class bug, i.e. the computer system produces incorrect output+ from correct input, on which an earnest decision is then based.
+ assuming it is indistinguishable from correct output.
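For context, the widely cited demonstration of that bug was a plain division that silently returned a wrong value on affected chips; a minimal sketch of the usual check (figures as commonly reported, so treat them as approximate):

```python
# The classic Nicely-style check for the Pentium FDIV bug: on a correct FPU
# this residue is 0, while affected Pentiums reportedly returned roughly 256,
# i.e. correct inputs silently produced an incorrect result.
x, y = 4195835.0, 3145727.0
residue = x - (x / y) * y
print(residue)  # 0.0 on correct hardware; reportedly ~256 on a flawed Pentium
```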
If you drive a car and the GPS tells you to drive off the cliff, you won't do it because you don't want to die.
If some bureaucrat rejects somebody's health care claim leading to them dying prematurely, it's just a normal Tuesday.
for a bureaucrat, it's "GPS said WHO-EVER-ELSE should drive off of a cliff." Their problem.
Adding: "Have a good day."
Clerk: can't do anything about it, the system doesn't let me. I can get you the manager.
Branch manager: well, I can't do anything about it, "the computer says no". Let me make a call to regional office ... (10 minutes of dialing and 30 minutes of conversation later) ... The system clearly says X, and the process is such that Y cannot be done before Z clears. Even the regional office can't speed Z up.
You: explains again why this is all bullshit, and why Z shouldn't even have been possible to trigger in your case
Branch manager: I can put a request with the bank's IT department to look into it, but they won't read it until at least tomorrow morning, and probably won't process it earlier than in a week or so.
At this point, you either give up or send your dispute to the central office via registered mail (and, depending on its nature, might want to retain a lawyer). Most customers who didn't give up earlier, will give up here.
Was the system wrong? Probably. Like everyone else, banks have bugs in their systems, on top of a steady stream of human errors. Thanks to centralized IT systems, the low-level employees are quite literally unable to help you navigate weird "corner case" scenarios. The System is big, complicated, and handles everything; no one except a small number of people is allowed to touch it, and those people are mostly techies, not bank management. In this setup, anyone can individually claim they're either powerless or not in the position to make those decisions, and keep redirecting you from one department to another until you either get tired or threaten to sue.
One does not; one uses a computer in decision-making in order to evade accountability, and profits from doing so.
See, Meta, Alphabet, the list goes on.
Virtually every tech CEO was standing behind Trump at the inauguration. The takeover is the tech feudalism. They are not the heroes.
But when corporations grow we say it was the corporations that “created tons of jobs”?
In societies that want to promote capitalism and corporatism the everyday language we use reflects this promotion.
https://www.youtube.com/watch?v=x0YGZPycMEU
See also https://en.wikipedia.org/wiki/Computer_says_no and the legal response to the real-world problem, the right not to be subject to a decision based solely on automated processing: GDPR Article 22 (https://eur-lex.europa.eu/eli/reg/2016/679/oj#d1e2838-1-1)
Chevron never was the problem.
This is also the main reason for promoting chip cards. Sure, they are more secure, but the real reason the banks like them is that they move credit card fraud liability from being the bank's problem to being your problem.
Same with identity theft: there is no such thing as identity theft, it is bank fraud. But by calling it identity theft, the equation changes from a bank problem to your problem.
Companies hate accountability. And to be fair, everyone hates accountability.
If this becomes a thing, you'll very quickly see insurance products created for those manufacturers to derisk themselves. And if the self-driving cars are very unlikely to cause accidents - or more accurately, if the number of times they get successfully sued for accidents is low - it will be only a small part of the cost of a car.
The competitive advantage is too big for them to just not offer it when a competitor will, especially when the cat's out of the bag when it comes to development of such features. Look at how much money Tesla made from the fantasy that if you buy their car, in a few years it would entirely drive itself. There's clearly demand.
Supermarket delivery here is like that: the online supermarket does not own any delivery vans itself and does not hire any delivery workers. Everything is outsourced to very small companies, so problems with working conditions and bad parking are never the fault of the online supermarket.
https://www.law.cornell.edu/regulations/california/title-13/...
To be fair, their FSD really does feel like magic and is apparently leagues ahead of most other manufacturers [0].
Under current laws, perhaps. But you can always change the laws to redirect or even remove liability.
For example, in BC, we recently switched to "no-fault insurance", which is really a no-fault legal framework for traffic accidents. For example, if you are rear-ended, you can not sue the driver who hit you, or anyone for that matter. The government will take care of your injuries (on paper, but people's experiences vary), pay you a small amount of compensation, and that's it. The driver who hit you will have no liability at all, aside from somewhat increased insurance premiums. The government-run insurance company everyone has to buy from won't have any liability either, aside from what I mentioned above. You will get what little they are required to provide you, but you can't sue them for damages beyond that.
At least, you may still be able to sue if the driver has committed a criminal offence (e.g. impaired driving).
Don't believe me? https://www.icbc.com/claims/injury/if-you-want-to-take-legal...
This drastic change was brought about so that we could save, on average, a few hundred dollars per year in car insurance fees. So now we pay slightly less, but the only insurance we can buy won't come close to making us whole, and we are legally prevented from seeking any other recourse, even for life-altering injuries or death.
So, rest assured, if manufacturers' liability becomes a serious concern, it will be dealt with, one way or another. Bigger changes have happened for smaller reasons.
"So"? I don't see what one thing has to do with the other. Why would a lack of liability imply an insurance that doesn't fully compensate a claim? It's not a given, for example for insurance against natural events.
Driver insurance in BC is offered by ICBC, a "crown corporation", i.e. a monopoly run by the government. You have to buy this insurance to drive in BC. This insurance gives you some benefits (healthcare and some small compensation) in case you get in an accident. As a matter of fact, those benefits are often not enough to make you whole. They pay much less for pain and suffering, loss of income, etc. than a court would grant you if you could sue. But – you can't sue anymore. So, who is there to make sure that the government-run insurance monopoly will make you whole? Nobody. Because you don't have the legal right to be made whole anymore. And since there are no checks on the government, the government does not pay enough. Because, why would they, if they don't have to? They only have to pay you as much as their policy says they should pay you. You can not challenge the policy on the basis that it does not make you whole, because you don't have the right to be made whole anymore.
--- Original comment:
Natural events are nobody's fault, that's why you aren't made whole, that's why you can't sue anyone for them, with or without insurance. [ETA: you can only sue your private insurance company for what they promised you, which may or may not make you whole, depending on coverage].
BC government made the "idiot rear ending you" scenario into a "natural event", so to speak, so that you can't sue the idiot, or their insurance, or anyone, to recover damages. You will only get what the government-run insurance monopoly will give you, which is not much.
This isn't directly about insurance. This is about the government declaring that liability for most traffic accidents does not exist anymore. Which is the part that is relevant to this conversation. If liability can be extinguished wholesale for all drivers like this, then this can surely be done for self-driving cars. Not saying that it's a good idea, just that this option is on the table.
Fraud was another concern. Huge payouts from parking lot whiplash were indeed not uncommon, with the help of lawyers. However, I fail to see how the new system was the best solution for that. They went from one extreme, where fraud was rampant, to another extreme, where we have no rights. At least the first extreme cost us only a few hundred bucks per year on average. The new extreme saves you a bit of money but leaves people injured for life with no meaningful compensation for the harm done to them.
Kind of beside the point though, regarding self-driving cars.
You need to actually form a credible argument for why the downsides outweigh the upsides, or else nobody will know who to believe.
Do you want to tell me more about how you saving ~$500/year outweighs people like her being absolutely shafted under the new system? Please don't, I don't care. That's not why I commented.
The main purpose of my original comment wasn't to say that the new ICBC system is shitty – enough was said about that elsewhere already. It was to illustrate with a real-life example that laws regarding liability can be and are changed in very significant ways when the situation calls for it. Petty political reasons are apparently as good a call as any, so I'm not worried about self-driving cars being stifled by that. There's a sprinkle of "be careful what you wish for" in there as well, for those who see manufacturers' liability as a problem.
https://www.thedrive.com/tech/455/volvo-accepting-full-liabi...
For a fender bender, well, money can fix a lot of things. But what happens when the car kills a mother and her toddler?
CEO goes to jail?
Outside of Japan crying is optional.
Unless it is written and signed in some form on paper given when the vehicle is sold, it doesn't mean anything legally.
In California, where Drive Pilot is approved, the manual is required to be included in the permit application and any "incorrect or misleading information" would at the absolute minimum be grounds for revocation of MB's permit.
https://www.mbusa.com/content/dam/mb-nafta/us/owners/drive-p...
"NOTES ON SAFE USE OF THE DRIVE PILOT The person in the driver's seat when DRIVE PILOT is activated is designated as the fallback-ready user and should be ready to take over control of the vehicle. The fallback-ready user must always be able to take control of the vehicle when prompted by the system."
"WARNING Risk of accident due to lack of readiness or ability to take over control by the fallback-ready user. The fallback-ready user, when prompted by the system, must be ready to take control of the vehicle immediately. DRIVE PILOT does not relieve you of your responsibilities beyond the dynamic driving task when using public roads. # Remain receptive: Pay attention to information and messages; take over control of the vehicle when requested to do so. # Take over control of the vehicle if irregularities are detected aside from the dynamic driving task. # Always maintain a correct seating position and keep your seat belt fastened. In particular, the steering wheel and pedals must be within easy reach at all times. # Always ensure you have a clear view, use windshield wipers and the airconditioning system defrost function if necessary. # Ensure appropriate correct lighting of the vehicle, e.g. in fog."
and then we come to
"When the DRIVE PILOT is active, you can use the driving time effectively, *taking into account the previous instructions*. The information and communication systems integrated in the vehicle are particularly suitable for this purpose, and are easily negotiated from the control elements on the steering wheel and on the central display."
so you can fuck around but "taking into account the previous instructions" you still "must be ready to take control of the vehicle immediately."
And crash imminent kinda sounds like it'd fit here: "SYSTEM LIMITS If DRIVE PILOT detects a system limit or any of the conditions for activation are not met, it will not be possible to activate the system or the fallback-ready user will be prompted to take control of the vehicle immediately."
It strains credulity to me that MB would try to argue that "gathering your bearings" is something a user could be expected to do instantaneously, but either way it's not really necessary to speculate. California regs require that the system satisfy SAE L3, which states: "At Level 3, an ADS is capable of continuing to perform the DDT (Dynamic Driving Task) for at least several seconds after providing the fallback-ready user with a request to intervene."
There is an exception which is what MB is referencing here: "Take over control of the vehicle if irregularities are detected aside from the dynamic driving task." from J3016: "For Level 3 ADS features, a human fallback-ready user (in-vehicle or remote) is expected to respond to a request to intervene or a kinesthetically apparent vehicle failure". Basically the ADS system is not required to maintain control for a handoff for non-ADS related failures along the lines of a tire blowout or a tie-rod failure.
https://users.ece.cmu.edu/~koopman/j3016/index.html
https://wiki.unece.org/download/attachments/128418539/SAE%20...
Disengage to deflect responsibility for a crash.
It depends on the jurisdiction. Banks like it because it improves the security, i.e. the card was physically present for the transaction, if not the cardholder or the cardholder's authority. It eradicates several forms of fraud such as magnetic stripe cloning. Contactless introduced opportunities for fraud, if someone can get within a few cm of your card, but it's generally balanced by how convenient it is, which increased the overall volume of transactions and therefore fees. It's more secure from fraud than a cardholder-not-present transaction... and for CNP, you can now see banks and authorities mandating 2FA to improve their security too.
Liability is completely separate, and depends on how strong your financial regulator is.
Banks obviously would like to blame and put the liability on customers for fraud, identity theft, etc., it's up to politicians not to let them. For example, in the UK we have country-wide "unauthorised payments" legislation: https://www.legislation.gov.uk/uksi/2017/752/regulation/77 -- for unauthorised payments (even with a chip and pin card), if it is an unauthorised payment, the UK cardholder is only liable for a £35 excess, and even then they are not liable for the excess if they did not know the payment took place. The cardholder is only liable if they acted fraudulently, or were "grossly negligent" (and who decides that is the bank initially, then the Financial Ombudsman if the cardholder disagrees)
There is similarly a scheme now in place even for direct account-to-account money transfers, since last October: https://www.moneysavingexpert.com/news/2023/12/banks-scam-fr... -- so even if a crook scams you into logging into your bank's website and completely securely transferring money to them, banks are liable for that and must refund you up to £415,000 per claim, but they're allowed to exclude up to £100 excess per claim, but they can't do that if you're a "vulnerable customer" e.g. old and doddery. Also, the £100 excess is intentionally there to prevent moral hazard where bank customers get lax if they think they'll always get refunded. Seems to me like the regulator has really thought it through. The regulator also says they'll step in and change the rules again if they see that the nature of scamming changes to e.g. lots of sub-£100 fraudulent payments, so the customer doesn't report it because they think they'll get nothing back.
But having a system where the accident rate gets driven down to near zero (like air travel) is pretty good. Waymo seems to be on that path?
I guess you haven't watched Fight Club :)
"It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter."
Removing yourself to one or more degrees from decision making isn't only an accident but is and will more and more be done to intentionally divert accountability. "The algorithm malfunctioned" is already one of the biggest get out of jail free cards and with autonomous systems I'm pretty pessimistic it's only going to get worse. It's always been odd to me that people focus so much on what broke and not who deployed it in the first place.
At least we can boycott the company with a CEO that... likes trump and Joe Rogan? That'll show 'em!
How can those Meta workers live with themselves? Just think of all the AI-slop cat videos the algorithm recommends to their geriatric userbase!
That's the _real_ genocide, I say.
You do presumably realise that when they imprison you for refusing to pay your taxes, they take the money anyway? I am wondering whether you thought about this comment at all before posting it.
Definitely more than "just cat videos".
https://cacm.acm.org/opinion/i-was-wrong-about-the-ethics-cr...
His book God & Golem, Inc. is incredibly prescient for a work created at the very beginning of the computer age.
"A manager can be held accountable"
"Therefore a manager must never make a management decision"
\shrugs
"Accountable" is meaningless therapy-speak now.
CEO says "oh, this was a big problem, that we leaked everyone's account information and murdered a bunch of children. I hold myself accountable" but then doesn't quit or resign or face any consequences other than the literal act of saying they are taking "accountability".
"If a house builder built a house for a man but did not secure/fortify his work so that the house he built collapsed and caused the death of the house owner, that house builder will be executed."
While extreme, this is the only type of meaningful accountability: the type that causes pain for the person being held accountable. The problem is that (for better and worse) in the corporate world the greatest punishment available for non-criminal acts is firing. Depending on the upside for the bad act, this may not be nearly enough to disincentivize it.
Regardless, to the extent that it's true, it's imo likely more to do with the nature of crimes of passion, and is certainly not evidence against the effects of incentives in general. Death penalty aside, if you are doing any kind of work, and you know you will be held accountable for errors in that work in ways that matter to you (whether SLA agreements, getting sued, losing payment, etc.), it will change the way you approach it. At least for most people.
My answer, unironically, was "GoogleGovernment". The idea was to build SAP-like suite of programs that a country could then buy or rent and have a fully digital government to run the country...
Luckily, that question never came up in the interview, and remained an anecdote I share with other coffee drinkers around the coffee machine.
My younger self believed (inspired by a chapter of the UK citizenship act translated into Prolog) that the success could be expanded much further (I didn't bother reading the accompanying paper, at least not at that time).
It was already mentioned that there will be people unable to overpower the computer system making bureaucratic decisions, as well as people who'd use it to avoid responsibility... I think that's due to the readers here tending to be older. It's hard to appreciate the enthusiasm with which a younger person might believe that such a computer system can be a boon to society. That it can be made to function better than the humans currently in charge. And that mistakes, even if discovered, will be addressed much more efficiently than live humans would manage, and fixed in a centralized and timely fashion.
Seems like the clearest legal principle to me, otherwise we ban matches to prevent arson.
A plausible transcription:
> THE COMPUTER MANDATE
> AUTHORITY: WHATEVER AUTHORITY IS GRANTED IT BY THE SOCIAL ENVIRONMENT WITHIN WHICH IT OPERATES.
> RESPONSIBILITY: TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO
> ACCOUNTABILITY: NONE WHATSOEVER.
> A MANDATE WITH TOO LITTLE AUTHORITY DOES NOT PROVIDE THE TOOLS REQUIRED TO TAKE ADVANTAGE OF THE LEVERAGE
> A MANDATE WITH TOO LITTLE RESPONSIBILITY PROVIDES TOO LITTLE LEVERAGE FOR THE RISKS
There's also other text that can't be read properly because the visible text overlays it, but that starts "A MANDATE WI" and ends with "FORM OF SUICIDE", with some blurred words before it. I imagine it's quite likely that the line is something along the lines of:
> A MANDATE WITH TOO LITTLE ACCOUNTABILITY IS AN INSTANT FORM OF SUICIDE
[edit: I accidentally mistyped a word in the transcription.]
Which also links to the earlier: https://infosec.exchange/@realn2s/111717179694172705
And that in turn to the somewhat related: https://www.ibm.com/blogs/think/be-en/2013/11/25/the-compute...
I can't quite make out the first paragraph's contents.
But a bit after that, under another semi-title, "responsibility", part of it reads:
> TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER INSTRUCTED TO DO SO
This [0] small link might make it easier to read bits.
In court, digital forensics investigators can attest to what was performed on the devices, the timeline, details, and such. But it should never be about a named person. The investigator can never tell who was sitting at the keyboard pushing the buttons, or whether some new and unknown method was used to plant those actions (or evidence).
It is always jarring to laypeople when they are told by the expert that there is a level of uncertainty, when throughout their lives computers appear very deterministic.
Of course, the big problem here is that any engineer who knows how LLMs work probably wouldn't bet jail time that one they built would never do the wrong thing
Everybody is so worried that they'll go to jail over a missing semicolon, but that just isn't true.
https://sjud.senate.ca.gov/system/files/2024-06/ab-2013-irwi...
https://legiscan.com/CA/text/AB2905/id/3018481/California-20...
https://apcp.assembly.ca.gov/system/files/2024-06/sb-942-bec...
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...
https://billtexts.s3.amazonaws.com/ca/ca-analysishttps-legin...
No. If a mistake is made and it impacts people, take action to make the impacted people whole and change the system so that a similar mistake won't be made again. If you want, you can argue that the system can be held accountable and changed.
Further, if there is evidence of a bad actor purposely making choices that hurt people, take action on that. But actors in the system are almost certainly immune from accountability by policy. And for good reason.
But what is a system made of? People, who make bad decisions and should be held accountable for them. Without accountability for bad actors in systems, you get companies committing crimes because those at the top rarely see fines or jail time. The same immunity from responsibility that you think is a good thing in a system is what I would call corporate America's major sin.
You’re upset at the line because you make a fundamental misunderstanding of what it means for someone to be held accountable for something.
But if rollbacks are possible and cheap, then we don't need all that much accountability. People want accountability when no rollback is possible, e.g. when decisions lead to deaths or life-years wasted.
We can try to get personal accountability to lead to personal liability as much as we want. That just isn't going to happen.
So, if you want to change the manual page in question to be that systems should be reversible or "fixable", for whatever that means, sure. But "management decisions" are usually among the most reversible choices any company will ever make.
And "systems" is more general in this sense than just technology. Consider stoning, and firing squads. Both were invented to remove individual responsibility.
Consider, if it is found that salmonella contaminated the spinach of a local farm, I want it recalled and for better systems in place to catch contamination. I don't want to find the farmer that was responsible for the acre of land that introduced the contaminant.
I think this idea flows from the thought that people will put more effort into things they think they could be held accountable for. In reality, that just isn't the case. People will instead stall and stall on things they think they could be held accountable for if things go south.
From my vantage, that style of "accountability" just leads to more and more red tape as you do what you can to line item identify who could have prevented something. That is not productive.
If your farmer chose to ignore industry standard practices and agricultural regulations to prevent and contain such contamination at the source, they are indeed liable. And they can still issue a recall, and indeed must do so as soon as they learn of the problem.
Your vantage point is a bit too optimistic for reality. If it was not, we would not need courts.
For my example, assume no knowledge of extra risk by anyone on the line. If you'd rather, consider the cashier at the local burrito shop that was the final non-consumer hand to touch the contaminated food.
If, for instance, your burrito worker did something egregious, they could be held criminally liable, and depending on the specific situation, the employer could also be held civilly liable.
I say "depending on the situation" because it is the duty of the vendor to ensure best practices are followed: sanitary restrooms, soap and running, clean water for cleaning up, etc. And a nontrivial number of places do so because they know an inspector is coming at some point and they will suffer if they do not comply.
But it is harder to hold the vendor liable if all reasonable precautions and amenities are availed by the vendor, and all proper education, but the end of line worker decides to ignore all of it one day.
If they did something that was a standard part of their duty, such as assemble a burrito using certified ingredients to the best practices of the organization, then not so much.
I'll go even further, if the company reacted slowly to recall produce from their shelves after it was discovered that there was contamination, then the company should be held liable for some of the damages that resulted from the delay.
It obviously gets complicated to tease out which damages happened before discovery. More to the point, I care about healing the people that were impacted by the contamination as well as possible. If that means we have to have a cost-of-business fund to make sure people can be attended to in the event of a disaster, then we should have such a fund.
You can have even more fun, though. Let's say you have a detection system that can reject produce if a threshold of detected contamination is passed. Why would the goal not be for this to fail in a "closed" position to minimize the risk of contamination? Because it could cost the company more to discard some inventory? Do we expect to have everything hand-inspected and always signed off by a person, even if it can easily be shown that this is both more expensive for the company and a greater risk of accidental contamination?
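To make the fail-closed idea concrete, here's a minimal sketch (the threshold and field names are hypothetical, not from any real inspection system):

```python
# Minimal sketch of a fail-closed contamination gate (hypothetical values).
# If the sensor reading is missing or malformed, the batch is rejected rather
# than waved through: failure defaults to the safe outcome.
CONTAMINATION_LIMIT = 0.05  # assumed threshold, arbitrary units

def accept_batch(reading) -> bool:
    try:
        if reading is None:
            return False  # no data: fail closed, discard the batch
        return float(reading) < CONTAMINATION_LIMIT
    except (TypeError, ValueError):
        return False      # garbage input: also fail closed

print(accept_batch(0.01))   # True: below the limit
print(accept_batch(None))   # False: sensor failure rejects the batch
print(accept_batch("n/a"))  # False: malformed reading rejects the batch
```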
The mere fact that there are damages to extract means that someone was already damaged, not merely that we want to punish someone. If my neighbor is poisoned by food, I do not have standing to sue in my own capacity because my neighbor was poisoned. At this point, the moral question is whether or not you should be able to extract damages when you can show that you are damaged. Essentially every society has said "yes, of course," even though the specifics of what constitutes damage and recompense differs.
Why do people not universally (or at least generally) tend toward making Fail-Safe systems? I don't know, but they just do not. They must be compelled to.
Call it original sin or prevarication, the second law of thermodynamics, evolutionary inclination to save effort, whatever. But humans just default the opposite way of what you're saying.
Wow a local farm!!!! We must think of the plight of the local farmer! And not the multinational corporations!
Consider, if it is found that listeria contaminated the meat of a national chain, Boar's Head, I want it recalled and for better systems in place to catch contamination. I also want the plant manager and executives who allowed the massively unsanitary state to continue for years to be held accountable.
https://arstechnica.com/science/2024/09/10th-person-dead-in-...
The way you're talking, we should be going back to the days before Upton Sinclair's The Jungle and letting our food run full of contaminants, because why make any one person accountable for their willful addition of sawdust to their flour?
Now, I'm all for more directly holding companies responsible. Such that I think they probably deserve less protections than they almost certainly have here. But that is a different thing and, again, is unrelated to "a person taking accountability."
If your account implicates yourself in malfeasance, you might be punished. But that's good! There are other kinds of accountability, though. The FAA is very clear that you must make an accounting for yourself. But the FAA is also very clear that they won't punish you for your account. That doesn't mean you aren't accountable!
Most computer systems cannot do this! They cannot describe the inputs to a decision, the criteria used, the history of the policy that led to the current configuration, or the reliability of the data. That's why lawsuits always need technical witnesses to do these things. And why the UK Post Office scandal was so cruel.
Systems that grant actors immunity from accountability as a matter of policy are terrible systems that produce terrible results. Witness US prosecutorial immunity.
With the speed at which systems operate today, it is actually expected for many systems that they can operate before a person does anything, and that they do so. The world is rife with examples where humans overrode a safety system to their peril. (I can build a small list, if needed.) This doesn't mean we haven't had safety systems mess up. But nor does it mean that we should not make more safety systems.
Safety critical systems that operate faster than human reactions are not accountable. So that's why we never make them responsible. So who is? Same as for bridges that fall down -- the engineers. People forget that civil engineers sign up for accountability that could lead to serious civil or even criminal liability. Which is exactly the point of this aphorism.
Boeing was facing criminal charges, and is currently under a consent decree for exactly this kind of sloppy systems work.
That is, just as I am ok with AES on cars, I am largely ok with the idea that systems can, in fact, be designed in such a way that they could rise to the level of accountability that we would want them to have.
I'm ok with the idea that, at the time of that manual, it was not obvious that systems would grow to have more durable storage than makes sense. But, I'd expect that vehicles and anything with automated systems should have a system log that is available and can be called up for evidence.
And yes, Boeing was facing criminal charges. As they should. I don't think it should be a witch hunt for individuals at Boeing for being the last or first on the line to sign something.
When the AES fails to activate because of a bug in the firmware, can you interrogate the machine? No. Can you ask the oem why they hired an intern to write the code and no test plan? No. Can you ask the brake why its design was chosen over others? No. That kind of accountability is not possible in those kinds of systems. And that's why proper engineering is a profession with standards, traceability, and accountability.
If an executive at Boeing directed supervisory staff to falsify logs to maintain throughput, and a traceable failure led to fatalities, I expect that individual to be criminally charged.
I feel that in general people obsess over assigning blame to the detriment of actually correcting the situation.
Take the example of punishing crimes. If we don’t punish theft, we’ll get more theft right? But what do you do when you have harsh penalties for crime, but crime keeps happening? Do you accept crime as immutable, or actually begin to address root causes to try to reduce crime systemically?
Punishment is only one tool in a toolbox for correcting bad behavior. I am dismayed that people are fearful enough of the loss of this single tool as to want to architect our entire society around making sure it is available.
With AI we have a chance to chart a different course. If a machine makes a mistake, the priority can and should be fixing the error in that machine so the same mistake can never happen again. In this way, fixing an AI can be more reliable than trying to punish human beings ever could.
That said, I do think we are in alignment here. Punitive actions are but a tool. I don't think it should be tossed out. But I also suspect it is one of the lesser effective tools we have.
When you're accountable you suddenly have skin in the game, so you'll be more careful about whatever you're doing.
Agreed that personal responsibility is important and people should strive to it more. Disagree that accountability is the same thing, or that you can implement it by policy. Still more strongly disagreed that you should look for a technical solution to what is largely not a technical problem.
Could also be wisdom from the fifties, found again.
Or do we just not call that "accountability"?
Also, OP explicitly mentioned "online learning", which is a continuous training process after standard pre-training.
For what it's worth, I don't think this would work. Rewards would come in too sporadically to be useful.
I'd suggest a lesson that might be less agreeable to IBM, Microsoft, Google, et al:
"The makers of software must be accountable for the mistakes made by that software."
the contract between user and "maker" should be requirements. if a machine does not fulfill its requirements, it can be the maker's responsibility, for example a flight instrument that failed to report correct information.
but if you choose to use a piece of software that says "we actually guarantee nothing", which is the vast majority of it, then it's definitely the user's responsibility for choosing to use such a tool.
I'm sure there is a whole library full of legal precedent around "who is liable, and when". My earlier point was really that "Machines are not accountable" does not mean "Machines cannot make decisions". It really means "We need to think about who is accountable when this machine makes a mistake."
The computer controlling the vehicle won’t be held accountable. The case will drag on in the court systems. Maybe company is _found_ liable but ultimately allowed to continue pushing their faulty junk on the streets. Just pay a fine, settle out of court with victims and families. Some consultant out there is probably already building in the cost of killing somebody and potential lawsuit into the cost of production.
But the producer of the elevator is usually held accountable if the elevator crashes.
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE WE MUST NEVER DENY THAT
COMPUTERS CAN MAKE DECISIONS
Related qualifiers might be "policy decision" or "design decisions".
Of course, that raises a whole new set of issues.
If it's meant to prevent repeat bad behavior, then simply reprogramming the computer accomplishes the same end goal.
Accountability is really just a means to an end which can be similarly accomplished in other ways with machines which isn't possible with humans.
You cannot put a computer in jail. You cannot fine a computer. Please, stop torturing what people mean because you want AI to make decisions to absolve you of guilt.
Retribution? Reformation? Prevention?
The only person to see major punishment for that was the software dev that wrote the code, but that decision to write that code involved far more people up the chain. THEY should be held accountable in some way or else nothing prevents them from using some other poor dev as a scapegoat.
"Fault" seeks to determine who is able to undo the mistake so that we can see that they undo it. It is possible the computer is the best candidate for that under some circumstances.
> That is accountability.
Thus we can conclude that computers are accountable, sometimes.
> You cannot put a computer in jail. You cannot fine a computer.
These are, perhaps, tools to try and deal with situations where the accountable refuse to see the mistake undone, but, similarly, computers can be turned off.
In a world without accountability, how do you stop evil people from doing evil things with AI as they want?
If a human decided to delegate killing enemy combatants to a machine, and that machine accidentally killed innocent civilians, is it really enough to just reprogram the machine? I think you must also hold the human accountable.
(Of course, this is just a simplified example, and in reality there are many humans in the loop who share accountability, some more than others)
Note the bad behaviour you're trying to prevent is not just the specific error that the computer made, but delegating authority to the computer to the level that it was able to make that error without proper oversight.
Improving the tool's safety characteristics is not the same as holding the user accountable because they made stupid choices with unsafe tools. You want them to change their behavior, no matter how idiot-proofed their new toolset is.
When it comes to computers, the computer is a tool. It can be improved, but it can’t be held any more accountable than a hammer.
At least that’s how it should be. Those with wealth will do whatever they feel they need to do to shun accountability when they create harm. That will no doubt include trying to pin the blame on AI.
So who makes the decision to do that?
I think most people are missing the point about accountability and think, in typical American fashion, about punishment. Accountability is about being responsible for outcomes. That may mean legally responsible, but I think far more important is the knowledge that "the buck stops with me": that someone is entrusted with a task and knows it is their job to accomplish that task. Said person may decide to use a computer to accomplish it, but the computer is not responsible for the correct outcome.
If systems (presumably AI-based) were conscious or self-aware they would very much be incentivized not to make mistakes. (Not advocating for this)
"IT installations grow to massive size in data centers, and the idea of remote command and control, by an external manager, struggles to keep pace, because it is an essentially manual human-centric activity. Thankfully, a simple way out of this dilemma was proposed in 2005 and has acquired a growing band of disciples in computing and networking. This involves the harnessing of autonomous distributed agents." (https://www.linuxjournal.com/content/promise-theory%E2%80%94...)
What are autonomous agents in promise theory?
"Agents in promise theory are said to be autonomous, meaning that they are causally independent of one another. This independence implies that they cannot be controlled from without, they originate their own behaviours entirely from within, yet they can rely on one another's services through the making of promises to signal cooperation." (https://en.wikipedia.org/wiki/Promise_theory#Agents)
Note: the Wikipedia article is off because it frames agents in terms of obligations instead of promises. Promises make no guarantee of behavior, and it is up to each autonomous agent to decide how much it can rely on the promises of other autonomous agents.
So to circle back to this original post with the lens of Promise Theory -- being held accountable comes from a theory of obligations rather than a theory of promises. (There is a promise made by the governing body to hold the bad actor responsible). More crucially, we are treating AIs as _proxies_ for autonomous agents -- humans. Human engineers and potentially, regulatory bodies, are promising certain performances in the AIs, but the AIs have exceeded the engineer's capability for bounding behaviors.
To make that next leap, we would basically have AIs make their own promises, and either hold them to those promises or conclude that a specific autonomous agent is not reliable in its promises.
Don't confuse this with judgement, punishment, firing, etc. Those are all downstream. But step one is responding to the demand that you "make an account of the facts". That a computer or a company doesn't have a body to jail has nothing to do with fundamental accountability.
The real problem is that most computer systems cannot respond to this demand: "explain yourself!" They can't describe the inputs to an output, the criteria and thresholds used, the history of how the thresholds have changed, or the reliability of the upstream data. They just provide a result: computer says no.
What's interesting is that LLMs are beginning to form this capacity. What damns them is not that they can't provide an accounting, but that their account is often total confabulation.
Careless liars should not be placed in situations of authority; not because they can't be held accountable, but because they lie when they are.
By this definition, many computer systems can. The answers are all in the logs and the source code, and the process of debugging is basically the act of holding the software accountable.
It's true that the average layperson cannot do this, but there are many real-life situations where the average layperson cannot hold other people accountable either. I cannot go up to the CEO of Boeing and ask why the 737 MAX I was on yesterday had a mechanical failure, nor can I go up to Elon Musk and ask why his staff are breaking into the Treasury Department computer systems. But the board of directors of Boeing, or the court system, respectively, can, at least in theory.
But your example of a debugging session, or logs and register traces, is also an accounting! Just not one admissible in traditional forums. They usually require an expert witness to provide the interpretation and voice I/O for the process.
The reason you can't accost the CEO of Boeing isn't because they aren't accountable. It's because they aren't accountable to you! Accountability isn't a general property of a thing, it is a relationship between two parties. The CEO of Boeing is accountable to his board. Your contract was with Delta (or whoever) to provide transport. You have no contract with Boeing.
You are 100% right that the average consumer often has zero rights to accountability. Between mandatory arbitration, rights waivers, web-only interfaces, and 6-hour call-centre wait times, big companies do a pretty good job of reducing their accountability to their customers. Real accountability is expensive.
Highly recommended and a big step up from text logs that tend to have a lot of trash, aren't accessible to most people & get archived off to oblivionville after 2 weeks.
It's especially helpful when everyone is blame-shifting and we need to know whether it was a software failure or just a user not following their own rules. It's not so much about punishment, but about developing confidence in the system design & business process that goes with it, at all organizational levels.
Consider all of the shenanigans at OpenAI: https://www.safetyabandoned.org/
Dozens of employees have left due to lack of faith in the leadership, and they are in the process of converting from nonprofit to for-profit, all but abandoning their mission to ensure that artificial intelligence benefits all humanity.
Will anything stop them? Can they actually be held accountable?
I think social media, paradoxically, might make it harder to hold people and corporations accountable. There are so many accusations flying around all the time, it can be harder to notice when a situation is truly serious.
The problem is that someone (or some organization) chose to employ that system, and if the errant system won't submit to being replaced or changed, the responsibility rebounds back to whoever controls that system, whether that control sits at the level of the source code or of the circuit breaker.
When you sue a corporation, discovery demands that they share their internal communication. You can depose key actors and require they describe the events. These actors can be cross-examined. A trial continues this. This is the very definition of "accountable".
The problem at OpenAI is that the employees were credulous children who took magic beans instead of a board seat. Legally, management is accountable to the board. In serious cultures that believe in accountability, labour demands seats on the board. In VC story-land, employees make do with vague promises with no legal force.
This is not a good description of the incident. The employees I mention in my comment, who quit due to lack of faith in Sam Altman, were presumably on the board's side in the Sam vs board drama.
There is still a chance that OpenAI's conversion to for-profit will be blocked. The site I linked is encouraging people to write letters to relevant state AGs: https://www.safetyabandoned.org/#outreach
I think there's a decent argument to be made that the conversion to a for-profit is a violation of OpenAI's nonprofit charter.
My point is: accountability is NOT an abstract property of a thing. It is a relationship between two parties. I am "accountable" to you IF you can demand that I provide an explanation for my behaviour. I am accountable to my boss. I am accountable to the law, should I be sued or charged criminally. I am NOT accountable to random people in the street.
Sam Altman is accountable to the board. The board can demand he explain himself (and did). Management is generally NOT accountable to employees in the USA. This is because labor rarely has a legal right to demand an accounting. In serious labour cultures (e.g. Germany), it is normal for the unions to hold board seats. These board seats are what makes management accountable to the employees.
OpenAI employees took happy words from sama at face value. That was not a legal relationship that provided accountability. And here we are. The decision to change from a not-for-profit is accountable to the board, and maybe the chancellors of Delaware corporate law.
Would I rather be at the whims of how hungry someone is, or of a model that can be tested and evaluated?
Depends if you like playing Super Mario Bros. as the second player.
Computers have the final say on anything to do with computers. If I transfer money at my bank, a computer bug could send that money to the wrong account because of a stray cosmic ray. The bank has accepted that risk, and on some (significantly less liable, but still liable) level, so have I.
Interestingly, there are cases where I have not accepted any liability - records (birth certificate, SSN) held about me by my government, for example.
For that, you need a corporation.
I think the original quote captures that with the qualifier "a management decision", which given that it was 1979 implies it's separate from other kinds of decisions being made by non-manager employees following a checklist, or the machines that were slowly replacing them.
So a cosmic-ray bit-flip changing an account number would be analogous to an employee hitting the wrong key on a typewriter.
Second-degree murder. Much like a car driver can't blame their car for the accident, a corporate driver shouldn't be allowed to blame their software for the decision.
If a mechanical or technical problem was the cause of the accident and you properly took care of your car, you won’t be held responsible, because you did everything that’s expected of you.
The problem would be defining which level of AI decision making would count as negligent. Sounds like you would like to set it at 0%, but that’s something that’s going to need to be determined.
Good thing you brought this up, because in the US, defective cars must be recalled and are a liability of the manufacturer. Non-defective cars are a liability of the owner.
Thus, the owner of a car is responsible by default, and the manufacturer is second.
In the context of AI, the wielders of AI would be responsible by default, and manufacturers second.
The point is that there is a chain of accountability: the humans who own the equipment, and the humans who manufacture it.
The Dutch did an AI thing: https://spectrum.ieee.org/artificial-intelligence-in-governm...
This is a little too weak for my taste.
In reality, it should read "a computer can't make a management decision". As in, the sun can't be prevented from rising, or the laws of thermodynamics can't be broken.
"Must" implies that you really shouldn't, but that it's technically feasible. Like "you must not murder".
A computer, like a dog, can't be held accountable; only its owner can.
Edit: If anyone tries to do this, they are simply laundering their own accountability.
If a bridge collapses, are you blaming the cement?
Because of these misconceptions, some people at the time would think of computers as devices that were (somehow) perfect and infallible.
It was very similar to how people view AI today: the way that AI is depicted in popular culture gives people an impression that AI is far more capable than it is. You only really get a good "feel" for what AI can do if you try it yourself. The main difference between AI and the computers of the pre-personal-computer era is that basically everyone can use AI now.
The punchline will be that people will agree to whatever smoke and mirrors leads to sales.
I honestly feel like a moron for paying taxes.
Accountability requires a common standard of conduct. People have to agree on what the rules are. If they can't even do that, the concept ceases to have meaning, and you simply have the exercise of power and "might makes right".
From what I recall of history even the most bloodthirsty warlords somehow got reliable systems of accountability up and running from their princes/serfs/merchants/etc… at least long enough to maintain sizable empires for several generations.
It’s not like they were 24/7 in a state of rebellion.
Periods in history like now (or the late-1920s/1930s, or the 1850s-1860s, or the mid-1600s) are ones where there is ambiguity in power structures. You have multiple competing ideologies, each of which thinks they are more powerful than the others. Society devolves into a state of anomie, where there's no point following the rules of society because there are multiple competing sets of rules for society.
The usual result is war, often widespread, because the multiple competing value systems cannot compromise and so resort to exterminating the opposing viewpoint to ensure their dominance. Then the victors write the histories, talk about their glorious victory and about how the rebels threatened society but the threat was waved off, and institute a new set of values that everyone in the society must live by. Then you can get accountability, because there is widespread agreement on the code of conduct and acceptable set of justifications for the populace at large to judge people by.
What you’ve written doesn’t automatically become true just because it is one of the possible explanations; it needs to actually be demonstrated…
Usually via a chain of logic, inferences, estimations, etc…
You end up with "Computer says shoot" and so many cooks involved in the software chain that no one can feasibly be held accountable except maybe the chief of staff or the president.
The person who clicks the "Approve" / "Deny" button is likely an underwriter looking at info on their screen.
The info they're looking at gets aggregated from a lot of sources. They have the insurance contract. Maybe one part is an AI summary of the police report. Another part is a repair estimate that gets synced over from the dealership. A list of prior claims this person has. Probably a dozen other sources.
Now what happens if this person makes a totally correct decision based on their data, but that data was wrong because the _syncFromMazdaRepairShopSFTP_ service got the quote data wrong? Who is liable? The person denying the claim, the engineer who wrote the code, AWS?
In reality, it's "the company" in so far as fault can be proven. The underlying service providers they use don't really factor into that decision. AI is just another tool in that process that (like other tools) can break.
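One way to make "in so far as fault can be proven" tractable is to keep provenance next to every aggregated field, so a wrong decision can be traced back to the feed that supplied the bad data. A hypothetical sketch, reusing the hypothetical _syncFromMazdaRepairShopSFTP_ example from above with illustrative values:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class SourcedValue:
        """A value plus where it came from and when, so blame can be traced later."""
        value: object
        source: str       # e.g. "syncFromMazdaRepairShopSFTP"
        fetched_at: str

    def sourced(value, source):
        return SourcedValue(value, source, datetime.now(timezone.utc).isoformat())

    # What the underwriter's screen is assembled from:
    claim_view = {
        "policy_limit":    sourced(15_000, "policy_db"),
        "repair_estimate": sourced(18_400, "syncFromMazdaRepairShopSFTP"),
        "prior_claims":    sourced(2, "claims_history_service"),
    }

    decision = ("DENY" if claim_view["repair_estimate"].value
                > claim_view["policy_limit"].value else "APPROVE")
    # If the denial later turns out to rest on a wrong repair estimate, the record
    # shows which upstream feed (and therefore whose contract) supplied it.
    print(decision, claim_view["repair_estimate"].source)

That doesn't settle liability, but it at least lets "the company" produce the account it is answerable for.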
Just because an automated decision system exists does not mean an OOB (out-of-band) correctional measure should not exist.
In other words, if AI fixes a time sink for 99% of cases but fails on 1%, then let the 50% of that 1% who are angry enough to email the staff get a second decision. That fallback still saves the company millions per year.
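To put rough numbers on the 99% / 1% / 50% argument (every figure below is invented, purely to show the shape of the incentive):

    # Hypothetical volumes and costs, just to illustrate the trade-off described above.
    claims_per_year       = 1_000_000
    manual_review_cost    = 25.00   # $ per claim handled by a human
    automated_review_cost = 0.50    # $ per claim handled by the automated system

    auto_failed = 0.01 * claims_per_year   # the 1% the automation gets wrong
    escalated   = 0.50 * auto_failed       # the half of those who email and get a human re-review

    cost_all_manual = claims_per_year * manual_review_cost
    cost_with_oob   = (claims_per_year * automated_review_cost
                       + escalated * manual_review_cost)

    print(f"all-manual:   ${cost_all_manual:,.0f}")    # $25,000,000
    print(f"with OOB fix: ${cost_with_oob:,.0f}")      # $625,000
    print(f"savings:      ${cost_all_manual - cost_with_oob:,.0f}")

Even paying humans to re-review every escalated case, the automation comes out millions ahead, which is the "saves the company millions per year" point.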
---
THE COMPUTER MANDATE
[…] too little authority does not […]
[…] social environment […]
[…] overshadowed or re‑directed by line management […]
[…] with too little responsibility and too little accountability […]
[…] to perform as pre‑directed by the programmer whenever instructed to do so […]
---
---
THE COMPUTER MANDATE
In an environment where a manager has too little authority, they cannot effectively use the tools required to take advantage of the social and organizational structures around them. Their decisions quickly become overshadowed or re‐directed by line management. With too limited responsibility—and insufficient accountability to match—such a manager is often reduced to implementing whatever has been pre‐programmed or instructed, rather than exercising genuine judgment.
A computer itself, by definition, does only what it has been programmed to do whenever instructed. It lacks the moral and ethical faculties to hold itself accountable for outcomes. For that reason, a computer should never hold the power to make a management decision. As a tool, it can facilitate planning and analysis, but the responsibility—and thus accountability—must always remain where it belongs: with human leadership.
---
---
THE COMPUTER MANDATE
1. Purpose and Tool-Like Nature
A computer—be it a simple office processor or a complex artificial intelligence—is fundamentally a tool created by humans, for humans. Its purpose is to augment our capacity for computation, data analysis, and decision-support. It lacks inherent moral or ethical agency and, therefore, cannot be expected to be accountable for any outcomes tied to its functionality.
2. Accountability Resides with Humans
No matter how advanced machine learning or AI algorithms become, responsibility and accountability must remain within the realm of human decision-makers. While a computer program can provide recommendations, predictions, and valuable data-driven insights, ultimately it is the role—and the duty—of human managers and leaders to make final determinations.
3. Ethical Use of Technology
Computers should be employed with clear ethical guidelines, such as those championed by researchers and leading tech organizations worldwide:
• Transparency: Algorithms and processes must be as transparent as possible so humans can understand how recommendations or outputs arise.
• Fairness and Bias Mitigation: Systems must be regularly tested for biased outcomes and adjusted to promote equity, avoiding discrimination or harm to individuals or groups.
• Privacy and Security: User data protection must be integral, with stringent safeguards against misuse or unauthorized access.
4. Informed Delegation of Tasks
Though computers may execute certain operations more quickly and accurately than humans, they do so within the constraints of their programming and training data. Thus, while it is common to delegate data processing or logistic calculations to computer systems, strategic decisions—those that involve moral, ethical, or nuanced judgments—should not be relegated solely to a machine.
5. Human Oversight of Automated Processes
Increasingly, automated systems can act with minimal human intervention. Yet these processes must be overseen and audited by qualified individuals or teams who can verify that outputs conform to relevant codes of conduct and societal values. In high-stakes fields such as finance, healthcare, and criminal justice, rigorous human review is essential to prevent harmful outcomes.
6. Continuous Improvement and Literacy
In a rapidly changing technological landscape, managers, programmers, and end-users alike must regularly update their computer and AI literacy. This ensures that all parties understand the technology’s limitations as well as its capabilities. Such knowledge drives more responsible, accountable, and ethically grounded technology deployment.
7. Computers as Partners, Not Replacements
While a computer can offer remarkable assistance—from sifting through vast data sets to providing simulations of potential scenarios—its role is to inform and support human decisions, not replace them. In cases that demand empathy, creativity, or moral reasoning, humans must always be the arbiters.
Conclusion:
A computer, by definition and function, can never be fully accountable for decisions, as it lacks the innate capacity to understand moral implications. Therefore, no matter how sophisticated technology becomes, we must ensure that true accountability and decision-making authority remain vested in human hands. Computers are indispensable tools—but they must remain tools, guided by ethical oversight and human responsibility.
---
Plus, there's the question about who would control that AGI. If it's a black box in the hands of a company, how can I know for sure that the AGI has no secret plan implanted by the company or by some government?
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
Reports filed by humans? Or meaningless automated metrics?