This plant is operated and designed to the spec of an international corp with more than 20 factories; it's not a mom-and-pop operation. No one seems to think the excessive, useless alarms are an issue, and the attitude is that any damage caused by missed warnings is the fault of the operator. When I approach management and engineering about this, the responses range from "it's not in the budget" to "you're maintenance, fix all the problems and the alarms will go away".
The only way for this kind of issue to be resolved is with regulation and safety standards. An operator can't safely operate equipment when alarms are not filtered or sorted in some way. It's like forcing your IT guy to watch web server access logs live to spot vulnerabilities being exploited.
After all, read any post-mortem comments on HN. Many of those people can be hired as expert witnesses if you like. They will say “I would have put an alert on it and had testing”. You will lose the case.
“Oh, but we are trying to keep the error rate low”. Yes, but now your company is dead while the high-error-rate company is alive.
In revealed preferences, most engineers prefer vendors who have CYA. This is obvious from online comments. This is not because they are engineers. It’s because most people want to believe that the event was a freak accident.
Building a system around an error budget is not actually easy, even for engineers who say they want it. Because when an error happens, they immediately say it should not have happened. The counterfactual errors that were blocked, and the fact that the business exists at all, are not considered. Every engineer is a genius in hindsight. Every person is a genius in hindsight.
Why do these geniuses never build a failure-proof company? They do not. Who would not pay the same price for 100% reliable tech?
Indeed it is. That's why I said it's a larger societal problem in how we manage risk and react to failures.
> Why do these geniuses never build a failure-proof company?
Because this is mostly a matter of unknown unknowns and predicting the future, so even a founder who makes zero mistakes is more likely than not to fail.
Absolutely, and we'd collectively be better served if we had tools to deal with it.
I think of it as "incentive ecology" -- as noted, everybody has their own incentives, which shape their behavior, causing downstream issues that begin the process anew.
Obviously there's no simple one-shot solution to this, but what if we had ways to simplify and model this "web of responsibility" (some sort of game theory exposed as an easily consumed presentation, with computed outcomes that show the cost/ROI/risk/reward) that could be shared by all stakeholders?
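As a toy sketch of what that might look like (every stakeholder, choice, and number below is hypothetical, just to show the shape of the computation):

    # Hypothetical "incentive ecology" model: each stakeholder picks the choice
    # that maximizes their private gain, and we compute what that costs the
    # enterprise overall, so the tradeoff is visible to everyone.
    from itertools import product

    # payoffs[stakeholder][choice] = (private_gain, risk_added_to_enterprise)
    payoffs = {
        "management":  {"ship_fast": (10, 5), "fund_alarm_rework": (2, 1)},
        "engineering": {"alarm_everything": (8, 4), "design_attention": (3, 1)},
        "operations":  {"ignore_alarms": (5, 6), "escalate": (1, 2)},
    }

    def enterprise_value(choices, risk_weight=3):
        """Sum of private gains minus weighted total risk (the weight is a free parameter)."""
        gain = sum(payoffs[s][c][0] for s, c in choices.items())
        risk = sum(payoffs[s][c][1] for s, c in choices.items())
        return gain - risk_weight * risk

    # Enumerate every combination and rank: locally-rational choices often lose.
    names = list(payoffs)
    combos = [dict(zip(names, combo)) for combo in product(*(payoffs[s] for s in names))]
    for choices in sorted(combos, key=enterprise_value, reverse=True):
        print(enterprise_value(choices), choices)

The point isn't the numbers; it's that once the web is written down, "who benefits from obscurity" stops being deniable.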
Obscurity and deniability are the weapons wielded in most of these scenarios, so what if we could render them obsolete?
Sure, those in power would not want to yield their advantages, but the overall outcome should minimize risks and maximize rewards for the enterprise, so everybody wins.
Yes, I'm looking at it as an engineer and a dreamer, but I think if such a tool existed that was open source and easily accessible, this work could be done by rogue participants who could put it out there so it's undeniable.
Those people have valuable input on issues the engineer may not understand and may have little experience with. And engineers are just as likely to take the easy way out, like the caricature in the parent comment:
For example, for the manufacturer's engineering team it's much easier, faster, and cheaper to slap an alarm on everything than to learn attention management and then think through and create an attention management system that is effective and reliable (and it had better be reliable - imagine if it omits the wrong alarms!). I think anyone with experience can imagine the decision not to delay the project and increase costs for that subproject - one that involves every component team, is a priority for almost none of them, and which many engineers, such as the mechanical engineer working on the robotic arm, won't even see the need for.
> And China's society is run by engineers, so it will win out over ours.
History has not been kind to engineers who do non-engineering, such as US President Herbert Hoover, who built dams but also bore significant responsibility for the Great Depression. It's not that engineers can't acquire other skills and do well in those fields, but that other skills are needed - they aren't engineering. Those who accept as truth their natural egocentric bias and their professional community's bias toward engineering are unlikely to learn those skills.
> and it had better be reliable - imagine if it omits the wrong alarms!
This is entirely based on the premise that an error due to omitting the wrong alarm is worse than an error due to including too many alarms. That right there is lawyerthink. Also, these priorities don't conflict as you say; they just take different sides of a tradeoff. Managers and finance people are balancing a tradeoff of delivery speed, cost, and quality to maximize business value. And the bureaucrats and lawyers are choosing more expensive and less reliable systems because they better manage the emotions of panicky, anxious people looking for a scapegoat in a crisis. This has a cost.
Besides having the bad luck to be president when the stock market crashed, and therefore being scapegoated for it, Herbert Hoover was well regarded in everything he did before and after his term, including many non-engineering-related things. So I think he is a particularly poor example of this. Public blame for things like that tends to be exactly as rational as thinking a hangover has nothing to do with last night.
Also, I think this ignores the rest of my point to nitpick one part of a complex system, which was part of a larger point.
The more of them you have, the more likely it is that there's a warning if something happens. Whether the warning is ever noticed is secondary; what matters is the fact that there was a warning and the operator didn't react to it appropriately, which makes the situation the fault of the operator.
In the eyes of regulators and the courts, individual low-level employees cannot take responsibility. This is the logic by which they fine the company when someone does something you shouldn't need to be told not to do on a step ladder or whatever.
What this means is that low level employees become liability sinks. Show them all the warnings and make them figure it out. Give them all sorts of conflicting rules and let them sort out which ones to follow. Etc, etc.
Unfortunately, some systems either don't track criticality, or some of the alerts are tagged with the wrong level.
(One example of the latter is the Ruckus WAP, which has a warning message tagged at the highest level of criticality, so about two or three times a month, I see the critical alert: "wmi_unified_mgmt_rx_event_handler-1864 : MGMT frame, ia_action 0x0 ia_catageory 0x3 status 0x0", which should be just an informational level alert, with nothing to be done about it. I've reported this bug to Ruckus a few times over the past five years, but they don't seem to care.)
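When the vendor won't fix the tag, one workaround is to remap the severity downstream before it reaches anyone's pager. A minimal sketch of the idea in Python (the rule table is hypothetical; in practice this would live in your syslog pipeline or alerting layer):

    import re

    # Demote known-noisy messages that arrive tagged CRITICAL down to INFO.
    # The pattern below matches the Ruckus message from the comment above.
    DEMOTIONS = [
        (re.compile(r"wmi_unified_mgmt_rx_event_handler"), "INFO"),
    ]

    def remap_severity(severity, message):
        for pattern, new_severity in DEMOTIONS:
            if pattern.search(message):
                return new_severity
        return severity

    print(remap_severity("CRITICAL",
          "wmi_unified_mgmt_rx_event_handler-1864 : MGMT frame ..."))  # INFO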
Are you sure that's not what caused the problem in the first place? Unqualified and/or captured regulators who come up with safety standards that are out of touch with how the system needs to work in the real world?
Regulators coming up with engineering standards is pretty rare in general. Usually they incorporate existing professional standards from organizations like SAE, IEEE, IEC, or ISO.
Eventually, we tried removing the dialogs altogether and the incident rate approached zero. If you take away the guardrails completely, it radically alters the psychology and game theory around user interaction. Imagine climbing a tall building with multiple layers of protection vs having none at all.
I strongly believe in ideas like "safety 3rd". It's not that I want the humans to be maimed by the machines. Quite the opposite. The difficulty is in understanding higher order consequences of "safety" and avoiding the immediate knee-jerk satisfaction that first order resolutions may provide.
I think there's evidence and studies on this. IIRC removing traffic lights forces people to be much more alert, reducing accidents.
Fun fact: Bhutan is perhaps the only country in the world without traffic lights!
No way this would work long-term in Germany. Maybe there wouldn't be that many more accidents but traffic would stutter, all the time, everywhere. Some safety-first drivers still don't get how roundabouts work ...
Things like that would probably break down at a certain level of crowded-ness, but it did somewhat change my view of regulation in general. I think there are a lot of cases where people will figure things out just fine if you leave them alone and count on them to be responsible, versus having a million detailed rules that are poorly enforced.
Afaict they have police officers regulating traffic instead. Not much difference in this particular discussion.
Cool! Did workers expect consequences for incidents? Did they get rewarded for lack of incidents?
Meaning, I imagine a world where there are no consequences for incidents and removing guardrails doesn’t lower incident rates because people aren’t incentivized to care?
Or you’re saying they naturally cared and removing guardrails allowed them to take ownership?
The issue with multiple dialogs is that the operator could claim that they were confused with conflicting wording and the implications of things like "Confirm" vs "Cancel" in certain contexts of use. This provides some degree of cover for moving with less care. With no dialog at all, the operator has nothing to point to but their own actions. There is nothing to hide behind.
The fact that this was also a heavily multi-lingual/cultural environment amplified the effect of poorly designed safety mechanisms dramatically.
My point is that the operator may be genuinely confused by a poor interaction model. Removing that interaction model entirely is certainly an option, but it's not clear that comparing "no dialog" vs "bad dialog" is a strong argument for "dialogs bad, better to have none" - you don't have data for the "good dialog" case, which may be better still.
UX Design is hard...
Also, I was that obnoxious kid who, after asking someone a yes/no question, used to add "Yes/No/Cancel" (probably to highlight my perceived absurdity of that button).
There have also been fatal aviation accidents where there's a problem with a common system (a dip in the power supply or hydraulic pressure, or a problem with a critical sensor) and dozens of systems sound alarms at the same time [1].
And for a technology example, a database server disappearing might raise a single alarm, but the applications that rely on that database might raise countless alarms as attempts to connect fail over and over again.
[1] https://en.wikipedia.org/w/index.php?title=Air_France_Flight...
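That failure mode is why some alerting setups add dependency-based suppression: if the resource at the root is already alarming, alarms from everything that depends on it get folded under it instead of paging separately. A rough sketch (service names hypothetical):

    # Dependency-based alarm suppression: an alarm is shown only if nothing
    # it depends on is itself alarming, so the root cause surfaces alone.
    DEPENDS_ON = {
        "billing-app": ["db-cluster"],
        "reports-app": ["db-cluster"],
        "db-cluster":  [],
    }

    def root_cause_alarms(active):
        return sorted(a for a in active
                      if not any(dep in active for dep in DEPENDS_ON.get(a, [])))

    active = {"db-cluster", "billing-app", "reports-app"}
    print(root_cause_alarms(active))  # ['db-cluster'] - the apps are suppressed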
The ECAM system is what displays issues to the crew. There is a lot of logic there on what to show and what to inhibit. For example some errors aren't shown during takeoff since it's more important to focus on safely starting the climb, they would only be shown once past 400ft altitude.
There is also a priority order based on criticality; they don't just show up in chronological order. An engine fire, for example, is a red warning and would always come above a yellow caution message. It's designed in a way where the results of the major problem (e.g. engine on fire, and then losing electric power from that side) are lower priority than the problem itself.
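Not the actual ECAM logic, of course, but the shape of it is easy to sketch: every message carries a criticality, some are inhibited during critical flight phases, and display order is by priority rather than arrival time. Something like:

    # Toy ECAM-style presentation logic. The 400 ft takeoff inhibit and the
    # red-warning-over-yellow-caution ordering come from the comment above;
    # everything else here is illustrative.
    PRIORITY = {"warning": 0, "caution": 1, "advisory": 2}  # lower = shown first

    def messages_to_display(messages, phase, altitude_ft):
        if phase == "takeoff" and altitude_ft < 400:
            # Inhibit everything but red warnings so the crew can fly the climb.
            messages = [m for m in messages if m["level"] == "warning"]
        return sorted(messages, key=lambda m: (PRIORITY[m["level"]], m["time"]))

    msgs = [
        {"time": 1, "level": "caution", "text": "ELEC GEN 1 FAULT"},
        {"time": 2, "level": "warning", "text": "ENG 1 FIRE"},
    ]
    for m in messages_to_display(msgs, "climb", 1200):
        print(m["text"])  # ENG 1 FIRE prints first despite arriving later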
Snark aside, this also impacts resolution time, because done well, this instantly points out the most critical problem instead of all the consequences of one big thing breaking. "Dear operator, don't worry about the hundreds of apps, the database cluster is down".
ECAM messages are recorded by the DFDR/CVR but are not normally transmitted via ACARS. Pilots were not normally aware of ACARS messages (see MH370).
Your point is a fair one though. In the case of AF447 the crew, against their training, made no attempt at all to work their way through the multiple ECAM messages with the appropriate checklist so they died. The final report on QF32 shows what a high workload this can be.
Since AF447 was lost for nearly two years, the ACARS messages were all that the investigators had to go on until they found the wreckage and the DFDR and CVR. Incidentally, one problem with those ACARS messages was that they are only timestamped to the minute and may arrive out of order, which makes interpreting them more difficult.
On a Boeing there's physical feedback: when one control moves, so does the other.
This was not the first time pilots had made conflicting inputs without noticing.
>https://bea.aero/uploads/tx_elyextendttnews/annexe.01.en.pdf
You can see in the CVR transcript that the stall indicator stopped many times despite them being in a stall the entire time. The pilot (like every other pilot) knew how to recover from a stall on paper. But he had the plane telling him his airspeed was good (frozen pitot tube) and that bringing the pitch down was causing a stall.
As a pretty decent driver, this terrifies me, because I first think, "What am I missing?" But then it hits me - these alarms and chimes are breeding generations of drivers, not just young ones, who are grossly incompetent and should not be driving.
The fact that I cannot control which alarms go off is asinine. And they put a lock on how low you can turn the chime volume. So, basically, you're telling me that I have to harass my neighbors at 5 am when I load the car for work, because you want to chime nonstop when the door is open and I have zero control to turn it off or lower it past the locked minimum. Oh, and don't forget the threats of voiding the warranty if you dig deeper to disable anything. My favorite alarm and warning pops up randomly when you're driving, sometimes blocking the map, and it says something to the effect of, "Remember to stay focused on driving!"
I see this slipping beyond alerts and notifications into ads. Waze does not care if it blocks my directions by blasting an ad on part of my screen, which has absolutely caused me to miss exits, since they don't want to tell you the exit ahead of time on longer stretches, only, "drive for 45 miles"...
I see this like popups: if the industry can't handle itself, it needs to be forced to stop doing this altogether and find a rework, since it's clearly affecting private and public machines in ways that could burn people out or kill them.
For some it's distracting and frustrating, even increasing aggression and thereby increasing the risk. For others it breeds complacency, a "boy who cried wolf" scenario such that the alarms become meaningless. Either way, it doesn't work as intended.
Interesting to know ships have followed the same pattern, apparently to a worse extent. I wonder how many more walks of life and industries are suffering in the same way.
As soon as I drove off the lot, 3 warning indicator lamps lit up, including "Tire Pressure", so I stopped at a service station, thought for a moment, then drove back to the rental lot.
The other indicator had something to do with crash protection, and I think we worked out how to disable the system. After putting air into my tires, I was good to go.
So I'm thankful that those lamps indicated some actual conditions. I always kind of make a point of taking out the Owner's Manual and leafing through it, however briefly, just to see that it covers everything. They're still fairly comprehensive. I really appreciate that.
Oh and before you even start driving let us "bing!" you with a message that the temperature is below 4C. As if you didn't know that already.
If I spent more than 50k on a car like that, I would absolutely return it and file a complaint.
Car companies care a great deal about after sales stats. This trend will continue because we as users on average tolerate it.
Similar experience: lots of flashing and beeping, which is distracting whilst also being wrong often enough to be really annoying (misreading speed limits is a known problem).
Alerts for exceeding the speed limit, for needing to change gear, and, by far the worst, active lane assist, which pushes you back into your lane if you cross the white line without indicating (I only found this out afterwards, as the hire place didn't mention it or leave a manual) - something that can happen frequently if you're driving down narrow country roads, where indicating wouldn't have seemed appropriate and might just confuse others.
I spoke to one of the mechanics at my local garage who said you can't permanently turn these features off as they turn back on when you start the car.
I wonder if anyone who's had an accident caused by being distracted by all these alarms has successfully sued?
You most likely can if you have the factory diagnostics tool or a good aftermarket reimplementation.
These features are often mandated on a per-region basis, so the configuration bits are definitely there.
I’ve also never used as much windscreen wash in any other car as I have in this one.
And they start at pretty ridiculous temperatures in the double digits. The only way those would be dangerous to you is if you were homeless and lacked any form of winter clothing, at which point you either already know or are too far mentally gone for a text alert to help you.
This sounds like the engineers wanted to go the traditional route in the first place, but the chain of command wanted to reduce production/construction time and save money.
The message was never intended to help the human operator. It was intended to allow the company to avoid responsibility for cutting corners.
If the goal of the message was to communicate something important to the human operator, extraneous messages would be a serious problem. But if the goal is simply to cover the ass of the company, then extra error messages are not a problem at all. That's why they never get fixed or pruned.
I wonder how much labor expense has to be saved to make up for a future catastrophic event?
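Back of the envelope, with entirely made-up numbers:

    # Hypothetical figures, purely illustrative of the comparison being made.
    savings_per_year = 200_000      # labor saved by not rationalizing alarms
    p_event = 0.01                  # assumed annual chance of a catastrophic miss
    event_cost = 50_000_000         # assumed liability + cleanup + downtime

    expected_annual_loss = p_event * event_cost
    print(expected_annual_loss)     # 500000.0 - 2.5x the "savings"

Even at a 1% annual probability, the expected loss dwarfs the savings; the catch is that the savings show up in this quarter's budget and the expected loss doesn't.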
https://www.maritime.dot.gov/sites/marad.dot.gov/files/2025-...