Artemis II Fault Tolerance
37 points
3 hours ago
| 8 comments
| alearningaday.blog
| HN
methodical
8 minutes ago
[-]
Candidly, while I understand the need for some amount of redundancy, I'm curious how much complexity this level of redundancy adds to the system as a whole, and whether that added complexity nearly outweighs the benefit of the extra redundancy. I'm sure NASA has calculated the trade-off, but I'd be curious to see the thinking behind it.

I feel similarly when reading about certain aircraft accidents over the years, where the redundancy of certain systems, and the complexity it adds, seems to have been an indirect cause of the accident rather than preventing it. I suppose there's no real way to quantify the accidents that redundancy has prevented, so as to compare the two directly.

reply
WorkerBee28474
1 hour ago
[-]
> Orion utilizes two Vehicle Management Computers, each containing two Flight Control Modules, for a total of four FCMs. But the redundancy goes even deeper: each FCM consists of a self-checking pair of processors.

Who sits down and determines that 8 is the correct number? Why not 4? Or 2? Or 16 or 32?

reply
croisillon
6 minutes ago
[-]
Eight shall be the number thou shalt count, and the number of the counting shall be eight. Nine shalt thou not count, neither count thou seven, excepting that thou then proceed to eight.
reply
echoangle
1 hour ago
[-]
They probably set an acceptable total loss rate for the mission and worked backwards to determine how many replicas of each system they need to achieve that while minimizing total cost/weight.

So the answer is "some engineers sat down after talking to management".
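
A back-of-the-envelope version of that working-backwards step (the failure probability and loss budget below are made up, and it assumes replica failures are independent):

    # Smallest replica count whose "all replicas fail" probability fits the budget.
    def min_replicas(p_unit_fail, loss_budget):
        n, p_all_fail = 1, p_unit_fail
        while p_all_fail > loss_budget:
            n += 1
            p_all_fail *= p_unit_fail
        return n

    # A unit that fails 1% of the time against a 1-in-100,000 loss budget:
    print(min_replicas(0.01, 1e-5))   # -> 3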

reply
y1n0
1 hour ago
[-]
This is correct.
reply
nine_k
1 hour ago
[-]
Given a list of estimates of failure probabilities, finding the right mix of redundancy becomes a very tractable problem, maybe even freshman-level.
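
Something like this, as a toy version of that freshman exercise (the subsystems, failure probabilities, masses, and target below are all invented):

    # Brute-force the replica mix that meets a reliability target at minimum mass.
    from itertools import product

    subsystems = {                 # (per-unit failure probability, per-unit mass in kg)
        "flight_computer": (1e-3, 30),
        "star_tracker":    (5e-3, 4),
        "radio":           (1e-5, 12),
    }
    target = 0.9999                # required probability that every subsystem survives

    best = None
    for counts in product(range(1, 5), repeat=len(subsystems)):
        reliability, mass = 1.0, 0.0
        for n, (p_fail, unit_mass) in zip(counts, subsystems.values()):
            reliability *= 1 - p_fail ** n     # a subsystem dies only if all n replicas die
            mass += n * unit_mass
        if reliability >= target and (best is None or mass < best[0]):
            best = (mass, dict(zip(subsystems, counts)))

    print(best)   # -> (80.0, {'flight_computer': 2, 'star_tracker': 2, 'radio': 1})

The real version has correlated failures, common-cause effects, power and weight budgets, and so on, but the shape of the problem is the same.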
reply
cubefox
1 hour ago
[-]
Getting the probabilities could be very difficult though, especially for issues that have never occurred before.
reply
notahacker
23 minutes ago
[-]
The fault tolerance is mostly focused on background radiation flipping bits. We've got half a century of data on the frequency of those upsets, and on the extent to which they're correlated under different space conditions, not to mention the ability to irradiate prototypes of the flight computer, with representative amounts of shielding, in ground-based facilities...
reply
kqr
44 minutes ago
[-]
For issues that have never occurred before, probabilities are the wrong tool. The right thing to do is list all the behaviour the vehicle must never exhibit and think of ways it still might, despite all redundancies -- maybe even despite every single component working as intended.

Lots of mission failures in history were caused by unexpected interactions between fully functional components. Probabilities of failures don't help with that.

reply
SauntSolaire
4 minutes ago
[-]
And that's why you test to failure (ideally under real or similar conditions): to surface the failures that have never occurred before and start collecting data on them.
reply
9dev
1 hour ago
[-]
That is what you hire an army of engineers for.
reply
MiracleRabbit
1 hour ago
[-]
Interesting. In safety components we use lockstep microcontrollers, which do something similar at a much smaller scale.

https://en.wikipedia.org/wiki/Lockstep_(computing)

Example: https://www.st.com/resource/en/datasheet/spc574k72e5.pdf
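
A toy software model of what those parts do in hardware (hugely simplified; real lockstep cores compare cycle by cycle, often with one core running a couple of clocks behind the other):

    # Run the same step on two "cores" and trap on any disagreement.
    class LockstepFault(Exception):
        pass

    def lockstep_step(step, state_a, state_b, inputs):
        out_a = step(state_a, inputs)
        out_b = step(state_b, inputs)
        if out_a != out_b:
            # A miscompare only tells you *something* is wrong, not which core.
            raise LockstepFault((out_a, out_b))
        return out_a

    # A bit flip in one core's copy of the state shows up as a miscompare:
    step = lambda state, u: state + u
    try:
        lockstep_step(step, state_a=100, state_b=100 ^ (1 << 3), inputs=1)
    except LockstepFault as fault:
        print("lockstep miscompare:", fault)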

reply
pclmulqdq
1 hour ago
[-]
Lockstep processors were used here, as well.

> each FCM consists of a self-checking pair of processors.

reply
willis936
53 minutes ago
[-]
Never take two clocks to sea. Always sail with one or three.
reply
tcp_handshaker
2 hours ago
[-]
For the Airbus they used different CPUs because CPUs have bugs too...
reply
echoangle
1 hour ago
[-]
Not just CPUs: they also run an entirely different (and simpler) fallback program in case the main computers fail. I think they were more worried about programming errors, but this should avoid any shared failure modes between the main computers (be they programming or hardware).
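
A very loose sketch of that primary-plus-simpler-fallback arrangement (nothing here is the actual Airbus logic; both control laws are made-up stand-ins, imagined as written by separate teams for dissimilar hardware):

    # The primary law has more inputs and more corner cases; the fallback is
    # deliberately dumb so there are fewer places for a bug to hide.
    def primary_pitch_law(stick, airspeed, alpha):
        gain = 2.0 if airspeed < 150 else 1.2
        cmd = gain * stick - 0.1 * alpha
        return max(-15.0, min(15.0, cmd))

    def fallback_pitch_law(stick):
        return max(-10.0, min(10.0, 1.5 * stick))

    def pitch_command(stick, airspeed, alpha, primary_healthy):
        if primary_healthy:
            try:
                return primary_pitch_law(stick, airspeed, alpha)
            except Exception:
                pass   # any crash in the primary drops us to the simple law
        return fallback_pitch_law(stick)

The point of making the fallback simpler as well as different is that it's small enough to be easier to get right, and less likely to share a bug or a hardware fault with the primary.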
reply
kqr
40 minutes ago
[-]
It does not.

Even if different teams write software in different languages, they end up creating very similar bugs because the bugs crop up in the complexities of the domain and insufficiencies of the specification.

N-version programming doesn't work as well as you think. See Knight and Leveson (1986).

(N-version programming does guard against "random" errors like typos or accidentally swapping parameters to a subroutine call. But so does a good test suite and a powerful compiler.)

reply
ranger207
45 minutes ago
[-]
> The self-checking pairs ensure that if a CPU performs an erroneous calculation due to a radiation event, the error is detected immediately and the system responds.

How does a pair determine which of the pair did the calculation correctly?

reply
SauntSolaire
2 minutes ago
[-]
You just run the calculation again until both agree.
reply
AlotOfReading
32 minutes ago
[-]
It doesn't have to. It raises an error that the system can detect and take action on. Usually that'll be some combination of interrupt/reset and an external pin to let the rest of the system know what's happened.
reply
Ductapemaster
36 minutes ago
[-]
In simple terms, this works by XORing the two outputs; if they disagree, the system performs a fault recovery.

There are also space systems that use three processors and a majority vote to pick the correct output, but that's a different approach.
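
Roughly, in toy form (not flight software; just the two arrangements described above):

    # A self-checking pair can only *detect* a disagreement...
    def pair_check(a, b):
        if a != b:
            raise RuntimeError(f"miscompare: {a} != {b}")   # which side is wrong? unknown
        return a

    # ...while a triplex arrangement can *vote* a single bad value out.
    def triplex_vote(a, b, c):
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no majority")

    print(triplex_vote(42, 42, 7))   # -> 42, the upset value is outvoted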

reply
_whiteCaps_
1 hour ago
[-]
I'm a big fan of dissimilar redundancy (though I didn't know that was the term until today) for building system software.

Build for various Linux distros and some of the BSDs, and weird compile errors and edge cases will pop up. Often I've found that these expose undefined behaviour or incorrect assumptions you wouldn't notice if you were only building for a single platform.

reply
y1n0
1 hour ago
[-]
What I would like to see is the fault data. Also a graph of the number of in-sync FCMs over time, and how well it correlated with the predictions.

In other words, how over-engineered is it?

reply
m3kw9
1 hour ago
[-]
The amount of training the astronauts need must be enormous.
reply
kqr
35 minutes ago
[-]
When the Apollo astronauts learned that they might need to repair the computer if it broke, they joked that they might as well learn brain surgery in case they ended up needing that too.

(This was when they planned on sending a modular computer with them. In the end they settled for sending up a fully assembled spare computer instead, which made replacement easier.)

reply