Children Of The Magenta
October 2, 2024 1:58 PM

"There are various factors that contributed to the crash of flight 447Some people point to the fact that the airbus control sticks do not move in unison, so the pilot in the left seat would not have felt the pilot in the right seat pull back on his stick, the maneuver that ultimately pitched the plane into a dangerous angle. But even if you concede this potential design flaw, it still begs the question, how could the pilots have a computer yelling 'stall' at them, and not realize they were in a stall?"

Captain Warren VanderBurgh had already answered that question in 1997: "We appear to be locked into a cycle in which automation begets the erosion of skills, or the lack of skills in the first place, and this then begets more automation."

His presentations on the risks of automation dependency - and how to respond to it in an emergency - have been restored and put on YouTube: Automation Dependency, Part One and Part Two.
posted by mhoye (21 comments total) 40 users marked this as a favorite
 
Admiral Cloudberg's excellent writeup of the crash and its causes.
posted by riotnrrd at 2:02 PM on October 2 [19 favorites]


In the Admiral Cloudberg writeup above, there's a good description of one possibility to explain the pilots' failure to address "a computer yelling 'stall' at them":
There are however several possible reasons why they ignored it. One is that their minds were already so saturated with information that they simply never heard it. Instrument indications were seemingly going haywire, nobody knew what instruments they could trust, the pilots were desperately trying to figure out what the plane was doing, the computer screen was covered in warning messages, and a continuous C-chord chime was running in the background. Scientific studies have shown that in such situations, the capacity of the human brain to tune out seemingly obvious auditory cues is considerable.
There are of course other possibilities, but I do think a good appraisal of the safety aspects of this flight/accident should include that many other airplanes have tactile warnings for a stall (stick shaker etc.), but Airbus' fly-by-wire aircraft do not. I'm not suggesting that's inherently unsafe, but I think it would be worth studying whether adding a stick shaker would at least marginally improve stall recognition under task saturation.
posted by thegears at 2:27 PM on October 2 [17 favorites]


Semantic satiation. Especially if the word is repeated identically and rhythmically. Very easy to just lose any sense of meaning at all. Possibly one of the worst warning methods to alert a human being is to repeat the same thing over and over.
posted by seanmpuckett at 2:58 PM on October 2 [15 favorites]


Previously
posted by uncle harold at 3:10 PM on October 2


When I start a job in a corporate environment, the first thing I do is pay attention to each and every alert. They're usually entirely bullshit, but somehow nobody fixes them until I complain; everyone else just tunes them out. This is more alert overload than alert fatigue, but the two are closely related.
posted by novalis_dt at 3:22 PM on October 2 [9 favorites]


Gawande's _The Checklist Manifesto_ is good on how much thought should go into every alert.
posted by Nancy Lebovitz at 4:01 PM on October 2 [5 favorites]


Driving west on US 2 back to Seattle with my girlfriend after a day of really stunning North Cascades sightseeing and a short hike, I got a speeding ticket in Gold Bar, Washington, because I hadn’t slowed down when we entered the city limits.

While waiting for the officer to hand me the ticket to sign, I realized I had been saying, over and over again for at least the previous 45 seconds, under my breath but perfectly audibly, 'slow down! Slow down or you’re going to get a ticket!'

After we got back into the flow of traffic I glanced over at my girlfriend and asked her whether she’d heard me whispering 'slow down or you’re going to get a ticket' over and over again, and she looked back with somewhat widened eyes and said 'Yes!'.

So I don’t have much trouble believing the pilots ignored the 'Stall!' warning, whether they heard it or not.
posted by jamjam at 4:09 PM on October 2 [6 favorites]


So, I think about AF 447 just about every single day at work.

I have, within the last month, in a bar on a Friday night, with a friend I had met the week before through a hobby, had an animated 30-minute conversation about Upset Recovery and the lessons of AF 447 and how it applies to everyday life.

Also MentourPilot's excellent video, and also previously.

Not turning this into a derail about Upset Recovery...

The thing about automation is that it's hard to argue against it when it's doing something humans are bad at. And making hundreds of imperceptible corrections to keep a flight level and smooth for hours at a time is not something humans are good at. I'd be curious how obvious it would be to passengers if the autopilot were off for two hours.
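
(To make the "hundreds of imperceptible corrections" point concrete, here's a toy sketch of the kind of feedback loop an altitude hold runs. The PID structure is the textbook one, but the class, gains, and units are invented for illustration and bear no resemblance to any certified flight control law.)

```python
# Toy proportional-integral-derivative (PID) altitude hold -- purely
# illustrative; gains and units are made up.

class AltitudeHold:
    def __init__(self, target_ft, kp=0.02, ki=0.001, kd=0.5):
        self.target = target_ft
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, altitude_ft, dt_s):
        """Return a small pitch correction (degrees), called many times a second."""
        error = self.target - altitude_ft
        self.integral += error * dt_s
        derivative = (error - self.prev_error) / dt_s
        self.prev_error = error
        # Each individual correction is tiny -- well below what a passenger
        # (or a hand-flying pilot) would consciously notice.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```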

There are also plenty of stories of automated safety systems saving lives when the pilots were absolutely lost in terms of situational awareness; Ural Flight 178 comes to mind.

BUT I agree automation is a double-edged sword. There have been a number of extremely tragic flights lost recently in icing conditions. I won't link any since investigations are still pending and I don't want to assume the accident sequence for any loss of life, but I will say that something commenters have mentioned repeatedly in the aftermath of these accidents is that YOU MUST disengage the autopilot early in icing conditions, since the autopilot can mask the loss of lift that comes with early icing.
posted by midmarch snowman at 5:28 PM on October 2 [13 favorites]


XPilotYT is a YouTube channel that uses flight sims to recreate airline crashes. It's vaguely hypnotic and scary/cathartic, like an aviation true crime series.

Their video on flight 447 is where I first heard about this crash.
posted by AlSweigart at 7:33 PM on October 2 [2 favorites]


Langewiesche
posted by j_curiouser at 12:03 AM on October 3 [2 favorites]


I've just read Matthew Syed's second book Black Box Thinking: Why Most People Never Learn from Their Mistakes (2015). He covers AF447 in some detail, as well as the November Oscar "nobody died" near (5 ft!) miss at LHR. Syed's sub-thesis is that, with automation, pilots can clock up thousands of flight hours without a) making mistakes or b) recovering and learning from them.
Also: when a plane goes down, investigators will go to extraordinary lengths to work out why. The black box for AF447 was recovered from the mid-Atlantic floor 4,000m below the surface, 2 years after the plane crashed.
posted by BobTheScientist at 12:35 AM on October 3 [12 favorites]


I seem to recall that at one point in the accident sequence the captain took over from the relief pilot and did in fact use correct control inputs, which were dutifully averaged with the copilot's constant stick back input, leaving a net stick neutral input. All it would have taken to prevent the plane from falling into the sea would have been the captain holding down the override button that causes the flight computer to ignore the other pilot's stick input.
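
(My understanding — and this is a simplification — is that the Airbus logic algebraically sums the two sidestick inputs and clips to one stick's full travel, rather than strictly averaging them; with full forward against full back the result is the same, zero. A toy model of that logic and the priority takeover button, with invented names:)

```python
# Toy model of dual sidestick handling: sum both inputs, clip to one stick's
# full travel, and let a held priority button latch out the other side.
# Simplified illustration only -- not the actual Airbus implementation.

MAX_DEFLECTION = 1.0  # full stick travel, normalized

def combined_input(left, right, left_priority=False, right_priority=False):
    """left/right in [-1, 1]; negative = nose down, positive = nose up."""
    if left_priority and not right_priority:
        return left   # takeover button held: the other stick is ignored
    if right_priority and not left_priority:
        return right
    total = left + right  # dual input: commands sum...
    return max(-MAX_DEFLECTION, min(MAX_DEFLECTION, total))  # ...then clip

# Full nose-down from one seat against full nose-up from the other nets out
# to zero -- the airplane sees a neutral stick:
assert combined_input(-1.0, +1.0) == 0.0
# Holding the priority button makes the nose-down command effective again:
assert combined_input(-1.0, +1.0, left_priority=True) == -1.0
```

Either way, neither pilot can feel what the other is commanding — which is exactly the design criticism raised in the post.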

One of the most disturbing things about the whole incident to me is that the captain had time to notice the problem, leave the crew rest area, enter the cockpit, and start handling the controls. That's a long time to fall.
posted by wierdo at 4:19 AM on October 3 [2 favorites]


@BobTheScientist
I'd never heard of the NO incident. Thanks
posted by Pembquist at 7:52 AM on October 3


with automation, pilots can clock up thousands of flight hours without a) making mistakes or b) recovering and learning from them.

IMO learning from and recovering from mistakes is more of a training thing than a valid production thing. Failure has a thousand fathers, just as success does - to examine every failure and expect to extract a pat lesson that one (or others) will never repeat is the stuff of MBAs and life coaches, not reality.

Production failure can mean the end (of a business, of a career, of a life) - to imply that it's something one can learn from is not necessarily true.

Maybe they need hours in a trainer without the automation in place to learn to recover from mistakes.
posted by The_Vegetables at 10:01 AM on October 3 [1 favorite]


The last part of Admiral Cloudberg's analysis is super interesting. This isn't just about semantic satiation. It's about error messages and warnings that are entirely disconnected from semantic meaning, and smart guides that aren't that smart. Rather than saying what is actually failing (e.g., the pitot tubes), the error messages indicate what the systems interpret to be failing, which at various points during this flight was completely wrong. So you've got some warnings that are 100 percent real, do-or-die, like the stall warning, getting disregarded because they don't make sense in the context of other error messages, some of which are incorrect and seemingly contradictory because of the blocked pitot tubes. For instance, the automated flight director kicked back in while the flight was incorrectly ascending in the first place, producing guidance that was totally inappropriate for the situation when the pilots attempted to level out or bring the nose down to fix the stall. The flight director's cue to maintain course with a 12-degree nose-up attitude was the worst thing they could heed, in a sea of cacophony.

It's a tragic reminder of the issues caused when people meet complex systems with poor training and unclear guidance.
posted by limeonaire at 11:55 AM on October 3 [1 favorite]


And yeah, to add to the notes about training: one of the things that's clear from the articles is that while they received training, it was often on things that couldn't easily be translated to other circumstances, and it didn't always include the full error messages they might receive in a real scenario. So they learned how to address a stall... during takeoff. The recovery is totally different in that situation than at cruise altitude. And the simulators didn't have all the error messages they might encounter during these situations on a real airplane. One part of the answer is more realistic, comprehensive training, including additional flight hours learning recovery procedures on small planes.
posted by limeonaire at 12:01 PM on October 3 [2 favorites]


Yea, I've been on about Langewiesche and the AF 447 crash, and really appreciate all of the related links above.
posted by zenon at 1:44 PM on October 3


Reading the transcript of the voice recorder is horrifying.
posted by grumpybear69 at 2:30 PM on October 3 [1 favorite]


If you think that's horrible, there's a play/movie dramatizing six such accidents/incidents. I couldn't get past the first one.

Also: Crew/Cockpit Resource Management. Could use a little more of this in healthcare.
posted by credulous at 3:06 PM on October 3


So many disasters I have read about would have been prevented by the operator letting go of the controls and allowing automation (or plain physics in negative feedback systems) to take over. My first thought tends to be that I would have been smart enough to do the right thing.

Then I remember all the times I tried to catch a falling soldering iron or touched hot glassware in the lab. We humans appear to have a very hard time not reacting the way that worked for millions of years of adaptation before we invented airplanes. See also trying to put out an oil fire in the kitchen with water.

At the moment I am involved in designing the user interface for a system that is complex by nature and complicated by regulation, where many things can go wrong. No one will die if mistakes are made, but people’s lives could be affected for years. A fine balancing act between automation and manual input. One of my wins has been making manual overrides different on a random basis so operators can’t get used to overriding by ‘muscle memory’. The end users did not like the idea until I shared links to dozens of transportation and chemical industry post mortems.
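
(As a sketch of what randomized overrides might look like in the simplest case — the names and the token scheme here are hypothetical, not my actual design:)

```python
# Hypothetical sketch of a "no muscle memory" override: every override
# request demands a freshly generated confirmation token, so the
# acknowledging gesture can never become automatic.
import random
import string

def make_token(length=4):
    """Generate a fresh confirmation token; a new one is issued every time."""
    return "".join(random.choices(string.ascii_uppercase, k=length))

def request_override(read_input=input):
    token = make_token()
    # The operator has to read the prompt and retype a token that differs on
    # every request, so a rehearsed keystroke sequence won't get through.
    answer = read_input(f"Manual override requested. Type {token} to confirm: ")
    return answer.strip().upper() == token
```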

Currently I am deeply confused trying to work out a way to pick the most important warning to present to the user in a way that is hard to ignore, without also hiding other warnings. There are at least two airplane crashes I can remember where the crew was so focused on faulty landing-gear warnings that they ignored altitude and, I may be mistaken, fuel warnings.
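
(One pattern I've been toying with — offered as a hedged sketch, not a claim about what any certified system does — is a severity-ordered queue that surfaces only the top alert prominently but keeps a visible count of everything else, so lower-priority warnings are de-emphasized rather than hidden:)

```python
# Hedged sketch: keep every active warning in a severity-ordered heap,
# surface only the most severe one prominently, and show a count of the
# rest so nothing silently disappears. Severity values are made up.
import heapq

class AlertPanel:
    def __init__(self):
        self._heap = []  # entries: (-severity, insertion_order, message)
        self._count = 0

    def raise_alert(self, severity, message):
        heapq.heappush(self._heap, (-severity, self._count, message))
        self._count += 1

    def display(self):
        if not self._heap:
            return "No active warnings"
        _, _, top = self._heap[0]   # most severe; oldest first on ties
        others = len(self._heap) - 1
        # The top alert dominates the display; the "+N more" counter keeps
        # the operator aware that other warnings remain active.
        return f"** {top} **" + (f"  (+{others} more)" if others else "")

panel = AlertPanel()
panel.raise_alert(3, "LANDING GEAR DISAGREE")
panel.raise_alert(9, "STALL")
print(panel.display())  # ** STALL **  (+1 more)
```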

I have an idea loosely based on Roger Fisher’s suggestion to implant the nuclear launch codes in a volunteer’s heart, so that the president would have to kill him to initiate a launch. Make overrides and warnings physically painful in proportion to how important they are. Tactile feedback like stick shakers hasn’t been enough; neither have loud alarms. Make it hurt. Like heating up the pilot’s and first officer’s controls if they disagree - a little bit at first, red hot in the extreme.
posted by Dr. Curare at 3:08 PM on October 3 [4 favorites]


There are of course other possibilities, but I do think a good appraisal of the safety aspects of this flight/accident should include that many other airplanes have tactile warnings for a stall (stick shaker etc.), but Airbus' fly-by-wire aircraft do not. I'm not suggesting that's inherently unsafe, but I think it would be worth studying whether adding a stick shaker would at least marginally improve stall recognition under task saturation.
posted by thegears at 2:27 PM on October 2


I was surprised to learn that A320/330 systems do not have stick shakers, but they really don't. Granted, they're designed to ignore control inputs that would create an excessive angle of attack, but the different control laws on those aircraft are a whole different ballgame.

It's worth noting that the Airbus A220, which is not an original Airbus design (it was the Bombardier C Series), does in fact have a stick shaker as part of the fly-by-wire system, and it's extremely... effective... in getting your attention about airspeed decay when you lose situational awareness, despite the visual and audible cues you're getting from the airplane. It also will not allow you to stall unless it's in a reduced-system-protection mode called "direct."

AF447 is taught as one of a set of case studies in a number of airline indoctrination classes and certainly during ATP-CTP certification (ATP is the certificate required to operate an airliner in the USA.)

The thing is... unreliable airspeed indications are very, very tricky. Every pilot I know - myself included - trains to correlate instruments and "Do Pilot Stuff," but without visual references, it's much, much harder to do than you'd think. It's astonishing to me (and every pilot I know) that a crew like that could hold a stall all the way into the ocean, but here we are - and there but for the grace of God go we. Which is why we continually study this one, read accident reports, try to refine CRM practices, and learn as much as we can.

I'm about to go to annual recurrent training in the next month, and one of the themes this year is flying without a great deal of automation - really turning things off and hand-flying as much as possible - as well as dealing with unreliable airspeed. I can tell you, having gone through it in training last year, that the processes we go through for that scenario are exhaustive, difficult, and draining by the end - and that's just in a simulator.
posted by Thistledown at 11:33 AM on October 7 [4 favorites]



