Tagged: aviation
Navigating Safety: Necessary Compromises and Trade-Offs: Theory and Practice by René Amalberti
Amalberti explains safety-related concepts very clearly and, whether or not the reader agrees with every point, the book is worth reading and discussing. He challenges some sacred cows in the safety field, and his book deserves careful analysis. Three points in particular deserve special attention:
• People learn by doing, not by observing. Demanding full Situational Awareness before acting could lead to serious accidents while the operator is still trying to build that awareness.
• There are different safety models. Many markets and companies try to imitate ultra-safe models such as those of Aviation and Nuclear Energy when, in fact, these models may not work in activities that are more expert-based than procedure-based.
• Training someone for every single exceptional situation is not the safest option. People may be tempted to explore the limits instead of remaining within the safe envelope.
People learn by doing, not by observing. True, and perhaps that is one of the reasons people are so bad at monitoring, even though some automation designs still insist on precisely that role. However, Amalberti reaches a conclusion about Situational Awareness that, in my opinion, could be wrong: for him, we should not demand full Situational Awareness before trying a solution, because we could become paralyzed in critical situations under serious time constraints. Put that way, it is true, and it corresponds to the phenomenon known as paralysis by analysis, but something is missing:
Situational Awareness should not be understood as a condition to be met once a critical situation arises but as a continuous process. In high-risk environments, design should sustain that flow at a level where, once an emergency appears, getting a full picture of the situation is easy. If we combine this single principle with Amalberti's first one, that people learn by doing rather than observing, we can reach different conclusions:
1. The highest levels of automation should be reserved for exceptional situations; the default levels should be those in which human operators learn and develop Situational Awareness by doing rather than observing.
2. Information Technology used in critical systems should be under-optimized; that is, instead of the design that is most efficient in terms of technology use, we should choose the design that is most efficient in terms of preserving Situational Awareness. Some recent aircraft still use Intel processors that left the market many years ago (the B777, for instance, uses the Intel 486) and nothing bad happens. Why, then, should we try to squeeze every drop out of every line of code, building systems that are impossible for their users to understand?
Different safety models and the out-of-context imitation of ultra-safe systems such as Aviation or Nuclear Plants. This is another excellent point but, again, something could be missing: evolution. I must confess that my Ph.D. thesis attempted precisely what Amalberti rejects, namely applying the Air Safety model to the Business Management field. Some years later, Ashgate published it under the title Improving Air Safety through Organizational Learning, though "forgetting" the chapter where the lessons of Air Safety were applied to Business Management.
The first thing to say is that, in general terms, Amalberti is right. We cannot transfer all the procedural weight of a field like Air Safety to many other fields, at least not if we want it to work; in Surgery, for instance, the individual can matter far more than the operating rules. However, the issue that could be lost here is organizational evolution. Some fields evolved toward ultra-safe models, and they did so because of their own constraints, without anyone trying to imitate an external model. Many activities, in their search for greater efficiency, evolved into what Charles Perrow called tightly coupled organizations, and that produced an unintended effect: errors in efficient organizations are also efficient, because they spread through the same organizational channels as normal operation. How else could we explain cases like Baring Brothers, where a single rogue employee was enough to bring the whole bank down?
In summary, it is true that we should not imitate ultra-safe models out of context but, at the same time, we should ask whether the field whose safety we are analyzing ought to evolve toward an ultra-safe model because it has already become a tightly coupled organization.
Training someone for every single exceptional situation is not the safest option: again, we can agree in general terms. We know, for instance, that pilots practice in simulators events they are not expected to encounter in their whole professional life. Perhaps asking them to practice recovery from inverted flight or from spins would be an invitation to get closer to those situations, since they would feel able to recover. The days of the "hero pilot" are long gone, but…
We have seen wrong risk assessments in the past where a supposedly low-probability event, judged not to require training because it was not supposed to happen… happened. A well-known example is United 232, where three supposedly independent hydraulic systems failed at the same time, showing that they were not as independent as claimed. The pilot had previously practiced in a flight simulator the skills that turned a total crash into a crash landing, substantially reducing the number of casualties. A similar case is the Hudson River landing: a dual engine failure was supposed to happen only above 20,000 feet, and the procedures developed for that scenario cost the pilots precious time when the total loss of power happened far below that height.
However, rather than focusing on the various events that reveal a wrong risk assessment, which should already make us take Amalberti's idea with care even though he clearly raises an important point, there is a different kind of problem that has been behind major accidents: faulty sensors feeding wrong information to computers, with plane occupants killed while the pilots had not the faintest idea of what was going on. Stamping this kind of event with a "lack of training" label tells us something that is at once true and useless and, by the way, contrary to Amalberti's own principle.
Daniel Dennett used a comparison that is relevant here: commando units versus intelligence agencies. Commandos are expected to react to unforeseen situations, which requires redundant resources and extensive cross-training. Intelligence agencies, on the other hand, work under the "need-to-know" principle. Should we treat it as a coincidence that this same "need-to-know" idea has been used by some aviation manufacturers in their rating courses? Should we really limit training to foreseen events, or should we recognize that some events are very hard to foresee and that different approaches should be taken in design as well as in training?
In summary, this is an excellent book. Even if some of us do not agree with every single point, the points deserve discussion, especially since it is clear that they stem from a strong concept of safety rather than from politics or convenience inside regulators, operators or manufacturers.
I would not like to finish without a big thank-you to my friend Jesús Villena, who published this book in Spanish under the title "Construir la seguridad: compromisos individuales y colectivos para afrontar los grandes riesgos" and who introduced me to it.