THE HARD LIFE OF AVIATION REGULATORS (especially regarding Human Factors)
There is a widespread misconception among aviation professionals: the idea that regulations set a minimum level and that, hence, accepting a plane, a procedure or an organization means it barely reaches the minimum acceptable level.
The facts are very different: a plane able to pass every requirement, beyond any kind of reasonable doubt and without further questions, would not be merely an “acceptable” plane. It would be an almost perfect product.
So where is the trick, and why are things not as easy as they seem?
Basically, because the rules are not as clear as claimed, leaving in some cases wide room for interpretation, and because, in their crafting, they mirror the systems they describe; hence, integration is lost along the way.
The acceptance of a new plane is such a long and complex process that some manufacturers give up. For instance, the Chinese manufacturer COMAC built a first plane certified by the Chinese authorities to fly only within China, and the French Dassault decided to stop a program entirely, jumping directly to the second generation of a plane. Either decision may be judged a failure, but it is always better than dragging design problems along until they prove that they should have been managed. We cannot help recalling cases like the cargo door of the DC-10 and the consequences of a problem already known during the design phase.
The process is so long and expensive that some manufacturers stay attached to very old models with incremental improvements. The Boeing 737, which first flew in 1968, is a good example. Its sibling, the B777, flying since 1995, still carries an old Intel 80486 processor inside, but replacing it would be a major change, even though Intel stopped producing it in 1997.
The process is not linear, and many different tests and negotiations are required along the way. Statements of similarity with other planes are frequent, and the use of standards from engineering or military fields is commonplace when something is not fully clear in the main regulation.
Of course, some guidelines can contradict others, since they are addressed to different uses. For instance, a military standard widely used in Human Factors (MIL-STD-1472) includes a statement about required training, indicating that it should be as short as possible while keeping a fully operational state. That can be justified in environments where lack of resources, including knowledge, or even physical destruction could happen. It would be harder to justify as a rule in passenger transportation.
Another standard can include a statement about the worst possible scenario for a specific parameter, but the parameter can be more elusive than that. The very idea of a worst possible scenario could be nonsense and, if the manufacturer asserts one and the regulator buys it, a plane could be flying legally but with serious design flaws.
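As a toy illustration of why a single “worst case” can be ill-defined (a minimal sketch with invented numbers, not any real certification parameter): if the risk associated with a parameter is not monotonic, checking only the extremes of its range misses the real maximum, and the declared worst case is not the worst at all.

```python
# Toy model (all numbers invented): a "worst case" checked only at the
# extremes of a parameter range can miss the real maximum of the risk curve.

def risk(delay_s: float) -> float:
    """Hypothetical risk score for a control-response delay: very short
    delays invite over-control, very long ones invite late reaction, and
    an intermediate value maximizes oscillation in this toy model."""
    over_control = max(0.0, 1.0 - delay_s) ** 2              # worst near 0 s
    late_reaction = (delay_s / 4.0) ** 2                     # worst near 4 s
    oscillation = 1.5 * max(0.0, 1.0 - abs(delay_s - 1.2))   # peaks at 1.2 s
    return over_control + late_reaction + oscillation

delays = [i / 10 for i in range(0, 41)]                      # 0.0 .. 4.0 s
checked_extremes = max(risk(0.0), risk(4.0))
true_worst = max(risk(d) for d in delays)
print(f"risk at the declared 'worst case' extremes: {checked_extremes:.2f}")
print(f"actual worst over the whole range:          {true_worst:.2f}")
```

In this toy run the interior of the range scores worse than either extreme, which is exactly the situation a scalar “worst possible scenario” clause cannot capture.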
Regulations about Human Factors were simply absent a few years ago, and HF mentions were scattered through the technical blocks. That partially changed when a new rule for plane design appeared, addressing Human Factors as a block of its own. However, the first attempts went little further than collecting all the scattered HF mentions in a single place.
Since then, this has been partially corrected in the Acceptable Means of Compliance, but the technical approach still prevails. Very often, manufacturers assemble HF teams from technical specialists in specific systems instead of attempting a global, transversal approach.
Regulators take their own precautions and repeat mantras such as avoiding fatigue levels beyond what is acceptable, or requiring that planes not demand special alertness or special skill levels to manage a situation.
These conditions are, of course, good, but they are not enough. Compliance with a general condition like this one from EASA CS-25, “Each pilot compartment and its equipment must allow the minimum flight crew (established under CS 25.1523) to perform their duties without unreasonable concentration or fatigue”, is quite difficult to demonstrate. If there is no visible mistake in the design, meeting this condition is more a matter of imagining potential situations than a matter of analysis and, as might be expected, the whole process is driven by analysis, not by imagination.
Very often, designs and the rules governing them try to prevent the accident that happened yesterday, but a strictly analytic approach makes it hard to anticipate the next one. Who could anticipate the importance of control feedback (present in every single old plane) until a related accident happened? Who could anticipate, before AA191, that removing the mechanical lock on the flaps/slats might not be such a sound idea? Who could imagine that different presentations of the artificial horizon could lead to an accident? What about different automation policies, and pilots disregarding a fact that could safely be disregarded in other planes but not in that one?
It is still in the news that a Boeing factory was attacked by the WannaCry virus, and the big question was whether it had affected the systems of the B777s manufactured there. The B787 is said to run on 6.5 million lines of code. Even though the B777 is far below that number, checking it cannot be easy, and it is harder still if the computers calculating parameters for manufacturing must be checked too.
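As a hedged sketch of what even the simplest form of “checking” could look like (the file names and manifest below are hypothetical): comparing deployed binaries against a manifest of known-good cryptographic hashes. This baseline only detects altered artifacts; it says nothing about logic errors or compromised build tools, which is where the millions of code lines really bite.

```python
# Minimal sketch (hypothetical paths and manifest): verify that deployed
# binaries match a known-good manifest of SHA-256 hashes. This detects
# tampering with the artifacts themselves, not flaws in the code or in
# the tools that produced them.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the files whose hash differs from the manifest."""
    return [name for name, expected in manifest.items()
            if sha256_of(root / name) != expected]

# Usage (hypothetical known-good hashes recorded at release time):
# manifest = {"fms_core.bin": "9f2c...", "display_mgr.bin": "a41b..."}
# bad = verify(manifest, Path("/opt/avionics"))
# print("tampered or corrupted:", bad or "none")
```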
That complexity in the product leads not only to invisible faults but to unexpected interactions between theoretically independent events. In some cases, the dependence is clear. Everyone is aware that a stopped engine can mean hydraulic, electric, pressure and oxygen problems, and manufacturers try to design alerting systems that point to the root problem instead of flagging every single downstream failure. That is fine, but what if the interaction is unexpected? What if a secondary problem, oxygen scarcity for instance, is more important than the root problem that caused it? How are we going to define the right training level for operators when there is not a single person who understands the full design?
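A minimal sketch of that root-cause alerting idea (the dependency graph and failure names are invented, not any manufacturer’s actual logic): failures explained by an already-failed upstream system are suppressed so the crew sees the root problem first. The same sketch exposes the blind spot raised above: a suppressed secondary failure may still be the more urgent one.

```python
# Minimal sketch (invented dependency graph, not any real aircraft's logic):
# suppress alerts for failures that are explained by a failed upstream
# system, so only the root cause is announced to the crew.
DEPENDS_ON = {
    "hydraulic_pressure_low": "engine_2",
    "generator_2_offline":    "engine_2",
    "bleed_air_2_low":        "engine_2",
    "cabin_oxygen_low":       "bleed_air_2_low",   # second-order effect
}

def root_alerts(active_failures: set[str]) -> set[str]:
    """Keep only failures whose upstream cause is NOT also failed."""
    return {f for f in active_failures
            if DEPENDS_ON.get(f) not in active_failures}

failures = {"engine_2", "hydraulic_pressure_low", "generator_2_offline",
            "bleed_air_2_low", "cabin_oxygen_low"}
print(root_alerts(failures))   # {'engine_2'} -- the root cause only
# Blind spot: 'cabin_oxygen_low' is silenced as a mere consequence,
# yet it may be the more urgent problem for the crew.
```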
In the technical parts alone, the complexity is already a problem. When we add the human element, its features and what is demanded from operators, the answer is anything but easy. Claiming “lack of training” every time something serious happens and adding a patch to the existing training is not enough.
A fuller approach, more integrated and less shy about using imagination throughout the process, was advisable long ago; now it is a must. Operators do not manage systems. They manage situations and, in doing so, they may use several systems at the same time. Even if there is no unexpected technical interaction among them, there is one place where the interaction always happens: the operator working with all of them, and the concept of consistency is not enough to deal with that.