
Air Safety and Hacker Frame of Mind

If we ask anyone what a hacker is, we could get answers ranging from cyberpiracy, cybercrime, cybersecurity…and any other cyber-thing. However, a hacker is much more than that.

Hackers are classified depending on the “color of their hats”. A white hat hacker is someone devoted to security, a black hat hacker is a cybercriminal, and a grey hat hacker is something in between. That can be interesting as a matter of curiosity but…what do they have in common? Furthermore, what do they have in common that can be relevant for Air Safety?

Simonyi, the creator of WYSIWYG, warned long ago about an abstraction scale that kept adding more and more steps. In Information Technology terms, that means programmers don’t program a machine: they instruct a program to make a program to be run by a machine. Higher programming levels mean a longer distance from the real thing and more steps between the human action and the machine action.
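To make that layered distance concrete, here is a small illustration in Python (my own example, not Simonyi’s): the function is only the top, human-readable layer, and the standard dis module shows the bytecode layer the interpreter actually works with, itself still far from the machine code that finally runs.

    import dis

    def fuel_remaining(initial_kg, burn_rate_kg_h, hours):
        """What the human writes: the top, readable layer."""
        return initial_kg - burn_rate_kg_h * hours

    # What the layer below looks like: the bytecode the interpreter executes,
    # which is in turn run by a program that finally drives the machine.
    dis.dis(fuel_remaining)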

Of course, Simonyi raised this as a potential problem within Information Technology but…Information Technology is now ubiquitous, and the problem can be found anywhere including, of course, Aviation.

We could say that any IT-intensive system has different layers and that the number of layers defines how advanced the system is. So far so good, as long as we assume a perfect correspondence between layers, that is, that every layer is a symbolic representation of the one below and that this representation is perfect. That should be all…but it isn’t.

Every information layer that we put over the real thing is not a perfect copy -that would be pointless- but an attempt to improve something in safety, in efficiency or, very often, it claims to improve both. However, keeping that process free of flaws is almost impossible. That is where problems start and where hacker-type knowledge and frame of mind become highly desirable for a pilot.

The symbolic nature of IT-based systems makes their flaws hard to diagnose, since their behavior can be very different from that of mechanical or electrical systems. Hackers, good or bad, try to identify these flaws; that is, they are very conscious of this layered, symbolic structure instead of assuming an enhanced but perfect representation of the reality below.

What does a hacker frame of mind mean as a way to improve safety? Let me show two examples:

  • From cinema: The movie “A Beautiful Mind”, about John Nash and his mental health problems, shows at one point how and why he was able to control those problems: he had been confusing reality and fiction until he found something that did not fit. It happened to be a little girl who, after many years, was still a little girl instead of an adult woman. That gave him the clue to know which part of his life was created by his own brain.
  • From Air Safety: A reflection taken from the book “QF32” by Richard de Crespigny: Engine 4 was mounted to our extreme right. The fuselage separated Engine 4 from Engines 1 and 2. So how could shrapnel pass over or under the fuselage, then travel all that way and damage Engine 4? The answer is clear. It can’t. However, once that conclusion is reached, a finding appears crystal clear: the information coming from the plane cannot be trusted, because somewhere in the IT layers the correspondence between reality and representation has been lost.

Detecting these problems is not easy. It takes much more than operational knowledge and, at the same time, we know that nobody has full knowledge of the whole system, only partial knowledge. That partial knowledge should be enough to define key indicators -as in the examples above- that tell us when we are working with information that should not be trusted.

The hard part of this: the indicators cannot be permanent but must be adapted to every situation, that is, the pilot has to decide which indicator to use in situations that are not covered by procedures. That brings us to another issue: if a hacker frame of mind is positive for Air Safety, how can we create, nurture and train it? Let’s look again at the process a hacker follows to become one:

First, hackers look actively for information. They don’t go to formal courses expecting the information to be handed to them. Instead, they look for resources that allow them to increase their knowledge level. Applying this model to Aviation would mean wide access to information sources beyond what is provided in formal courses.

Second, hacker training is closer to military training than to academic training, that is, they fight to break into or to defend a system and they show their skills by opposing an active enemy. To replicate such a model, simulators should include situations that trainers can imagine. The design should be much more flexible and, instead of simulators behaving exactly as a plane is supposed to behave, they should have room to include potential situations coming from information misrepresentation or from automatic responses to defective sensors.
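As a toy illustration of what that flexibility could mean, here is a minimal sketch -all names and numbers invented- of injecting a drifting-sensor fault into a simulation loop, so that an automatic response starts reacting to a reading that no longer matches the real state:

    import random

    def true_airspeed(t):
        """The 'real' airspeed in knots for a steady cruise (invented value)."""
        return 250.0

    def sensed_airspeed(t, fault_after=60):
        """Sensor reading: correct at first, drifting low after the injected fault."""
        reading = true_airspeed(t) + random.gauss(0, 1)
        if t >= fault_after:
            reading -= 0.5 * (t - fault_after)   # injected drift fault
        return reading

    def autothrottle(reading, target=250.0):
        """A naive automatic answer to the (possibly wrong) reading."""
        return "ADD POWER" if reading < target - 5 else "HOLD"

    for t in range(0, 121, 20):
        r = sensed_airspeed(t)
        print(t, round(r, 1), autothrottle(r))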

Asking for full knowledge of all the information layers and their potential pitfalls would be utopian, since nobody has that kind of knowledge, designers and engineers included. Everybody has partial knowledge. How, then, can we do our best with this partial knowledge? By looking for a different frame of mind in the people involved -mainly pilots- and by providing the information and training resources that allow that frame of mind to be created and developed. That could mean a fully new training model.

Originally published on my LinkedIn profile

Sterile discussions about competencies, Emotional Intelligence and others…

When the “Emotional Intelligence” fashion arrived with Daniel Goleman, I was among the discordant voices claiming that the concept and, especially, the use made of it, were nonsense. Nobody can seriously deny that personal features are a key to success or failure. If we want to call them Emotional Intelligence, that’s fine. It’s a marketing-born name, not very precise but, anyway, we can accept it.

However, losing focus is not acceptable…and some people lose focus with statements like “80% of success is due to Emotional Intelligence, well above the percentage due to ‘classic’ intelligence”. We lose focus too with statements comparing competencies with academic degrees and the role of each in professional success. These problems should be analyzed in a different and simpler way: it’s a matter of sequence, not of percentage.

An easy example: what is more important for a surgeon to be successful, the academic degree or the skills shown inside the OR? Of course, this is a tricky question where the trick is highly visible. To enter the OR armed with a scalpel, the surgeon needs academic recognition and/or a specific license. Hence, the second filter -skills- is applied only to those who passed the first one -academic recognition- and we cannot compare skills and academic recognition in percentage terms.

Of course, this is an extreme situation, but we can apply it to the concepts around which some sterile discussions appear. Someone can perform well thanks to Emotional Intelligence, but entrance to the field is gained with intelligence in its most commonly used meaning. Could we say that, once past an IQ threshold, we would do better to improve our interaction skills than to gain -if possible- 10 more IQ points? Possibly…but things don’t work that way: we define access through a threshold value and performance with other criteria, always comparing people who share something, namely that they are all above the threshold. Then…how can anyone say “Emotional Intelligence is at the root of 80% of success”? It would be false, but we can make it true by adding “if the comparison is made among people whose IQ is at least medium-high”. The problem is that, with this addition, the statement is not false anymore, but a statement of that kind is little more than proof of simple-mindedness.
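A small simulation can make the “sequence instead of percentage” point visible. All the numbers below are invented: performance depends equally on IQ and on an “EI”-like factor, but candidates are first screened on IQ; within the screened group IQ barely separates people anymore, so the other factor looks dominant without being “80% of success”:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    iq = rng.normal(100, 15, n)
    ei = rng.normal(0, 1, n)
    # Performance built to depend equally on both factors, plus noise.
    performance = 0.5 * (iq - 100) / 15 + 0.5 * ei + rng.normal(0, 0.5, n)

    screened = iq > 115                      # the access filter: an IQ threshold
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]

    print("whole population:", corr(iq, performance), corr(ei, performance))
    print("above threshold :", corr(iq[screened], performance[screened]),
          corr(ei[screened], performance[screened]))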

We cannot compare the relative importance of two factors when one of them refers to job access while the other refers to performance once in the job. It’s like comparing speed with bacon, but using percentages to appear more “scientific”.

Flight-Deck Automation: Something is wrong

Something is wrong with automation. If we can find diagnoses made more than 20 years ago whose conclusions are still current…something is wrong.

Some examples:

Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, published by Rasmussen in 1986; Safeware, written by Leveson in 1995; Normal Accidents, by Perrow in 1999; The Human Interface, by Raskin in 2000; and many others.

None of these resources is new, but all of them can be read with profit by someone interested in what is happening NOW. Perhaps there is a problem in the basics that is still not properly addressed.

Certainly, once a decision is made, going back is extremely expensive and manufacturers will try to defend their solutions. An example I have used more than once is the fact that modern planes carry processors so old that the manufacturer no longer makes them. Since the lifetime of a plane is longer than the lifetime of some key parts, operators have to stockpile those parts because they can no longer order them from the manufacturer.

The obvious solution would be renewal, but that would be so expensive that they prefer having brand-new planes with old-fashioned parts rather than facing new certification processes. There is nothing to object to in this practice. It is only one example of a more general one: staying attached to a design and defending it against any doubt -even a reasonable one- about its adequacy.

However, this rationale applies to products already in the market. What about the new ones? Why do the same problems appear again and again instead of finally being solved?

Perhaps a Human Factors approach could be useful to identify the root problem and help to fix it. Let’s talk about Psychology:

The first psychologist to win a Nobel Prize was Daniel Kahneman. He was one of the founders of Behavioral Economics, showing how we use heuristics that usually work but that can misguide us in some situations. To show that, he and many followers designed interesting experiments that make it clear that we all share some “software bugs” that can drive us to mistakes. In other words, heuristics should be understood as a quick-and-dirty approach, valid for many situations but useless, if not harmful, in others.

Many engineers and designers would be willing to buy this approach and, of course, would design their products in a way that enforces a formal, rational model.

The most qualified opposition to this model comes from Gigerenzer. He explains that heuristics are not a quick-and-dirty approach but the only one possible under constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others are set in strange situations and designed to mislead the subject of the experiment.

An example used by Kahneman and Tversky is this one:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

 Which is more probable?

  •  Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The experiment is meant to show the conjunction fallacy, that is, how many people choose the second alternative even though the first one is not only wider but includes the second.
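Formally, the point of the experiment is the conjunction rule, a standard probability identity that holds whatever we are told about Linda (written here in LaTeX):

    % For any events A ("bank teller") and B ("active feminist"):
    \[
      P(A \cap B) = P(A)\,P(B \mid A) \le P(A)
    \]
    % so the conjunction can never be more probable than "bank teller" alone.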

Gigerenzer’s analysis is different: suppose that the only information about Linda were the first sentence, “Linda is 31 years old”. Or suppose that no information were given at all and the questions were simply asked…we could expect the conjunction fallacy not to appear. It appears because the experimenter provides information and, since the subject is given that information, he assumes it is RELEVANT…otherwise, why would he be fed with it?

In real life, relevance is a clue. If someone tells us something, we understand that it has a meaning and that the information is not there to deceive us. That’s why Gigerenzer criticizes the Behavioral Economics approach, an approach that many designers may share.

For Gigerenzer, we usually judge how good a model is by comparing it with an ideal model -the rational one- but if, instead, we decide which model is best by looking at the results, we can find some surprises. That is what he did in Simple Heuristics That Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the “bad” model gets better results than the sophisticated one.
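As a rough sketch of that kind of comparison -not Gigerenzer’s actual data, just an invented paired-comparison task- a one-reason “take-the-best” rule can be put side by side with a weighted sum:

    # Invented cues, validities and weights; the task is to guess which city is bigger.
    CUE_ORDER = ["capital", "major_airport", "university"]   # assumed validity order
    WEIGHTS   = {"capital": 0.6, "major_airport": 0.3, "university": 0.1}

    cities = {
        "A": {"capital": 1, "major_airport": 1, "university": 1, "size": 900},
        "B": {"capital": 0, "major_airport": 1, "university": 1, "size": 400},
        "C": {"capital": 0, "major_airport": 0, "university": 1, "size": 150},
    }

    def take_the_best(x, y):
        """Use the first cue, in validity order, that tells the two cities apart."""
        for cue in CUE_ORDER:
            if cities[x][cue] != cities[y][cue]:
                return x if cities[x][cue] > cities[y][cue] else y
        return x  # no cue discriminates: guess

    def weighted_sum(x, y):
        """The 'sophisticated' alternative: add up weighted cue values."""
        score = lambda c: sum(WEIGHTS[k] * cities[c][k] for k in WEIGHTS)
        return x if score(x) >= score(y) else y

    for pair in [("A", "B"), ("B", "C"), ("A", "C")]:
        truth = max(pair, key=lambda c: cities[c]["size"])
        print(pair, "take-the-best:", take_the_best(*pair),
              "weighted sum:", weighted_sum(*pair), "truth:", truth)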

Let’s go back to automation design. Perhaps we are asking the wrong questions at the very beginning. Instead of “What information would you like to have?”, which gets a letter to Santa Claus as an answer, we should ask: what are the cues you use to know that this specific event is happening?

The FAA, in its 1996 study, complained that some major failures, such as an engine stoppage, can be masked by a flood of warnings about different systems failing, making it hard to discern that all of them come from a common root, that is, the stopped engine. What if we asked: “Tell me one fact -exceptionally I would admit two- that tells you in a clear and fast way that one of the engines has stopped”?

We have a nice example in the QF32 case. The pilots started to distrust the system when they got information that was clearly false. It was a single fact, but enough to distrust. What if, instead of jumping to that conclusion from a single fact, they had been “rational” and tried to assign probabilities to different scenarios? Probably the plane would not have had enough fuel to allow that approach.

Rasmussen suggested one approach -a good one- where the operator should be able to cognitively run the program that the system is performing. The approach is good, but something is still missing: how long would it take the operator to replicate the functional model of the system?

In real-life situations, especially when dealing with uncertainty -not calculated risk- people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI-092 case at Kegworth. The pilots used an indicator to decide which engine had the problem…unfortunately, they came from an earlier generation of the B737 and did not know that the one they were flying bled air from both engines instead of only one. The cue they used to identify the engine, which pointed to the wrong one, would have been correct in an older plane.

Knowing the cues pilots actually use, planes could be designed with a human-centered approach instead of creating an environment that does not fit the way people perform real tasks in real environments.

When new flight-deck designs appeared, manufacturers and regulators were careful enough to keep the basic T, even if in electronic format, because that was how pilots were used to getting the basic information. Unfortunately, that care has disappeared in many other places: things like the position of the power levers under autopilot, the position of sidesticks/yokes, whether they should transmit force feedback or not, or whether their position should be shared by both pilots or not…received a treatment very far from a human-centered approach. Instead, screen-mania seems to be everywhere.

A good design starts with a good question and, perhaps, our questions are not yet good enough; that is why analyses and complaints made 20 and 30 years ago are still current.

Frederick W. Taylor: XXI Century Release

Any motivation expert, from time to time, devotes part of his time to throwing stones at Frederick W. Taylor. From our present standpoint, there seem to be good reasons for the stoning: a strict split between planning and performing goes against any idea that considers human beings to be something more than faulty mechanisms.

However, if we try to adopt the perspective Taylor could have had a century ago, things change: Taylor made unqualified workers able to manufacture complex products, products far beyond the understanding of the people manufacturing them.

From that point of view, we could say that Taylor and his SWO represented a clear advance, and Taylor cannot be dismissed with a high-level theoretical approach taken out of context.

Many things have happened since Taylor that could explain such a different approach: the education of the average worker, at least in advanced societies, grew in an amazing way. The strict division between design and execution could be plainly justified in Taylor’s time, but it could be nonsense right now.

Technology, especially information technology, did not merely advance; we could say it was born during the second half of the past century, well after Taylor. Advances have been so fast that it is hard to find a fixed point or a context from which to evaluate their contribution: when something evolves so fast, it modifies the initial context, and that removes the reference point required to evaluate its real value.

At the risk of being simplistic, we could say that technology gives us “If…Then” solutions. As the power of technology increases, the situations that can be handled through an “If…Then” solution become more and more complex. Some time ago, I received this splendid parody of a call center that shows clearly what can happen when people work only with “If…Then” recipes coming, in this case, from a screen:

http://www.youtube.com/watch?v=GMt1ULYna4o
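A minimal caricature of that kind of screen, with invented scripts, shows where the knowledge actually lives -in the table, not in the person- and what happens when the caller steps outside it:

    # Hypothetical call-center script: the operator only follows the branch shown.
    SCRIPT = {
        "does not switch on": "Ask the customer to check the power cord.",
        "switches on but no image": "Ask the customer to check the cable to the screen.",
        "other": "Escalate to level 2 support.",
    }

    def answer(symptom: str) -> str:
        return SCRIPT.get(symptom, SCRIPT["other"])

    print(answer("does not switch on"))
    print(answer("makes a strange noise"))   # anything unforeseen falls through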

Technological evolution again puts the worker -now with an education level far superior to the one available in Taylor’s age- in the role of performer of routines and instructions. We could ask why such an old model is still used, and we can find some answers:

  • Economics: Less qualified people using technology can perform more complex tasks. That means savings in training costs and makes turnover cheaper too, since people are easier to replace.
  • Knowledge Ownership: People have a brain that can store knowledge. Regretfully, from the perspective of a company, they also have feet that can take that brain somewhere else. In other words, knowledge stored in people is not owned by companies and, hence, companies may prefer to store knowledge in processes and in the Information Systems that manage them.
  • Functionality: People make more mistakes, especially in those issues that are hard to convert into routines and that require going beyond stored knowledge.

These points are true but, when things are seen that way, one thing is clear: the relation between a company and the people working there is strictly economic. Arie de Geus, in The Living Company, said that the relation between a person and a company is economic, but that considering it ONLY economic is a big mistake.

Actually, using the If…Then model as a way to make people expendable can be a way to guarantee a more relaxed present situation…at the price of putting the future in question. Let’s see why:

  • If…Then recipes are supplied by a small number of vendors working in every market and whose clients, of course, compete among themselves. Once the human factor is reduced to a minimum…where is the difference going to be among companies sharing the same Information Systems model?
  • If people are given strictly operational knowledge…how can we advance that knowledge? Companies outsource their ability to create new knowledge, which, again, remains in the hands of their Information Systems suppliers and their ability to store more “If…Then” solutions.
  • What is the real capacity of the organization to manage unforeseen contingencies, if they have not been anticipated in the system design or, even worse, contingencies arising from the growing complexity of the system itself?

This is the overview. Taylorism without Taylor is much worse than the original model, since it is not justified by the context. Companies perform better and better at things they already knew how to manage and, at the same time, find it harder and harder to improve at things that were previously performed poorly. Under this model, people cannot work as an emergency resource. To do that, they need knowledge far beyond the operational level and the capacity to operate without being tightly constrained by the system. Very often they lack both.

Jens Rasmussen, an expert in Organization and Safety, gave a golden rule that, regretfully, is not met in many places: the operator has to be able to cognitively run the program that the system is performing. The features of present Information Systems too often leave us working in sub-optimized environments: instead of an internal logic that only the designer can understand -and not always- systems built to keep Rasmussen’s rule would be something very different.

The rationale about training and turnover costs would still stand, but the advantages of looking beyond it are too important to dismiss. De Geus’s statement is true and, furthermore, it has a very serious impact on how our organizations are going to look in the near future.

 

Air safety: When statistics are used to kill the messenger

A long time ago, I noticed that big, long-range planes -with a few exceptions- always had a better safety record than smaller planes. Many explanations were given for this fact: the biggest planes get the most experienced crews, big planes are more carefully crafted… the explanation was simpler than that: the most dangerous phases of a flight are on the ground or near the ground. Once the plane is at cruise level, the risk is far lower. Of course, the biggest planes fly the longest routes or, in other terms, for every 10 flown hours a big plane performs, on average, one landing while a little one could land 10 times. In a statistical report based on flown hours…which of them is going to appear safer? Of course, the big one. If statistics are not read carefully, someone might start to worry about the high accident rate of little planes compared with the big ones.
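A toy calculation with invented numbers makes the distortion obvious: give both fleets exactly the same accident risk per flight, and the short-haul fleet still shows ten times the rate per flown hour:

    # Same assumed accident probability per flight for both fleets.
    risk_per_flight = 1e-6
    fleets = {"long haul": 10, "short haul": 1}   # hours flown per landing

    for name, hours_per_flight in fleets.items():
        rate_per_hour = risk_per_flight / hours_per_flight
        print(f"{name}: {rate_per_hour:.1e} accidents per flown hour")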

Now, the American NTSB has discovered that helicopters are dangerous: http://www.flightglobal.com/news/articles/ntsb-adds-helicopters-ga-weather-to-quotmost-wantedquot-394947/ and the explanation could be similar, especially regarding HEMS activity: emergency medical services are required to respond in an extremely short time. That means flying machines that may still be cold at the moment of a demanding take-off, for instance very near a hospital or a populated place where an almost vertical take-off is needed. Once airborne, they have to prepare a landing near the accident site. That site can have buildings, unmarked electrical wires and, of course, it can be anything from completely flat terrain at sea level to a high mountain spot. Is the helicopter risky, or is the risk in the operation?

Of course, precisely because the operation is risky, everything has to be done as carefully as possible, but making statistical comparisons with other operations is not the right approach. Analyze in which phase of the flight accidents happen; if the pilot does not have full freedom to choose the landing site, at least choose an adequate place for the base. Some accidents happened while the doctor on board could see that they were very close to an electrical wire and assumed that the pilot had seen it too…all eyes are welcome, even non-specialized ones. Other times, non-specialized people asked and pressed for landings in crazy places, or rosters and missions were prepared ignoring experience and fatigue issues. In short, there is a lot of work to do in this field but, please, do not use statistical reports that compare things that are really hard to compare to justify it.

Does CRM work? Some questions about it

Let’s start by clarifying something: CRM is not the same as the Human Factors concern in general. It is a very specific way to channel that concern in a very specific context, the cockpit…even though the CRM philosophy has been applied to Maintenance through MRM and to other fields where real teamwork is required.

Should we improve CRM training, or is it simply not the right way? Do we have to improve the quality of the indicators, or should we be more worried about the environment in which these indicators appear?…

An anecdote: a psychologist working in a kind of jail for teenagers had observed something over the years. The center had a sequence known as “the process”, whose resemblance to Kafka’s work seemed to be more than accidental, and inmates were evaluated according to visible behavioral markers included in “the process”. Once all the markers on the list had appeared, the inmate was set free. The psychologist observed that the smartest inmates, not the best ones, were the ones able to pass the process, because in a very short time they were able to exhibit the desired behavior. Of course, once out of the center, they behaved as they liked and, if they were caught again, they would again exhibit the required behavior to get out.

Some CRM approaches are very close to this model. The evaluator looks for behavioral markers whose optimum values are kindly supplied by the people being evaluated and, once they have passed the evaluation, they can behave in line with their real inclinations, whether or not these coincide with the CRM model.

Many behaviorist psychologists say that the key is which behavioral markers are selected. They may even argue that this model works in clinical psychology. They are right but, perhaps, not fully right and, furthermore, they are wrong in the most relevant part:

We cannot simply borrow the model from clinical psychology, because there is a fundamental flaw: in clinical psychology, the patient comes on his own asking for a solution because he feels his own behavior is a problem. If, through treatment, the psychologist is able to suppress the undesired behavior, the patient himself will take charge of making the change last. The patient wants to change.

If, instead of clinical psychology, we focus on behaviors that are undesired from a teamwork perspective, things do not work that way: behaviors unwanted by the organization or the team may be highly appreciated by the person who exhibits them. Hence, they can disappear while being observed but, if so, that does not mean learning; it means, perhaps, craftiness on the part of the observed person.

For a real change, three variables have to change at the same time: Competence, Coordination and Commitment. Training is useful if the problem to be solved is about competence. It does not work if the organization does not make a serious effort to avoid contradictory messages and, of course, it is useless if there is no commitment from individuals, that is, if the intention to change is not clear or simply does not exist.

Very often, instead of real change, solutions appear in the shape of shortcuts. These shortcuts try to get around the fact that the three variables are required and, furthermore, required at the same time. It is easier to go after the symptom, that is, the behavioral marker.

Once a visible marker is available, the problem is redefined: it is not about attitude anymore; it is about improving the marker. Of course, this is not new, and everyone knows that the symptomatic solution does not work. Tavistock consultants used to speak about “snake oil” as the example of a useless fluid offered by someone who knows it does not work to somebody else who knows the same. However, even knowing it, the buyer may still buy the snake oil because it serves his own interest…for instance, not being accused of inaction about the problem.

The symptomatic solution carries on even in the face of full evidence against it. At the end of the day, whoever sells it makes a profit and whoever buys it saves face. The next step will be to allege that the solution does not perform at the expected level and, hence, that we should improve it.

Once there, crossed interests make it hard to change things for anyone who has something to lose. It is risky to say that the Emperor is naked. Instead, there is a high probability that people will start to praise the Emperor’s new clothes.

Summarizing: training is useful for change if there is a prior desire to change. Behavioral markers are useful if they can be observed under conditions where the observed person does not know he is being observed. Does CRM meet these conditions? There is an alternative: showing in a clear and undisputed way that the suggested behavior gets better results than the one exhibited by the person to be trained. Again…does CRM meet this condition?

Certainly, we could find behavioral markers that, for lovers of depth psychology, are predictive. However, this is a very dangerous road that some people have followed in selection processes, and it can easily become a kind of witch hunt. As an anecdote, a recruiter was very proud of his magical question for getting to know a candidate: asking for the name of the second wife of Fernando the Catholic. For him, this question provided many clues about the candidate’s normal behavior. Surprisingly, those clues disappeared if the candidate happened to know the right answer.

If behavioral markers have questionable value, and looking for other behaviors only remotely related to the required ones is no better, then we need to look in different places if we want real CRM instead of pressure toward agreement -misunderstood teamwork- or theatrical exercises aimed at showing the observer the desired behavior.

There is a lot of work to do but, perhaps, along different paths than the ones already trodden:

  1. Recruiting investment: Recruiting cannot be driven only by technical ability, since that can be acquired by anyone with the basic competences. Southwest Airlines is said to have rejected a pilot candidate because he spoke rudely to a receptionist. Is that a mistake?
  2. Clear messages from Management: Teamwork does not appear through messages like “We’ll have to get along” but through shared goals and respect among team members, avoiding watertight compartments. Are we rewarding the “cowboy”, the “hero”, or the professional with the guts to make a hard decision using all the capabilities of the team under his command?
  3. CRM evaluation by practitioners: Anyone can have a bad day but if, on a continuous basis, someone is poorly evaluated by those on the same team, something is wrong, whatever the observer in the training process may say. If this seems to go against CRM, think twice and forget CRM for a moment: do pilots behave the same way in a simulator exercise under observation as in a real plane?
  4. Building a teamwork environment: If someone feels that his behavior is problematic, a giant step towards change has been taken. If, on the other hand, he sees himself as “the boss” and is delighted to have met himself, there is no way to achieve real change.

No shortcuts. CRM is a key to air safety improvement, but it requires much more than behavioral markers and exercises where observers and observed alike seem more concerned about looking polite than about solving problems using the full potential of a team.

When Profits and Safety are in different places: A historical approach to Aviation

All of us have heard that Aviation is the safest way to travel. That is basically true but, if 94% of accidents happen on the ground or near it, we should accept that some flight phases have a risk level worth studying.

It’s true that some activities carry an intrinsic risk and that Safety means balancing an acceptable risk level vs. efficiency. Aviation is in that common situation, but it has its own problems: the lack of external assessment keeps safety-related decisions inside a small group of manufacturers, regulators and operators. Consumers hear the mantra “Aviation is the safest way to travel”, but they cannot know whether some decisions could drive Aviation out of that privileged position.

A brief summary of the technological evolution of the big manufacturers can show how and why some decisions were made and how, in the best possible scenario, these decisions meant losing an opportunity to improve the safety level. In the worst one, they meant a net decrease in safety:

Once jets appeared, safety increased as a consequence of higher engine reliability. At the same time, navigation improvements appeared too: ground-based stations (VOR-DME), inertial systems and, later, GPS.

However, at the same time as these and other improvements -such as making zero-visibility landings possible- other changes appeared whose contribution could be considered negative.

One of the best-known cases is the number of engines, especially on long-haul flights. Decades ago, the standard practice for transoceanic flights was to use four-engine planes. The only exceptions were the DC-10 and the Lockheed TriStar, with three engines. However, in places like the U.S.A., long flights where, if required, planes could land before their planned destination were performed by big planes with only two engines.

Boeing, one of the main manufacturers, used this fact to argue that engine reliability could allow transoceanic flights with twins. Of course, maintaining two engines is cheaper than maintaining four, so operators had a strong incentive to embrace Boeing’s position but…can we say that crossing an ocean with two engines is as safe as doing it with four, keeping the remaining parameters constant?

Intuition says it is not, but messages trying to counter this simple point started to appear. Among them, we hear that a modern twin is safer than an old four-engine plane. Nobody pointed out that, if so, the parameter setting the safety level would be the age of the plane and, then, the right option would be…a modern plane with four engines.

Airbus, the other big manufacturer, complained because at that moment it did not have its own twins for transoceanic flights but, some time later, it accepted the option and launched its own twins for these long-haul flights. This path -complaint followed by acceptance and imitation- has been repeated over different issues: one of the manufacturers proposes an efficiency improvement, “its” regulator accepts the change while asking for some improvements, and the other manufacturer keeps complaining until the moment it has a plane that can compete in that scenario.

In the specific case of twins, regulators imposed a rule requiring operators to stay within a certain distance of airports along the route. That made twins fly longer routes and, of course, that meant time and fuel costs. However, since statistical information showed that engine reliability is very high, the time a passenger-laden twin is allowed to fly with only one engine working kept increasing, up to the present situation: we now have planes certified to fly with only one engine working until reaching the nearest airport…assuming it could be five and a half hours away. Is that safe?

We don’t really know how safe it is. Of course, it is efficient, because a twin certified in that way can fly virtually any imaginable route. Statistics say it is safe, but the bulk of the reliability data comes not from laboratories but from flying planes, and that is where statistics can fail: engine reliability means that the great majority of the data comes from uneventful flights where both engines were working. We can add that twins have more surplus power than four-engine planes, due to the requirement that, if an engine fails beyond a certain point during take-off, the plane has to be able to complete the take-off with only one engine. The four-engine plane, of course, has to be able to do the same with three engines, not with one.

In other words, during cruise the engines of a twin work at low effort which, of course, can have a favorable impact on reliability. The question that statistical reports cannot address, for lack of the right sample, is this: once one engine has failed, the remaining one starts to work in a much more demanding situation. Does it keep the same reliability level it had while both engines were working? Is that reliability enough to guarantee the flight under these conditions for more than five hours? Actually, the lack of a definitive answer to this question led regulators to ask for a condition instead: the remaining engine should not go outside normal parameters while providing all the power required to keep the plane airborne.
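A rough sketch with invented rates shows why the in-service statistics cannot settle the question: the probability of losing the second engine during a 5.5-hour diversion depends on a multiplier k -how much the failure rate of the remaining engine rises at sustained high thrust- and that k is precisely what data from normal two-engine cruise cannot estimate:

    # Hypothetical numbers, for illustration only.
    base_failure_rate_per_hour = 1e-5   # assumed in-flight shutdown rate at cruise load
    diversion_hours = 5.5

    for k in (1, 2, 5, 10):             # assumed stress multipliers on the remaining engine
        p_second_failure = 1 - (1 - k * base_failure_rate_per_hour) ** diversion_hours
        print(f"k = {k:>2}: P(second engine fails during the diversion) = {p_second_failure:.1e}")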

At the very least, we could have some doubts about it but, since the decision was made among “insiders” without any kind of external check, nobody questioned it and, nowadays, the most common situation when boarding a transoceanic flight is doing it on a twin. We will watch the masks and lifejackets show, but it is unlikely that anyone will say:

“By the way, the engines in this plane are so reliable that, in the very unlikely event that one of them fails, we can fly with full safety on the remaining one until reaching the nearest airport, no more than five and a half hours away”.

How many passengers are informed of this little detail when they board a plane with the intention of crossing an ocean? This is only one example, because it is not the only field where improvement, followed by complaints and then acceptance, was the common pattern.

The number of engines is an especially visible issue -for obvious reasons- but a similar case can be observed in matters like the reduction of cockpit crew members or automation. Right now, there is not a single passenger plane from any of the big manufacturers carrying a flight engineer. In this case, Airbus was the innovator with its A310 model and, as with the engines issue, we could ask whether removing the flight engineer has made Aviation more or less safe.

Boeing was the one complaining in this case but…it happened to be designing its 757 and 767 models which, in their final configuration, would be launched without a flight engineer.

Is a flight engineer important for safety? Our starting point should be a very easy one: a pilot’s job does not know the concept of “average workload”. It goes from urgency and stress to boredom and back. On an uneventful flight over an ocean, with no traffic problems, there are not many things to do. The plane can fly without a flight engineer and even without pilots; they remain in their seats “just in case”, in a situation quite similar -with some differences- to the one we find in a fire station. However, when things become complex, there is a natural division of tasks: one of the pilots flies the plane while the other takes care of navigation and communications and, if there is a serious technical problem, they also have to try to fix it…it seems that someone is missing.

This absence was very clear in 1998 with Swissair 111, where a cabin smoke situation ended with the crash of an MD-11 that had no flight engineer. In a few moments, the crew went from an uneventful flight prepared to cross the Atlantic Ocean to a burning hell in which they had to land at an unfamiliar airport, finding its location, runway orientation and radio frequencies…while keeping the plane under control, dumping fuel and trying to find the origin of the fire in order to extinguish it.

The accident investigation, performed by “insiders”, did not address this issue. The two-person cockpit was already taken as a given, even though another almost identical plane -the DC-10- flying with a flight engineer could have invited the comparison. Of course, nobody can say that having a flight engineer would have saved the plane, but the workload the pilots faced would have been far lower.

Nor was the issue addressed when an Air Transat plane landed in the Azores with both engines stopped. That happened because the plane was losing fuel and poor fuel management led the pilots to transfer fuel to the very tank that was leaking. Would it have happened if someone had been dedicated to carefully analyzing the fuel flow and how the whole process was working? Perhaps not, but this scenario was simply ignored.

Flight engineers disappeared because automation appeared, and that started a new problem: pilots began to lose manual flying skills, which led to a situation known as the “automation paradox”:

Automation provides an easier user interface, but this is a mirage: a cockpit with fewer controls and a cleaner visual appearance does not mean that the plane is simpler. Actually, it is a much more complex plane. For instance, every Boeing 747 generation has decreased the number of controls in the cockpit. Even so, the newer planes are more complex, and that is how the automation paradox works:

Training centers on the interface design instead of the internal design. That is why we find planes that are more and more complex and users who know less and less about them. A simple comparison can be made with Windows systems, almost universal in personal computing. Of course, Windows allows many more things than the old DOS but…DOS never froze. Windows is much more powerful but, if it freezes, the average user has no options left.

The question is whether we can accept a Windows-like system in an environment where risk is an intrinsic part of the activity. The system allows more things and can be properly managed without being an expert but, if it fails, there are no options left for the average user.

The “fly-by-wire” system was introduced into commercial Aviation by Airbus -with the exception of Concorde- and it met with complaints from Boeing. We should note that Boeing had extensive experience with fly-by-wire systems in its military aircraft. Again, we find a situation where efficiency wins, even though some pilots complain about things like the loss of kinesthetic feel. In a traditional plane, a hand on the controls can be enough to know how the plane is flying and whether there is a problem with speed, center of gravity and so on. In fly-by-wire planes, by default, this feeling does not exist (Boeing kept it in its planes but, to do so, it had to “craft” the feel artificially, since the controls by themselves do not provide it).

This absence could partially explain some major accidents that were labeled “Human Error” or “Lack of Training” without anybody analyzing which features of the design could drive someone to an error, for instance, a defective sensor triggering an automatic response without the pilots knowing what is going on.

What is the situation right now? If we look at the latest planes from the big manufacturers, we get some clues: Boeing 787 vs. Airbus A350. Both are big, long-haul twins, there is no flight engineer, they are highly automated and both are fly-by-wire. Coincidence? Not at all. Through a dynamic of unquestioned changes agreed among insiders and without the knowledge of consumers, the winner will always be the most efficient solution. So both manufacturers ended up with two models that share a good part of the same philosophy. There are differences -electric vs. hydraulic controls, feel vs. no feel in the controls, more or less use of composite materials, lithium vs. traditional batteries…- but the main parameters are the same.

Issues that were discussed some time ago are now seen as settled. The decision always favored the most efficient option, not the safest one. Could that be changed? Of course, but not while everything keeps working as an “insiders’ game” instead of giving clear and transparent information to the outside.

We should also understand the position of the “insiders”: a case like Germanwings was enough for some people -like the NYT- to question the plane before knowing what had really happened. A few days ago, we had an accident involving a big military plane manufactured by Airbus, and some people immediately started to question the safety of a single manufacturer…perhaps someone close to the other one?

Information has to flow freely but, at the same time, many people make a living from scandal, and it is hard to find the right balance: the truth and nothing but the truth and, at the same time, defusing those who want to find or manufacture a scandal. Nowadays the environment is very closed and, in that environment, efficiency will always have the upper hand…even in cases where it shouldn’t. On the other hand, we have to be careful to address real problems instead of invented ones. The points made here could be illustrated not only with the cases referenced but also with others whose mention has been avoided.
