Category: Air safety

#GermanWingsCrash: The use of an accident

One day after the accident, as could be expected, there is no clear knowledge of what happened. Some witnesses saw the plane flying low over high terrain, and controllers reported that the plane descended for several minutes without communicating anything. That could be enough to discard the terrorism hypothesis, but not much more.

Even so, many people used this accident to sell ideas that could serve their own interests. Thus, the NYT asked whether the A320 was safe, while others charged against low-cost companies, against automation and against the recruiting and training policies applied to pilots.

All of these can be legitimate concerns -and, if you like, the NYT article may look suspicious, coming from the USA, home of Airbus’s main competitor, Boeing- but…now?

We know nothing yet. Perhaps, when we have the facts, we will conclude that the official enquiry is biased -as many of us might think of cases like AF447- but, for now, we can only wait for the facts. This should not be an opportunity to grab a loudspeaker for our own concerns when we don’t have the faintest idea whether they are related to the accident. Respect for the victims demands that we not use them in such a dirty way.

Flight-Deck Automation: Something is wrong

Something is wrong with automation. If we can find diagnoses made more than 20 years ago whose conclusions are still current…something is wrong.

Some examples:

Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, published by Rasmussen in 1986; Safeware, written by Leveson in 1995; Normal Accidents, by Perrow in 1999; The Human Interface, by Raskin in 2000; and many others.

None of these resources is new, but all of them remain relevant to anyone interested in what is happening NOW. Perhaps there is a problem in the basics that has still not been properly addressed.

Certainly, once a decision is made, going back is extremely expensive, and manufacturers will try to defend their solutions. An example I have used more than once: modern planes carry processors so old that their manufacturers do not make them anymore. Since the lifetime of a plane is longer than the lifetime of some of its key parts, operators have to stockpile those parts, because they cannot ask the manufacturers for new ones.

The obvious solution would be renewal, but that would be so expensive that they prefer brand-new planes with old-fashioned parts to avoid new certification processes. There is nothing to object to in this practice. It’s only a sample of a more general one: sticking to a design and defending it against any doubt -even a reasonable one- about its adequacy.

However, this rationale applies to products already in the market. What about the new ones? Why do the same problems appear again and again instead of being finally solved?

Perhaps a Human Factors approach could be useful to identify the root problem and help fix it. Let’s speak about Psychology:

The first psychologist to win a Nobel Prize was Daniel Kahneman, one of the founders of Behavioral Economics. He showed how we use heuristics that usually work but that can misguide us in some situations. To show this, he and many followers designed interesting experiments making clear that we all share certain “software bugs” that can drive us to mistakes. In other words, heuristics should be understood as a quick-and-dirty approach: valid for many situations, but useless if not harmful in others.

Many engineers and designers would be willing to buy this approach and, of course, to design their products in a way that enforces a formal rational model.

The most qualified opposition to this model comes from Gigerenzer. He explains that a heuristic is not a quick-and-dirty approach but the only possible one when we face constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others are set in strange situations, designed to misguide the subject of the experiment.

An example used by Kahneman and Tversky is this one:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

 Which is more probable?

• Linda is a bank teller.
• Linda is a bank teller and is active in the feminist movement.

The experiment tries to show the conjunction fallacy: many people choose the second alternative, even though the first one is not only wider but comprises the second.
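In plain probability terms -the notation is added here, it was not part of the original vignette- the fallacy is simply that a conjunction can never be more probable than either of its parts:

$$P(\text{bank teller} \wedge \text{active feminist}) \le P(\text{bank teller})$$

So the second alternative can never be the more probable one, however representative it may sound.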

The analysis of Gigerenzer is different: suppose that all the information about Linda were the first sentence, “Linda is 31 years old”. Furthermore, suppose the experimenter gave no information at all and simply asked the question…we could expect the conjunction fallacy not to appear. It appears because the experimenter provides information and, since the subject is given that information, he assumes it is RELEVANT…otherwise, why would he be fed it?

In real life, relevance is a clue. If someone tells us something, we understand that it has a meaning and that it is not included to deceive us. That’s why Gigerenzer criticizes the Behavioral Economics approach, which many designers share.

For Gigerenzer, we usually judge how good a model is by comparing it with an ideal one -the rational model- but if, instead, we judge models by their results, we can find some surprises. That’s what he did in Simple Heuristics That Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the “bad” model got better results than the sophisticated one.
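A minimal sketch of that comparison in Python (the binary cues, their validities and the noise are all invented here; this is not Gigerenzer’s data or code): a “take-the-best” heuristic that decides on the first discriminating cue is pitted against a model that weighs every cue.

```python
import random

# Task: given two objects described by binary cues, pick the "bigger" one.
random.seed(42)

VALIDITIES = [0.9, 0.8, 0.7, 0.6, 0.55]  # assumed cue weights, best-first

def make_object():
    """An object is a list of binary cues; its size is the validity-weighted
    sum of those cues plus noise."""
    cues = [random.random() < 0.5 for _ in VALIDITIES]
    size = sum(v * c for v, c in zip(VALIDITIES, cues)) + random.gauss(0, 0.3)
    return cues, size

def take_the_best(a, b):
    """One-reason heuristic: scan cues in validity order and decide on the
    FIRST cue that discriminates, ignoring everything else."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return 0 if ca else 1
    return random.randint(0, 1)  # no cue discriminates: guess

def weighted_sum(a, b):
    """'Rational' model: integrate every cue, weighted by its validity."""
    score = lambda cues: sum(v * c for v, c in zip(VALIDITIES, cues))
    return 0 if score(a) >= score(b) else 1

trials, hits_ttb, hits_ws = 10_000, 0, 0
for _ in range(trials):
    (ca, sa), (cb, sb) = make_object(), make_object()
    truth = 0 if sa >= sb else 1
    hits_ttb += take_the_best(ca, cb) == truth
    hits_ws += weighted_sum(ca, cb) == truth

print(f"take-the-best: {hits_ttb / trials:.1%}")
print(f"weighted sum : {hits_ws / trials:.1%}")
```

In runs of this toy, the one-reason heuristic tends to stay remarkably close to the full model, which is the shape of Gigerenzer’s finding.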

Let’s go back to automation design. Perhaps we are asking the wrong questions at the beginning. Instead of “What information would you like to have?” -and getting a letter to Santa Claus as an answer- we should ask: what are the cues you use to know that this specific event is happening?

The FAA, in its 1996 study, complained that a major failure such as an engine stop can be masked by a bunch of warnings about different failing systems, making it hard to discern that all of them come from a common root, that is, the engine stop. What if we asked instead: “Tell me one fact -exceptionally I would admit two- that tells you, clearly and fast, that one of the engines has stopped”?

We have a nice example in the QF32 case. The pilots started to distrust the system when they got information that was clearly false. It was a single fact, but enough to distrust. What if, instead of jumping to that conclusion from a single fact, they had been “rational” and tried to assign probabilities to different scenarios? Probably the plane would not have had enough fuel to allow that approach.

Rasmussen suggested one approach -a good one- where the operator should be able to run cognitively the program that the system is performing. The approach is good, but something is still missing: how long would it take the operator to replicate the functional model of the system?

In real-life situations, especially those involving uncertainty -not calculated risk- people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI 092 case (the Kegworth accident). The pilots used an indicator to identify which engine had the problem…unfortunately, they came from a former generation of the B737 and did not know that the one they were flying bled air from both engines instead of only one. The cue they used to determine the wrong engine would have been correct in an older plane.

If we knew the cues used by pilots, planes could be designed in a human-centered way, instead of creating an environment that does not fit the ways people perform real tasks in real environments.

When new flight-deck designs appeared, manufacturers and regulators were careful enough to keep the basic-T, even if it appeared in electronic format, because that was the way pilots were used to getting the basic information. Unfortunately, this care has disappeared in many other matters: the position of the power levers under autopilot, the position of sticks/yokes, whether they should transmit pressure, whether their position should be common to both pilots…all had a treatment very far from a human-centered approach. Instead, screen-mania seems to be everywhere.

A good design starts with a good question and, perhaps, the questions are not yet good enough. That’s why analyses and complaints 20 and 30 years old are still current.

Air safety: When statistics are used to kill the messenger

A long time ago, I observed that big, long-range planes -with a few exceptions- always had a better safety record than smaller planes. Many explanations were given for this fact: the biggest planes get the most experienced crews, big planes are more carefully crafted…it was easier than that: the most dangerous phases of a flight happen on the ground or near it. Once the plane is at cruise level, the risk is far lower. And the biggest planes fly long routes or, in other terms, for every 10 flown hours a big plane performs, on average, one landing, while a little one may land 10 times. In a statistical report based on flown hours…which of them is going to appear safer? Of course, the big one. If statistics are not read carefully, someone might start to worry about the high accident rate of little planes compared with the big ones.
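A back-of-envelope illustration in Python (numbers invented; only the structure of the argument matters): give both planes exactly the same risk per landing and see what a per-flown-hour report does with it.

```python
# Invented figure: both planes share EXACTLY the same risk per landing.
RISK_PER_LANDING = 1e-6

# (plane, landings performed in every 10 flown hours)
for plane, landings in [("big long-haul", 1), ("little short-haul", 10)]:
    per_hour = RISK_PER_LANDING * landings / 10
    print(f"{plane}: {per_hour:.1e} accidents per flown hour")
```

Both planes are equally safe per cycle, yet the per-hour table shows the little plane as ten times more dangerous.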

Now, the American NTSB has discovered that helicopters are dangerous: http://www.flightglobal.com/news/articles/ntsb-adds-helicopters-ga-weather-to-quotmost-wantedquot-394947/ and the explanation could be similar, especially regarding HEMS activity: emergency medical services are given an extremely short response time. That means flying machines that can still be cold at the moment of performing a demanding take-off, for instance very near a hospital or populated places where an almost vertical take-off is needed. Once airborne, they have to prepare a landing near an accident site. The place can have buildings, unmarked electrical wires and, of course, it can be anything from fully flat terrain at sea level to a high mountain spot. Is the helicopter risky, or is the risk in the operation?

Of course, precisely because the operation is risky, everything has to be done as carefully as possible, but making statistical comparisons with other operations is not the right approach. Analyze in which phase of flight the accidents happen; if the pilot does not have full freedom to choose the landing place, at least choose an adequate place for the base. Some accidents have happened while the doctor on board saw that they were very near an electrical wire and assumed the pilot had seen it too…all eyes are welcome, even non-specialized ones. Other times, non-specialized people asked and pressed for landings in crazy places, or rosters and missions were prepared ignoring experience and fatigue issues. That is, there is a lot of work to do in this field but, please, do not use statistical reports to justify it by comparing things that are really hard to compare.

Does CRM work? Some questions about it

Let’s start by clarifying something: CRM is not the same as the Human Factors concern. It is a very specific way to channel this concern in a very specific context, the cockpit…even so, the CRM philosophy has been applied to Maintenance through MRM and to other fields where real teamwork is required.

Should we improve CRM training, or is it not the right way? Should we improve the quality of the indicators, or should we worry more about the environment in which these indicators appear?…

An anecdote: a psychologist working in a kind of jail for teenagers had observed something over the years. The center had a sequence known as “the process”, whose resemblance to Kafka’s work seemed more than accidental, and inmates were evaluated according to visible behavior markers included in “the process”. Once all the markers on the list had appeared, the inmate was set free. The psychologist observed that the smartest inmates, not the best ones, were the ones able to pass the process, because in a very short time they could exhibit the desired behavior. Of course, once out of the center, they behaved as they liked and, if caught again, they would again exhibit the required behavior to get out.

Some CRM approaches are very near to this model. The evaluator looks for behavioral markers whose optimum values are kindly offered by the people being evaluated who, once past the evaluation, can behave according to their real drive, whether or not it coincides with the CRM model.

Many behaviorist psychologists say that the key is which behavioral markers are selected. They can even argue that this model works in clinical psychology. They are right but, perhaps, not fully right and, furthermore, they are wrong in the most relevant part:

We cannot simply reuse the model from clinical psychology, because there is a fundamental flaw: in clinical psychology, the patient comes by himself asking for a solution, because he feels his own behavior is a problem. If, through treatment, the psychologist is able to suppress the undesired behavior, the patient himself will be in charge of keeping it suppressed. The patient wants to change.

If, instead of clinical psychology, we focus on behaviors that are undesired from a teamwork perspective, things do not work that way: behaviors unwanted by the organization or the team can be highly appreciated by the one who exhibits them. Hence, they can disappear while they are being observed but, if so, that does not mean learning; perhaps only craftiness on the part of the observed person.

For a real change, three variables have to change at the same time: competence, coordination and commitment. Training is useful if the problem to be solved is about competence. It does not work if the organization does not make a serious effort to avoid contradictory messages and, of course, it is useless if there is no commitment from individuals, that is, if the intention to change is not clear or simply does not exist.

Very often, instead of a real change, solutions appear in the shape of shortcuts. These shortcuts try to dodge the fact that the three variables are required and, furthermore, required at the same time. Instead, it is easier to look for the symptom, that is, the behavioral marker.

Once a visible marker is available, the problem is redefined: it is not about attitude anymore; it is about improving the marker. Of course, this is not new, and everyone knows that the symptomatic solution does not work. Tavistock consultants used to speak of “snake oil” as an example of a useless fluid offered by someone who knows it does not work to someone else who knows the same. However, even knowing it, people may buy the snake oil because it serves their own interest…for instance, not being accused of inaction about the problem.

The symptomatic solution goes on even in the face of full evidence against it. At the end of the day, whoever sells it makes a profit and whoever buys it saves face. The next step will be alleging that the solution does not perform at the expected level and that, hence, we should improve it.

Once there, crossed interests make it hard for anyone who has something to lose to change things. It is risky to say that “the Emperor is naked”. Instead, there is a high probability that people will start to praise the new Emperor’s gown.

Summarizing, training is useful for change if there is a prior desire to change. Behavioral markers are useful if they can be observed in conditions where the observed person does not know he is being observed. Does CRM meet these conditions? There is an alternative: showing, in a clear and undisputed way, that the suggested behavior gets better results than the one exhibited by the person to be trained. Again…does CRM meet this condition?

Certainly, we could find behavioral markers that, for deep-psychology lovers, are predictive. However, this is a very dangerous road that some people have followed in selection processes, and it can easily become a kind of witch-hunt. As an anecdote, a recruiter was very proud of his magical question for getting to know a candidate: asking for the name of the second wife of Fernando the Catholic. For him, this question provided a lot of clues about the normal behavior of the candidate. Surprisingly, those clues disappeared if the candidate happened to know the right answer.

If behavioral markers have questionable value, and looking for other behaviors with only a remote relation to the required ones is a dangerous road, then we should look in different places if we want real CRM instead of pressure toward agreement -misunderstood teamwork- or theatrical exercises aimed at providing the observer with the desired behavior.

There is a lot of work to do but, perhaps, along paths different from the ones already trodden:

  1. Investment in recruiting: Recruiting cannot be driven only by technical ability, since technical ability can be acquired by anyone with the basic competences. Southwest Airlines is said to have rejected a pilot candidate because he addressed a receptionist rudely. Is that a mistake?
  2. Clear messages from Management: Teamwork does not appear with messages like “We’ll have to get along” but from shared goals and respect among team members, avoiding watertight compartments. Are we prizing the “cowboy”, the “hero”, or the professional with the guts to make a hard decision using all the capabilities of the team under his command?
  3. CRM evaluation by practitioners: Anyone can have a bad day but if, on a continuous basis, someone is poorly evaluated by those in the same team, something is wrong, whatever the observer in the training process may say. If someone thinks this is against CRM, think twice. Forget CRM for a moment: do pilots behave the same way in a simulator exercise under observation and in a real plane?
  4. Building a teamwork environment: If someone feels that his behavior is problematic, there is a giant step toward change. If, on the other hand, he sees himself as “the boss” and is delighted to have met himself, there is no way toward a real change.

No shortcuts. CRM is a key to air safety improvement, but it requires much more than behavioral markers and exercises where observers and observed seem more concerned about looking polite than about solving problems using the full potential of a team.

When Profits and Safety are in different places: A historical approach to Aviation

All of us have heard that Aviation is the safest way of Transportation. That is basically true but, if 94% of accidents happen on the ground or near it, we should think that some flight phases have a risk level worth studying.

It’s true that some activities carry an intrinsic risk, and Safety means balancing acceptable risk level vs. efficiency. Aviation is in that common situation, but it has its own problems: the lack of external assessment keeps safety-related decisions inside a small group of manufacturers, regulators and operators. Consumers listen to the mantra “Aviation is the safest way of Transportation”, but they cannot know whether some decisions could drive Aviation out of that privileged position.

A little summary of the technology evolution at the big manufacturers can show how and why some decisions were made and how, in the best possible scenario, those decisions meant losing an opportunity to improve the safety level. In the worst one, they meant a net decrease in the safety level:

Once jets appeared, safety increased as a consequence of higher engine reliability. At the same time, navigation improvements appeared too: ground-based stations (VOR-DME), inertial systems and, later, GPS.

However, at the same time as these and other improvements -like making zero-visibility landings possible- some other changes appeared whose contribution could be considered negative.

One of the best-known cases is the number of engines, especially on long-haul flights. Decades ago, the standard practice for transoceanic flights was using four-engine planes. The only exceptions were the DC-10 and the Lockheed TriStar, with three engines. However, in places like the U.S.A., long flights where, if required, planes could land before their planned destination were performed by big planes with only two engines.

Boeing, one of the main manufacturers, used this fact to argue that engine reliability could allow transoceanic flights with twins. Of course, maintaining two engines is cheaper than maintaining four and, hence, operators had a strong incentive to embrace the Boeing position but…can we say that crossing an ocean with two engines is as safe as doing it with four, keeping all the remaining parameters constant?

Intuition says it’s not, but messages opposing this simple idea started to appear. Among them, we can hear that a modern twin is safer than an old four-engine plane. Nobody noticed that, if so, the parameter setting the safety level would be how old the plane was. Then, the right option should be…a modern plane with four engines.

Airbus, the other big manufacturer, complained because, at that moment, it did not have twins of its own able to perform transoceanic flights but, some time later, it would accept this option, launching its own twins for those long-haul flights. This path -complaint followed by acceptance and imitation- has been repeated on different issues: one of the manufacturers proposes an efficiency improvement, “its” regulator accepts the change asking for some improvements, and the other manufacturer keeps complaining until the moment it has a plane that can compete in that scenario.

In the specific case of twins, regulators imposed a rule asking operators to keep within a certain distance of airports along the way. That made twins fly longer routes and, of course, that meant time and fuel expenses. However, since statistical information showed that engine reliability is very high, the time span allowed to fly with only one engine working, loaded with passengers, kept increasing up to the present situation. Now we have planes certified to fly with only one engine working until reaching the nearest airport…assuming it could be five and a half hours away. Is that safe?

We don’t really know how safe it is. Of course, it is efficient, because a twin certified that way can fly virtually any imaginable route. Statistics say it’s safe, but the big bulk of reliability data does not come from laboratories; it comes from flying planes, and that’s where statistics can fail: engine reliability means that the big amount of data comes from flights where both engines were working, uneventfully. We can add that twins have more spare power than four-engine planes, due to the requirement that, if an engine fails past a critical moment during take-off, the plane has to be able to take off with only one engine working. Of course, the four-engine plane has to be able to do the same with three engines, not one.

In other words, during cruise, the engines of a twin work in a low-effort situation which, of course, can have a favorable impact on reliability. The question that statistical reports cannot address, for lack of the right sample, is this: once one engine has failed, the remaining one starts to work in a much more demanding situation. Does it keep the same reliability level it had while both engines were working? Is that reliability enough to guarantee the flight under these conditions for more than five hours? Actually, the lack of a definitive answer to this question made regulators ask for a condition instead: the remaining engine should not go out of its normal parameters while providing all the power required to keep the plane airborne.
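A toy calculation can make the gap explicit (all rates here are invented for illustration; real certification relies on in-service data, not this arithmetic). If the surviving engine’s failure rate rises once it runs at high power, the naive estimate built from cruise data underestimates the risk of losing both engines:

```python
# Toy model, all rates invented; only the shape of the argument matters.
r = 1e-5   # assumed per-hour failure rate of one engine at easy cruise load
k = 5.0    # assumed rate multiplier once the survivor runs at high power
T = 5.5    # hours of the allowed single-engine diversion

p_one = 1 - (1 - r) ** T                    # a given engine fails in the window

p_naive  = p_one * (1 - (1 - r) ** T)       # survivor keeps its cruise-load rate
p_loaded = p_one * (1 - (1 - r * k) ** T)   # survivor's rate multiplied by k

print(f"both engines lost, naive model : {p_naive:.2e}")
print(f"both engines lost, loaded model: {p_loaded:.2e}")
```

The point is not the numbers but the sample: fleets of uneventful two-engine cruises say almost nothing about the multiplier k.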

At least we could have some doubts about it but, since the decision was made among “insiders” without any kind of external check, nobody questioned it and, nowadays, the most common way to board a transoceanic flight is in a twin. We will attend the masks-and-lifejackets show, but it’s unlikely that someone will say:

“By the way, the engines in this plane are so reliable that, in the very unlikely event that one of them fails, we can fly in full safety with the remaining one until reaching the nearest airport, no more than five and a half hours away.”

How many users are informed of this little detail when they board a plane intending to cross an ocean? And this is only one example, because it’s not the only field where improvement by one side, followed by complaints and then acceptance by the other, was the common behavior.

The number of engines is an especially visible issue -for obvious reasons- but a similar case can be observed in matters like the reduction of cockpit crew members, or automation. Right now, there is not a single passenger plane from any of the big manufacturers carrying a flight engineer. In this case, Airbus was the innovator, with its A310 model and, as with the engines issue, we could ask whether removing the flight engineer has made Aviation more or less safe.

Boeing was the one complaining in this case but…it happened to be designing its models 757 and 767 which, in their final configuration, would be launched without a flight engineer.

Is a flight engineer important for safety? Our starting point should be a very easy one: the job of a pilot does not know the concept of “average workload”. It goes from urgency and stress to boredom and back. In an uneventful flight over an ocean, without traffic problems, there are not many things to do. The plane can fly without a flight engineer, and even without pilots. They remain in their place “just in case”, that is, in a situation quite similar -with some differences- to the one we can find in a fire station. However, when things get complex, there is a natural division of tasks: one of the pilots flies the plane while the other takes care of navigation and communications and, if there is a serious technical problem, they also have to try to fix it…it seems that someone is missing.

This absence was very clear in 1998 in Swissair 111, where a cabin-smoke situation made an MD-11, without a flight engineer, crash. In a few moments, the pilots passed from an uneventful flight prepared to cross the Atlantic to a burning hell where they had to land at an unknown airport, finding the place, the runway orientation, the radio frequencies…while keeping the plane under control, dumping fuel and trying to find the origin of the fire to extinguish it.

The accident investigation, performed by “insiders”, did not address this issue. The two-person cockpit was already taken as a given, even though an almost identical plane -the DC-10- flying with a flight engineer could have invited the comparison. Of course, nobody can say that a flight engineer would have saved the plane, but the workload the pilots confronted would have been far lower.

Neither was this issue addressed when an Air Transat plane landed in the Azores with both engines stopped. That happened because the plane was losing fuel and wrong fuel management made the pilots transfer fuel to the leaking tank. Would it have happened if someone had been devoted to carefully analyzing the fuel flow and how the whole process was working? Perhaps not, but this scenario was simply ignored.

Flight engineers disappeared because automation appeared, and that started a new problem: pilots began to lose manual flying skills, driving to a situation named the “automation paradox”:

Automation provides an easier user interface, but this is a mirage: a cockpit with fewer controls, visually cleaner, does not mean a simpler plane. Actually, it’s a much more complex plane. For instance, every Boeing 747 generation has decreased the number of controls in the cockpit. Even so, the newer planes are more complex, and that’s how the automation paradox works:

Training is centered on the interface design instead of the internal design. That’s why we find planes that are more and more complex, and users who know less and less about them. A comparison can be made with Windows systems, almost universal in personal computing. Of course, Windows allows many more things than the old DOS but…DOS never got blocked. Unlike DOS, Windows is much more powerful but, if it gets blocked, the user has no options available.

The question is whether we can accept a Windows-like system in an environment where risk is an intrinsic part of the activity. The system allows more things and can be properly managed without being an expert but, if it fails, there are no options for the average user.

“Fly-by-wire” was introduced into commercial Aviation by Airbus -with the Concorde exception- and it met complaints from Boeing. We have to say that Boeing had wide experience with fly-by-wire systems through its military aircraft. Again, we find a situation where efficiency wins, even though some pilots complain about facts like the loss of kinesthetic feeling. In a traditional plane, a hand on the controls can be enough to know how the plane is flying and whether there is a problem with speed, center of gravity and so on. In fly-by-wire planes, by default, this feeling does not exist (Boeing kept it in its planes but, to do so, had to “craft” the feeling, since the controls by themselves do not provide it).

This absence could partially explain some major accidents labeled “Human Error” or “Lack of Training”, without anybody analyzing what features of the design could drive to an error like, for instance, a defective sensor triggering an automatic response without the pilots knowing what’s going on.

What is the situation right now? If we check the latest planes from the big manufacturers -Boeing 787 vs. Airbus A350- we can get some clues: both are big, long-haul twins, there is no flight engineer, they are highly automated and both have fly-by-wire. Coincidence? Not at all. Through a dynamic of unquestioned changes agreed among insiders, without knowledge by the consumers, the winner will always be the most efficient solution. So both manufacturers ended up with two models that share a good part of their philosophy. There are differences -electric vs. hydraulic controls, feeling vs. no feeling in the controls, more or less use of composite materials, lithium vs. traditional batteries…- but the main parameters are the same.

Issues that were discussed some time ago are now seen as already decided. The decision always favored the most efficient option, not the safest one. Could that change? Of course, but not while everything keeps working as an “insiders’ game” instead of giving clear and transparent information to the outside.

We should also understand the position of the “insiders”: a case like GermanWings was enough for some people -like the NYT- to question the plane before knowing what had really happened. A few days ago, we had an accident involving a big military plane manufactured by Airbus, and some people already started to question the safety of a single manufacturer…perhaps someone near the other one?

Information has to flow freely but, at the same time, many people make a living from scandal, and it’s hard to find the right point: truth and nothing but the truth and, at the same time, deactivating those who want to find or manufacture a scandal. Nowadays, the environment is very closed and, in that environment, efficiency will always have the upper hand…even in cases where it shouldn’t. On the other side, we have to be careful enough to address real problems instead of invented ones. The examples used here could be illustrated not only with the referenced cases but with some others whose mention has been avoided.

Air Safety and low-cost

Low-cost started as an almost marginal phenomenon but, nowadays, some low-cost airlines have outgrown their traditional competitors. Of course, something like that does not happen by chance. A high growth rate sustained for many years usually points to a serious business project. In this context, “serious” means the opposite of the “take the money and run” model, so common in many activities, including Aviation.

So we should start with this fact: there are low-cost operators that did not come looking for easy money. This fact, evident in the behavior of some operators, deserves a respectful analysis, and respect is not going to be denied here.

Once it is clear that we are not speaking about people looking for easy money, the business model linked to low-cost shows itself as very interesting, not only because of its results but because of its eventual hidden weaknesses. We’ll center our analysis on safety and the potential impact of low-cost practices on it:

First, a little bit of common sense: if an operator wants to offer better prices, costs are the enemy to beat, and safety can be translated into costs. Furthermore, since yield management appeared, it is hard to find two passengers on a plane who paid the same for their tickets and, whatever the traditional operators tell us, it is a way to sell below cost to beat low-cost operators with not-so-deep pockets. In this environment, differences in prices have to be really important to resist this kind of competition.

Common sense also tells us that cutting costs in safety can be a hard-to-resist temptation. However, this conclusion requires a deeper analysis:

Low-cost operators are very conscious that this is an easy-to-reach conclusion and that, if true, it can damage them very seriously.

When someone notoriously better known than popular, like Michael O’Leary, Ryanair’s CEO, was asked about the risks to the Ryanair business model, he was quite explicit: the risk of doing something stupid on our side, or an accident at an important low-cost operator.

The investigation after an accident can discover inadequate practices in any airline. However, a low-cost operator runs a different level of risk: an inadequate practice, if discovered, will not be read in terms of negligence or error but as a usual practice to decrease costs and, hence, as part of its business model.

Hence, low-cost operators are fully conscious that a single major accident can put their business continuity at risk, at a bigger level than traditional operators would suffer. They have tried to minimize this risk in different ways and with different levels of success:

Public Relations people from low-cost operators tell anyone willing to listen that they are controlled under the same rules as everyone else. Of course, this statement tries to put in the listener’s mind the idea that they have the same safety level as anyone else. The statement is true but, without even entering into the real capacity of rule-makers and inspectors, it can be deactivated with a simple example: the rules are the same for all car makers. Does that mean a Dacia Logan offers the same safety level as an Audi A8, since both share the same rules?

When there is a serious business project and, of course, the big low-cost operators have one, safety cannot be reduced to crafting ingenious slogans; it has to go much further:

Southwest Airlines, still the most copied model among low-cost operators, based its cost reduction on a very specific operating objective: 25 minutes from landing to take-off. This objective must be hard to reach, since other operators, like JetBlue, decided to drop it, looking for cost reduction elsewhere.

Southwest based this objective on a very deep knowledge, by every single worker, of how his activity affects the others. Without trying the “everyone does everything” approach started by People Express -and hard to keep in the long term- Southwest kept specialization but, at the same time, created an environment based on the ability of the workers performing the job to detect improvement opportunities.

Ryanair’s trajectory has been much tougher: time between flights is only one of the ways to reduce costs. There are many others, like who pays for the workers’ uniforms, the invitation to grab ballpoint pens from hotel rooms, or the price paid by new recruits for the privilege of working there…Probably O’Leary himself would not be offended if defined as a CFO who became CEO, since he is fully conscious of it.

O’Leary is so conscious of his importance at Ryanair as a financial watchdog that he decided not to attend the meetings where maintenance decisions are made. That decision, together with having someone with a high technical profile as Head of Maintenance, is positive, but it is quite reasonable to ask ourselves whether it is enough. It is extremely hard to create watertight compartments in any organization and, in this case, it seems they try to create such a compartment to keep Maintenance out of the pressure to cut costs everywhere in the organization.

However, it is easy to forget that safety is not a function but a perspective covering all the operations in the organization. If an organization is known for a very specific perspective -cost reduction- asking what will happen when both perspectives clash is a must.

As an example, it is possible to decide to keep a good stock of spare parts but, in a cost-reduction-driven organization…what happens if a maintenance job is delayed beyond expectations? What happens if a pilot takes more fuel than the strictly legal requirements? What if a pilot, already delayed, does not want to speed up tasks or run checklists faster than usual? We could find hundreds of examples of clashing perspectives and, of course, keeping Maintenance isolated from cost-cutting pressure is not enough. If lowering costs is the dominant perspective, it will weigh on every decision anyone makes, in a cockpit as in any other position. That, of course, will affect the real safety level that can be reached.

Low-cost operators have been in the market long enough for differences to emerge among them. Possibly, a soft model like Southwest’s, centered on cost reduction in very specific ways, will be less sensitive to safety issues than other, more hard-nosed operators. The latter will pursue costs wherever they are, to exterminate them, and this attitude can drive them very often into a conflict of perspectives.

For the good of all stakeholders, including passengers, the companies most aggressive in their cost-reduction practices should be able to solve the organizational problem that two conflicting perspectives bring, especially when one of them always has the winning hand.

The challenge will not be easy, and it is going to require from these very imaginative and energetic operators at least as much imagination and energy as they devoted to cost reduction and, perhaps, it will make them change some habits they see as very important, since those habits were an important part of their success.

We will see whether they are successful in this effort or whether they prove Drucker’s statement that success makes obsolete the factors that made it possible. If so, the factor turning obsolete, even though it drove the past success, is precisely the fundamentalism in cost reduction. Fundamentalism, in this context, should be understood in its most literal meaning: invasion of fields that are not its own.

Lessons from 9/11 about Technology

A long time ago, machines became stronger and more precise than people. That is not new but…are they smarter too? We can set aside developments close to SciFi, like artificial intelligence based on quantum computing or interaction among simple agents. Instead, we are going to deal with present technology, its role in an event like 9/11 and the conclusions we can draw from it.

Let’s start with a piece of information: a first-generation B747 required three or four people in a cockpit with more than 900 elements. A last-generation B747 requires only two pilots, and the number of elements in the cockpit has decreased by two thirds. Of course, this has been possible through the introduction of I.T. and, as a by-product, through the automation of tasks that previously had to be performed manually. The new plane appears easier than the old one. However, the number of tasks the plane now performs on its own makes it a much more complex machine.

The planes used on 9/11 could be considered state-of-the-art at that time, and this technological level made the event possible, together, of course, with a number of things far from technology. Something like 9/11 would have been hard with a less advanced plane. Handling old planes is harder, and the collaboration of the pilots in a mass murder would have been required. It is not an easy task to get someone to collaborate in his own death under a death threat.

The solution was making the pilot expendable and that, if the plane is flying, requires another pilot willing to take his own life. How much does the training of that pilot cost? In money terms, a figure of $120,000 could be more or less right if we speak about training a professional pilot. However, this would not be hard to get for the people who organized and financed 9/11. A harder barrier to pass is the time required for that training. Old planes were very complicated, and their handling required a good amount of training, acquired over several years. Would terrorists be so patient? Could they trust the commitment of the future self-killers over the years?

Both questions could invite the organizers to reject the plans as unfeasible. However, technology played its role in a very easy way: under normal conditions, modern planes are easier to handle and, hence, can be flown by less knowledgeable, less expert people. From this starting point, the situation appears in a different light: how long does it take for a rookie pilot to acquire the dexterity required to handle the plane at the level required by the objectives? Facts gave the answer: a technologically advanced passenger plane is easy to handle -at the level required- by a low-experience pilot after an adaptation through simulator training.

Let’s go back to the starting question: machines are stronger and more precise than people, but are they smarter too? We could start discussing the different definitions of intelligence but, anyway, there is something machines can do: once a way to solve a problem has been defined, that way can be programmed into a machine to get the problem automatically solved, once and again. As a consequence, there is a displacement of complexity from the people to the machine, allowing modern and complex machines to be handled by less able people than the former machines with their more complex interfaces.

Of course, there is an economic issue here: an important investment in technological design can be recovered if the number of machines sharing the design is high enough. The investment in design is made only once, but it can drive important savings in the training of thousands of pilots. At this point, the automation paradox appears: modern designs produce more complex machines with a good part of their tasks automated. Automation makes these machines easier to handle, under normal conditions, than the previous ones. Hence, less trained people can operate machines that, internally, are very complex. Once complexity is hidden at the interface level, less trained people can drive more complex machines, and that is where the automation payback is.

The scary question is this one: what happens in situations that are unforeseen and, hence, not included in the technological design? In high-risk activities, the manufacturer usually has two answers to this question: redundancy and manual handling. However, both possibilities require a previous condition: the problem has to be identified as such in a clear and visible way. If it is not or if, even after being identified, the problem appears in a situation where there is no time available, the people trained to operate the machine can find that the machine “goes crazy” without any clue about the causes of the anomalous behavior.

Furthermore, if the operator receives full training, that is, training related not only to the interface but to the principles of the internal design, automation can no longer be justified, due to the increased training costs. We already know the alternative: the capacity to answer an unforeseen event is seriously jeopardized. 9/11 is one of the most dramatic tests of how people with little training can perform tasks that, before, would have required much more training. However, this is not an uncommon situation, and it is nearer to our daily lives than we might suspect.

Every time we have a problem with the phone, an incident with the bank, an administrative problem with the gas or electricity bill…we start a process by calling Customer Service. How many times, after bouncing from one department to another, has someone told us to dial the very number we dialed at the beginning? Hidden under these experiences, there is a technological development model based on complex machines and simple people. Is this a sustainable model? Technological development produces machines harder and harder for their operators to understand. In this way, we do better and better the things we already knew how to do, while the things that were already hard become harder and harder.

9/11 was possible, among other things, as a consequence of a technological evolution model. This model is showing itself to be exhausted and in need of a course change. Rasmussen stated the requirement of this course change as a single condition: the operator has to be able to run cognitively the program that the machine is performing. This condition is not met and, if it became mandatory, it could erase the economic viability, driving to a double challenge: one technological, making technology understandable to users beyond the operating level under known conditions, and the other organizational, avoiding the loss of the economic advantages.

Summarizing, performing better at things we already performed well and, to do that, performing worse at things we were already performing poorly is not a valid option. People require an answer always, not only when automation and I.T. allow it. Cost is the main driver of the situation. Organizations do not answer unforeseen external events and, even worse, complexity itself can produce events from the inside that, of course, do not have an answer either.

A technological model aimed at making the “what” easier while hiding the “why” is limited by its own complexity and is constraining in terms of human development. For a strictly economic vision, that is good news: we can work with fewer, less qualified and cheaper people. For a vision more centered on human and organizational development, the results are not so clear. On one side, complexity puts up a barrier preventing the technological solution of problems produced by technology. On the other side, that complexity and the opacity of I.T. make the operators slaves, without the opportunity to be freed by learning.