Some time ago, I had a friendly discussion with an IFALPA member about this issue.
His opinion, even though he was not very happy about it, was that the future of aviation lies in the pilotless plane. Mine, even though I am not so personally concerned, was and still is the opposite.
The first question: is it technically feasible to fly without a cockpit and, hence, without a pilot? There is nothing to discuss there. Long before the recent experience with passenger planes and drones, we knew that during World War II the famous V-1s were launched against England. These flying bombs were already pilotless planes, taking off from a ramp and, for obvious reasons, with no requirements about how to land. Moreover, some early ones were built with a cockpit to solve a technical problem. Since then, technology has evolved so much that present feasibility is not even worth discussing. We do not even need to speak about planes whose pilot is on the ground; we can speak about planes without a pilot at all, neither onboard nor on the ground. When VLJs appeared and were certified for a single pilot, someone proposed a TMH (Take Me Home) switch to be used in case the only pilot on the plane became incapacitated.
Those are the plain facts about feasibility. If we speak about costs, we find important advantages compared with the present situation. If the installation of a cockpit can be avoided, the design is simplified and costs decrease. If we need ground-based cockpits, should we need as many of them as planes, or could the number of cockpits be lower than the number of planes? Could one cockpit control more than one plane model?
We do not even need to mention the costs linked to crew salaries and travel, or to flight-time limitations forcing pilots to rest in place after a long flight. If we speak about cargo, further savings appear, such as the possibility of operating planes without cabin-pressure requirements.
Furthermore, in more than 100 years since the beginning of powered aviation, we have learned a great deal. We know many things about aircraft behavior, materials, engines, meteorology, control systems, communications… in other words, it seems that not many unforeseen situations could appear.
With all of these pieces in place, should a pilot be thinking about looking for another job? Pilots have technical training and, hence, are often driven by a technical mentality in the sense that “if something can be done, it will be done”. Even when a development runs against their interests, and even though they might fight it, they may know, or think, that they are defeated in advance.
I hope all of this accurately reflects the reasoning of someone who believes in this kind of development: someone who thinks it is unstoppable and that opposing it would be like trying to stop a train with your bare hands. However, that is not the real situation.
Some pilots say that every year they avoid three or four accidents, and that this is what they are paid for. This value of the Human Factor is something many people have written about, from well-known ICAO figures like Daniel Mauriño to authors like Sidney Dekker, James Reason or, if I am allowed into this group, myself. A figure like the 64% of accidents attributed to Human Factors, published by Boeing just before it changed its classification criteria, can lead to a serious mistake, even if we assume the data are correct.
OK. Let’s suppose that this is the reality and, furthermore, let’s suppose that these are not errors committed as a wrong reaction to a technical problem. Let’s suppose that all of them are unforced piloting errors. Certainly, that is too much for a supposition, but fine, let’s accept it. Now comes the reasoning mistake clearly identified by people from the Behavioral Economics field, such as Daniel Kahneman: does that mean that by eliminating pilots we could avoid 64% of accidents? It seems so, but it would be a false assumption:
The complement of that 64% is not the remaining 36% but the number of non-accidents that happened thanks to human, and often trivial, intervention. These non-accidents coming from human intervention will be a number much bigger than 36%, but we do not have the information to know the exact figure. That is the point where pilots should fight back. Why are these situations, whose existence everyone knows about, not counted? Why not set up a system similar to ASRS where, instead of reporting committed errors, everyone could report sound actions? Cases of non-accidents in situations that, had the system been left to behave as designed, could have ended in an accident are a first-class argument against passenger planes without a pilot.
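The base-rate mistake above can be made concrete with a small sketch. All the numbers except the cited 64% are invented purely for illustration; the real count of pilot "saves" is, as said, unknown:

```python
# Hypothetical illustration of the complement mistake.
# Only the 64% share is from the cited Boeing figure; every other
# number is invented for the sake of the example.

accidents = 100                   # accidents in some hypothetical period
human_factor_share = 0.64         # share attributed to human error
human_factor_accidents = accidents * human_factor_share  # 64

# The missing denominator: flights where human intervention PREVENTED
# an accident ("non-accidents"). Suppose, purely hypothetically,
# that pilots quietly save 300 flights in the same period.
saves_by_pilots = 300

# Naive reasoning: remove pilots, remove the 64 human-error accidents.
naive_accidents_without_pilots = accidents - human_factor_accidents

# Corrected reasoning: the saved flights now become accidents,
# assuming the automated system cannot reproduce those saves.
corrected_accidents_without_pilots = naive_accidents_without_pilots + saves_by_pilots

print(naive_accidents_without_pilots)      # 36 accidents: looks like progress
print(corrected_accidents_without_pilots)  # 336 accidents: the hidden base rate
```

The point is not the specific numbers but the shape of the arithmetic: the conclusion flips as soon as the uncounted non-accidents enter the balance.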
Does present knowledge guarantee that unplanned situations are not going to happen? A recording of the Airbus A380 manufacturing process showed how the main gear refused to go down. Someone could ask whether we still do not know how to design a landing gear that works at the first attempt. Of course, all of this happened before the QF32 case, which revealed many more unforeseen situations. Interactions among the different systems in a plane are so complex that unforeseen situations appear even in parts of the system that are supposedly fully under control.
Greater knowledge is no guarantee of having everything foreseen. Some centuries ago, Pascal said that knowledge is like a sphere: as it grows in size, it has more points of contact with the unknown. That principle is as true today as it was the moment it was stated.
Revolutions that force change in aviation happen almost every day. The attempt to guarantee operations in dense traffic, in severe weather and in other demanding conditions produces important changes, forcing everything to be reviewed from time to time and driving toward the situation Charles Perrow described as a tightly coupled system, where a little problem can start a process that ends in disaster. The only element in the system that can break the snowball effect, once started even by a trivial fault, is precisely a person.
If planes are to be flown by automatic systems, those trivial situations have to be planned for, and the resulting systems become even more complex and, hence, prone to unforeseen situations arising from their own design. Can we avoid it? As part of the answer, I would like you to meet Mr. Charles Simonyi, described by Wikipedia as a software developer and space tourist. Simonyi was the person who managed the creation of Microsoft Office and, especially, the one who promoted the idea known as WYSIWYG (What You See Is What You Get), which makes the user’s task easier because, as the name implies, the printer will print exactly what we see on the screen.
Some years later, Simonyi started to worry about his own work, because WYSIWYG was introducing more complexity and producing problems that were hard to fix. The reasoning is simple: early information systems were programmed directly in machine code or in something close to it called assembly language. When a problem appeared, it was not hard to know where to look: the instruction given to the computer was wrong. However, as the power of programming languages grew, their correspondence with machine code decreased, entering a kind of abstraction ladder with more and more steps. WYSIWYG was, when it appeared, the top step of this ladder, but the ladder had “leaks”: differences between the code written by the programmer, even if he made no mistake, and the action performed by the machine, because the problem could be buried in any intermediate step. A programmer does not program machines anymore; he programs programs whose products are other programs.
Simonyi’s present ambition, about which many people are skeptical, is to start a new generation of information systems based on something he calls “intentional software”: avoiding the abstraction ladder and all of its steps, because that is where the problems live that systems are unable to foresee, since they are an intrinsic part of the design and of the tools used to perform the design itself. When Simonyi speaks about the present problems of information systems, it is hard to deny that they exist. If these are the pieces supposedly going to be used to build unbreakable automation, will it be as unbreakable as expected? In aviation, the pilot has to understand the functional model, not only the operating one, of the onboard systems in order to diagnose problems and to be an alternative to a failing system.
If a company quietly wanted to get rid of pilots, the sequence to do so would be very well defined and, probably, that sequence is already in the minds of some manufacturers or operators:
- Start working with pilotless planes in cargo flights, where the risk involves only goods.
- Once the results are presented as good enough to justify the use of pilotless planes in passenger transportation, a few more things would be required:
  - An explanation of how well the system performed in cargo flights, even if it proved convenient to omit some details.
  - Letting the passenger choose, but with a big difference in price or in features such as punctuality.
  - Complaints about pilots and how they are an obstacle to progress.
  - Wide publicity for any accident related to human error.
These pieces could be enough to win over public opinion, creating an image of pilots as people who add no value, contribute to high prices and could be responsible for accidents that, without them, would not have happened. In this way, nice profits could come in the short term, even though it could be the equivalent of running a vehicle on nitroglycerin. Perhaps it would be faster, but the risk of a big explosion would always be present.
Now, pilots, passengers and even regulators face a situation where a message of “unavoidable progress” is pushed: at the end of the day, planes without a cockpit would be an improvement in safety and costs and, of course, anyone opposing this could have no motive other than reluctance to progress or personal interest. Perhaps this is the moment when the real value of the human factor has to be made clear through some specific actions:
1. Specific records of unforeseen situations where a good outcome came from human flexibility.
2. Good information for consumers and their organizations, instead of crafted messages designed to keep them ignorant.
The mirage of a technological development aimed at reducing the remaining uncertainty and allowing us to eliminate the human factor is simply that: a mirage. With respect to unforeseen situations, technological development behaves like a hydraulic press: if we increase the power, the pressed material occupies less space but, at the same time, it becomes harder and it is more difficult to introduce a hand to operate it. In other words, increasing coverage from 90% to 95% is not always good news. It could actually be worse if, at the same time as we raise the percentage, we introduce a complete inability to manage the remaining 5% of unforeseen situations.
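The hydraulic-press argument can also be put in numbers. The sketch below uses entirely hypothetical probabilities (the 90%, 95% and 80% figures are assumptions, not data) to show how higher automation coverage can still raise the accident rate once human recovery disappears:

```python
# Hypothetical sketch of the coverage-vs-recovery trade-off.
# All probabilities are invented to illustrate the argument.

def accident_rate(automation_coverage: float, recovery_prob: float) -> float:
    """Rate of situations that end in an accident: the situation must
    escape the automation AND nobody must be able to recover it."""
    return (1 - automation_coverage) * (1 - recovery_prob)

# Today (assumed): automation handles 90% of situations and a pilot
# recovers 80% of the remainder.
with_pilot = accident_rate(0.90, 0.80)     # ~0.02

# Tomorrow (assumed): coverage rises to 95%, but with no pilot there
# is no recovery at all for the remaining 5%.
without_pilot = accident_rate(0.95, 0.00)  # ~0.05

# More coverage, yet roughly 2.5 times the accident rate.
print(with_pilot, without_pilot)
```

Under these assumed figures, the "improved" system is worse precisely because it removed the element that handled the residue, which is the whole point of the press analogy.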
In aviation, the technological mirage has been at work for a long time. That is why a very relevant author in Artificial Intelligence like Daniel Hillis offered himself as a passenger in a pilotless plane programmed by an artificial-intelligence system able to generate programs that are hard for a human being to understand but able to fly. Hillis said that he did not know how the system would work but that, at the end of the day, he did not know how a human pilot worked either. However, that is a half-truth. Certainly, we do not know how a human pilot works, but we know something that could be enough:
There is a very high probability that the human pilot wants to enjoy a happy retirement and will do whatever is required to get there. From the perspective of a passenger in a flying plane, this simple purpose is a full guarantee that should not be removed and, of course, it is a guarantee that no artificial system can offer.