I found this interesting article about automation in Avweb: http://www.avweb.com/news/features/Automation-Friend-Or-Foe220153-1.html Probably nobody, myself included, is going to dispute that automation in Aviation is here to stay, and there is nothing wrong with that.
The problem that can make articles like this one uncomfortable for some of us is a kind of default position where a deeper analysis is missing. Can we say that in the past automation supported humans and now it is the reverse? Probably yes, since many features of modern Aviation would be lost if we insisted on people merely supported by technology. A CAT III landing, for instance, simply cannot be flown by a human pilot. The precision required to navigate our crowded skies, with reduced separation between planes, would likewise be impossible for a human hand to maintain over a long period of time. So, what is the problem?
First, if we accept that humans are going to act as support for automation, they should have the resources, understood in a wide sense, to perform that function. People, to work properly in Aviation and probably in any other field, cannot be subject only to a feedback loop. They have to be able to anticipate what is coming; that is, they need enough information to forecast the next minutes or hours. Without that kind of situation awareness, they cannot be expected to solve an important problem once it appears. Is all the automation on the market designed to support a pilot whose head is ahead of the plane, or is the pilot expected to sit quietly running checklists and to intervene only after a problem unmanageable by automation has appeared?
That is the problem of the “default position”: when someone speaks about automation, it is very common to forget that automation comes in two big classes, good and bad. We cannot assume that automation is always good and that all the effort has to come from the human operator, who has to adapt to it. Adaptation has to be mutual, and that also means getting rid of features that can induce error or hide key information. This is not new: when highly automated planes started to crowd the market, pilots were expected to adapt to the situation, even if that situation was shouting about a bad design. Some problems are so old that they were already addressed by people like the late Jef Raskin, designer of the Macintosh, yet they are still with us.
Automation? Fine, but if we want to keep an alternative resource to automated systems, we need to keep current the knowledge and skills required to fly a plane manually, and there seems to be a problem here, one made public by someone as relevant in Aviation as the CAA-UK. Furthermore, automation has to be designed with in mind someone who could be required to take the controls in delicate situations. If a design does not meet these criteria and, even so, once on the market it is considered unquestionable, let’s be coherent: do away with pilots. If they cannot be that alternative resource, it would be more coherent to fully trust automation and, if a crash happens, tell the victims’ relatives that the probability was extremely low and it will probably never happen again. Acceptable? Obviously not.
There is nothing to say against the presence and relevance of automation and/or I.T. in Aviation. The only unacceptable part is the default position that makes automation an unquestionable piece of the environment without stopping to think that, perhaps, automation and Information Technology can also be wrongly designed and that, when that happens, it is not fair to ask everyone to “adapt” to the creative genius of a designer who, in some cases, may be neither creative nor a genius, and, especially, not much of a designer. Designs, like everything else, have to be open to question and, when something shows itself to be wrong, the “Human Error” or “Lack of Training” labels cannot be used to hide that fact.