Three myths in technology design and HCI: Back to basics

By a coincidence driven by the anniversary of the Spanair accident, for a few days comments about the train accident in Santiago de Compostela and about the Spanair accident appeared together. Both have a common feature beyond, of course, a high and deadly cost. That feature could be stated like this: “A lapse cannot lead to a major accident. If it does, something is wrong in the system as a whole”.

The operator -pilot, train driver or whoever- can be held responsible if there is negligence or a clear violation, but a lapse should be prevented by the environment and, if that is not possible, its consequences should be reduced or nullified by the system. Clearly, that did not happen in either of these cases, but what was the generic problem? There are some myths related to technology development that should be explicitly addressed and are not:

  • First myth: There is no intrinsic difference between open and closed systems. If a system is labeled as open, that comes only from ignorance, and technology development can convert it into a closed one. To be short and clear, a closed system is one where everything can be foreseen and, hence, it is possible to work with explicit instructions or procedures, while an open one has different sources of interaction from outside or inside that make it impossible to foresee every possible disturbance. If we accept the myth as true, no knowledge beyond the operative level is required from the operator once technology has reached the point where the system can be considered closed. A normative approach should be enough, since every disturbance can be foreseen.

Kim Vicente, in his Cognitive Work Analysis, used a good metaphor to attack this idea: is it better to have specific instructions to reach a place, or to have a map? Specific instructions can be optimized, but they fail under closed streets, traffic jams and many other situations. A map is not as optimized, but it provides resources in unforeseen situations. What if the map is so complex that including it in the training program would be very expensive? What if the operator was used to a road map and now has to learn how to read an aeronautical or topographic chart? If the myth holds, there is no problem: closed streets and traffic jams do not exist and, if they do, they always happen in specific places that can be foreseen.
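Vicente's metaphor can be made concrete with a toy sketch (not from his book; the street names and road graph below are invented for illustration): a fixed list of instructions fails as soon as any street on the route is closed, while a map, represented as a graph, lets a new route be computed around the disturbance.

```python
from collections import deque

# Toy street map as an adjacency list (invented for illustration).
city_map = {
    "home": ["main_st", "park_ave"],
    "main_st": ["home", "bridge"],
    "park_ave": ["home", "river_rd"],
    "bridge": ["main_st", "office"],
    "river_rd": ["park_ave", "office"],
    "office": ["bridge", "river_rd"],
}

# "Specific instructions": one optimized, fixed route.
instructions = ["home", "main_st", "bridge", "office"]

def follow_instructions(route, closed):
    """Fail as soon as any step of the fixed route is blocked."""
    return None if any(street in closed for street in route) else route

def use_map(street_map, start, goal, closed):
    """Breadth-first search over the map: replan around closed streets."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in street_map[path[-1]]:
            if nxt not in seen and nxt not in closed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route at all

closed = {"bridge"}  # an unforeseen disturbance
print(follow_instructions(instructions, closed))            # None: the instructions fail
print(use_map(city_map, "home", "office", closed))          # a detour via river_rd
```

The map costs more to carry and to learn to read, exactly as the paragraph above says, but it is the only representation that survives the closed street.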

  • Second myth: A system where the operator has a passive role can be designed in a way that enables situation awareness. To address this myth properly, we should go back to a classic experiment in Psychology (http://bit.ly/175gKIc), where one cat transports another in a cart. Supposedly, the visual learning of both cats should be the same, since they receive the same information. However, the results say otherwise: the transporting cat gets much better visual learning than the transported one. We don't really need the cats or the experiment to know that. Many of us have been driven to a place many times by someone else. What happens when we are asked to go to that place alone? Probably, we never learned the way. If this happens with cats and with many of us, is it reasonable to believe that an operator will be able to solve an unplanned situation after being fully out of the loop? Some designs remove continuous feedback features because they are hard and expensive to keep and, supposedly, add nothing to the system. Some time ago, a pilot of a highly automated plane told me: “Before, I drove the plane; now the plane drives me”. That is another way to describe the present situation.
  • Third myth: Availability bias: we are going to do our best with the resources we have. This can be a common approach among designers: what can we offer with the things we already have or can develop at a reasonable cost? Perhaps that is not the right question. Many things we do in our daily life could be packed into an algorithm and, hence, automated. Are we stealing pieces of situation awareness by doing so? Are we converting the “map” into “instructions”, leaving no resources when those instructions cannot be applied? Nevertheless, for the last decades designers have behaved like that: providing an output in the shape of a light, a screen or a sound is quite easy, while handles, hydraulic lines working -and transmitting- pressure and many other mechanical devices are harder and more expensive to include.

Perhaps we should remember “our cat” again and how visual and auditory cues may not be enough. The right question is never about what technology is able to provide, but about what situation awareness the operator has at any moment and what capabilities and resources he has to solve an unplanned problem. Once we answer that question, some surprises could appear. For instance, we could learn that not everything that can be done has to be done and, by the same token, that some things that should be done have no cheap and reliable technology available. Starting a design by trying to provide everything that technology can provide is a mistake and, sometimes, this mistake is subtle enough to go undetected for years.

Many recent accidents point to these design flaws, not only the Spanair and Renfe ones: automated flight systems that take data from faulty sensors (Turkish Airlines, AF447 or Birgenair), stick-shakers that can be programmed -instead of behaving as the natural reaction of a plane near stall- provoking an over-reaction from fatigued pilots (Colgan), indicators where a single value can mean opposite things (Three Mile Island) and many others.
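The faulty-sensor cases suggest a simple design principle: an automated system should cross-check its redundant inputs and hand any disagreement to the crew, instead of silently trusting one source. A minimal sketch of that idea follows (the voting logic, thresholds and readings are invented for illustration; real avionics monitoring is far more involved):

```python
def cross_check(readings, tolerance):
    """Compare redundant sensor readings; flag disagreement instead of
    silently picking one value (invented logic for illustration)."""
    spread = max(readings) - min(readings)
    if spread <= tolerance:
        # Sensors agree: the median is a reasonable consolidated value.
        return {"status": "ok", "value": sorted(readings)[len(readings) // 2]}
    # Sensors disagree: do not guess; alert the operator explicitly.
    return {"status": "disagree", "value": None}

# Three redundant airspeed readings in knots (values invented).
print(cross_check([251.0, 250.5, 250.8], tolerance=2.0))  # sensors agree
print(cross_check([251.0, 250.5, 180.0], tolerance=2.0))  # one sensor fails
```

The point of the sketch is the second branch: returning "disagree" keeps the operator in the loop at exactly the moment automation stops being trustworthy, instead of feeding a plausible but wrong value downstream.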

It’s clear that we live in a technological civilization. That means assuming some risks, even catastrophic ones, like an EMP or a massive solar storm. However, there are other, smaller and more immediate risks that should be controlled. Expecting people to solve problems while, at the same time, we take away the resources they would need to do so is unrealistic. If, driven by cost-consciousness, we assume that unforeseen situations are below one in a billion and, hence, an acceptable risk, let's be coherent: eliminate the human operator. On the other hand, if we think that unforeseen situations can appear and have to be managed, we have to provide people with the right means to do so. Both are valid and legitimate ways to behave. Removing resources -including those that allow situation awareness- and then, once the unforeseen situation appears, using the operator as a fuse to blow while speaking of “lack of training”, “inadequate procedure compliance” and other common labels is neither a right nor a legitimate way. Of course, accidents will happen even if everything is properly done but, at least, the accidents waiting to happen should be removed.
