The effect of highly automated environments on human productivity

In the past, automation was considered one of the hallmarks of progress. Today we know that automation is, at the very least, a mixed blessing whose contribution has to be critically analyzed in every situation.

There are two myths about automation:

  1. Automation decreases the number of available jobs. Therefore, automation contributes to unemployment and should be avoided.
  2. Automation increases the quality of jobs. Only lower-level functions get automated, and people can focus on more creative activities.

We cannot deny that automated activities require fewer people and that, even when new jobs appear, they are not created at the same pace as jobs are destroyed elsewhere. However, avoiding automation is not a real alternative: years ago, Volvo ran an experiment organizing a factory around small teams, each responsible for a full vehicle. The required redundancy of technical resources, tools and highly specialized people made the experiment fail once other manufacturers started working with highly automated factories, beating Volvo on price and, perhaps, even on quality. Unions, politicians and governments can fight against this, but any eventual success would only drive production facilities to other places. The argument about job losses may be true, but it is useless.

The second myth is about the alleged increase in job quality in automated environments. If we go back to Taylor's time, we will find a model damned forever by unionists and motivation specialists. However, it is easy to forget that Taylor had people who could not read manufacturing a full car. That was achieved through strict procedures in which innovation or initiative from workers was not even considered desirable. In its first development phases, automation was understood as a way of freezing procedures. However, as technology improved, it started to develop in its own way instead of imitating human-performed procedures.

That was a point of no return: high-quality jobs started to move from production to the design and manufacturing of the technology itself. Once automated systems start to work in their own way instead of imitating the human one, they make the knowledge of how things worked before automation obsolete. People are given an operational model of how things happen, and IT people call this model “transparent”. Supposedly, operators should not have to be concerned about the complex programs inside the system if the interface is easy and familiar, even when its design is closed to analysis by the users. As an example of “transparency”, any Windows user deals with documents, folders, desktops and so on. However, when the system behaves in a bizarre way, we suddenly discover that these are just nice metaphors for the real workings and, of course, they do not really exist.

The knowledge about former models becomes useless, no matter how deep it may be, and the real knowledge of the system is owned by its designers, making the users fully dependent on them. Job quality then splits: at the operating end (where most of the jobs are), the required knowledge decreases. At the same time, the required knowledge increases where fewer jobs are available (the design and manufacturing of automated systems). That is why the second myth about job quality is precisely that: a myth.

Once we arrive at this point, some questions arise:

  • Is it possible to change this situation?
  • If so, would that change be desirable?
  • If the answer is positive, should it be a universal “yes” or should it depend on the specific situation?
  • If so, which situations call for the change?

The car market provides a good example: any new car carries a lot of electronics, not only to make the car work but also to diagnose mechanical problems. Systems like BMW's augmented reality, where glasses fitted with data screens show the mechanic how to perform specific tasks step by step, are a good example. The ability of the mechanic to diagnose and fix mechanical problems has therefore been downplayed. Furthermore, mechanical knowledge has to be replaced by knowledge of electronics, a field fully apart from any previous knowledge, where the mechanic has to start from scratch.

This is hard, if at all possible, to change. Furthermore, it is easier for designers to hide their designs behind lines of code than behind physical shapes that are easy to observe and analyze. Designers use this advantage to hide key features from users, preventing them from being copied. The change is difficult, but the question about its desirability remains. There is a related effect, known as the “automation paradox”, that can be described like this:

An automated system usually requires a big investment. Once the investment is made, investors try to optimize the payback, and a reduced training requirement is an important part of that payback. However, since automated systems are internally more complex than their non-automated counterparts, a deep knowledge of an automated system would require more training, not less. This paradox is solved the easy way: operators are not trained in how the system works internally but in a system of metaphors or, in other words, the kind of computer literacy that a common Windows user has.

Many call centers, where operators seem more automated than the system itself, reading on their screens messages they do not understand, provide an example; the same can be found in any hypermarket, with people scanning bar codes and credit cards as their main activity. It has become business as usual, and the client has accepted this as an unavoidable feature of the relationship with big companies, since it is common practice everywhere. The practice is then reinforced, and big corporations accept low-skilled people at the front end as a sensible way to work.

This brings us to the fourth question: if the consequences of an event are serious, a highly automated model unable to manage unforeseen situations is not acceptable. However, we should start by asking whether designing a system able to answer every single event is feasible. The answer is negative, at least for complex activities: automated systems handle more and more events but, to do so, they increase their own internal complexity and, hence, new events arise from unexpected interactions inside the system. It works like a hydraulic press: as the volume of the pressed material decreases, the resulting product becomes denser and denser. We decrease the number of unforeseen events, but the remaining ones are harder to manage.

Another effect of increasing complexity is the difficulty of assessing the risk level. In very controlled fields such as aviation, we can find events where functionally unrelated systems interact due to physical proximity. Since they were unrelated, these interactions were unforeseen. The list could be very long: electric sparks near a fuel tank, loss of hydraulic power due to engine explosions, an onboard fire caused by rubber from a tire breaking a fuel tank in a plane with afterburners…

If we cannot create an automatic system able to handle 100% of contingencies, and those contingencies can be serious, the system requires intelligence inside, and that means human contribution[1]. However, allowing human contribution requires some conditions:

The first problem is staying in the control loop. The sequence of events gives the operator clues to solve a problem through the analysis of that sequence. In automated systems, the operator frequently gets the first notice once the event is fully developed and therefore misses the basic clues needed to know what happened.

Another problem is related to workload. The operator can swing from a very low workload to a very high one and vice versa, always missing the right level, the one where human performance can reach its optimum.

Last but not least, some automated systems prevent the operators from performing the required action. There are many examples. One is the problem the Australian company Qantas had with a highly automated plane: after a problem with a sensor, the pilots were flying the plane manually but, even so, the computers kept running. The faulty sensor was feeding the computers with information defining the situation as one requiring immediate action. Therefore, the computer, acting on wrong information, took control from the pilots, causing a serious incident.

In high-risk activities, a common practice is having an alternative system free of automation. That is, standard interfaces are made of computer screens, but a set of “basic instruments” remains that, in degraded situations, can provide enough information to handle the system. The most interesting part, however, is not outside the system but inside: redundancy does not work in software. If we had 50 identical systems with identical software performing the same task at the same time, any bug could make all 50 fail at once. Since designers know that, they take a cautionary step: three systems with identical interface and behavior but different internal logic. If we have three computers running Windows, macOS and Linux performing the same task at the same time, the probability of a multiple failure is extremely low. This design, aimed at providing redundancy that works, has a side consequence: the operator has, at best, a “Windows user” knowledge of the system.
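A minimal sketch of that voting idea, written in Python with purely hypothetical function names (nothing here describes a real avionics implementation), could look like this: three independently written routines compute the same value through different internal logic, and a majority vote decides the output, so a bug in any single channel is outvoted.

```python
from collections import Counter

# Three hypothetical channels computing the same value through different
# internal logic (a trivial Celsius-to-Fahrenheit conversion stands in for a
# real computation). In a real system, each channel would be written by a
# separate team using different tools.
def channel_a(reading):
    return round(reading * 9 / 5 + 32, 1)

def channel_b(reading):
    return round((reading + 40) * 1.8 - 40, 1)  # algebraically equivalent route

def channel_c(reading):
    return round(reading * 1.8 + 32, 1)

def voted_output(reading):
    """Return the majority value; a bug in one channel is outvoted by the other two."""
    results = [channel_a(reading), channel_b(reading), channel_c(reading)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        # No agreement: degrade gracefully instead of acting on a single channel.
        raise RuntimeError("Channels disagree; fall back to basic instruments")
    return value

print(voted_output(20.0))  # 68.0
```

The point of the sketch is only the structure: diversity of implementation plus a vote gives redundancy that identical copies of the same software cannot.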

Jens Rasmussen established a rule for critical systems: “The operator has to be able to run the system cognitively”. At this moment, we can say that this rule is not met. Furthermore, since operators are provided only with user knowledge, the operator, facing unforeseen events, may not know what happened or why. That is the present situation of automation. Applied to non-critical systems, the lack of an answer is part of a general landscape: no one seems especially concerned by this split in knowledge between designers and users, and the resulting situation mirrors Taylor's model. Applied to high-risk activities, the shortcomings are the same but, in this case, a new relationship between the human and the automated system is required.

When designers ignore the logical model of the human operator and instead introduce a new one coming from IT, the objective is improved efficiency. However, significant costs appear, related to the operator's degraded knowledge and, hence, the loss of the ability to solve unforeseen problems. Even so, bringing back Rasmussen's rule should not be very difficult. An example can show how:

For many years, HP manufactured calculators using the RPN[2] model, more efficient in terms of required keystrokes than the arithmetic one. However, even HP ended up manufacturing calculators with arithmetic notation, since it is more intuitive for people. Arithmetic notation may be less efficient, but it fits the mental model of the user, and that is why many users prefer a notation that keeps their own logic… and that is why this (watch minute 7:20) is false: http://www.youtube.com/watch?v=xbecC_ApNY0. Actually, we know how a calculator works at a functional level. The same cannot be said of many of the systems we use.
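As an illustration (a sketch of my own, not HP's actual firmware), the whole RPN model fits in a few lines: operands go on a stack and each operator consumes the top two values, so “(3 + 4) × 5” becomes “3 4 + 5 *” with no parentheses and no precedence rules. Efficient, but it forces the user to adopt the machine's order of operations instead of their own.

```python
# Minimal RPN evaluator: a stack plus four operators. Illustrative only.
def eval_rpn(expression):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()   # second operand entered
            a = stack.pop()   # first operand entered
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_rpn("3 4 + 5 *"))  # 35.0, the same result as (3 + 4) * 5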

Peter Drucker, in his autobiography, describes a situation when, working for a bank, he had a supposedly brilliant idea and proposed it to his boss. He was told to explain the idea to the dullest employee and, when Drucker complained, the answer was that this employee's ability to understand set the limit on the complexity of any idea to be sold to clients. This could be a good principle for automation: the limit to optimization through automation should be the ability of the operator to understand the internal logic of the system. Furthermore, it should be an enforced principle in critical systems. Meanwhile, we will keep running automated systems that have rediscovered Taylor and are unfit to use the human potential.

I would like to end with a comment: it is said that 80% of accidents have a human origin and, for many designers, downgrading or plainly removing the human role looks like a good investment. However, keeping the human out would not reduce accidents by a full 80%. Actually, the complement of that 80% is not the remaining 20% but the much higher number of non-accidents produced by human, often trivial, interventions. If this idea is clear, it can help us get back on the right path.
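A back-of-the-envelope illustration, with numbers that are entirely made up for the sake of the argument, shows why the naive subtraction fails:

```python
# Hypothetical numbers, invented only to illustrate the reasoning above.
accidents_per_year = 100
accidents_with_human_origin = 80     # the often-quoted "80%"
incidents_defused_by_humans = 900    # near-misses caught by trivial human interventions

# Naive expectation: remove the human and only the other 20% of accidents remain.
naive_remaining = accidents_per_year - accidents_with_human_origin

# But removing the human also removes the interventions that kept near-misses
# from becoming accidents, so the real figure could be far higher.
pessimistic_remaining = naive_remaining + incidents_defused_by_humans

print(naive_remaining)        # 20
print(pessimistic_remaining)  # 920
```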


[1] This would deserve a longer discussion, but it is out of scope here.

[2] Reverse Polish Notation.
