Amalberti explains safety-related concepts very clearly and, whether or not the reader agrees with 100% of its contents, the book is worth reading and discussing. He challenges some sacred cows in the safety field, and his book should be analyzed very carefully. In particular, three points deserve special analysis:
• People learn by doing, not by observing. Demanding full Situational Awareness before doing anything could lead to serious accidents while the operator is still trying to build that full picture.
• There are different safety models. Many markets and companies try to imitate ultra-safe models like those of Aviation and Nuclear Energy when, actually, these models may not work in other activities that are more expert-based than procedure-based.
• Trying to train someone for every single exceptional situation is not the safest option. People may be tempted to explore the limits instead of remaining within the safe envelope.
People learn by doing, not by observing. True, and perhaps that is one of the reasons people are so bad at monitoring, while some automation designs still insist on precisely that. However, Amalberti reaches a conclusion about Situational Awareness that, in my opinion, could be wrong. For Amalberti, we should not demand full Situational Awareness before trying a solution, because we could become paralyzed in critical situations with serious time constraints. Put like that, it is true, and it is the phenomenon known as paralysis by analysis, but something is missing:
Situational Awareness cannot be understood as a condition to be met in critical situations but as a flowing process. In high-risk environments, design should guarantee that flow at a level where, once an emergency appears, getting a full picture of the situation is easy. If we combine this principle with Amalberti's first one, that people learn by doing instead of observing, we can reach different conclusions:
1. Top-level automation should be used only in exceptional situations, with the default levels being those where human operators can learn and develop Situational Awareness by doing instead of observing.
2. Information Technology used in critical systems should be under-optimized: instead of the most efficient design in terms of technology use, the alternative is the most efficient design in terms of maintaining Situational Awareness. Some new planes still carry Intel processors that left the market many years ago (for instance, the B777 uses the Intel 486) and nothing happens. Why, then, should we try to extract all the juice from every line of code, building systems impossible for users to understand?
Different safety models, and the out-of-context imitation of ultra-safe systems such as Aviation or Nuclear Plants. This is another excellent point but, again, something could be missing: evolution. I have to confess that my Ph.D. thesis tried to do precisely what Amalberti rejects, that is, to apply the Air Safety model to the field of Business Management. Some years later, Ashgate published it under the name Improving Air Safety through Organizational Learning, "forgetting" the chapter where learning from Air Safety was applied to Business Management.
The first thing to be said is that, in general terms, Amalberti is right. We cannot bring all the procedural weight of a field like Air Safety to many other fields (unless we want them to fail), such as Surgery, where the individual can be much more important than the operating rules. However, what could be lost here is Organizational Evolution. Some fields evolved into ultra-safe models, and they did so because of their own constraints, without anyone trying to imitate an external model. Different activities, while seeking efficiency improvements, evolved towards what Charles Perrow called tightly coupled organizations, and that produced an unintended effect: errors in efficient organizations are also efficient, because they spread their effects through the same organizational channels as normal operation. Otherwise, how could we explain cases like Baring Brothers, where a single rogue employee was enough to take the whole bank down?
Summarizing, it is true that we should not imitate ultra-safe models out of context but, at the same time, we should analyze whether the field whose safety we are studying should evolve towards an ultra-safe model because it has already become a tightly coupled organization.
Trying to train someone for every single exceptional situation is not the safest option. Again, we can agree in general terms. For instance, we know that, as part of their job, pilots practice in simulators events that are not expected to appear in their entire professional life. Perhaps asking them to practice recovery from upside-down positions or from spins would be an invitation to get closer to those situations, since they would feel able to recover. The "hero pilot" era ended long ago, but…
We have known of wrong risk assessments in the past, where the supposedly low-probability event that did not require training, since it was never supposed to happen… happened. A well-known example is United 232, where three supposedly independent hydraulic systems failed at the same time, showing they were not as independent as claimed. The pilot had previously practiced in a flight simulator the skills that turned a full crash into a crash landing, substantially decreasing the number of casualties. A similar case is the Hudson river landing, where a dual engine failure was supposed to happen only above 20,000 feet… and the procedures developed for that scenario made the pilots lose precious time when the full loss of power happened far below that height.
Even so, instead of focusing on the various events showing wrong risk assessments, which could invite us to take Amalberti's idea with care (even when he clearly raises an important point there), there is a different kind of problem that has already been behind major accidents: faulty sensors feeding wrong information to computers, and plane occupants getting killed without the pilots having the faintest idea of what was going on. Stamping this kind of event as "lack of training" is a way of saying something that is at the same time true and useless and that, by the way, contradicts Amalberti's own principle.
Daniel Dennett used a comparison that can be relevant here: the one between military commands and intelligence agencies. Commands are expected to react to unforeseen situations, and that means redundancy in resources and extensive cross-training. On the other side, intelligence agencies work under the "need-to-know" principle. Should we consider it an accident that this same "need-to-know" idea has been used by some aviation manufacturers in their rating courses? Should we really limit training to foreseen events, or should we recognize that some events are very hard to foresee and that different approaches should be taken in design as well as in training?
Summarizing, this is an excellent book. Even if some of us do not agree with every single point, the points deserve discussion, especially when it is clear that they come from a strong safety-related conception rather than from politics or convenience inside regulators, operators, or manufacturers.
I would not like to finish without a big thank you to my friend Jesús Villena, editor of this book in Spanish under the name "Construir la seguridad: Compromisos individuales y colectivos para afrontar los grandes riesgos", for introducing me to this book.
Perhaps at a moment when Microsoft is not living its best days, speaking about Windows could seem a paradox. However, Windows itself is a paradox of the kind of knowledge that many companies are valuing right now.
Why Windows? You open your computer and find a desktop. The desktop has folders and documents, and you even have a trash bin. This environment lets you work in the old-fashioned way. We move documents from one folder to another. When a document is no longer useful, we put it in the trash bin, and everything seems perfect… as long as everything works as it is supposed to. What happens when we find a blue screen or the computer simply does not start?
At that moment, we find that everything was false. Desktop, folders, documents…? Everything is false; everything is part of a complex metaphor, good while everything works as expected but fully useless once something fails. What kind of real knowledge does the Windows user have? An operating knowledge that can be enough in more than 90% of the cases. That is fine as long as the remaining cases cannot bring unexpected and severe consequences, but we see this kind of knowledge in more and more places, including critical ones.
When the 2008 crisis started, someone said that many banks and financial institutions had been behaving like casinos. Other people, more statistically seasoned, denied it, saying that had they been behaving like casinos, the situation would never have become what it was, because casinos have the probabilities in their favor. Other environments do not have this advantage, but they behave as if they did, with unforeseeable consequences.
Anyone can have a breakdown. Sure. But not everyone is capable of letting a mobile phone service stay down for hours, across the board, without giving an explanation or saying when it will be fixed.
If, on a Sunday afternoon, the mobile phones of an operator do not work, things may be serious; but if on Monday morning everything is still the same, and it is not the first time service has been suspended, one has to conclude that this operator cannot be used when the phone is for professional use.
On top of that, they have a phone line where they encourage you to use the web service. On the phone, after an endless series of menus with no chance of talking to anyone, the call is cut off; and on the web, after third-world performance, there is not a single explanation about the breakdown or about how long it will take to fix.
Today this link came my way, http://ideasinversion.com/blog/2014/01/07/las-diez-mejores-webs-para-emprendedores/ and, without meaning to take credit away from the ten websites mentioned in it, there is something that grates on me a little about the concept of "entrepreneurs" that is commonly used.
I will explain it with an example to make it clearer: quite some time ago, I had to teach Human Resources in a 200-hour course organized by a business confederation, whose purpose was to teach new business owners, or even the self-employed, the basics for starting their activity. I was surprised to find that the person organizing the course, who was paid only for his teaching hours, had managed to include 70 hours of "Management Techniques" at the cost of reducing subjects like Accounting to ten hours. It is not very hard to guess what the course organizer taught, is it?
It turned out that someone who perhaps wanted to open a hairdresser's shop, working alone or at most with one other person, had to be shown all the secrets of communication, leadership, motivation, and so on; but apparently it did not matter in the least that they had not the faintest idea of how to keep the shop's books.
When we talk about "entrepreneurs" (note that we always say "entrepreneurs" and never "business owners", "self-employed", or "working for oneself"), it seems that the important thing is to instill a kind of entrepreneurial spirit instead of answering trivial questions such as: where do I get the money, what is the best financing formula, how much money will I really need, what minimum stock level can the business run on, how do I determine how many people I need and with which profiles, how and how much do I pay them, how will I make myself known in the market… in short, trivial questions that can apparently be dismissed in favor of "creating an entrepreneurial spirit". Isn't something failing here?
The Big Data concept is still relatively new, but the idea inside it is very old: if you have more and more data, you can eliminate ambiguity, and there is less need for hunches, since the data are self-explanatory.
That is a very old idea coming from IT people. However, reality has always insisted on delaying the moment when it can be accomplished. There are two problems in getting there:
- As data grow, context analysis becomes more necessary to decide which data are relevant and which can be safely ignored.
- On the other side of the table, we can have people trying to mislead automatic decision-support systems. Actually, the so-called SEO (Search Engine Optimization) could properly be renamed GAD (Google Algorithm Deception) to explain more clearly what it is intended to do.
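The second problem can be sketched with a deliberately naive toy (this is not Google's algorithm, just a minimal illustration of the principle): once it is known that a system scores pages by keyword frequency, the score can be gamed simply by repeating the keyword.

```python
# Toy sketch of algorithm deception: a naive ranker that scores pages by
# keyword frequency can be fooled by keyword stuffing.
def naive_score(page_text: str, query: str) -> int:
    # Count occurrences of the query word; more occurrences = "more relevant".
    return page_text.lower().split().count(query.lower())

honest_page = "light aviation news and analysis for pilots"
stuffed_page = "aviation aviation aviation buy cheap aviation deals aviation"

print(naive_score(honest_page, "aviation"))   # 1
print(naive_score(stuffed_page, "aviation"))  # 5
```

The stuffed page wins not because it is more relevant, but because the measurement itself has become the target.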
Perhaps, by now, Big Data is less prone to the second problem than anyone performing Web analytics. The Web has become the battlefield for a quiet fight:
On one side, those trying to get better positions for themselves and for the positive news about them. These are also the ones who try to bury negative news by pushing out positive stories and repeating them, to make sure the bad ones remain hidden in the search results.
On the other side, we have the masters of measurement. They try to build magic algorithms able to defeat the tricks of the first group, unless the first group decides to pay for their services.
Big Data has an advantage over Web data: if a company has its own data sources, they can be more reliable and more expensive to deceive, and any attempt would be fairly easy to spot. Even so, this is not new: during World War II, knowing how successful a bombing raid had been was not a matter of reading German newspapers or listening to German radio stations.
The practice known as content analysis used indirect indicators, like information about funerals or burials, which could be informative if and only if the enemy did not know that these data were being used to extract information. In the same context, before D-Day, some heavily defended positions equipped with rubber tanks tried to fool reconnaissance planes about the place where the invasion would start. The practice survived for a long time; it was used even in the Gulf War, adding to the rubber tanks heat sources intended to deceive infrared detectors, which would get a picture similar to the one produced by running engines.
Deceiving Big Data will be harder than deceiving Internet data but, once it is known who is using specific data and what they are doing with them, there will always be a way. An easy example: inflation indicators. A government can decide to change the weights of the different variables, or to change government-controlled prices, to get a favorable picture. In the same way, if Big Data is used to give information to external parties, we would not need someone from outside trying to deceive the system; that could be done from inside.
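The inflation example reduces to simple arithmetic. A minimal sketch with hypothetical categories and numbers: the same underlying price changes yield very different "headline inflation" depending only on the weights chosen.

```python
# Hypothetical price changes per category (8%, 12%, 0%); the category names
# and all figures are invented for illustration, not real statistics.
price_changes = {"food": 0.08, "energy": 0.12, "regulated_rent": 0.00}

def weighted_inflation(weights):
    # Headline inflation as the weighted average of category price changes.
    return sum(weights[k] * price_changes[k] for k in price_changes)

honest_weights = {"food": 0.40, "energy": 0.30, "regulated_rent": 0.30}
# Shift weight towards the government-controlled (frozen) price:
tweaked_weights = {"food": 0.20, "energy": 0.10, "regulated_rent": 0.70}

print(round(weighted_inflation(honest_weights), 3))   # 0.068 -> 6.8%
print(round(weighted_inflation(tweaked_weights), 3))  # 0.028 -> 2.8%
```

No single number is falsified; the deception lives entirely in the choice of weights, which is exactly why it is hard to detect from the published figure alone.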
Anyway, the big problem is the first point: data without a context are worthless… and the context can move faster than any algorithm designed to give meaning to the data. Many surprising outcomes have happened in places where all the information was available, but that information was correctly read only after a major disaster. For instance, the emergence of new political parties could be seen in the data but, if the major players decided to dismiss them, it came as a surprise to them, even though the data were available. The problem was in the decision about what deserved to be analyzed and how, not in the data themselves.
Other times, the problem comes from fast changes in the context that are not included in the variables being analyzed. In the case of Spain, we can point to the changes that 11M, and how it was managed by the different players, meant for the election held three days later. In another election, everybody had a clear idea about who was going to get a position that required an alliance. Good reasons advised an agreement, and the data showed that everybody was sure the agreement was coming… but it was not. One of the players was so sure that things were already settled that it tried to impose conditions the other players found unacceptable. Consequence: the desired position went into the hands of a third player. Very recently, two people, both considered rising stars, may have spoiled their chances in minor incidents.
In short, we can have a huge amount of data, but we cannot analyze all of it, only the part considered relevant. In doing that, no algorithm and no amount of data is a good replacement for analysis performed by human experts. An algorithm or automatic system can be fooled, even by another automatic system designed for that purpose; context analysis can miss important variables that have been misjudged; and sudden changes in the context cannot be anticipated by any automatic system.
Big Data can be helpful if used rationally. Otherwise, it will become another fad, or worse: it could become a standard, and nobody would dare to decide against a machine with a lot of data and an algorithm, even when they are wrong.
Any motivation expert, from time to time, devotes part of his time to throwing stones at Frederick W. Taylor. From our present standpoint, there seem to be good reasons for the stoning: a strict split between planning and performing goes against any idea of human beings as something more than faulty mechanisms.
However, if we try to adopt the perspective Taylor had a century ago, things change: Taylor made unqualified workers able to manufacture complex products, products far beyond the understanding of those manufacturing them.
From that point of view, we could say that Taylor and his scientific organization of work meant a clear advance, and that Taylor cannot be dismissed with a high-level theoretical approach taken out of context.
Many things have happened since Taylor that could explain such different approaches. The education of the average worker, at least in advanced societies, has grown in an amazing way. The strict division between design and performance could be plainly justified in Taylor's time, but it could be nonsense right now.
Technology, especially information technology, did not merely advance; we could say it was born during the second half of the past century, well after Taylor. Advances have been so fast that it is hard to find a fixed point or a context from which to evaluate their contribution: when something evolves so fast, it modifies the initial context, and that removes the reference point required to evaluate its real value.
At the risk of being simplistic, we could say that technology gives us "If…Then" solutions. As the power of technology increases, the situations that can be confronted with an "If…Then" solution become more and more complex. Some time ago, I received a splendid parody of a call center that shows clearly what can happen when people work only with "If…Then" recipes coming, in this case, from a screen:
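The "If…Then" pattern itself can be sketched in a few lines (a hypothetical script table, not any real call-center system): the knowledge lives in the table, not in the person reading the screen, so anything outside the table gets no answer at all.

```python
# Minimal sketch of an "If...Then" script: the operator only returns
# what the screen shows for a recognized situation.
SCRIPT = {
    "phone does not start": "Hold the power button for ten seconds.",
    "no network signal": "Restart the phone and check airplane mode.",
}

def call_center_reply(problem: str) -> str:
    # Any situation outside the script falls through to a dead end.
    return SCRIPT.get(problem.lower(), "I am sorry, I cannot help with that.")

print(call_center_reply("No network signal"))
print(call_center_reply("smoke is coming out of the charger"))  # outside the script
```

The second call is exactly the situation the parody mocks: the unforeseen case where the recipe offers nothing and the operator, by design, has nothing else.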
Technology evolution again puts the worker, now with an education level far superior to the one available in Taylor's age, in the role of performer of routines and instructions. We could ask why such an old model is still used, and we can find some answers:
- Economics: less qualified people using technology can perform more complex tasks. That means savings in training costs and makes turnover cheaper, since people are easier to replace.
- Knowledge ownership: people have brains that can store knowledge. Regretfully, from the perspective of a company, they also have feet that can carry those brains elsewhere. In other words, knowledge stored in people is not owned by companies and, hence, companies may prefer storing knowledge in processes and in the Information Systems managing them.
- Functionality: people make more mistakes, especially in those issues that are hard to turn into routines and that require going beyond stored knowledge.
These points are true but, when things are seen that way, one thing is clear: the relation between a company and the people working there is strictly economic. Arie de Geus, in The Living Company, said that the relation between a person and a company is economic, but that considering it ONLY economic is a big mistake.
Actually, using the If…Then model as a way to make people expendable can guarantee a more comfortable present… at the price of mortgaging the future. Let's see why:
- If…Then recipes are supplied by a small number of vendors working in every market and, of course, serving clients who compete with one another. Once the human factor is reduced to the minimum… where is the difference among companies sharing the same Information Systems model going to come from?
- If people are given strictly operative knowledge… how can that knowledge advance? Companies outsource their ability to create new knowledge which, again, remains in the hands of their Information Systems suppliers and their ability to store more "If…Then" solutions.
- What is the real capacity of the organization to manage unforeseen contingencies, if they have not been anticipated in the system design or, even worse, contingencies arising from the growing complexity of the system itself?
This is the overview. Taylorism without Taylor is much worse than the original model, since it is no longer justified by the context. Companies perform better and better at things they already knew how to manage and, at the same time, find it harder and harder to improve at things that were previously poorly performed. People, under this model, cannot work as an emergency resource: to do so, they need knowledge far beyond the operative level and the capacity to operate without being tightly constrained by the system. Very often they lack both.
Jens Rasmussen, an expert in organization and safety, gave a golden rule that, regretfully, is not met in many places: the operator must be able to run cognitively the program that the system is performing. The features of present Information Systems would allow us to work in sub-optimized environments: instead of an internal logic that only the designer can understand (and not always), systems built to honor Rasmussen's rule would be very different.
The rationale about training and turnover costs would remain, but the advantages of setting it aside are too important to dismiss. De Geus's observation is real and, furthermore, it has a very serious impact on how our organizations are going to look in the near future.
Someone looks at my LinkedIn profile and, since I am an Open Networker, gets in touch and sends me a message: in it, he tells me about the supposed establishment of a light aviation company in Spain and the need for someone to advise them, at the start-up stage, in the area of Human Resources.
Surprising. I ask for details, such as whether it is light or ultralight aviation, in which part of Spain they want to set up, and a few more initial facts; he does not answer them and asks for a Skype contact. No problem there either and, after giving him my username, I find that someone requests my contact with a very professional photograph of a Saxon-looking character in a tie.
When we have the conversation, I get two surprises. The first is that the accent did not match the photograph: although his English was basically correct, he had a strong African accent, without my being able to pin it down much further.
The second surprise was that they no longer wanted advice on Human Resources but "accountants", because the company was not going to be in Spain after all; it would only have commercial activity there, and the "accountant" would be in charge of collecting payments. Starting to sound familiar, right?
It seems social engineering knows no limits when it comes to hunting for the unwary through phishing.