A few days ago, I learned about a big company using AI in its recruiting processes. Not a big deal, since it has become common practice; if anything can be criticized, it is the fact that AI commits the same error as human recruiters: the false-negative danger, a danger that goes far beyond recruiting.
The system can add more and more filters and learn from them, but what about the candidates who were eliminated despite the fact that they could have been successful? Usually, we will never hear about them and, hence, we cannot learn from our errors in rejecting good candidates. We know of cases like Clark Gable or Harrison Ford, who were rejected by Hollywood, but usually following the rejected candidates is not feasible. The system, and the human recruiter, learn to improve the filters, but it is not possible to learn from the wrong rejections.
That is a luxury that companies can afford as long as it affects jobs where the balance between candidate supply and job demand is clearly in their favor. However, this error is much more serious if the same practice is applied to profiles in high demand. Eliminating a good candidate is, in this case, much more expensive, but neither the AI system nor the human recruiter has the opportunity to learn from this type of error.
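This feedback asymmetry can be sketched in a few lines of Python. The following is a toy simulation of my own, with made-up acceptance rates, not data from any real recruiter: the point is simply that outcomes exist only for accepted candidates, so the wrongly rejected ones never enter any later training set.

```python
import random

random.seed(1)

# Hypothetical population: 1000 candidates, roughly half genuinely good.
candidates = [{"good": random.random() < 0.5} for _ in range(1000)]

# An imperfect filter: it accepts 70% of good candidates and 20% of bad ones.
for c in candidates:
    accept_prob = 0.7 if c["good"] else 0.2
    c["accepted"] = random.random() < accept_prob

# Feedback exists only for accepted candidates: we can watch how hires
# perform, but the rejected good candidates (the false negatives) are
# invisible, so no later version of the filter can learn from them.
observed = [c for c in candidates if c["accepted"]]
false_negatives = [c for c in candidates if c["good"] and not c["accepted"]]

print(f"outcomes we can observe: {len(observed)}")
print(f"invisible false negatives: {len(false_negatives)}")
```

Whatever the exact numbers, the false-negative bucket is never empty, and nothing in the pipeline ever sees it.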
Only companies like Google, Amazon or Facebook, with an impressive amount of information about many of us, could afford to learn from the failures of the system. It is true that, as a common practice, many recruiters "google" the names of candidates to get more information, but these companies keep much more information about us than what is offered when someone searches for us on Google.
Then, since these companies have a capability that cannot be matched by other companies, including big corporations, we can expect them to be hired in the near future to perform recruiting, evaluate insurance or credit applications, assess potential business partners and many other tasks.
These companies, having so much information about us, can share Artificial Intelligence practices with their potential clients but, at the same time, they keep a treasure of information that their potential clients cannot have.
Of course, they have this information because we voluntarily give it in exchange for some advantages in our daily life. At the end of the day, if we are not involved in illegitimate or dangerous activities and have no intention of getting involved, is it so important that Google, Amazon, Facebook or whoever knows what we do and even listens to us through their "personal assistants"? Perhaps it is:
The list of companies sharing this feature -almost unlimited access to personal information- is very short. Hence, any outcome from them could affect us in many activities, since any company trying to get an advantage from these data is not going to find many potential suppliers.
Now, suppose a dark algorithm -one whose workings nobody knows exactly- decides that you are not a good option for a job, insurance, credit... whatever. Since the list of information suppliers is very short, you will be rejected again and again, even though nobody will be able to give you a clear explanation of the reasons for the rejection. The algorithm could have picked an irrelevant trait that happens to have a high correlation with a potential problem and, hence, you would be rejected.
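This failure mode can be made concrete with a minimal Python sketch. Everything here is invented for illustration -the trait, the rates and the "suppliers"- but it shows how a naive rule latches onto a spurious correlation and how a short supplier list turns one wrong rule into universal rejection:

```python
# Toy training sample: an irrelevant trait happens to co-occur with the
# "problem" label in this data, so a naive filter treats it as predictive.
training = [(1, True)] * 40 + [(1, False)] * 10 + [(0, True)] * 10 + [(0, False)] * 40

def naive_filter(trait: int) -> bool:
    """Reject (return True) when the trait's observed problem rate exceeds 50%."""
    outcomes = [problem for t, problem in training if t == trait]
    return sum(outcomes) / len(outcomes) > 0.5

# One applicant carrying the irrelevant trait, evaluated by several
# "suppliers" that all buy the same data and apply the same rule:
# the rejection is universal, and none of them can explain the real reason.
decisions = {service: naive_filter(1) for service in ("job", "insurance", "credit")}
print(decisions)  # {'job': True, 'insurance': True, 'credit': True}
```

The trait decided nothing causally; the correlation in one sample did, and every door using that sample closes at once.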
Should this danger be disregarded? Companies like Amazon have had their own problems with their recruiting systems when, unintendedly, they introduced racial and gender biases. There is no valid reason to suppose that this cannot happen again with much more data, affecting many more activities.
Let me share a recent personal experience related to this type of error: recently, I was blacklisted by an information-security supplier. The apparent reason was having a page that opened automatically every time I opened Chrome, without any further action. That generated repeated accesses; the website read it as an attempt at spying and they blacklisted me. The problem was that this information-security supplier had many clients: banks, transportation companies... and even my own ISP. All of them denied me access to their websites based on that wrong information.
This is, of course, a minor sample of what can happen if Artificial Intelligence is applied over a vast amount of data by the few companies that have access to it and, for reasons that nobody could explain, we must confront rejection in many activities; a rejection that would be at the same time universal and unexplained.
Perhaps the future was not defined in "1984" by Orwell but in "The Trial" by Kafka.
The first condition for online commerce to work is trust. If you cannot trust a marketplace, that marketplace will die.
A few weeks ago, I bought a pair of bone-conduction earphones through AliExpress. One side did not work from the beginning. I simply asked for a replacement with one that worked.
The supplier asked for proof of the problem. However, it is quite difficult to show in a recording that one side of the headphones is working while the other is not. I sent two videos, one to the supplier and the other to AliExpress.
In the meantime, the supplier asked me to withdraw the claim, alleging "personal reasons" instead of the real problem, that is, a defective product.
Finally, AliExpress considered that I had not proven the problem and decided that I should receive neither a reimbursement nor a new product.
After that experience, I can only say "Goodbye, Alibaba". I won't buy anything from you anymore and I will invite my relatives to do the same.
Anyone in touch with dynamic fields can observe this phenomenon: things move faster than the rules intended to control them. Hence, if the capacity to enforce them is very strong, old rules can stop the advancement. By the same token, if that capacity is weak, rules are simply ignored and the world evolves along different paths.
The same fact can be observed in many different fields:
Three months ago, an article was titled "POR QUÉ ALBERT EINSTEIN NO PODRÍA SER PROFESOR EN ESPAÑA" (Why Albert Einstein could not be a professor in Spain) and, basically, the reason lay in a bureaucratic model tailored for the "average" teacher. This average teacher, right after graduating, starts the doctorate, entering a career path that will end with retirement from the University. External experience is not required and, very often, is not welcome.
His age, his publications and the length of his doctoral dissertation (17 pages) could have made it impossible for Einstein to teach in Spain. In some environments, the "war for talent" means fighting talent wherever it can be found.
If we turn to specific and fast-evolving fields, things can be worse:
Cybersecurity is a good example. There is a clear shortage of professionals in the field, and it is worsening. The slowness in approving an official curriculum means that, by the time the curriculum is approved, it is already out of date. Hence, a diploma is not worth much and, instead, certification agencies are taking its place, enforcing up-to-date knowledge both for getting and for keeping the certification.
Financial regulators? Companies are faster than regulators, and a single practice can appear as a savings plan, as an insurance product or as many other options. If we go to derivatives markets, speed introduces new parameters and practices, like high-frequency trading, that are hard to follow.
What about cryptocurrencies? They are sidestepping control by governments and, still worse, they can break one of the easiest ways for states to get funds. Governments would like to break them and, in a few weeks, the EU will have a new rule to "protect privacy" that could affect the blockchain process, which is key for the security of cryptocurrencies and... many banking operations.
Aviation? The best-selling airplane in aviation history -the Boeing 737- was designed in 1964 and started to fly in 1968. The latest versions of this plane lack some features that could be judged basic modifications, because the certification process is so long and expensive (and getting longer and more expensive) that Boeing prefers to stay attached to features designed more than 50 years ago.
In any of these fields, or many others that could be mentioned, the rules are not fulfilling their intended function, that is, to preserve functionality and, in the fields where it is required, safety as a part of that functionality. Whether the rule is ignored or becomes a heavy load to be dragged through development, it does not work.
We can laugh at the old "1865 Locomotive Act", with delicious rules such as this: "The most draconian restrictions and speed limits were imposed by the 1865 act (the 'Red Flag Act'), which required all road locomotives, which included automobiles, to travel at a maximum of 4 mph (6.4 km/h) in the country and 2 mph (3.2 km/h) in the city, as well as requiring a man carrying a red flag to walk in front of road vehicles hauling multiple wagons" (Wikipedia).
However, things were evolving in 1865 far more slowly than now. Non-functional rules like that could be easily identified and removed before they became a serious problem. That does not happen anymore. We try to get more efficient organizations and more efficient technology, but the architecture of the rules should be re-engineered too.
Perhaps the next revolution is not technological, even though it may be fueled by technology. It could be in the Law: the governing rules -not the specific rules but the process to create, modify, change or cancel rules- should be modified. Rules valid for a world already gone are as useful as a weather forecast for the past week.
Useless diplomas, lost talent, uncontrolled or under-controlled new activities, and product designs where adaptation to the rules is a major part of the development cost and time all point to a single fact: the rules governing the world are unable to keep pace with the world itself.
And one of the big plane manufacturers, Airbus, is celebrating its birthday.
Years ago, three major players shared the market but, once McDonnell Douglas disappeared, big planes were made by one of the remaining two. Of course, we should not forget Antonov, whose An-225 is still the biggest plane in the world, some huge Tupolevs and the Lockheed TriStar, but the first two never went far beyond their home markets, while the Lockheed TriStar could be seen as a failed experiment by its manufacturer.
Airbus emphasizes its milestones in the timeline but, behind them, there is a flow marked by efficiency through the use of I.T.
Airbus was the first civilian plane manufacturer to have a big plane with a cockpit for only two people (the A310), and Airbus was the first civilian plane manufacturer to widely introduce fly-by-wire technology (the only previous exception was the Concorde). Finally, Airbus introduced the commonality concept, allowing pilots of one model to switch very fast to a different model while keeping the rating for both.
Boeing had a more conservative position: the B757 and B767 appeared with only two people in the cockpit after being redesigned to compete with the A310. Despite Boeing's greater experience in military aviation and, hence, in fly-by-wire technology, Boeing deferred for a long time the decision to include it in civilian planes. In the end, where Boeing lost the efficiency battle was when it appeared with a portfolio whose products were mainly unrelated while Airbus was immersed in its commonality model.
The only point where Boeing arrived first was the use of twin-engine planes for transoceanic flights through the ETOPS policy. Paradoxically, the ones in the worst position were the two American companies that were manufacturing three-engine planes, McDonnell Douglas and Lockheed, rather than Airbus. That was the exception because, usually, Boeing was behind in the efficiency field.
Probably -and this is my personal bet- they will try to build a family starting with the B787. This plane should be for Boeing the equivalent of the A320, that is, the starter of a new generation sharing many features.
As proof of that more conservative position, Boeing kept some feedbacks that Airbus simply removed, for instance, the feel of the flight controls or the feedback from the autopilot to the throttle levers. Nobody questioned whether this should be done, and it was offered as a commercial advantage instead of a safety feature, since it was not compulsory... actually, the differences between the two manufacturers -accepted by the regulators as features independent of safety- have been at the root of some events.
Small-size aviation is much more crowded and, right now, we have two newcomers from Russia and China (Sukhoi and Comac), including the possibility of an agreement between them to fight for the big planes market.
Anyway, that is still in the future. Big aviation is still a game of two contenders, and every single step in that game has been driven by efficiency. Some of us would like understandability -in normal and abnormal conditions- to be among the priorities in future designs, whether they come from the present contenders or from any newcomer.
Published in my Linkedin profile
Some books can be considered a privilege, since they are an opportunity to look into an interesting mind. In this case, it is the mind of someone who was professionally involved in many of the air accidents considered HF milestones.
The author, Alan Diehl, has worked with the NTSB, the FAA and the U.S. Air Force. Everywhere, he tried to show that Human Factors had something important to say in the investigations. Actually, I borrowed for my first sentence something that he repeats again and again: the idea of trying to get into the mind of the pilot to know why a decision was made.
Probably, we should establish a working hypothesis about the people involved in an accident: they were not dumb, they were not crazy and they were not trying to kill themselves. It would hold true almost always.
Very often, as the author shows, major design and organizational flaws lie beneath a bad decision leading to an accident. He suffered some of these organizational flaws in his own career, being vetoed in places where he challenged the status quo.
One of the key cases, representing a turning point for his activity but, regretfully, not for aviation safety in military environments, happened in the Gulf War: two F-15 planes shot down two American helicopters. Before that, he had tried to implement CRM principles in the U.S. Air Force. It was rejected by a high-ranking officer and, after the accident, they tried to avoid any mention of CRM issues.
Diehl suffered the consequences of disobeying the orders about it, as well as of whistle-blowing some bad safety-related practices in the Air Force. Even though those practices represented a big death toll, that did not bring a change.
As an interesting tip, almost at the end of the book, there is a short analysis of the different reporting systems, how they were created and the relationships among them. Even though it does not pretend to be an important part of the book, it can be very clarifying for many people who might get lost in the soup of acronyms.
However, the main and most important piece of the book is CRM-related: Diehl fought hard to get CRM established after a very well-known accident. It involved a United DC-8 in Portland, which crashed because it ran out of fuel while the pilot was worried about the landing gear. That made him delay the landing beyond any reasonable expectation.
It is true that the Portland case was important, just as the Los Rodeos and Staines cases were also very important as major events used as inputs for the definition of the CRM practice. However, and this is a personal opinion, something could be missing regarding CRM: when Diehl had problems with the Air Force, he defended CRM from a functional point of view. His point, in short, was that we cannot accept the death toll that its absence was provoking but... is the absence of CRM the real problem, or does it have much deeper roots?
CRM principles can be hard to apply in an environment where power distance is very high. Once there, you can decide whether a plane is a kind of bubble where this high power distance does not exist or whether there is no such bubble and, as someone told me, "as a pilot I'm in charge of the flight, but the real fact is that a plane is an extension of the barracks and the highest-ranking officer inside the plane is the real captain." Nothing to be surprised about if we look at the facts behind the air accident that beheaded the State in Poland. "Suggestions" by the Air Force chief are hard for a military pilot to ignore.
Diehl points out how, in many situations, pilots seem inclined to gamble with their lives instead of keeping to safety principles. Again, he is right, but it can be easily explained: suppose that the pilot of the flight that crashed with the Polish Government onboard had rejected the "suggestion" and diverted to the alternate airport. Nothing would have happened except... the outcome of the other option is not visible, and everyone would have found reasons to explain why the pilot should have landed where he tried to. His career would simply have been ruined, because nobody would admit the real danger of the other option.
Once you decide, it is impossible to know the outcome of the alternate decision, and that makes pressure especially hard to resist. So, even if restricted to the cockpit or a full plane, CRM principles can be hard to apply in some organizations. Furthermore, as Diehl suggests in the book, you can extend CRM concepts well beyond the cockpit, trying to turn it into a change management program.
CRM, in civilian and military organizations, means a way of working, but we can find incompatibilities between CRM principles and the principles of the organizational culture. Management has to deal with these contradictions but, if the organizational culture is very strong, it will prevail and management will not deal with them. They will simply decide for the status quo, ignoring any other option.
Would CRM have saved the many lives lost because of its absence? Perhaps not. There is a paradox in approaches like CRM or, more recently, SMS: they work fine in the places where they are least needed, and they do not work in the places where their implementation should be a matter of urgency. I am not trying to play with words but to establish a single fact, and I would like to do so with an example:
Qantas, the Australian airline, has a highly regarded CRM program, and many people, inside and outside that company, would agree that CRM principles meant a real safety improvement for the company. Nothing to object to, but let me show it in a different light:
Suppose for a moment that someone decides to remove all the CRM programs in the world because of... whatever. Once done, we can ask which companies would be the most affected. Would Qantas be among them? Hard to answer, but probably not. Why?
CRM principles work precisely in the places where these principles were already working in the background. There, CRM brings order and procedures to a previous situation that we could call "CRM without a CRM program", for instance, a low power distance where the subordinate is willing to voice any safety concern. In this case, the improvement is clear. If we suddenly suppressed the activity, the culture would keep these principles alive, because they fitted with that culture from the very first moment and before.
What happens when CRM principles run against the organizational culture? Let me put it in short: make-up. They will accept CRM just as they accept SMS, since both are mandatory, but everyone inside the organization will know the truth. Will CRM save lives in these organizations, even if they are forced to implement it?
A recent event can answer that: the Asiana accident in San Francisco happened because a first officer did not dare to tell his captain that he was unable to land the plane manually (of course, as usual, many more factors were present, but this was one of them, and extremely important).
Diehl clearly advocates for CRM, and I believe he is right, with statistical information that speaks of a safety improvement. My point is that the improvement is not homogeneous: it happens mainly in places that were already willing to accept CRM principles and, in a non-structured way, were already working with them.
CRM by itself does not have the power to change the organizational culture in places that reject its principles, and there the approach should be different. A very good old book, "The Critical Path to Corporate Renewal" by Beer, Eisenstat and Spector, explains clearly why change programs do not work, and its authors show a different way to achieve change in organizations that reject it.
Anyone trying to make a real change should flee from change programs, even when we agree with their goals, because one-size-fits-all does not work. Some principles, like the ones behind CRM or SMS, are valid from a safety point of view but, even though everyone will pay lip service to the goals, many organizations will not accept the changes required to get there. That is still a hard challenge to be met.
Published originally in my Linkedin profile
I ATTACH THE LETTER RECEIVED FROM RENFE, JUST IN CASE THERE IS A REMOTE CHANCE THAT SOMEONE FEELS ASHAMED OF THEMSELVES. THE FACTS ARE BELOW:
Last Wednesday I had to make a trip that involved a train transfer at the Madrid-Chamartín station.
RENFE boasts in its advertising about the punctuality of its high-speed trains and, in general, they do arrive on time, but on this occasion it was not so. It can happen and, as long as it happens infrequently, it is a contingency that the traveler must accept. A different matter is how RENFE's inability to respond to that simple contingency not only failed to correct the resulting problems but aggravated them:
The arrival time of the high-speed train was scheduled for 7:50. If the train was punctual, I had enough time to catch the other train leaving at 8:00, especially since, as every frequent traveler knows, RENFE keeps a small cushion of a few minutes and the real arrival time is usually four or five minutes earlier than the announced one. Even if that had not been the case, there was time, and the next train left an hour and a quarter later. The choice, therefore, was clear.
When the train began to slow to a stop for unexplained reasons, I saw that I might not catch the other train but, once it resumed its normal speed, it became evident that I could make it, although with very little time to spare... and here the fun part began:
The first thing I did was look for the train conductor to ask about the departure track of my other train. That way, instead of going through the concourse and losing two or three vital minutes, I could take the underpass beneath the tracks and make it. The conductor was conveniently hidden in one of the driving cabs and there was no way to make contact with him.
I tried another option: RENFE's 902 phone line which, to add insult to injury, they answer as "RENFE contigo" ("RENFE with you") and where, after some difficulty getting through, they said they did not have that kind of information.
Although I was the first one off the train, there was no panel on the platform showing the traffic on the other tracks and, although I could see my train on another track, I did not know which track it was on. I went into the underpass, but there was no panel or information there either about which train was parked right above, so I came out onto the wrong platform. I went back into the underpass and reached the right platform just in time to watch the train leave in front of me.
The story does not end here. When I went to the "Customer Aggression" area -my name for their Customer Care desk- to have my ticket changed, a person who looked more like a funeral home employee than someone serving living customers informed me of two points:
- Due to the fare type I had, even though missing the train was 100% attributable to RENFE, they could not change my ticket; I had to buy a new one.
- This would not have happened if I had left, between the arrival of the first train and the departure of the second, an interval of... ONE HOUR!!! I suppose that when RENFE advertises the punctuality of its trains, it omits small details like this one.
This meant, besides losing the money for one ticket due to RENFE's incompetence, going to the ticket office, where there was one person at the window and a bunch of other employees chatting two or three meters away from those serving the long-suffering travelers waiting in line.
Although I filed the corresponding complaint, whatever its result, RENFE's response capability and its absolute inability to handle simple contingencies were fully exposed.
Due to Vodafone's takeover of ONO, I have become, without intending to, one of its customers. The idea did not exactly thrill me, because I had already had a terrible experience with that company: http://factorhumano.org/2010/04/12/servicios-inservibles-vodafone-adsl/ but things do not always have to go wrong. Well, if Vodafone is involved, apparently they do.
First they notified me that they had to replace my ONO mobile SIM cards with Vodafone ones. No problem: the new cards arrived, they went into the phones and they worked from the first day just as before, neither better nor worse.
About two weeks later, a letter arrived notifying me again about the change. I thought it was an error but, when I contacted the number they indicated, they clarified that it was not an error because there was a third card: the one in the ONO 3G modem, which had to be changed too. They sent me the new card, notified me of its activation, I installed it... and the modem did not work.
I called the "Customer Aggression" Service -my name for their Customer Care line- for the first time, and they apologized because they had not told me that the old modem was not compatible with the new Vodafone card and, therefore, they had to send me a new one, which they expected to arrive in two or three days, that is, during the first week of July.
As the delays mounted, I kept calling, always receiving the same answer: it has been sent and should be about to arrive. Meanwhile, I had to make a trip on which I should have used it, but I did not have it.
I tried the Twitter channel and the results were as follows:
They referred me to the same place I had already called at least ten times, where I rarely manage to speak with an operator although, truth be told, it makes no difference, because the results are identical, that is, none.
Of course, the web option does not work either and, besides, they missed a small detail: the menu for reaching the Customer Aggression Department starts by asking for the phone number the query refers to. Do you know anyone who knows their modem's number? I certainly don't.
At the time of writing this post, I have been waiting for 27 days, and I am ready to remind Vodafone daily and publicly of its non-compliance, its uselessness and its botched jobs.
UPDATE OF 28-7
I use the complaints channel on the web. I describe what is happening and they ask for a phone number. A few minutes later they call me. One person asks me for more information and transfers me somewhere else where, theoretically, they are going to handle the request. In this other place, as soon as they pick up the call, they transfer me to the infernal menu of the Customer Aggression Service, where I cannot find any valid option for my problem. After two attempts, it hangs up on me because they have "nothing pending" on record, and we are back to square one.
UPDATE OF 29-7 AND -I HOPE- THE END
After fighting with the complaints menu on the web -it is as badly designed as the Customer Aggression Department's- I receive a call and they tell me that they have changed the type of device and that it is no longer a mere pendrive-style dongle. They transfer me to another department where the delivery of the new device is supposedly going to be processed and, without letting me breathe, this new department transfers me to the Customer Aggression Department and its infernal, useless menu. I have no choice but to give up because the menu kicks me out but, when I go back to the web, I see that the initial complaint... appears as resolved!!!
I file another complaint, they call me again and the mystery is finally revealed: with ONO I had a data line with no monthly fee, on which I paid per data usage when I used it. Vodafone does not have that kind of line; it has a monthly fee for the data line, by the way not at all cheap, so the only logical option is to cancel the data line. Apart from the fact that not honoring the previous contract conditions seems like a dirty trick to me... couldn't they have said so from the beginning?
Amalberti explains safety-related concepts very clearly and, whether or not the reader agrees with 100% of the contents, the book is worth reading and discussing. He goes against some sacred cows in the safety field, and his book should be analyzed very carefully. In particular, these three points deserve special analysis:
• People learn by doing, not by observing. Asking for full Situational Awareness before doing anything could lead to serious accidents while the operator tries to acquire that full Situational Awareness.
• There are different safety models. Many markets and companies try to imitate ultra-safe models like those of Aviation and Nuclear Energy when, actually, these models would not work in other activities that are more expert-based than procedure-based.
• Trying to train someone for every single exceptional situation is not the safest option. People may try to explore the limits instead of remaining in the safe envelope.
People learn by doing instead of observing. True, and perhaps that is one of the reasons why people are so bad at monitoring, while some automation designs still insist precisely on that. However, Amalberti reaches a conclusion related to Situational Awareness that, in my opinion, could be wrong: for Amalberti, we should not ask for full Situational Awareness before trying a solution, because we could get paralyzed in critical situations with serious time constraints. Explained like that, it is true, and that would be the phenomenon known as paralysis by analysis, but something is missing:
Situational Awareness cannot be understood as a condition to be met in critical situations but as a flowing process. In high-risk environments, design should guarantee that flow at a level such that, once an emergency appears, getting a full picture of the situation is easy. If we put this single principle together with Amalberti's first one, that is, that people learn by doing instead of observing, we can reach different conclusions:
1. The top level of automation should be used only in exceptional situations, using as defaults other levels where human operators can learn and develop Situational Awareness by doing instead of observing.
2. Information Technology used in critical systems should be under-optimized, that is, instead of using the most efficient design in terms of technology use, the alternative should be the most efficient design in terms of keeping Situational Awareness. Some new planes keep Intel processors that left the market many years ago -for instance, the B777 uses the Intel 486- and nothing happens. Why, then, should we try to extract all the juice from every line of code, building systems impossible for their users to understand?
Different safety models with an out-of-context imitation of ultra-safe systems as Aviation or Nuclear Plants. This is another excellent point but, again, something could be missing: Evolution. I have to confess on this point that my Ph.D. thesis was precisely trying to do what Amalberti rejects, that is, applying the Air Safety model to Business Management field. Some years later, Ashgate published it under the name Improving Air Safety through Organizational Learning but “forgetting” the chapter where learning from Air Safety was applied to Business Management.
The first thing to be said is that, in general terms, Amalberti is right. We cannot bring –unless if we want it to work- all the procedural weight of a field like Air Safety to many other fields like, for instance, Surgery, where the individual can be much more important than the operating rules. However, the issue that could be lost here is Organizational Evolution. Some fields have evolved through ultra-safe models and they did so because of their own constraints without anyone trying to imitate an external model. Different activities, while looking for efficiency improvement, evolved towards tightly coupled organizations as Charles Perrow called them and that produced an unintended effect: Errors in efficient organizations are also efficient because they spread their effects by using the same organizational channels that normal operation. Otherwise, how could we explain cases like Baring Brothers where an unfaithful employee was enough to take the whole Bank down?
Summarizing: it is true that we should not make an out-of-context imitation of ultra-safe models but, at the same time, we should analyze whether the field whose safety we are studying should evolve toward an ultra-safe model because it has already become a tightly coupled organization.
Trying to train someone for every single exceptional situation is not the safest option: Again, we can agree in general terms. For instance, we know that, as part of their job, pilots practice in simulators events that are not expected to appear in their whole professional life. Perhaps asking them to practice recovery from upside-down positions or from spins could be an invitation to get closer to those situations, since they would feel able to recover. The time of the "hero pilot" ended long ago but…
We have known of past cases of wrong risk assessment, where the supposedly low-probability event that required no training because it was never supposed to happen…happened. A well-known example is United 232, where three supposedly independent hydraulic systems failed at the same time, showing that they were not as independent as claimed. The pilot had practiced the relevant skills beforehand in a flight simulator, and that converted a full crash into a crash landing, substantially decreasing the number of casualties. A similar case is the Hudson River landing, where a double engine failure was supposed to happen only above 20,000 feet…and the procedures developed for that scenario made the pilots lose precious time when the full loss of power happened far below that altitude.
Even so, beyond the events that reveal a wrong risk assessment and invite us to take Amalberti's idea with care (even though he clearly raises an important point there), there is a different kind of problem that has already been behind major accidents: faulty sensors feeding wrong information to computers, and plane occupants getting killed without the pilots having the faintest idea of what was going on. "Stamping" this kind of event with a "lack of training" label is a way of telling us something that is at once true and useless and that, by the way, contradicts Amalberti's own principle.
Daniel Dennett used a comparison that can be relevant here: the comparison between commando teams and intelligence agencies. Commando teams are expected to react to unforeseen situations, and that means redundancy in resources and a great deal of cross-training. On the other side, intelligence agencies work under the "need-to-know" principle. Should we consider it an accident that this same "need-to-know" idea has been used by some aviation manufacturers in their rating courses? Should we really limit training to foreseen events, or should we recognize that some events are very hard to foresee and that different approaches should be taken in design as well as in training?
Summarizing: this is an excellent book. Even if some of us cannot agree with every single point, all of them deserve discussion, especially since it is clear that the points raised in the book come from a strong safety-related conviction rather than from politics or convenience inside regulators, operators or manufacturers.
I would not like to finish without a big thank-you to my friend Jesús Villena, editor of this book in Spanish under the title "Construir la seguridad: Compromisos individuales y colectivos para afrontar los grandes riesgos", who brought this book to my attention.
Perhaps, at a time when Microsoft is not living its best moment, speaking about Windows could seem a paradox. However, Windows itself is a paradox of exactly the kind of knowledge that many companies are valuing right now.
Why Windows? You open your computer and find a desktop. The desktop has folders and documents, and you even have a trash bin. This environment lets you work in the old-fashioned way: we move documents from one folder to another; when a document is no longer useful, we put it in the trash bin; and everything seems perfect…as long as everything works the way it is supposed to. What happens when we get a blue screen or the computer simply does not start?
At that moment, we find that everything was false. Desktop, folders, documents…? Everything is false. Everything is part of a complex metaphor, good while things work as expected but completely useless once something fails. What kind of real knowledge does the Windows user have? An operating knowledge that can be enough in more than 90% of cases. That is fine as long as the remaining cases cannot bring unexpected and severe consequences, but we see this kind of knowledge in more and more places, including critical ones.
When the 2008 crisis started, someone said that many banks and financial institutions had been behaving like casinos. Other, more statistically seasoned people denied it, pointing out that, had they really been behaving like casinos, the situation would never have become what it was, because casinos have the probabilities in their favor. Other environments do not have this advantage, yet they behave as if they had it, with unforeseeable consequences.
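The statistical point can be made concrete with a back-of-the-envelope calculation (a hypothetical sketch; the figures are the standard European roulette odds, not anything taken from the text): the casino's edge per bet is tiny, but it always sits on the house's side, so over a large volume of bets the house profit is essentially guaranteed.

```python
# Illustrative only: why "having the probabilities in your favor" matters.
# European roulette, even-money bet on red: 18 red, 18 black, 1 zero.
P_WIN = 18 / 37
P_LOSE = 19 / 37

# Expected value per unit staked, from the player's point of view.
# Winning pays 1:1, losing costs the stake.
player_ev = P_WIN * 1 + P_LOSE * (-1)   # -1/37, about -2.7%: the house edge

def expected_house_profit(total_staked: float) -> float:
    """Expected casino profit on a given total amount staked by players."""
    return -player_ev * total_staked

print(f"Player EV per unit staked: {player_ev:.4f}")
print(f"Expected house profit on 1,000,000 staked: "
      f"{expected_house_profit(1_000_000):,.0f}")
```

A bank taking risks without such an edge is playing the other side of this table: the same arithmetic, but with the sign of the expected value reversed or simply unknown.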
Anyone can have a breakdown. Sure. But not everyone is capable of letting a mobile phone service stay down for hours, across the board, without giving any explanation or saying when it will be fixed.
If, on a Sunday afternoon, an operator's mobile phones stop working, the matter may already be serious; but if on Monday morning things remain the same, and it is not the first time that service has been suspended, one has to conclude that this operator cannot be used when the phone is a professional tool.
On top of that, they have a phone line that encourages you to use the web service instead. On the phone, after an endless series of menus with no chance of talking to anyone, the call is cut off; and on the web, after a third-world-grade performance, there is no explanation either of the breakdown or of how long it will take to fix.