
Behavioral Economics: Cuando los tecnólogos se enamoraron de la psicología

(Published in my newsletter and LinkedIn profile. English version after the Spanish text.)

No es casualidad que el primer psicólogo en obtener un premio Nobel, Daniel Kahneman, haya hecho una contribución muy apreciada por los tecnólogos de la información, especialmente los dedicados a la inteligencia artificial.

Pocas cosas más atractivas para un tecnólogo que la idea de que existen sesgos inherentes al procesamiento humano de información y que estos sesgos se producen por la necesidad de conseguir resultados rápidos, aunque contengan errores. Por añadidura, tales sesgos se deben a carencias para procesar grandes cantidades de información rápidamente. Dicho de otra forma, las personas procesan información incorrectamente porque no tienen la capacidad de un ordenador para hacerlo. Saltar desde ahí a “Si tengo la suficiente capacidad de procesamiento de información, eso no ocurrirá con una máquina” es muy tentador -aunque recuerda los viejos tiempos de Minsky y la llamada GOFAI (Good Old-Fashioned Artificial Intelligence)- y de ahí que la corriente de Behavioral Economics haya tenido gran aceptación en el mundo de la tecnología. Sin embargo, las cosas no son tan claras.

Kahneman y Tversky, principales autores de referencia dentro de esta línea, descubrieron un conjunto de sesgos y, además, los nombraron de formas bastante ingeniosas y los recopilaron en “Pensar rápido, pensar despacio”. Tras ello, fueron aplaudidos y criticados aunque tal vez no lo hayan sido por los motivos adecuados o, para precisar más, tal vez no hayan sido totalmente entendidos ni por quienes los aplauden ni por quienes los critican.

Si comenzamos por la parte crítica, se suele aducir que los sesgos se han probado en condiciones de laboratorio y que, en la vida normal, esas situaciones no se dan o simplemente los sesgos no aparecen. Es verdad; las condiciones de laboratorio fuerzan la aparición del sesgo y, en el mundo real, en el que habitualmente se opera dentro de un contexto, no aparecen o lo hacen de forma muy reducida. Tomemos como ejemplo una de las historias más populares de “Pensar rápido, pensar despacio”, la historia de “Linda, la cajera”:

Linda es una mujer de 31 años de edad, soltera, extrovertida y muy brillante. Estudió filosofía en la universidad. En su época de estudiante mostró preocupación por temas de discriminación y justicia social y participó en manifestaciones en contra del uso de la energía nuclear. De acuerdo con la descripción dada, ordene las siguientes ocho afirmaciones en orden de probabilidad de ser ciertas. Asigne el número 1 a la frase que considere con más probabilidad de ser cierta, dos a la siguiente y así sucesivamente hasta asignar 8 a la frase menos probable de ser cierta.

o Linda es una trabajadora social en el área psiquiátrica.

o Linda es agente vendedora de seguros.

o Linda es maestra en una escuela primaria.

o Linda es cajera de un banco y participa en movimientos feministas.

o Linda es miembro de una asociación que promueve la participación política de la mujer.

o Linda es cajera de un banco.

o Linda participa en movimientos feministas.

o Linda trabaja en una librería y toma clases de yoga.

Obsérvense las opciones cuarta y sexta: Obviamente, la cuarta es un subconjunto de la sexta y, sin embargo, la cuarta aparecía de forma generalizada como más probable que la sexta. Es un claro error de lógica pero calificarlo de simple error ignora un hecho elemental en las relaciones humanas: Entendemos que, cuando una persona nos aporta un dato -en este caso, el perfil que dibuja de Linda como activista social- lo hace porque entiende que dicho dato es relevante, no con el exclusivo propósito de engañar.

Más aún: A los sesgos de Kahneman y Tversky les ocurre algo parecido a la ley de la gravitación universal de Newton: Se trata de formas de procesamiento de la información que funcionan bien en su contexto y sólo fallan fuera de él; por tanto, los modelos de procesamiento subyacentes no pueden ser rechazados sino, en todo caso, asumir que tienen una validez limitada a un contexto.

Veamos qué ocurre en el extremo que aplaude la Behavioral Economics: Aun admitiendo que hay un forzamiento hacia el extremo por las condiciones experimentales, se demuestra la existencia del sesgo y, con ello, que el modelo de procesamiento humano no es universalmente aplicable, sino que sólo funciona dentro de un contexto. Retomando el ejemplo de Newton, los tecnólogos creen disponer del equivalente a una teoría de la relatividad, con un ámbito de aplicación más amplio que el modelo de Newton y con capacidad para sustituir a éste. La corriente de Behavioral Economics y sus derivaciones suministró todos los argumentos requeridos para llegar a esta conclusión.

Sin embargo, tanto críticos como partidarios parecen aceptar que el razonamiento humano es “quick-and-dirty” y el elogio o la crítica obedece a la importancia o falta de ella que se concede a ese hecho.

La pregunta no respondida ni por los que aceptan el modelo ni por los que lo rechazan es si ese modelo de procesamiento sujeto a sesgos es el único modelo posible de razonamiento humano o si hay otros distintos y no recogidos. Los propios Kahneman y Tversky trataron de responder a esta pregunta aduciendo la existencia de un sistema uno, automático, y un sistema dos, deliberativo. El primero estaría sujeto a los sesgos identificados y el segundo procesaría información de una forma parecida a como lo hace un ordenador, es decir, siguiendo las reglas estrictas de la lógica formal aunque -matizan- siempre es susceptible de ser engañado por el sistema uno.

Al proclamar la existencia de un sistema lento, racional y seguro, volvieron a dar argumentos a los tecnólogos para promover la inteligencia artificial como alternativa a la humana. No es sólo una cuestión de tecnólogos de la información; si se revisan los textos desclasificados por la CIA sobre técnicas de análisis de información, puede encontrarse una oda al procesamiento racional que evalúa múltiples opciones y asigna pesos a las variables mientras ignora la intuición y la experiencia del analista, aduciendo que pueden introducir sesgos. El procesamiento de información siguiendo los cauces estrictamente racionales está de moda pero…

¿Y si existiera un “sistema tres” e incluso un “sistema cuatro”, ajenos a los sesgos de la Behavioral Economics pero, al mismo tiempo, no empeñados en reproducir un modelo de procesamiento racional canónico? ¿Y si esos sistemas fueran capaces de conseguir resultados tal vez inalcanzables para un modelo lógico formal, especialmente si tales resultados se requieren con restricciones de tiempo y ante situaciones no previstas?

Esos sistemas existen y, tal vez, uno de los autores que más claramente los ha identificado es Gary Klein: No se trata de procesos “quick-and-dirty” motivados por la falta de capacidad de procesamiento o de memoria ni de un proceso reproducible por una máquina sino de algo distinto, que introduce en la ecuación elementos de experiencia pasada, extrapolaciones de situaciones tomadas de otros ámbitos y experiencias sensoriales no accesibles a la máquina.

Resulta paradójico que uno de los pioneros de la inteligencia artificial, Alan Turing, fuese la prueba viviente de ese modelo de procesamiento no accesible a la máquina: Turing redujo espectacularmente el número de variaciones teóricamente posibles en la máquina de claves Enigma introduciendo una variable en la que nadie había pensado -que el idioma original en que estaba emitido el mensaje era el alemán- y ello permitió trabajar con las estructuras del idioma y reducir opciones.

Cuando un piloto de velero -o del US1549- concluye que no puede llegar a la pista, no lo hace después de calcular velocidad y dirección del viento y el coeficiente de planeo del avión corregido por el peso y la configuración, sino observando la cabecera de la pista y viendo si asciende o desciende sobre un punto de referencia en el parabrisas: Si la cabecera asciende sobre el punto de referencia, no llega. Si desciende, sí puede llegar. Así de fácil.

Cuando un jugador de béisbol intenta llegar al punto donde va a caer la pelota, y al mismo tiempo que ésta, no calcula trayectorias ni velocidades, sino que ajusta la velocidad de su carrera para mantener un ángulo constante con la pelota.

Cuando un avión ve afectados sus sistemas por la explosión de un motor -QF32- y lanza centenares de mensajes de error, los pilotos tratan de atenderlos hasta que descubren uno muy probablemente falso: Fallo de un motor en el extremo del ala contraria. Los pilotos concluyen que, si la metralla del motor que explotó hubiera llegado al otro extremo del avión, tendría que haber pasado a través del fuselaje y, a partir de ahí, deciden volar el avión a la antigua usanza, ignorando los mensajes de error del sistema.

Cuando, en plena II Guerra Mundial, a pesar de haber descifrado el sistema de claves japonés, los norteamericanos no saben de qué punto están hablando, aunque sospechan que podría tratarse de Midway, lanzan un mensaje indicando que tienen problemas de agua en Midway. Cuando, tras recibirlo, los japoneses se refieren a las dificultades con el agua en el punto al que aludían sus mensajes previos, queda claro de qué punto se trataba. La preparación de la batalla de Midway tiene su origen en ese simple hecho, difícil de imitar por una máquina.

Incluso en el cine disponemos de un caso, no sabemos si real o apócrifo, en la película “Una mente maravillosa” sobre la vida de John Nash: En un momento especialmente dramático, Nash concluye que los personajes que está viendo son imaginarios porque observa que una niña, que formaba parte de las alucinaciones, no ha crecido a lo largo de varios años y permanece exactamente igual que las primeras veces que la vio.

Todos ellos son casos muy conocidos a los que se podrían añadir muchos más; incidentalmente, uno de los ámbitos donde más ejemplos pueden encontrarse es en la actividad de los hackers. En cualquier caso, todas las situaciones tienen algo en común: El procesamiento de información que no sigue el modelo racional canónico no siempre es un modelo a rechazar por estar sujeto a sesgos sino que también introduce elementos propios no accesibles a un sistema de información, bien porque le falta el aparato sensorial capaz de detectarlos o bien porque le falta la experiencia que permite extrapolar situaciones pasadas y llegar a conclusiones distintas y, por supuesto, mucho más rápidas de lo accesible a un avanzado sistema de información.

Los tecnólogos han visto en la Behavioral Economics un reconocimiento casi explícito de la superioridad de la máquina y, por tanto, la han acogido con entusiasmo. Sin embargo, para los psicólogos, la Behavioral Economics tiene un punto ya conocido: Adolece del mismo problema que otras corrientes anteriores en psicología. Así, el psicoanálisis puede alardear de descubrir fenómenos inconscientes… pero hacer de éstos el núcleo del psiquismo humano tiene poco recorrido; la psicología del aprendizaje ha trabajado intensivamente con los mecanismos de condicionamiento clásico y operante… pero tratar de convertir el aprendizaje en el único motor del psiquismo no tiene mucho sentido. La Gestalt descubrió un conjunto de leyes relativas a la percepción… pero tratar de extrapolarlas al conjunto del funcionamiento humano parece excesivo. Ahora le toca a la Behavioral Economics. ¿Existen los sesgos cognitivos? Por supuesto que existen. ¿Representa ese modelo de procesamiento quick-and-dirty lleno de errores la única alternativa a un supuestamente siempre deseable procesamiento racional canónico, susceptible de enlatar en una máquina? No.

No olvidemos que incluso el procesamiento “quick-and-dirty” funciona correctamente en la mayoría de los casos aunque no lo haga en el vacío; necesita hacerlo en un contexto conocido por el sujeto. En segundo lugar, hay otro modelo de procesamiento que hace uso de una experiencia cristalizada en conocimiento tácito -que, como tal, es difícil de expresar y más aún de “enlatar” en algoritmos- y de un aparato sensorial que es específicamente humano. Reducir el procesamiento humano de información a los estrechos cauces marcados por la Behavioral Economics no deja de ser una forma de caricaturizar a un adversario, en este caso el humano, al que se pretende contraponer la potencia de la máquina. Eso sí: no dejemos de recordar a Edgar Morin y su idea de que la máquina-máquina siempre es superior al hombre-máquina. Si insistimos en despojar al humano de sus capacidades específicas o en despreciarlas, mejor quedémonos con la máquina.

BEHAVIORAL ECONOMICS: WHEN TECHNOLOGISTS FELL IN LOVE WITH PSYCHOLOGY

It is no coincidence that the first psychologist to win a Nobel Prize, Daniel Kahneman, made a contribution much appreciated by information technologists, especially those dedicated to artificial intelligence.

Few things are more attractive to a technologist than the idea that there are inherent biases in human information processing and that these biases stem from the need to achieve fast results, even at the cost of errors. Moreover, such biases are attributed to shortcomings in processing large amounts of information quickly. Put another way, people process information incorrectly because they lack a computer’s capacity to do so. Jumping from there to «If I have sufficient information-processing capacity, that won’t happen with a machine» is very tempting -although it recalls the old days of Minsky and the so-called GOFAI (Good Old-Fashioned Artificial Intelligence)- and hence Behavioral Economics has been warmly received in the world of technology. However, things are not so clear-cut.

Kahneman and Tversky, the main reference authors in this line, discovered a set of biases, named them in rather ingenious ways, and compiled them in «Thinking, Fast and Slow». After that, they were applauded and criticized, although perhaps not for the right reasons or, to be more precise, perhaps they were not fully understood either by those who applaud them or by those who criticize them.

If we start with the critical side, it is often argued that the biases have been tested under laboratory conditions and that, in normal life, such situations do not occur, or the biases simply do not appear. It is true; laboratory conditions force the appearance of the bias and, in the real world, where we usually operate within a context, they do not appear or do so in a very attenuated form. Take, for example, one of the most popular stories in «Thinking, Fast and Slow», the story of «Linda the bank teller»:

Linda is a 31-year-old woman, single, outgoing, and very bright. She studied philosophy at university. As a student, she was concerned about discrimination and social justice issues and participated in demonstrations against the use of nuclear energy. According to the description given, rank the following eight statements in order of probability of being true. Assign the number 1 to the statement you think is most likely to be true, two to the next one, and so on until you assign 8 to the statement least likely to be true.

o Linda is a social worker in the psychiatric field.

o Linda is an insurance sales agent.

o Linda is an elementary school teacher.

o Linda is a bank teller and participates in feminist movements.

o Linda is a member of an association that promotes women’s political participation.

o Linda is a bank teller.

o Linda participates in feminist movements.

o Linda works in a bookstore and takes yoga classes.

Note the fourth and sixth options: Obviously, the fourth is a subset of the sixth and, nevertheless, the fourth was generally ranked as more probable than the sixth. A clear error of logic, yet calling it a mere error ignores an elementary fact of human relations: We understand that, when a person gives us a piece of information -in this case, the profile of Linda as a social activist- it is because that person considers it relevant, not because their sole purpose is to deceive us. Moreover: Kahneman’s and Tversky’s biases are like Newton’s law of universal gravitation: They reflect information-processing strategies that work well in context and fail only out of context. Therefore, the underlying processing models cannot be rejected outright; at most, they must be assumed to have a validity limited to a context.
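The logical rule violated in the Linda rankings is the conjunction rule: the probability of two events together can never exceed the probability of either one alone. A minimal numerical sketch (illustrative only; function name and sampling scheme are my own, not part of the original experiment):

```python
import random

def conjunction_rule_holds(trials=10_000, seed=42):
    """Sample random joint distributions over (teller?, feminist?) and
    check that P(teller AND feminist) never exceeds the marginal P(teller),
    i.e., that the typical 'Linda' ranking is always a logic error."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Four disjoint cells of the joint distribution, normalized to sum 1.
        cells = [rng.random() for _ in range(4)]
        total = sum(cells)
        p_tf, p_tn, p_ft, p_nn = (c / total for c in cells)
        # Marginal P(teller) is the sum of the two disjoint teller cases.
        p_teller = p_tf + p_tn
        if p_tf > p_teller:   # this would be the 'Linda' ranking
            return False
    return True

print(conjunction_rule_holds())  # → True
```

The point is not that people cannot do this arithmetic, but that the experimental wording invites them to read the profile as relevant information, exactly as the paragraph above argues.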

Let us see what happens at the extreme that applauds Behavioral Economics: Even admitting that the experimental design forces conditions towards the extreme, far from any natural setting, the existence of the bias is demonstrated and, with it, that the human processing model is not universally applicable but limited to a context. Taking Newton’s example, technologists believe they have the equivalent of a theory of relativity, with a broader scope of application than Newton’s model and the capacity to replace it. Behavioral Economics and its derivations provided all the arguments required to reach this conclusion.

However, both critics and supporters seem to accept that human reasoning is «quick-and-dirty», and the praise or criticism depends on the importance attached to that fact. The question answered neither by those who accept the model nor by those who reject it is whether this bias-prone model of processing is the only possible model of human reasoning or whether there are other, different models not captured by it. Kahneman and Tversky themselves tried to answer this question by positing the existence of a system one, automatic, and a system two, deliberative. The first would be subject to the identified biases and the second would process information much as a computer does, i.e., following the strict rules of formal logic, although -they qualify- it is always susceptible to being fooled by system one.

By proclaiming the existence of a slow, rational, and safe system, they once again gave technologists arguments to promote artificial intelligence as an alternative to human intelligence. It is not only a matter of information technologists; if one reviews the declassified CIA texts on information-analysis techniques, one finds an ode to rational processing that evaluates multiple options and assigns weights to variables while ignoring the analyst’s intuition and experience on the grounds that they may introduce bias. Processing information along strictly rational lines is all the rage but… what if there were a different and valid way?

What if there were a «system three» or even a «system four», free of the biases proclaimed by Behavioral Economics but, at the same time, not bent on reproducing a canonical rational processing model? What if such systems could achieve results perhaps unattainable by a formal logical model, especially when those results are required under time constraints and in the face of unforeseen situations?

Such systems do exist, and perhaps one of the authors who has identified them most clearly is Gary Klein: They are not «quick-and-dirty» processes motivated by a lack of processing capacity or memory, nor are they a machine-reproducible process, but rather something different, which introduces into the equation elements of past experience, extrapolations of situations taken from other fields and sensory experiences not accessible to the machine.

It is paradoxical that one of the pioneers of artificial intelligence, Alan Turing, was living proof of this model of processing not accessible to the machine: Turing spectacularly reduced the number of theoretically possible settings of the Enigma cipher machine by introducing a variable no one had thought of -that the original language of the message was German- and this made it possible to work with the structures of the language and prune options.

When a glider pilot -or the pilot of US1549- concludes that he cannot reach the runway, he does so not by calculating wind speed and direction and the glide ratio of the aircraft corrected for weight and configuration, but by watching the runway threshold against a reference point on the windshield: If the threshold rises over the reference point, he will not make it. If it descends, he will. It’s that easy.
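The aim-point cue above has a simple geometry: the threshold «rises» in the windshield when its depression angle below the horizon shrinks over time, which happens exactly when the achievable glide ratio is worse than the one the geometry requires. A small sketch, with illustrative numbers (altitude, distance, speed, and the function name are my assumptions, not flight data):

```python
import math

def threshold_motion(altitude, distance, glide_ratio, speed=30.0, dt=1.0):
    """Compare the depression angle of the runway threshold below the
    horizon now and one time step later, for an aircraft gliding at
    `glide_ratio` (meters forward per meter of altitude lost).
    If the angle shrinks, the threshold rises in the windshield:
    the field is out of reach."""
    def depression(alt, dist):
        return math.degrees(math.atan2(alt, dist))

    angle_now = depression(altitude, distance)
    # One step later: move forward, lose altitude per the glide ratio.
    dist_later = distance - speed * dt
    alt_later = altitude - (speed * dt) / glide_ratio
    angle_later = depression(alt_later, dist_later)
    return "reachable" if angle_later >= angle_now else "out of reach"

# From 1000 m altitude, 20 km out, a 20:1 glide is needed:
print(threshold_motion(1000, 20_000, glide_ratio=30))  # → reachable
print(threshold_motion(1000, 20_000, glide_ratio=15))  # → out of reach
```

The pilot computes none of this, of course; the windshield does the trigonometry for him, which is precisely the point of the paragraph above.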

When a baseball player tries to reach the point where the ball is going to land, and at the same time as the ball, he does not calculate trajectories or velocities; he adjusts his running speed to keep a constant angle of gaze to the ball.
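This constant-angle strategy can be checked with a toy simulation: if at every instant the fielder stands where the ball appears at one fixed elevation angle, he is necessarily under the ball when it lands. A sketch under idealized assumptions (no air drag, flat ground; all numbers and names are illustrative):

```python
import math

def gaze_heuristic_error(vx=12.0, vz=20.0, g=9.81, gaze_deg=30.0, dt=0.001):
    """Simulate a fly ball with no drag. The fielder's control law is
    purely perceptual: stand wherever the ball sits at a fixed elevation
    angle in his gaze. Return how far from the landing point he ends up."""
    tan_gaze = math.tan(math.radians(gaze_deg))
    t, fielder_x = 0.0, 0.0
    while True:
        x = vx * t                     # ball horizontal position
        z = vz * t - 0.5 * g * t * t   # ball height
        if z < 0 and t > 0:
            break
        # Keep the ball at the fixed gaze angle: tan(angle) = z / (x - x_f).
        fielder_x = x - z / tan_gaze
        t += dt
    landing_x = vx * (2 * vz / g)      # analytic landing point
    return abs(fielder_x - landing_x)

print(gaze_heuristic_error() < 0.1)  # → True
```

Any fixed angle works: when the ball's height reaches zero, the control law collapses onto the landing point, with no trajectory computed at any step.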

When an aircraft’s systems are damaged by an engine explosion -QF32- and it throws up hundreds of error messages, the pilots try to deal with them until they discover one that is almost certainly false: failure of an engine on the opposite wingtip. The pilots conclude that if the shrapnel from the exploding engine had reached the other side of the plane, it would have had to pass through the fuselage, and so they decide to fly the plane the old-fashioned way, ignoring the system’s error messages.

When, in the middle of World War II, despite having broken the Japanese cipher system, the Americans do not know which location the intercepted messages refer to, although they suspect it might be Midway, they send a message indicating that they have water problems at Midway. When, after receiving it, the Japanese refer to the water difficulties at the location mentioned in their previous messages, it becomes clear what they were referring to. The preparation for the Battle of Midway originated in that simple trick, difficult for a machine to imitate.

Even the cinema gives us a case, whether real or apocryphal we do not know, in the movie «A Beautiful Mind» about the life of John Nash: In a particularly dramatic moment, Nash concludes that the characters he is seeing are imaginary because a little girl who is part of the hallucinations has not grown over several years and remains exactly the same as the first time he saw her.

These are all well-known cases to which many more could be added; incidentally, one of the areas richest in examples is the activity of hackers. In any case, all these situations have something in common: Information processing that does not follow the canonical rational model is not always something to be rejected for being subject to biases. It also brings in elements not accessible to an information system, either because the system lacks the sensory apparatus capable of detecting them or because it lacks the experience that allows extrapolating from past situations and reaching conclusions that are different and, of course, much faster than anything an advanced information system can deliver.

Technologists have seen in Behavioral Economics an almost explicit recognition of the superiority of the machine and have therefore welcomed it with enthusiasm. For psychologists, however, Behavioral Economics has a familiar weakness: it suffers from the same problem as earlier currents in psychology. Psychoanalysis may boast of discovering unconscious phenomena… but making these the core of the human psyche does not go very far; the psychology of learning has worked intensively with the mechanisms of classical and operant conditioning… but trying to make learning the sole engine of the psyche does not make much sense. Gestalt discovered a set of laws of perception… but extrapolating them to the whole of human functioning seems excessive. Now it is the turn of Behavioral Economics. Do cognitive biases exist? Of course they do. Does that error-ridden quick-and-dirty processing model represent the only alternative to a supposedly always desirable canonical rational processing, ready to be canned in a machine? No.

Let us not forget that even quick-and-dirty processing works correctly in most cases, although it does not do so in a vacuum; it needs a context known to the subject. Secondly, there is another processing model that draws on experience crystallized into tacit knowledge -which, as such, is difficult to express and even harder to «can» in algorithms- and on a sensory apparatus that is specifically human. Reducing human information processing to the narrow channels set by Behavioral Economics is just a way of caricaturing an adversary, in this case the human, against which the power of the machine is to be set. That said, let us remember Edgar Morin and his idea that the machine-machine is always superior to the man-machine. If we insist on stripping humans of their specific capacities, or on despising them, we had better stick with the machine.

ABOUT HYPER-LEGITIMACY AND COMPLEXES

Today, with the new year still at its beginning, we are witnessing in Spain the old rivalry of left and right, driven by the most extremist factions of both sides in a dynamic from which, if it persists, nothing good can be expected.

History has taught us that both sides have much to regret or hide about the past; on the right, the pursuit of the interests of the most privileged, supported by force; on the left, the savagery of revolutions which, in the end, placed at their top a new and equally privileged class, whose interests are no more legitimate than those of their predecessors.

However, the two sides have not come to terms with their history in the same way; the left has clearly won what has been called «the narrative». While there is a right wing repentant for the sins of its predecessors, the left places its own on pedestals, as supposed forerunners of democracy, even in those cases where the only thing they brought was dictatorship, ruin, and death.

Recent episodes in Spain of removing statues while keeping others with equal or greater merits for removal, of changing street names, and even of transferring Franco’s remains speak for themselves of a hyper-legitimacy claimed by the most extremist leftists and hardly supported by facts.

Well into the 21st century, the ideological debate should not consist of the most extreme right and left wings competing to dig up each other’s past, nor should anyone have to apologize for existing. Each side has an ideological legitimacy that certainly does not reside in the past. Perhaps it is time for both sides to show their real legitimacy while sending to the garbage can of history the principles and practices of their ideological great-great-grandfathers and of those present leaders who, nowadays, pretend to behave like them.

A modern liberal right wing is legitimate because, as a guiding principle, it seeks the common good and not the maintenance of situations of privilege; it trusts that individual initiative with few restrictions will bring a better life to the great majority, and it puts its best efforts into that.

A left wing is legitimate when, likewise, it seeks the common good and not a mere quest to subvert the existing order in order to place its own members at the head of a new and equally undesirable one; legitimate left and right wings coincide in ends, though not in means. Thus, while the right wing trusts the individual, the left wing is prone to social engineering and direct action to alleviate the situation of the less fortunate, preventing them from being abandoned to their fate.

Naturally, any non-sectarian reader, whatever his orientation, will see that both options are not only legitimate but compatible, and that alternation in power has a crucial role to play: to correct undesirable drifts which, in one case, could lead to the abandonment of layers of the population while, in the other, could produce elephantine States and ever larger groups willing to make their living from the resources those States take from the productive part of the population.

At the same time, there are also illegitimate political practices: those led by politicians who make their living from confrontation and see the common good as an empty abstraction, preferring to pursue their own. That situation, very present right now, transforms the political chessboard into modern Augean stables where every kind of filth has its seat and whose cleaning, despite that, would require no river and no Hercules but simpler, specific, and modest actions.

Pointing to individuals or gangs of opportunists as the culprits, even though they clearly exist and are guilty, does not put us on the road to solving the problem. Such individuals are not the key -which does not mean they should not be stopped- but mere anecdotes stemming from basic errors in a process that, in Spain, started in 1977: The lauded political transition introduced undesirable elements that constituted the germ of the current situation, granting legitimacy to anyone who claimed to be against the dictatorship, no matter how questionable their own actions might be:

The design of the transition not only led to an unwieldy territorial situation but -it is worth remembering- ETA prisoners with blood crimes were even pardoned, many of whom returned to their former activity. Situations of privilege, both economic and electoral, were consecrated, contradicting the supposed equality of all Spaniards proclaimed in the subsequent Constitution.

Once those principles were in place, political pressure towards greater power for successive executives, together with concessions to groups favored by the electoral system to form blocking minorities, started the process. In addition to a bad start, the Constitution, with the good offices of the Constitutional Court, has been twisted until it has become virtually unrecognizable. Today we can find laws that require an advanced exercise in sophistry to be accepted within the constitutional framework.

Those have been the basic lines of a process that led to the present moment where, in addition, there are political actors -some of them inside the Government- who explicitly seek to liquidate the regime born in 1977 in order to replace it with adventures whose outcome cannot even be described as uncertain.

Relying on the old parliamentary arithmetic and on the old parties -or the new ones with old principles and practices- on both sides of the political spectrum may be a mistake. Today an «apolitical» party may be required, without a government program, willing to lend its support to the party with the most votes if it commits itself to making changes leading to a real democracy, with the design errors of 1977 corrected.

It would be unrealistic to expect a solution from those who benefit, at the expense of society as a whole, from the current morass, as the present Government and its supporters do; nor does it seem that a solution can be expected from those, now in opposition, who had their chance in 2012. They obtained more central and regional power than anyone in the recent past and yet behaved as temporary tenants, not daring to proclaim their principles -if they existed- and limiting themselves to achieving some economic respite. Finally, neither should we turn back the clock nor resort to «we lived better under Franco».

Perhaps the most hopeful experiments in this long period have been the initial Ciudadanos and UPyD parties; it is true that UPyD was born almost as a PSOE in exile, but it progressively focused on the core issues and, perhaps for that reason, it was made to disappear. In the case of Ciudadanos, its origin was impeccable, but the ambition of its leaders has turned the party into something unrecognizable today.

There are figures who could lead the rebirth of a principled political activity, but none of them currently occupy prominent positions within their parties (some of them did), if they even remain in them. Inviting these figures to act is not about bringing in more people to fight for power on the current political chessboard but, precisely, about changing the rules of a game whose original design is wrong. Its subsequent evolution has turned it into perfect terrain for the «gamblers of the Mississippi» -to use former vice president Alfonso Guerra’s expression- to prosper at the expense of a society whose welfare they supposedly manage.

It is a matter of changing the rules of access to and permanence in political activity, ensuring equality among all Spaniards, and ensuring that both the rules and the zeal in their application are the same for everyone.

Undoubtedly, the design of 1977 is coming apart at the seams in many places, and its subsequent evolution was foreseeable for some; a change is necessary, but it does not consist of returning to states prior to that date but of reinforcing what was done well then and eliminating what was done badly or very badly.

Life after a Black Swan

COVID-19, or coronavirus, could be considered, at least partially, a genuine Black Swan. It's true that the first condition to qualify as a Black Swan is to be impossible to foresee. In this case, we can find a TED talk by Bill Gates in 2015, where he warned that we were not prepared to manage such an outbreak. Five years later, the facts have shown that we were not ready and, for practical purposes, it can be called a Black Swan; a false one if you like, but a Black Swan anyway.


The behavior of many Governments when facing the outbreak ranges from poor management to openly criminal and, as a consequence, we have entered a new situation whose consequences are hard to foresee.

Unless a vaccine or a treatment can be found in a very short time, one thing is certain: the world won't go back to day zero. No Government will have the «Black Swan» excuse anymore before the next outbreak but, beyond that, some changes in common behavior may point to other changes that could last well beyond the end of the outbreak.

Probably, the restrictions on movement are going to bring about the real dawn of teleworking, online training and many other related activities that, during the outbreak, have been used as an emergency resource but could now become the standard way of working. Of course, this can change our work habits but, at the same time, it will have a major impact on real estate, travel and…it has shown that the Internet, a basic tool for so many activities, is far from being as robust as claimed: actually, the increased use of digital platforms like Netflix, Amazon and others during quarantine periods challenged the capacity of some of the main nodes.

However, these facts, together with the impact on the world economy, could be read as the trivial lessons of the Black Swan. There are other potential effects that could be far greater.

One of them is related to the start of the outbreak. As happened with Chernobyl, where the Soviet Government denied everything until the pollution crossed its borders, the denial of the Chinese Government -which allowed massive gatherings that helped spread the virus once they knew of its existence- could have internal and external consequences.

Could it lead to major internal turmoil? If so, the consequences are hard to foresee, since China is nowadays the factory of the world. What about the external situation? Some Governments are pointing to China as responsible for the outbreak and its consequences, due to its behavior during the first 20 critical days. Will these accusations go beyond public statements or will they remain inconsequential?

There is still a third kind of effect on the future world: many optimists say that the world has never been better and, to defend this position, they show statistical reports about hunger, health, life expectancy, wars and many other indicators. They think that the media focus on the negative side because it sells better than the positive side, giving a distorted view of the world. That could be, but there is still another option:

There are a lot of powder kegs in the world and we do not know which one will explode, what the consequences of the explosion will be or whether it could set off new explosions. We should remember that, when WWI started, not many people were conscious of it, due to the unusual chain of events that followed an assassination in Sarajevo. Actually, the name WWI came after the war itself.

To use the same term, we are running a farm of black swans and we do not know what consequences could come from that. So, without denying that the material side could be better than in any other human age, the stability of that situation is very questionable.

Peter Turchin warned about this instability and why we should expect a major change in the world as we know it. Turchin is prone to mathematical models and, as such, he has defined elegant algorithms and graphics. However, for those who think that numbers are a language and, as such, can mislead people under the flag of «objectivity», the real value of Turchin's model is not in the algorithms; it is in the dynamics. He was very original, observing variables that passed unnoticed by many other researchers and explaining why these variables could be used as indicators of a major trend.

Turchin expected a major change in the 2020s as a consequence of being in the final stage of decomposition. Conscious that I am simplifying his model perhaps beyond any legitimate limit: a major event -like a war- brings a kind of vaccination that can last for two generations, leading to a period of prosperity. After that, decomposition starts, driven by people who did not live through that major event, leading to new major events and repeating the cycle.

Covid-19 and its consequences will be, for many people, a major event in their lives. The question, hard to answer, is whether it will accelerate the decomposition process or, instead, reset the model, starting a new positive cycle driven by people who suddenly discovered that there are real problems far more important than those invented by some political leaders.

Some changes during the outbreak, even in the language of politicians, and the way people rebuke those who try to draw attention back to the issues on their political agendas, are quite revealing. Something is already changing; it's not the virus. It's what the change in living conditions means for many of us, and how that change has made many people build a new hierarchy of values.

Nietzsche said that what does not kill us makes us stronger. At this moment, we do not know if we will die -as a society- because of this crisis or become stronger, at least for a time. One thing is certain: it will not be inconsequential.

Artificial Intelligence, Privacy and Google: An explosive mix

A few days ago, I learned about a big company using AI in its recruiting processes. Not a big deal, since it has become common practice and, if something can be criticized, it is the fact that AI commits the same error as human recruiters: the false-negative danger; a danger that goes far beyond recruiting.

The system can add more and more filters, learning from them, but what about the candidates who were eliminated despite the fact that they could have been successful? Usually, we will never know about them and, hence, we cannot learn from our errors in rejecting good candidates. We know of cases like Clark Gable or Harrison Ford, who were rejected by Hollywood but, usually, following rejected candidates is not feasible and so the system -and the human recruiter- learns to improve the filters but cannot learn from wrong rejections.

That is a luxury that companies can afford as long as it affects jobs where the balance between the supply of candidates and the demand for jobs is clearly favorable. However, this error is much more serious if the same practice is applied to highly demanded profiles. Eliminating a good candidate is, in this case, much more expensive, but neither the AI system nor the human recruiter has the opportunity to learn from this type of error.
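The learning asymmetry described above -outcomes are observed only for accepted candidates, so wrong rejections never feed back into the model- can be shown with a minimal sketch. Everything here (the population, the `resume_score` proxy, the threshold) is made up for illustration:

```python
import random

random.seed(0)

# Hypothetical population: "skill" is the ground truth nobody fully observes;
# "resume_score" is the noisy proxy the screening filter actually sees.
candidates = []
for _ in range(10_000):
    skill = random.gauss(0, 1)
    resume_score = skill + random.gauss(0, 1)
    candidates.append({"skill": skill, "resume_score": resume_score})

def screen(candidate, threshold=1.0):
    # The filter the system keeps sharpening over time.
    return candidate["resume_score"] >= threshold

accepted = [c for c in candidates if screen(c)]
rejected = [c for c in candidates if not screen(c)]

# Outcome labels (did the hire work out?) exist only for accepted candidates,
# so the model can never be trained on its wrong rejections.
observed_outcomes = [(c, c["skill"] > 0) for c in accepted]

# These false negatives are real, but invisible to the system:
invisible_false_negatives = sum(c["skill"] > 0 for c in rejected)
print(len(accepted), len(rejected), invisible_false_negatives)
```

The point of the sketch is the last two lines: the quantity that matters for fairness and cost is precisely the one that never enters the training data.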

Only companies like Google, Amazon or Facebook, with an impressive amount of information about many of us, could afford to learn from the failures of the system. It's true that, as a common practice, many recruiters "google" the names of candidates to get more information, but these companies keep much more information about us than what is offered when someone searches for us on Google.

Then, since these companies have a capability that cannot be matched by other companies, including big corporations, we can expect them to be hired in the near future to perform recruiting, evaluate insurance or credit applications, assess potential business partners and many other tasks.

These companies, having so much information about us, can share Artificial Intelligence practices with their potential clients but, at the same time, they keep a treasure of information that their potential clients cannot have.

Of course, they have this information because we voluntarily give it to them in exchange for some advantages in our daily life. At the end of the day, if we are not involved in illegitimate or dangerous activities and have no intention of being so, is it really so important that Google, Amazon, Facebook or whoever knows what we do and even listens to us through their "personal assistants"? Perhaps it is:

The list of companies sharing this feature -almost unlimited access to personal information- is very short. Hence, any conclusion they produce could affect us in many activities, since any company trying to gain an advantage from these data is not going to find many potential suppliers.

Now, suppose an opaque algorithm, whose workings nobody knows exactly, decides that you are not a good option for a job, insurance, credit…whatever. Since the list of suppliers of information is very short, you will be rejected again and again, even though nobody will be able to give you a clear explanation of the reasons for the rejection. The algorithm could have picked an irrelevant behavior that happens to have a high correlation with a potential problem and, hence, you would be rejected.
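A toy illustration of that correlation trap, with entirely made-up data: a hidden risk factor drives both the real outcome and an irrelevant habit, and a scorer that only ever saw the habit ends up rejecting people who were never a risk.

```python
import random

random.seed(1)

# Made-up population: a hidden risk factor drives both the real outcome
# and an irrelevant habit (say, browsing at night).
people = []
for _ in range(5_000):
    risky = random.random() < 0.10
    night_browsing = risky or random.random() < 0.05   # correlated habit
    will_default = risky and random.random() < 0.80    # real outcome
    people.append((night_browsing, will_default))

def score(night_browsing: bool) -> str:
    # An opaque scorer that only ever learned the correlated habit.
    return "reject" if night_browsing else "accept"

rejected = [default for (habit, default) in people if score(habit) == "reject"]
harmless_rejected = rejected.count(False)  # would never have defaulted
print(len(rejected), harmless_rejected)
```

A sizable share of the rejected group is harmless, and, since the scorer never observes the hidden factor, no explanation of the rejection can be given in terms that make sense to the person affected.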

Should this danger be disregarded? Companies like Amazon had problems with their own recruiting systems when they unintentionally introduced racial and gender biases. There is no valid reason to suppose that this cannot happen again with much more data, affecting many more activities.

Let me share a recent personal experience related to this type of error: recently, I was blacklisted by an information-security supplier. The apparent reason was having a page that opened automatically every time I opened Chrome, without any further action. That generated repeated accesses; the website read it as an attempt at spying and blacklisted me. The problem was that the information-security supplier had many clients: banks, transportation companies…and even my own ISP. All of them denied me access to their websites based on the wrong information.

This is, of course, a minor sample of what can happen if Artificial Intelligence is applied to a vast amount of data by the few companies that have access to it and, for reasons that nobody can explain, we must confront rejection in many activities; a rejection that would be at once universal and unexplained.

Perhaps the future was defined not in «1984» by Orwell but in "The Trial" by Kafka.

Against the current: Freedom of information or freedom of defamation?

I believe that at certain moments it is necessary to make one's personal position clear to avoid confusion, so here it goes:

I do not like the way the leader of the PSOE, Pedro Sánchez, came to power in Spain, I do not like his performance on the main issues facing Spain, I do not like his partners, I do not like his foot-dragging over calling elections, and the concealment of his doctoral thesis seems to me a suspicious matter about which time will tell, in the short term, whether there is something real or just a storm in a teacup. That's clear, right?

However, I find it outrageous that the big issue of the day is none of that, but the fact that he has dared to demand a retraction from some media outlets, and how that demand supposedly threatens freedom of information. Journalists and politicians have fallen into this trap out of self-interest and are trying to make the rest of us fall into it too. Well, no:

Freedom of information is not a journalist's right to say whatever he pleases but a citizen's right to receive accurate information, from whatever perspective that citizen prefers.

If a journalist defames, he is as subject to the law as anyone else and cannot invoke freedom of information as if it were a letter of marque.

If someone -in this case a Prime Minister, regardless of one's opinion of him- believes he has been defamed by a media outlet, he is perfectly within his rights to demand a retraction.

Does he want to take the matter to court? Go ahead; it is his right, even though it is a risky move because, if he loses, he will have no way of clinging to a seat he reached in such an irregular manner.

Certainly, the ones who cannot and should not try to stop him are the media, invoking a freedom of information that they apparently consider synonymous with the freedom to say whatever they please.

I insist: I am not defending the man but his full right to go to court if he believes he has been defamed. Whether he has been or not, we shall see, but there is no reason to cry foul over an attack on freedom of information when that right is exercised.

THE HARD LIFE OF AVIATION REGULATORS (especially regarding Human Factors)

There is a very widespread mistake among Aviation professionals: the idea that regulations set a minimum level. Hence, accepting a plane, a procedure or an organization would mean barely reaching the minimum acceptable level.

The facts are very different: a plane able to pass, beyond any kind of reasonable doubt, all the requirements without further questions would not be merely an "acceptable" plane. It would be an almost perfect product.

So where is the trick, and why are things not as easy as they seem?

Basically, because the rules are not as clear as claimed, leaving in some cases wide room for interpretation, and because, in their crafting, they mirror the systems they describe and, hence, integration is lost along the way.

The acceptance of a new plane is such a long and complex process that some manufacturers give up. For instance, the Chinese COMAC built a first plane certified by the Chinese authorities to fly only in China, and the French Dassault decided to stop everything and jump directly to the second generation of a plane. That may be judged a failure, but it is always better than dragging design problems along until they prove that they should have been managed. We cannot avoid recalling cases like the cargo door of the DC-10 and the consequences of a problem already known during the design phase.

The process is so long and expensive that some manufacturers stay attached to very old models with incremental improvements. The Boeing 737, which started flying in 1968, is a good example. Its brother, the B777, flying since 1995, still keeps a very old Intel 80486 processor inside; changing it would be a major modification, even though Intel stopped production of that chip in 1997.

The process is not linear, and many different tests and negotiations are required along the way. Statements of similarity with other planes are frequent, and the use of standards from the engineering or military fields is commonplace when something is not fully clear in the main regulation.

Of course, some of the guidelines can contradict others, since they are addressed to different uses. For instance, a widely used military Human Factors standard (MIL-STD-1472) includes a statement about required training, indicating that it should be as short as possible while keeping a fully operational state. That can be justified if we think of environments where lack of resources -including knowledge- or even physical destruction could happen. It would be harder to justify as a rule in passenger transportation.

Another standard can include a statement about the worst possible scenario for a specific parameter, but the parameter can be more elusive than that. The very idea of a worst possible scenario could be nonsense and, if the manufacturer accepts it and the regulator buys it, a plane could be flying legally but with serious design flaws.

Regulations about Human Factors were simply absent a few years ago, and HF mentions were added to the technical blocks. That partially changed when a new rule for plane design appeared, addressing Human Factors as a block of its own. However, the first attempts went little further than collecting all the scattered HF mentions in a single place.

Since then, this has been partially corrected in the Acceptable Means of Compliance, but the technical approach still prevails. Very often, manufacturers assemble HF teams with technical specialists in specific systems instead of attempting a global and transversal approach.

The regulators take their own precautions and repeat mantras like avoiding fatigue levels beyond acceptability, or planes that should not require special alertness or special skill levels to manage a situation.

These conditions are, of course, good, but they are not enough. Compliance with a general condition like this one from EASA CS-25, "Each pilot compartment and its equipment must allow the minimum flight crew (established under CS 25.1523) to perform their duties without unreasonable concentration or fatigue", is quite difficult to demonstrate. If there is no visible mistake in the design, trying to meet this condition is more a matter of imagining potential situations than a matter of analysis and, as should be expected, the whole process is driven by analysis, not by imagination.

Very often, designs and the rules governing them try to prevent the accident that happened yesterday, but a strictly analytic approach makes it hard to anticipate the next one. Who could anticipate the importance of control feedback (present in every single old plane) until a related accident happened? Who could anticipate, before AA191, that removing the mechanical blockage of flaps/slats might not be such a sound idea? Who could think that different presentations of the artificial horizon could lead to an accident? What about different automation policies and pilots disregarding a fact that could be disregarded in other planes but not in that one?…

It is still in the news that a Boeing factory was attacked by the WannaCry virus, and the big question was whether it had affected the systems of the B777s manufactured there. The B787 is said to have 6.5 million lines of code. Even though the B777 is far below that number, checking it would not be easy, and it would be still harder if the computers calculating parameters for manufacturing must also be checked.

That complexity in the product leads not only to invisible faults but to unexpected interactions between theoretically independent events. In some cases, the dependence is clear. Everyone is conscious that a stopped engine can mean hydraulic, electric, pressure and oxygen problems, and manufacturers try to design systems that point to the root problem instead of pointing to every single failure. That's fine but…what if the interaction is unexpected? What if a secondary problem -oxygen scarcity, for instance- is more important than the root problem that led to it? How are we going to define the right training level for operators when there is not a single person who understands the full design?

In the technical parts, complexity is already a problem. When we add the human element, its features and the kind of things demanded of operators, the answer is anything but easy. Claiming "lack of training" every time something serious happens and adding a patch to the current training is not enough.

A more integrated approach, less shy about using imagination in the whole process, has long been advisable, but now it is a must. Operators do not manage systems. They manage situations and, in doing so, they can use several systems at the same time. Even if there is no unexpected technical interaction among them, there is a place where this interaction happens: the operator who is working with all of them, and the concept of consistency is not enough to deal with it.

 

The Twitter effect

Many people consider Twitter a frivolous place and, therefore, decide not to have a Twitter account. Serious mistake:

Many well-known individuals and publications have accounts and regularly publish content. It's true that 140 characters do not allow for much, but things get more interesting when you consider that, within those 140 characters, there can be links to articles they have just published themselves.

Following many people is maddening because, unless you live with your nose glued to the screen, you will miss information, but again there is a solution: choose the most interesting topics and prepare specialized lists on those topics. A daily or weekly review, depending on the level of activity, will be enough and, if the members of the lists are chosen with care, you can keep up to date on any imaginable topic. Needless to say, members can be added to or removed from the lists.

In short, there are good reasons to recommend that someone have a Twitter account: it is a valuable resource for staying informed on almost any topic. Now comes the harder part: what should interaction on Twitter look like?

Many people simply remain silent. They follow the sources they consider interesting, and that's it. It is a good option if there is no intention of sharing one's own content. You can find people who use their real names while others prefer to remain unidentified, especially if they intend to participate actively in discussions on potentially controversial topics, and that is precisely where the dark side of Twitter appears, a dark side very hard to separate from the positive part.

Twitter is very fast. That is why traditional media such as radio and television use it as a way to stay in contact with their followers, and it is common to see a ticker on television with a stream of Twitter messages. This gives programs a sense of immediacy and, at the same time, gives Twitter relevance, both in its positive and negative aspects.

Once Twitter appears as something relevant, many people start using the network for their own ends. For example, fake accounts with bots are used, designed to turn any topic of their choice into a trending topic in a matter of minutes. When this happens, of course, the information about the real relevance of a topic is falsified, because there are people actively devoted to that falsification and, moreover, you do not need to be a great social media expert to do it.

This is a negative side, but there is something even worse: interaction between Twitter members is very lively. It is easy to identify groups -there are even applications that do it automatically- and there is strong pressure toward conformity within those groups. Their members, seeking the applause of their group mates, present increasingly extreme views on any controversial topic, and the resulting discussions appear in more traditional media as trends, mistaking the Twitter caricature for the real image of a society, an image which in turn is affected by the spread of the caricature as reality.

There are many current examples, but the Spanish case and its political situation is paradigmatic. We have everything: bots turning anything into a trending topic and people drifting toward ever more extreme political positions, especially in the case of unidentified accounts or opinion leaders who do not want to disappoint their audience. Moreover, this is not a specifically Spanish effect: if you follow the American campaign, you find exactly the same phenomena. The speed of interaction and the brevity of the messages, with little room for nuance, may be the determining factors of that behavior.

In short, Twitter is a valuable tool for staying up to date on any topic but, at the same time, it has very negative facets whose influence goes beyond Twitter. Being in is positive, but staying active is something to think twice about. Accepting the trends Twitter marks as real is something to be avoided, not only because they are probably false but because, by legitimizing them, you can help them become real even if they originally were not. Perhaps we all have the task of preventing that from happening because, due to the pressure toward conformity, it is often the most disgraceful trends and comments that win the day…regardless of ideological affiliation or any other kind.

Big Aviation is still a game of two players

And one of them, Airbus, is celebrating its birthday.

Years ago, three major players shared the market but, once McDonnell Douglas disappeared, big planes were made by one of the two. Of course, we should not forget Antonov, whose An-225 model is still the biggest plane in the world, some huge Tupolevs and the Lockheed TriStar, but the first two never went beyond their home markets, while the Lockheed TriStar could be seen as a failed experiment by the manufacturer.

Airbus emphasizes its milestones on the timeline but, behind them, there is a flow marked by efficiency through the use of I.T.

Airbus was the first civilian plane manufacturer to have a big plane with a cockpit for only two people (the A310), and Airbus was the first civilian plane manufacturer to widely introduce fly-by-wire technology (the only previous exception was Concorde). Finally, Airbus introduced the commonality concept, allowing pilots of one model to switch very quickly to a different model while keeping the rating for both.

Boeing had a more conservative position: the B757 and B767 appeared with only two people in the cockpit after being redesigned to compete with the A310. Despite Boeing's greater experience in military aviation and, hence, in fly-by-wire technology, Boeing long deferred the decision to include it in civilian planes and, finally, where Boeing lost the efficiency battle was in appearing with a portfolio whose products were mainly unrelated while Airbus was immersed in its commonality model.

The only point where Boeing arrived first was in the use of twin-engine planes for transoceanic flights through the ETOPS policy. Paradoxically, the ones in the worst position were the two American companies manufacturing three-engine planes, McDonnell Douglas and Lockheed, rather than Airbus. That was the exception because, usually, Boeing was behind in the efficiency field.

Probably -and this is my personal bet- they will try to build a family starting with the B787. This plane should be for Boeing the equivalent of the A320, that is, the starter of a new generation sharing many features.

As proof of that more conservative position, Boeing kept some feedback channels that Airbus simply removed, like the feel of the flight controls or the feedback from the autopilot to the throttle levers. Nobody questioned whether this should be done, and it was offered as a commercial advantage rather than a safety feature, since it was not compulsory…actually, the differences between the two manufacturers -accepted by the regulators as features independent of safety- have been at the root of some events.

Small-plane aviation is much more crowded and, right now, we have two newcomers from Russia and China (Sukhoi and Comac), including the possibility of an agreement between them to fight for the big planes market.

Anyway, that is still in the future. Big Aviation is still a game of two contenders, and every single step in that game has been driven by efficiency. Some of us would like understandability -in normal and abnormal conditions- to be among the priorities in future designs, whether they come from the present contenders or from any newcomer.

Published in my LinkedIn profile

Air Safety and Hacker Frame of Mind

If we ask anyone what a hacker is, we can get answers ranging from cyberpiracy, cybercrime, cybersecurity…and any other cyberthing. However, it's much more than that.

Hackers are classified depending on the "color of their hats". A white hat hacker is an individual devoted to security, a black hat hacker is a cybercriminal, and a grey hat hacker is something in the middle. That can be interesting as a matter of curiosity but…what do they have in common? Furthermore, what do they have in common that can be relevant for Air Safety?

Simonyi, the creator of WYSIWYG, warned long ago about an abstraction scale that kept adding more and more steps. Speaking about Information Technology, that means that programmers don't program a machine. They instruct a program to make a program to be run by a machine. Higher programming levels mean a longer distance from the real thing and more steps between the human action and the machine action.

Of course, Simonyi warned of this as a potential problem while speaking about Information Technology but…Information Technology is now ubiquitous, and this problem can be found anywhere, including, of course, Aviation.

We could say that any IT-intensive system has different layers, and the number of layers defines how advanced the system is. So far so good, if we assume that there is a perfect correspondence between layers, that is, every layer is a symbolic representation of the one below and that representation is perfect. That would be all…but it isn't.

Every information layer that we put over the real thing is not a perfect copy -that would be nonsense- but, instead, tries to improve something in safety or efficiency or, very often, claims to improve both. However, avoiding flaws in that process is almost impossible. That is the point where problems start and where hacker-type knowledge and frame of mind become highly desirable for a pilot.

The symbolic nature of IT-based systems makes their flaws hard to diagnose, since their behavior can be very different from that of mechanical or electrical systems. Hackers, good or bad, try to identify these flaws; that is, they are very conscious of this symbolic-layer approach instead of assuming an enhanced but perfect representation of the reality below.

What does a hacker frame of mind mean as a way to improve safety? Let me show two examples:

  • From cinema: the movie "A Beautiful Mind", devoted to John Nash and showing his mental health problems, shows at one moment how and why he was able to control them: he was confusing reality and fiction until the moment he found something that did not fit. It happened to be a little girl who, after many years, was still a little girl instead of an adult woman. That gave him the clue to know which part of his life was created by his own brain.
  • From Air Safety: a reflection taken from the book "QF32" by Richard de Crespigny: "Engine 4 was mounted to our extreme right. The fuselage separated Engine 4 from Engines 1 and 2. So how could shrapnel pass over or under the fuselage, then travel all that way and damage Engine 4? The answer is clear. It can't." Once there, a finding appears crystal clear: the information coming from the plane cannot be trusted, because in one of the IT layers the reality-representation correspondence has been lost.

Detecting these problems is not easy. It implies much more than operating knowledge and, at the same time, we know that nobody has full knowledge of the whole system, only partial knowledge. That partial knowledge should be enough to define key indicators -as happens in the examples above- to know when we are working with information that should not be trusted.
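As a toy illustration, such a key indicator can be as simple as cross-checking a reported value against an independent physical constraint; when the check fails, the symbolic layer should no longer be trusted. The function and layout below are hypothetical, a loose sketch of the QF32 reasoning, not any real avionics logic:

```python
# Hypothetical sanity check: a damage indication is compared against an
# independent physical constraint (the plane's layout) before it is trusted.

def plausible_damage_report(damaged_engine: int, shrapnel_side: str) -> bool:
    """Reject reports that violate the physical layout.

    Toy model of the QF32 reflection: shrapnel from the left side cannot
    plausibly cross the fuselage and damage the outer right engine.
    """
    left_engines, right_engines = {1, 2}, {3, 4}
    if shrapnel_side == "left":
        return damaged_engine in left_engines
    return damaged_engine in right_engines

# "Engine 4 damaged by left-side shrapnel" fails the check:
print(plausible_damage_report(4, "left"))   # → False: distrust the data layer
print(plausible_damage_report(2, "left"))   # → True: consistent with layout
```

The value of the sketch is the frame of mind it encodes: the indicator does not ask whether the system reports a failure, but whether the report itself is physically possible.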

The hard part: the indicators cannot be permanent; they must be adapted to every situation. That is, the pilot has to decide which indicator to use in situations not covered by procedures. This brings us to another issue: if a hacker frame of mind is positive for Air Safety, how do we create, nurture and train it? Let’s look again at the process a hacker follows to become a hacker:

First, hackers look actively for information. They don’t attend formal courses expecting the information to be handed to them; instead, they look for resources that allow them to increase their knowledge level. Applying this model to Aviation would mean wide access to information sources beyond what formal courses provide.

Second, hacker training is more similar to military training than to academic training: they fight to break into or to defend a system, and they prove their skills against an active opponent. To replicate such a model, simulators should include situations that trainers can imagine. The design should therefore be much more flexible: instead of simulators behaving exactly as a plane is supposed to behave, there should be room to include potential situations arising from information misrepresentation, or from automatic responses to defective sensors.

Asking for full knowledge of all the information layers and their potential pitfalls would be utopian, since nobody has that kind of knowledge, including designers and engineers. Everybody has partial knowledge. So, how can we do our best with this partial knowledge? By looking for a different frame of mind in the people involved -mainly pilots- and by providing the information and training resources that allow that frame of mind to be created and developed. That could mean a fully new training model.

Published originally in my Linkedin profile

Sterile discussions about competencies, Emotional Intelligence and others…

When the «Emotional Intelligence» fashion arrived with Daniel Goleman, I was among the discordant voices affirming that the concept and, especially, the use made of it, were nonsense. Nobody can seriously deny that personal features are a key to success or failure. If we want to call that Emotional Intelligence, that’s fine. It’s a marketing-born name, not very precise, but we can accept it anyway.

However, losing the focus is not acceptable… and some people lose it with statements like «80% of success is due to Emotional Intelligence, well above the percentage due to “classic” intelligence». We also lose focus with statements comparing competencies with academic degrees and the role of each in professional success. These problems should be analyzed in a different and simpler way: it’s a matter of sequence, not percentage.

An easy example: what matters more for a surgeon to be successful, the academic degree or the skills shown inside the OR? Of course, this is a tricky question, and the trick is highly visible: to enter the OR armed with a scalpel, the surgeon needs an academic qualification and/or a specific license. Hence, the second filter -skills- is applied only to those who passed the first one -academic qualification- and we cannot compare skills and academic qualifications in percentage terms.

Of course, this is an extreme case, but we can apply it to the concepts around which some sterile discussions appear. Someone can perform well thanks to Emotional Intelligence, but entrance to the field is granted by intelligence in its most commonly used meaning. Could we say that, once past an IQ threshold, we would do better to improve our interaction skills than -if that were possible- to gain 10 more IQ points? Possibly… but things don’t work that way. We define the access level through a threshold value and performance through other criteria, always comparing people who share something: they are all above the threshold. Then, how can anyone say «Emotional Intelligence is at the root of 80% of success»? As stated, it is false, but we can make it true by adding «if the comparison is made among people whose IQ is at least medium-high». The problem is that, with this addition, the statement is no longer false, but it becomes proof of simple-mindedness.
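The sequence-not-percentage argument can be made concrete with a small simulation. The sketch below is purely illustrative, with invented numbers: it assumes a hypothetical population where performance depends mostly on IQ, then applies an IQ-based access filter first. Among those who pass the filter, IQ varies little, so the «other» trait suddenly explains more of the remaining differences in performance, even though nothing about the population changed.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical population: IQ and an "EQ"-like trait, both normally
# distributed; performance weighted toward IQ (invented weights).
people = []
for _ in range(10_000):
    iq = random.gauss(100, 15)
    eq = random.gauss(100, 15)
    performance = 0.6 * iq + 0.4 * eq + random.gauss(0, 10)
    people.append((iq, eq, performance))

# In the whole population, IQ is the stronger correlate of performance.
r_iq_all = pearson([p[0] for p in people], [p[2] for p in people])
r_eq_all = pearson([p[1] for p in people], [p[2] for p in people])

# Now apply the access filter first: only people above an IQ threshold
# "enter the OR". Within this group, IQ variance is heavily restricted.
hired = [p for p in people if p[0] >= 115]
r_iq_hired = pearson([p[0] for p in hired], [p[2] for p in hired])
r_eq_hired = pearson([p[1] for p in hired], [p[2] for p in hired])

print(f"Whole population: r(IQ, perf)={r_iq_all:.2f}, r(EQ, perf)={r_eq_all:.2f}")
print(f"Above threshold:  r(IQ, perf)={r_iq_hired:.2f}, r(EQ, perf)={r_eq_hired:.2f}")
```

Among the filtered group, the EQ-like trait correlates with performance more strongly than IQ does, which is exactly the «80% of success» illusion: the comparison is only being made among people who already passed the IQ filter.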

We cannot compare the relative importance of two factors when one refers to job access and the other to performance once in the job. It’s like comparing bacon with speed, but using percentages to look more «scientific».