Aviation: The other war

Published on LinkedIn

Nowadays, it is easy to identify the two dominant powers among aviation manufacturers: Airbus and Boeing. However, these manufacturers have two powerful partners that are decisive in shaping the global aviation landscape: the European and North American regulators, EASA and the FAA.

The relationship between the two regulators has always been one of collaboration, though not without conflicts arising from support for «their» reference manufacturer, which may have led them, in more or less subtle ways, to take sides in the market. However, anyone entering the aviation world knows that they must pass the certifications and audits of one or both of the world's two largest regulators.

That situation could be changing, slowly and probably deliberately, at the hands of a third player that does not seem to be in a hurry. The first indication of that change was the appearance of the Chinese manufacturer COMAC. Taking advantage of the size of the Chinese domestic market, COMAC decided to manufacture an aircraft, the ARJ21, with no intention of certifying it for flight in world markets but simply for use on domestic flights. This aircraft would give the manufacturer experience so that it could later compete with the major manufacturers with its C919 model.

Airbus and Boeing, apparently at least, did not attach much importance to the first move, because it was restricted to the Chinese market, nor to the second, since they found it technologically far inferior to their own products. However, both manufacturers may be losing sight of something: perhaps it is not about competing with Airbus and Boeing but with the FAA and EASA. In other words, the CAAC (Civil Aviation Administration of China) might try to be the one setting global aviation standards in the near future.

In addition to COMAC's activity, in recent months there has been another move whose real significance has perhaps not been appreciated, since it has been attributed to the political tensions between China and the USA: the CAAC's refusal to follow EASA and the FAA in clearing the Boeing 737 MAX to fly again.

Both EASA and the FAA know that the 737 MAX should never have been certified, at least not under the type certificate issued for the Boeing 737 in 1967, and that doing so revealed a clear collusion between Boeing and the FAA. However, they faced a very difficult situation: if the thousands of aircraft already manufactured or on order from various airlines were not allowed to fly, a crisis in the aviation market could be triggered, with consequences difficult to calculate. Boeing's eventual bankruptcy could drag down many airlines left with aircraft they could not use and, in addition, the market would be undersupplied, since the other major manufacturer would not have the production capacity to fill the gap.

The CAAC had fewer commitments, since it has a large domestic market and much greater control over it than its FAA and EASA counterparts have over theirs. It therefore simply denied authorization for the 737 MAX to fly and did not follow the big regulators in their compromise solution.

At this point, many countries that are not under the authority of EASA or the FAA accept those regulators as their own references and simply adopt the regulations and standards coming from them. What would be the incentive to switch their reference to the CAAC? Let's go back to COMAC:

An aircraft certified to fly only in China under CAAC authority could automatically be cleared to fly in countries that adopted the CAAC as their reference authority. Africa, Central and South America, and large parts of Asia, where China has a strong influence, could look favorably on the ARJ21 for their domestic flights or for flights between countries that had also accepted the CAAC as a reference.

The later model, the C919, has been built with the aim of being certified for worldwide use. If that objective is achieved, its lower technological level could be more than compensated for by favorable pricing policies, making it accessible both to the markets that might be interested in the ARJ21 and to the low-cost segment of aviation in more developed countries.

The moves are slow but seem to have a clear direction: establishing the Chinese aviation authority as a world reference. The possibility of a contingency that could accelerate this process, such as a new serious event involving a 737 MAX, cannot be excluded. If that were to happen, the performance and motives of the current world reference aviation authorities would be called into question and, with that, the position of the third party in waiting would be strengthened.

The situation suggests that, in the near future, global aviation will not be a matter of two but of three and that, in the long term, it remains to be seen who will prevail.

BEHAVIORAL ECONOMICS: WHEN TECHNOLOGISTS FELL IN LOVE WITH PSYCHOLOGY

(Published in my newsletter and on my LinkedIn profile)

It is no coincidence that the first psychologist to win a Nobel Prize, Daniel Kahneman, made a contribution much appreciated by information technologists, especially those dedicated to artificial intelligence.

Few things are more attractive to a technologist than the idea that there are inherent biases in human information processing and that these biases arise from the need to achieve fast results, even if they contain errors. Moreover, such biases are attributed to shortcomings in processing large amounts of information quickly. Put another way, people process information incorrectly because they do not have the capacity of a computer to do so. Jumping from there to «If I have sufficient information processing capacity, that won't happen with a machine» is very tempting (although it recalls the old days of Minsky and so-called GOFAI, Good Old-Fashioned Artificial Intelligence) and, hence, Behavioral Economics has been warmly received in the world of technology. However, things are not so clear-cut.

Kahneman and Tversky, the main reference authors in this line, discovered a set of biases, named them in rather ingenious ways, and compiled them in «Thinking, Fast and Slow». After that, they were applauded and criticized, although perhaps not for the right reasons or, to be more precise, perhaps they were not fully understood either by those who applaud them or by those who criticize them.

If we start with the critics, it is often argued that the biases have been demonstrated under laboratory conditions and that, in normal life, such situations do not occur, or the biases simply do not appear. It is true: laboratory conditions force the appearance of the bias, and in the real world, where we usually operate within a context, they do not appear or appear in a much weaker form. Take, for example, one of the most popular stories in «Thinking, Fast and Slow», the story of Linda the bank teller:

Linda is a 31-year-old woman, single, outgoing, and very bright. She studied philosophy at university. As a student, she was concerned about discrimination and social justice issues and participated in demonstrations against the use of nuclear energy. According to the description given, rank the following eight statements in order of probability of being true. Assign the number 1 to the statement you think is most likely to be true, two to the next one, and so on until you assign 8 to the statement least likely to be true.

o Linda is a social worker in the psychiatric field.

o Linda is an insurance sales agent.

o Linda is an elementary school teacher.

o Linda is a bank teller and participates in feminist movements.

o Linda is a member of an association that promotes women’s political participation.

o Linda is a bank teller.

o Linda participates in feminist movements.

o Linda works in a bookstore and takes yoga classes.

Note the fourth and sixth options: obviously, the fourth is a subset of the sixth, and yet the fourth was generally ranked as more probable than the sixth. This is a clear error of logic but, at the same time, the experiment ignores an elementary fact of human relations: we understand that, when a person gives us a piece of information (in this case, the profile of Linda as a social activist), they do so because they consider it relevant, not with the sole purpose of deceiving us. Moreover, something similar happens with Kahneman and Tversky's biases as with Newton's law of universal gravitation: they describe information-processing strategies that work well in context and only fail out of context. Therefore, the underlying processing models cannot be rejected; at most, their validity must be assumed to be limited to a context.
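
The logical point can be made concrete with a minimal sketch; the numbers below are invented purely for illustration and are not taken from the original study:

# For any assumed probabilities, the people who are both bank tellers and
# feminists form a subset of the bank tellers, so the conjunction can never
# be more probable than the single statement.
p_teller = 0.05                  # assumed probability that Linda is a bank teller
p_feminist_given_teller = 0.30   # assumed probability that she is a feminist, given she is a teller
p_conjunction = p_teller * p_feminist_given_teller
print(p_conjunction <= p_teller)   # prints True, whatever values are assumed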

Let us now look at the side that applauds Behavioral Economics: even admitting that the experimental design forces the conditions toward the extreme rather than reflecting a natural setting, the existence of the bias is demonstrated and, with it, that the human processing model is not universally applicable but limited to a context. Returning to the Newton analogy, technologists believe they have the equivalent of a theory of relativity, with a broader scope of application than Newton's model and the capacity to replace it. Behavioral Economics and its offshoots provided all the arguments required to reach this conclusion.

However, both critics and supporters seem to accept that human reasoning is «quick-and-dirty», and the praise or the criticism depends on the importance, or lack of it, attached to that fact. The question answered neither by those who accept the model nor by those who reject it is whether this biased processing model is the only possible model of human reasoning or whether there are other, different ones that it does not capture. Kahneman and Tversky themselves tried to answer this question by proposing the existence of a System One, automatic, and a System Two, deliberative. The first would be subject to the identified biases and the second would process information in a manner similar to that of a computer, i.e., following the strict rules of formal logic, although, they qualify, it is always susceptible to being fooled by System One.

By proclaiming the existence of a slow, rational, and safe system, they once again gave technologists arguments to promote artificial intelligence as an alternative to human intelligence. It is not only a matter of information technologists; if one reviews the declassified CIA texts on information analysis techniques, one can find an ode to rational processing that evaluates multiple options and assigns weights to variables while ignoring the analyst's intuition and experience on the grounds that they may introduce bias. Processing information along strictly rational lines is all the rage but… what if there were a different and valid way?

What if there were a «system three» and even a «system four», free of the biases proclaimed by Behavioral Economics but, at the same time, not trying to reproduce a canonical rational processing model? What if such systems could achieve results perhaps unattainable for a formal logical model, especially when those results are required under time constraints and in the face of unforeseen situations?

Such systems do exist, and perhaps one of the authors who has identified them most clearly is Gary Klein: They are not «quick-and-dirty» processes motivated by a lack of processing capacity or memory, nor are they a machine-reproducible process, but rather something different, which introduces into the equation elements of past experience, extrapolations of situations taken from other fields and sensory experiences not accessible to the machine.

It is paradoxical that one of the pioneers of artificial intelligence, Alan Turing, was living proof of this kind of processing not accessible to the machine: Turing spectacularly reduced the number of theoretically possible settings of the Enigma cipher machine by introducing a variable no one had thought of, namely that the original language of the messages was German, and this made it possible to work with the structures of the language and reduce the options.

When a glider pilot, or the pilot of US1549, concludes that he cannot reach the runway, he does not do so after calculating wind speed and direction and the glide ratio of the aircraft corrected for weight and configuration, but by looking at the runway threshold and seeing whether it rises or sinks relative to a reference point on the windshield: if the threshold rises above the reference point, he will not make it; if it sinks, he will. It is that easy.
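
A minimal numerical sketch of why that visual cue works, with invented figures for height, distance, and glide ratio rather than values from any flight manual: on a steady glide, the angle at which the pilot looks down at the threshold grows when the threshold is within gliding range and shrinks when it is not, which is exactly what makes the point appear to sink or rise in the windshield.

import math

def threshold_angle(height_m, distance_m):
    # Depression angle (radians) from the pilot's eye down to the runway threshold.
    return math.atan2(height_m, distance_m)

def threshold_motion(height_m, distance_m, glide_ratio, sink_rate=2.0, dt=1.0):
    # Advance one second of a steady glide and report whether the threshold
    # appears to sink (within reach) or rise (out of reach) in the windshield.
    forward_speed = glide_ratio * sink_rate   # metres forward per second
    before = threshold_angle(height_m, distance_m)
    after = threshold_angle(height_m - sink_rate * dt, distance_m - forward_speed * dt)
    return "sinks: reachable" if after > before else "rises: out of reach"

# Invented example: 300 m of height and a 10:1 glide ratio reach about 3,000 m.
print(threshold_motion(300, 2500, 10))   # threshold sinks -> the pilot can make it
print(threshold_motion(300, 4000, 10))   # threshold rises -> the pilot cannot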

When a baseball player tries to reach the point where the ball is going to land, arriving at the same time as the ball, he does not calculate angular trajectories and velocities; he adjusts his running speed so as to keep a constant angle of gaze to the ball.

When an aircraft's systems are damaged by an engine explosion, as in QF32, and the system throws out hundreds of error messages, the pilots try to deal with them until they discover one that is most probably false: the failure of an engine on the opposite wing. The pilots conclude that, if shrapnel from the exploded engine had reached the other side of the aircraft, it would have had to pass through the fuselage and, from there, they decide to fly the plane the old-fashioned way, ignoring the system's error messages.

When, in the middle of World War II, despite having broken the Japanese code system, the Americans did not know which location the messages referred to, although they suspected it might be Midway, they sent a message stating that Midway had water problems. When, after intercepting it, the Japanese referred to water difficulties at the location mentioned in their previous messages, it became clear what they had been referring to. The preparation for the Battle of Midway has its origin in that simple trick, difficult for a machine to imitate.

Even in the cinema we have a case, whether real or apocryphal we do not know, in the movie «A Beautiful Mind» about the life of John Nash: at a particularly dramatic moment, Nash concludes that the characters he is seeing are imaginary because he notices that a little girl who is part of the hallucinations has not grown over several years and remains exactly as she was when he first saw her.

These are all well-known cases, and many more could be added; incidentally, one of the areas where most examples can be found is the activity of hackers. In any case, all these situations have something in common: information processing that does not follow the canonical rational model is not always something to be rejected because it is subject to biases. It also brings in elements that are not accessible to an information system, either because the system lacks the sensory apparatus capable of detecting them or because it lacks the experience that allows past situations to be extrapolated, reaching conclusions that are different and, of course, much faster than anything an advanced information system can deliver.

Technologists have seen in Behavioral Economics an almost explicit recognition of the superiority of the machine and have therefore welcomed it with enthusiasm. However, for psychologists, Behavioral Economics has a familiar flaw: it suffers from the same problem as earlier currents in psychology. Psychoanalysis may boast of discovering unconscious phenomena… but making these the core of the human psyche does not go very far; the psychology of learning has worked intensively with the mechanisms of classical and operant conditioning… but trying to make learning the only engine of the psyche does not make much sense; Gestalt discovered a set of laws related to perception… but trying to extrapolate them to the whole of human functioning seems excessive. Now it is the turn of Behavioral Economics. Do cognitive biases exist? Of course they exist. Does that error-ridden, quick-and-dirty processing model represent the only alternative to a supposedly always desirable canonical rational processing, amenable to being canned in a machine? No.

Let us not forget that even quick-and-dirty processing works correctly in most cases, although it does not work in a vacuum; it needs a context known to the subject. Secondly, there is another processing model that draws on experience crystallized into tacit knowledge, which as such is difficult to express and even more difficult to «can» in algorithms, and on a sensory apparatus that is specifically human. Reducing human information processing to the narrow channels set by Behavioral Economics is just a way of caricaturing an adversary, in this case the human, against which the power of the machine is to be set. That said, we should remember Edgar Morin and his idea that the machine-machine is always superior to the man-machine. If we insist on stripping humans of their specific capacities, or on despising them, we might as well stick with the machine.

THE HIDDEN SIDE-EFFECTS OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence, since its inception, has aroused enormous enthusiasm both for and against. Both sides incur an essentialism that is difficult to justify: the former because, by equating the human being with a processor, they conclude that the state of the art in information technology offers better options; the latter because they consider that the human being provides elements that are impossible for the machine to achieve.

However, in the current state of the art, this discussion, which is still going on, is superficial:

Is information technology capable of processing infinitely more information and much faster than any human being? Clearly, yes.

Is a human being capable of dealing with situations that are impossible for the most advanced technological products? Clearly, yes.

Can a niche of specialization be found in which both can coexist? That is more debatable and hence the idea of hidden effects.

Let's take a well-known example: weather forecasting. An expert meteorologist can detect patterns that may escape an advanced information system; however, the advanced information system can process many more sources of information than the meteorologist can, especially under the time limits that define the validity of a forecast.

The expert meteorologist can be wrong, but he can also achieve results beyond the reach of an information system, thanks to his knowledge of patterns and his ability to «construct» an expected evolution. However, contradicting the forecasting system can represent a significant personal risk because, in case of error, someone will point out that the system had warned him.

A less expert meteorologist, already trained in an information systems environment, will be more likely to accept the system's results passively and will find it more difficult to acquire the knowledge of an expert meteorologist. He will lack the ability to select the relevant data sources and reach his own conclusions and, worse, his experience working with an information system will not give him that ability. He will never reach the level of an expert but, at most, that of an advanced operator of information systems. Moreover, a change of information system may mean losing a set of «tricks of the trade» learned from observation; this is operational knowledge that cannot be transferred to another environment, even when the «new environment» means nothing more than a change of technological supplier.

Of course, meteorology is only an example of something that happens in aviation, maritime navigation, mechanics, finance, and virtually any field imaginable.

«Rational decision making» has become a standard even in fields where it is not the appropriate practice; authors such as Gigerenzer, Klein, or Flach have pointed out environments where that information-processing model does not work. Instead, heuristic models are used that draw on accumulated experience in the form of tacit knowledge, difficult to express and even more difficult to formalize; despite that, these are the models that have resolved situations in which a «rational» model would have ended in paralysis by analysis, even when an urgent response was required and it was a matter of life and death.

Well-known cases in aviation such as American Airlines 96, United 232, QF32, US1549, the Gimli Glider, and many others clearly show the value of the expert. However, the risk today is that generations trained in advanced information technology environments will never reach that level of expertise.

The French sociologist Edgar Morin warned that the machine-machine will always be superior to the man-machine. However, by depriving human beings of the ability to acquire real knowledge, we are turning them into man-machines.

Can artificial intelligence coexist with the generation of experts capable of covering the inadequacies of the former? In theory, yes. The facts are showing that this option is very difficult because they are not independent developments.

A person trained in a discipline that artificial intelligence has entered faces an increasingly difficult path to becoming a real expert, i.e., to choosing the data he needs, processing it the way a human expert would, and reaching his own conclusions.

The advent of the new generation of information systems equipped with artificial intelligence is leading to tacit knowledge being treated as obsolete knowledge, although they are two completely different things. The ladder that gives access to the level of expert, based on the number and diversity of situations experienced, is being destroyed and, as a result, we may lose the capacity to deal with situations that are impossible for any information system, no matter how advanced.

Scenarios requiring immediate action, with fragmentary and often contradictory data are not exactly the area in which information systems excel. In these situations, human beings show capabilities that are beyond those of information systems -because they process these situations in a radically different way- but for this to happen, it is necessary to have experts and not the machine-men mentioned by Morin.

The great problem, the hidden side effect of advanced information systems, is precisely that they are destroying the capacity to generate experts and establishing in its place a reign of technologically advanced mediocrity. When a highly advanced artificial intelligence system based on quantum computing has to make a decision in a situation like the ones described above, we will remember the missing experts.

ABOUT HYPER-LEGITIMACY AND COMPLEXES

Today, with the new year still at its beginning, we are witnessing in Spain the old rivalry between left and right, driven by the most extremist factions on both sides, in a dynamic from which, if it persists, nothing good can be expected.

History has taught us that both sides have much to regret or hide about their past: on the right, the pursuit of the interests of the most privileged, supported by force; on the left, the savagery of revolutions which, in the end, placed at their top a new and equally privileged class whose interests are no more legitimate than those of their predecessors.

However, nowadays we can see that the two sides have not come to terms with their history in the same way; the left has clearly won what has been called «the narrative». While there is a right wing repentant for the sins of its predecessors, the left places its own on pedestals, as supposed forerunners of democracy, even in those cases where the only things they brought were dictatorship, ruin, and death.

Recent episodes in Spain of removing statues while keeping others with equal or greater merits for such removal, of changing street names, and even of transferring Franco's remains speak for themselves of a hyper-legitimacy claimed by the most extremist leftists that is hardly supported by the facts.

Well into the 21st century, the ideological debate should not consist of the most extreme right and left wings competing to drag up their opponents' past, nor should anyone have to apologize for existing. Each side has an ideological legitimacy that certainly does not reside in the past. Perhaps it is time for both sides to show their real legitimacy while they send to the garbage can of history the principles and practices of their ideological great-great-grandfathers and of those present-day leaders who pretend to behave like them.

A modern liberal right wing is legitimate because, as a guiding principle, it seeks the common good and not the maintenance of situations of privilege; it trusts that individual initiative with few restrictions will bring a better life for the great majority, and it puts its best efforts into that.

A left wing is legitimate when, likewise, it seeks the common good and not a mere subversion of the existing order in order to place its own members at the head of a new and equally undesirable one; a legitimate left and a legitimate right coincide in ends, though not in means. Thus, while the right trusts in the individual, the left is prone to social engineering and direct action to alleviate the situation of the less fortunate, preventing them from being abandoned to their fate.

Naturally, any non-sectarian reader, whatever his orientation, will see that the two options are not only legitimate but compatible, and that alternation in power has a crucial role to play: correcting undesirable drifts which, in one case, could lead to the abandonment of whole layers of the population and, in the other, could produce elephantine States and ever larger groups willing to live off the resources those States take from the productive part of the population.

At the same time, there are also illegitimate political practices: those led by politicians who make their living from confrontation and see the common good as an empty abstraction, preferring to pursue their own. That situation, very present right now, turns the political chessboard into modern Augean stables where every kind of dirt has its seat and whose cleaning, even so, would require no river and no Hercules, only simpler, more precise, and more modest actions.

Pointing to individuals or gangs of opportunists as the culprits, even though they clearly exist and are guilty, does not put us on the road to solving the problem. Such individuals are not the key, which does not mean they should not be stopped; they are mere anecdotes stemming from basic errors in a process that, in Spain, started in 1977. The much-praised Spanish political transition introduced undesirable elements that constituted the germ of the current situation, granting legitimacy to anyone who claimed to be against the dictatorship, no matter how questionable their own actions might be:

The design of the transition not only led to an unwieldy territorial arrangement; it is worth remembering that ETA prisoners with blood crimes were even pardoned, many of whom returned to their former activity. Situations of privilege were consecrated, both economic and electoral, which contradicted the supposed equality of all Spaniards proclaimed in the subsequent Constitution.

Once those principles were in place, the political pressure toward ever greater power for successive executives, and the concessions to groups favored by the electoral system to form blocking minorities, started the process. On top of a bad start, the Constitution, with the good offices of the Constitutional Court, has been twisted until it has become virtually unrecognizable. Today we can find laws that require an advanced exercise in sophistry to be accepted within the constitutional framework.

Those have been the basic lines of a process that led to the present moment where, in addition, there are political actors -some of them inside the Government- who explicitly seek to liquidate the regime born in 1977 in order to replace it with adventures whose outcome cannot even be described as uncertain.

Relying on the old parliamentary arithmetic and on the old parties, or the new ones with old principles and practices, on both sides of the political spectrum may be a mistake. Today an «apolitical» party may be required, without a government program, willing to lend its support to the party with the most votes if it commits itself to making the changes leading to a real democracy, with the design errors of 1977 corrected.

It would be unrealistic to expect a solution from those who benefit, at the expense of society as a whole, from the current morass, as the present Government and its supporters do; nor does it seem that a solution can be expected from those, now in opposition, who had their chance in 2012. They obtained more central and regional power than anyone in the recent past and yet behaved like temporary tenants, without daring to proclaim their principles, if they existed, and limited themselves to achieving some economic respite. Finally, neither should we turn back the clock nor resort to «we lived better under Franco».

Perhaps the most hopeful experiments in this long period have been the early Ciudadanos and UPyD parties; it is true that UPyD was born almost as a PSOE in exile, but it progressively focused on core issues and, perhaps for that reason, it was made to disappear. In the case of Ciudadanos, its origin is impeccable, but the ambition of its leaders turned the party into something unrecognizable today.

There are figures who could lead the rebirth of principled political activity, but none of them currently occupies a prominent position within their parties (some once did), if they even remain in them. Inviting these figures to act is not about bringing in more people to fight for power on the current political chessboard but, precisely, about changing the rules of a game whose original design is wrong. Its subsequent evolution has turned it into perfect terrain for the «gamblers of the Mississippi», to use former vice president Alfonso Guerra's expression, to prosper at the expense of a society whose welfare they supposedly manage.

It is a matter of changing the rules of access and permanence in political activity, ensuring equality among all Spaniards, and that both the rules and the zeal in their application are the same for all.

Undoubtedly, the design of 1977 is coming apart at the seams in many places, and its subsequent evolution was already foreseeable for some; a change is necessary, but it does not consist of returning to the state of affairs before that date; it consists of reinforcing what was done well then and eliminating what was done badly or very badly.

Life after a Black Swan

COVID-19, or the coronavirus, could be considered, at least partially, a genuine Black Swan. It is true that the first condition for qualifying as a Black Swan is being impossible to foresee and that, in this case, we can point to a 2015 TED talk in which Bill Gates warned that we were not prepared to manage such an outbreak. Five years later, the facts have shown that we were indeed not ready and, for practical purposes, it can be called a Black Swan: a false one if you like, but a Black Swan all the same.


The behavior of many governments in the face of the outbreak can be rated from poor management to openly criminal and, as a consequence, we have entered a new situation whose consequences are hard to foresee.

Unless a vaccine or a treatment is found in a very short time, one thing is certain: the world will not go back to day zero. No government will have the «Black Swan» excuse before the next outbreak but, beyond that, some changes in everyday behavior point to other changes that could last well beyond the end of the outbreak.

The restrictions on movement will probably force the real dawn of teleworking, online training, and many other related activities that, during the outbreak, have been used as an emergency resource but could now become the standard. Of course, this can change our work habits but, at the same time, it will have a major impact on real estate and travel and… it has shown that the Internet, a basic tool for so many activities, is far from being as robust as claimed: the increased use of digital platforms such as Netflix or Amazon during quarantine periods strained the capacity of some of the main nodes.

However, these facts, together with the impact on the world economy, could be read as the trivial lessons of the Black Swan. There are other potential effects that could be far greater.

One of them is related to the start of the outbreak. As with Chernobyl, where the Soviet Government denied everything until the contamination crossed its borders, the denial by the Chinese Government, which allowed mass gatherings that contributed to spreading the virus once it knew of its existence, could have internal and external consequences.

Could it lead to major internal turmoil? If so, the consequences are hard to foresee, since China is nowadays the world's factory. What about the external situation? Some Governments are pointing to China as responsible for the outbreak and its consequences because of its behavior during the first 20 critical days. Will these accusations go beyond public statements or will they remain inconsequential?

There is still a third kind of effect on the future world. Many optimists say that the world was never better and, to defend this position, they show statistics about hunger, health, life expectancy, wars, and much more. They think the media focus on the negative side because it sells better than the positive and that this gives a distorted view of the world. That may be, but there is still another option:

There are many powder kegs in the world and we do not know which one will explode, what the consequences of the explosion will be, or whether it could set off new explosions. We should remember that, when WWI started, few people were aware of it, given the improbable chain of events that followed an assassination in Sarajevo. Indeed, the very name of the war came only after it was over.

To use the same term, we are running a farm of Black Swans, and we do not know what consequences may come from it. So, without denying that the material side may be better than in any other human age, the stability of that situation is highly questionable.

Peter Turchin warned about this instability and why we should expect a major change in the world as we know it. Turchin is fond of mathematical models and, as such, has defined elegant algorithms and graphics. However, for those who think that numbers are a language and, as such, can mislead people under the flag of «objectivity», the real value of Turchin's model is not in the algorithms; it is in the dynamics. He was very original in observing variables that went unnoticed by many other researchers and in explaining why these variables could be used as indicators of a major trend.

Turchin expected a major change in the 2020s as a consequence of being in the final stage of decomposition. Aware that I am simplifying his model perhaps beyond any legitimate limit: a major event -like a war- brings a kind of vaccination that can last for about two generations, leading to a period of prosperity. After that, decomposition begins, driven by people who did not live through that major event, leading to new major events and repeating the cycle.
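Just to make the shape of that cycle visible, here is a minimal sketch: my own drastic simplification for illustration, with invented constants, not Turchin's actual model, variables or equations.

```python
# Toy sketch of the cycle described above: prosperity while the major event is
# in living memory, decomposition once memory fades, then a new major event.

GENERATION = 25          # hypothetical length of a generation, in years
MEMORY = 2 * GENERATION  # the "vaccination" effect lasts about two generations
LAG = 30                 # hypothetical years of decomposition before a new event

def phases(total_years):
    timeline = []
    since_event = 0
    for year in range(total_years):
        if since_event < MEMORY:
            phase = "prosperity"      # the major event is still in living memory
        elif since_event < MEMORY + LAG:
            phase = "decomposition"   # memory fades, instability accumulates
        else:
            phase = "major event"     # the cycle restarts
            since_event = 0
        timeline.append((year, phase))
        since_event += 1
    return timeline

previous = None
for year, phase in phases(180):
    if phase != previous:             # print only the transitions
        print(year, phase)
        previous = phase
```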

COVID-19 and its consequences will be, for many people, a major event in their lives. The hard question is whether it will accelerate the decomposition process or, instead, reset the model, starting a new positive cycle driven by people who suddenly discovered that there are real problems far more important than those invented by some political leaders.

Some changes during the outbreak, even in the language of politicians, and the way people rebuke those who try to drag attention back to the issues of their political agendas, are quite revealing. Something is already changing; it is not the virus. It is the change in living conditions for many of us, and how that change has made many people build a new hierarchy of values.

Nietzsche said that what does not kill us makes us stronger. At this moment, we do not know whether we will die -as a society- because of this crisis or become stronger, at least for a time. One thing is sure: it will not be inconsequential.

Artificial Intelligence, Privacy and Google: An explosive mix

A few days ago, I learned about a big company using AI in its recruiting processes. Not a big deal, since it has become common practice; if something can be criticized, it is the fact that AI commits the same error as human recruiters: the false-negative danger, a danger that goes far beyond recruiting.

The system can add more and more filters and learn from them but... what about the candidates who were eliminated despite the fact that they could have been successful? Usually, we will never know about them and, hence, we cannot learn from our errors in rejecting good candidates. We know of cases like Clark Gable or Harrison Ford, who were rejected by Hollywood, but following rejected candidates is usually not feasible; so the system -and the human recruiter- learns to improve the filters, but it is not possible to learn from the wrong rejections.

That is a luxury companies can afford as long as it affects jobs where the supply of candidates clearly exceeds the demand for them. However, the error is much more serious if the same practice is applied to highly demanded profiles. Eliminating a good candidate is, in this case, much more expensive, but neither the AI system nor the human recruiter has the opportunity to learn from this type of error.
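The asymmetry can be shown in a few lines. This is a minimal, entirely synthetic sketch (invented scores, thresholds and a made-up notion of a "good hire", not any real recruiting system): the filter only receives outcome labels for the candidates it lets through, so wrongly rejected candidates never generate a training signal.

```python
import random

random.seed(1)

def make_candidate():
    skill = random.random()                          # hidden true ability
    cv_score = 0.7 * skill + 0.3 * random.random()   # observable, noisy proxy
    return {"skill": skill, "cv_score": cv_score}

candidates = [make_candidate() for _ in range(1000)]

THRESHOLD = 0.55   # hypothetical screening cut-off on the observable proxy
GOOD = 0.7         # hypothetical definition of a "good hire" on true ability

accepted = [c for c in candidates if c["cv_score"] >= THRESHOLD]
rejected = [c for c in candidates if c["cv_score"] < THRESHOLD]

# Feedback exists only for accepted candidates: bad hires become visible...
visible_false_positives = sum(1 for c in accepted if c["skill"] < GOOD)

# ...while good candidates who were screened out never generate any signal.
# They can only be counted here because this is a simulation with known ground truth.
invisible_false_negatives = sum(1 for c in rejected if c["skill"] >= GOOD)

print(f"accepted {len(accepted)}, visible false positives: {visible_false_positives}")
print(f"rejected {len(rejected)}, invisible false negatives: {invisible_false_negatives}")
```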

Only companies like Google, Amazon or Facebook, with an impressive amount of information about many of us, could afford to learn from the failures of the system. It is true that, as common practice, many recruiters «google» the names of candidates to get more information, but these companies keep much more information about us than what appears when someone searches for us on Google.

So, since these companies have an asset that other companies, including big corporations, cannot match, we could expect them to be hired in the near future to perform recruiting, evaluate insurance or credit applications, assess potential business partners and much more.

These companies, having so much information about us, can share Artificial Intelligence practices with their potential clients but, at the same time, they keep a treasure trove of information that those clients cannot have.

Of course, they have this information because we voluntarily give it away in exchange for some advantages in our daily lives. At the end of the day, if we are not involved in illegitimate or dangerous activities and have no intention of being so, is it so important that Google, Amazon, Facebook or whoever knows what we do and even listens to us through their «personal assistants»? Perhaps it is:

The list of companies sharing this feature -almost unlimited access to personal information- is very short. Hence, any output they produce could affect us in many activities, since any company trying to gain an advantage from these data will not find many potential suppliers.

Now, suppose an opaque algorithm, whose inner workings nobody knows exactly, decides that you are not a good option for a job, an insurance policy, a credit... whatever. Since the list of information suppliers is very short, you will be rejected again and again, even though nobody will be able to give you a clear explanation of the reasons for the rejection. The algorithm could have latched onto an irrelevant behavior that happens to correlate strongly with a potential problem and, hence, you would be rejected.
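A minimal, entirely synthetic sketch of that mechanism (invented feature names and probabilities, not any real scoring system): if an irrelevant behavior happens to correlate with defaults in the training data, a naive scorer will penalize it, and everyone who shares that behavior inherits the rejection.

```python
import random

random.seed(7)

# Synthetic population: "late_nights" is irrelevant to repayment, but in this
# sample it happens to correlate with default because of how the data was drawn.
population = []
for _ in range(2000):
    risky = random.random() < 0.2                  # hidden true risk
    late_nights = risky or random.random() < 0.1   # spurious overlap by construction
    defaulted = risky and random.random() < 0.8
    population.append({"late_nights": late_nights, "defaulted": defaulted})

def default_rate(group):
    return sum(p["defaulted"] for p in group) / max(len(group), 1)

with_flag = [p for p in population if p["late_nights"]]
without_flag = [p for p in population if not p["late_nights"]]

print("default rate with flag:   ", round(default_rate(with_flag), 3))
print("default rate without flag:", round(default_rate(without_flag), 3))

# The scorer turns the correlation into a rule: anyone with the flag is rejected,
# including people whose behavior is completely unrelated to repayment.
def opaque_score(person):
    return "reject" if person["late_nights"] else "accept"

innocent = {"late_nights": True, "defaulted": False}
print("innocent applicant:", opaque_score(innocent))
```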

Should this danger be disregarded? Companies like Amazon had their own problems with their recruiting systems when they unintentionally introduced racial and gender biases. There is no valid reason to suppose that this cannot happen again with much more data, affecting many more activities.

Let me share a recent personal experience with this type of error: I was blacklisted by an information-security provider. The apparent reason was a page that opened automatically every time I launched Chrome, with no further action on my part. That generated repeated accesses, which the website read as an attempt to spy on it, and they blacklisted me. The problem was that the provider had many clients: banks, transportation companies... and even my own ISP. All of them denied me access to their websites based on the wrong information.

This is, of course, a minor sample of what can happen if Artificial Intelligence is applied to vast amounts of data by the few companies that have access to them and, for reasons nobody can explain, we have to face rejection in many activities; a rejection that would be at once universal and unexplained.

Perhaps the future was defined not in Orwell's «1984» but in Kafka's «The Trial».

Goodbye, AliExpress (Alibaba)

The first condition for online commerce to work is trust. If you cannot trust a place, that place will die.

A few weeks ago, I bought a bone-conduction headset through AliExpress. One of the sides did not work from the beginning. I simply asked for a replacement that worked.

The supplier asked for proof of the problem. However, it is quite difficult to show in a recording that one side of the headphones works while the other does not. I sent two videos, one to the supplier and the other to AliExpress.

In the meantime, the supplier asked me to withdraw the claim, citing «personal reasons» instead of the real problem, that is, a defective product.

Finally, AliExpress decided that I had not proved the problem and that I would receive neither a refund nor a replacement.

After that experience, I can only say «Goodbye, Alibaba». I will not buy anything from you again, and I will invite my family and friends to do the same.

Against the current: Freedom of information or freedom to defame?

I believe that at certain moments it is necessary to make one's personal position clear to avoid confusion, so here it goes:

I do not like the way the PSOE leader, Pedro Sánchez, came to power in Spain; I do not like his handling of Spain's main issues; I do not like his partners; I do not like his foot-dragging over calling elections; and the concealment of his doctoral thesis seems to me a suspicious matter about which time will tell, in the short term, whether there is something real or just a storm in a teacup. Clear enough, right?

However, I find it outrageous that the big issue of the moment is none of that, but rather that he has dared to demand a retraction from some media outlets, and how that demand supposedly threatens freedom of information. Journalists and politicians have fallen, self-interestedly, into this trap and are trying to make the rest of us fall into it too. Well, no:

Freedom of information is not a journalist's right to say whatever they please, but a citizen's right to receive truthful information, and from whatever perspective that citizen chooses.

If a journalist defames someone, they are as subject to the law as anyone else and cannot invoke freedom of information as if it were a letter of marque.

If someone, in this case a head of Government and regardless of one's opinion of him, believes he has been defamed by a media outlet, he is perfectly within his rights to demand a retraction.

Does he want to take the matter to court? Go ahead; it is his right, even though it is a risky move because, if he loses, there will be no earthly way for him to cling to a seat he reached in such an irregular manner.

Certainly, the ones who cannot and should not try to stop him are the media, invoking a freedom of information that they apparently consider synonymous with the freedom to say whatever they want.

I insist: I am not defending the man, but his full right to go to court if he believes he has been defamed. Whether he has been or not, we shall see; but there is no room for hand-wringing about an attack on freedom of information when that right is exercised.

THE HARD LIFE OF AVIATION REGULATORS (especially regarding Human Factors)

There is a very widespread mistake among aviation professionals: the idea that regulations set a minimum level and, hence, that accepting a plane, a procedure or an organization means it is barely at the minimum acceptable level.

The facts are very different: a plane able to pass all the requirements beyond any reasonable doubt, without further questions, would not be merely an «acceptable» plane. It would be an almost perfect product.

So, where is the trick, and why are things not as easy as they seem?

Basically, because the rules are not as clear as claimed, leaving in some cases wide room for interpretation, and because, in their crafting, they mirror the systems they describe and, hence, integration is lost along the way.

The acceptance of a new plane is such a long and complex process that some manufacturers give up. For instance, the Chinese COMAC built a first plane certified by the Chinese authorities to fly only in China, and the French Dassault decided to stop everything and jump directly to the second generation of a plane. That may be judged a failure, but it is always better than dragging design problems along until they prove that they should have been managed. We cannot help remembering cases like the cargo door of the DC-10 and the consequences of a problem already known during the design phase.

The process is so long and expensive that some manufacturers stay attached to very old models with incremental improvements. The Boeing 737, which started flying in 1968, is a good example. Its sibling, the B777, flying since 1995, still carries an old Intel 80486 processor, but replacing it would be a major change, even though Intel stopped producing it in 1997.

The process is not linear, and many different tests and negotiations are required along the way. Statements of similarity with other planes are frequent, and the use of standards from engineering or military fields is commonplace when something is not fully clear in the main regulation.

Of course, some of the guidelines can contradict others, since they address different uses. For instance, a good military standard widely used in Human Factors (MIL-STD-1472) includes a statement about required training, indicating that it should be as short as possible to keep the system fully operational. That can be justified if we think of environments where a lack of resources -including knowledge- or even physical destruction could happen. It is harder to justify as a rule in passenger transportation.

Another standard can include a statement about the worst possible scenario for a specific parameter, but the parameter can be more elusive than that. The very idea of a worst possible scenario may be meaningless and, if the manufacturer asserts it and the regulator buys it, a plane could be flying legally but with serious design flaws.

Regulations about Human Factors were simply absent a few years ago, and HF mentions were scattered through the technical blocks. That partially changed when a new rule for aircraft design appeared, addressing Human Factors as a block of its own. However, the first attempts went little further than collecting all the scattered HF mentions in a single place.

Since then, this has been partially corrected in the Acceptable Means of Compliance, but the technical approach still prevails. Very often, manufacturers assemble HF teams made up of technical specialists in specific systems instead of attempting a global, transversal approach.

The regulators take their own precautions and repeat mantras such as avoiding fatigue levels beyond what is acceptable, or requiring that planes not demand special alertness or special skill levels to manage a situation.

These conditions are, of course, good, but they are not enough. Compliance with a general condition like this one from EASA CS-25, «Each pilot compartment and its equipment must allow the minimum flight crew (established under CS 25.1523) to perform their duties without unreasonable concentration or fatigue», is quite difficult to demonstrate. If there is no visible mistake in the design, trying to meet this condition is more a matter of imagining potential situations than of analysis and, as should be expected, the whole process is driven by analysis, not by imagination.

Very often, designs and the rules governing them try to prevent the accident that happened yesterday, but a strictly analytic approach makes it hard to anticipate the next one. Who could anticipate the importance of control feedback (present in every old plane) until a related accident happened? Who could anticipate, before AA191, that removing the mechanical lock on the flaps/slats might not be such a sound idea? Who could think that different presentations of the artificial horizon could lead to an accident? What about different automation policies, and pilots disregarding a fact that could safely be disregarded in other planes but not in that one?...

It is still in the news that a Boeing factory was attacked by the WannaCry virus, and the big question was whether it had affected the systems of the B777s manufactured there. The B787 is said to have 6.5 million lines of code. Even though the B777 is far below that number, checking it cannot be easy, and it is still harder if the computers calculating manufacturing parameters must also be checked.

That complexity in the product leads not only to invisible faults but to unexpected interactions between theoretically independent events. In some cases, the dependence is clear: everyone knows that a stopped engine can mean hydraulic, electric, pressure and oxygen problems, and manufacturers try to design alerting systems that point to the root problem instead of flagging every single failure. That is fine but... what if the interaction is unexpected? What if a secondary problem -oxygen scarcity, for instance- is more important than the root problem that caused it? How are we going to define the right training level for operators when there is not a single person who understands the full design?
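As an illustration only -a toy model with invented failure names, not any real avionics alerting logic- root-cause consolidation can be sketched as a dependency map: failures explained by an already-active root cause are suppressed, which keeps the crew focused but can also hide a secondary problem that deserves attention on its own.

```python
# Hypothetical dependency map: which failure is normally explained by which root.
CAUSED_BY = {
    "HYDRAULIC_LOW": "ENGINE_1_FAIL",
    "GEN_1_OFF": "ENGINE_1_FAIL",
    "BLEED_1_LOW": "ENGINE_1_FAIL",
}

def consolidate(active_failures):
    """Show only failures whose cause is not itself an active failure."""
    shown = []
    for failure in active_failures:
        root = CAUSED_BY.get(failure)
        if root in active_failures:
            continue  # suppressed: explained by an already-active root cause
        shown.append(failure)
    return shown

failures = ["ENGINE_1_FAIL", "HYDRAULIC_LOW", "GEN_1_OFF", "BLEED_1_LOW"]
print(consolidate(failures))  # ['ENGINE_1_FAIL']

# The weak spot the text points at: if one of the "secondary" failures becomes
# the real threat, this logic still hides it behind the root cause.
```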

In the technical parts, complexity is already a problem. When we add the human element, its characteristics and what is demanded of operators, the answer is anything but easy. Claiming «lack of training» every time something serious happens and adding a patch to the existing training is not enough.

A more integrated approach, less shy about using imagination throughout the process, has long been advisable; now it is a must. Operators do not manage systems. They manage situations and, in doing so, they may use several systems at the same time. Even if there is no unexpected technical interaction among them, there is one place where that interaction does happen: the operator working with all of them, and the concept of consistency is not enough to deal with that.

 

WHEN THE WORLD IS FASTER THAN ITS RULES

Anyone in touch with dynamic fields knows this phenomenon: things move faster than the rules intended to control them. Hence, if enforcement capacity is very strong, old rules can stop progress. By the same token, if that capacity is weak, the rules are simply ignored and the world evolves along different paths.

The same fact can be observed in many different fields:

Three months ago, an article appeared titled «POR QUÉ ALBERT EINSTEIN NO PODRÍA SER PROFESOR EN ESPAÑA» (Why Albert Einstein could not be a professor in Spain) and, basically, the reason lay in a bureaucratic model tailored to the «average» teacher. This average teacher, right after graduating, starts a doctorate and enters a career path that will end with retirement from the university. External experience is not required and, very often, is not welcome.

His age, his publications and the length of his doctoral dissertation (17 pages) would have made it impossible for Einstein to teach in Spain. In some environments, the «war for talent» seems to mean fighting talent wherever it can be found.

If we turn to specific, fast-evolving fields, things can be worse:

Cybersecurity is a good example. There is a clear shortage of professionals in the field, and it is worsening. The slowness in approving an official curriculum means that, by the time the curriculum is approved, it is already out of date. Hence, a diploma is not worth much and, instead, certification agencies are taking its place, enforcing up-to-date knowledge both for obtaining and for keeping the certification.

Financial regulators? Companies are faster than regulators, and a single practice can be dressed up as a savings plan, an insurance product or many other options. If we turn to derivatives markets, the speed introduces parameters and practices, such as high-frequency trading, that are hard to follow.

What about cryptocurrencies? They sidestep Government control and, worse still, they can break one of the easiest ways for States to raise funds. Governments would like to break them and, in a few weeks, the EU will have a new rule to «protect privacy» that could affect the blockchain process, key to the security of cryptocurrencies and... to many banking operations.

Aviation? The best-selling airplane in aviation history -the Boeing 737- was designed in 1964 and started flying in 1968. The latest versions of this plane lack features that could be considered basic improvements because the certification process is so long and expensive (and increasingly so) that Boeing prefers to stay attached to features designed more than 50 years ago.

In any of these fields, and many others that could be mentioned, the rules are not meeting their intended function, that is, to preserve functionality and, where required, safety as part of that functionality. Whether the rule can simply be ignored or becomes a heavy load to be dragged through development, it does not work.

We can laugh at the old «Locomotive Act 1865» with delicious rules such as this: «The most draconian restrictions and speed limits were imposed by the 1865 act (the "Red Flag Act"), which required all road locomotives, which included automobiles, to travel at a maximum of 4 mph (6.4 km/h) in the country and 2 mph (3.2 km/h) in the city, as well as requiring a man carrying a red flag to walk in front of road vehicles hauling multiple wagons» (Wikipedia).

However, things evolved far more slowly in 1865 than they do now. Non-functional rules like that could be easily identified and removed before they became a serious problem. That no longer happens. We strive for more efficient organizations and more efficient technology, but the architecture of the rules should be re-engineered too.

Perhaps the next revolution is not technological, even if it is fueled by technology. It could be in the law: the governing rules -not the specific rules but the process to create, modify or cancel rules- should change. Rules valid for a world already gone are as useful as a weather forecast for last week.

Useless diplomas, lost talent, uncontrolled or under-controlled new activities, and product designs where adapting to the rules accounts for a major part of development cost and time all point to a single fact: the rules governing the world are unable to keep pace with the world itself.