
Life after a Black Swan

COVID-19 could be considered, at least partially, a genuine Black Swan. It is true that the first condition for qualifying as a Black Swan is being impossible to foresee and, in this case, we can find a 2015 TED talk in which Bill Gates warned that we were not prepared to manage a pandemic. Five years later, the facts have shown that we were indeed not ready and, for practical purposes, it can be called a Black Swan: a false one if you like, but a Black Swan anyway.


The response of many Governments to the outbreak can be rated anywhere from poor management to openly criminal conduct and, as a consequence, we have entered a new situation whose consequences are hard to foresee.

Unless a vaccine or a treatment is found very soon, one thing is sure: the world will not return to day zero. No Government will have the "Black Swan" excuse before the next outbreak but, beyond that, some changes in everyday behavior point to other changes that could last well beyond the end of the outbreak.

The restrictions on movement will probably bring the real dawn of teleworking, online training and many other related activities that were used as an emergency resource during the outbreak but could now become the standard way of working. Of course, this can change our work habits but, at the same time, it will have a major impact on real estate and travel. It has also shown that the Internet, the basic tool for so many activities, is far from being as strong as claimed: the increased use of digital platforms like Netflix, Amazon and others during quarantine periods challenged the capacity of some of the main nodes.

However, these facts, together with the impact on the world economy, could be read as the trivial lessons of the Black Swan. Other potential effects could go far deeper.

One of them is related to the start of the outbreak. As with Chernobyl, where the Soviet Government denied everything until the contamination crossed its borders, the denial of the Chinese Government, which allowed massive gatherings that helped spread the virus after it knew of its existence, could have internal and external consequences.

Could it lead to major internal turmoil? If so, the consequences are hard to foresee, since China is nowadays the factory of the world. What about the external front? Some Governments are pointing to China as responsible for the outbreak and its consequences because of its behavior during the first 20 critical days. Will these accusations go beyond public statements, or will they remain inconsequential?

There is still a third kind of effect on the future world: many optimists say the world has never been better and, to defend this position, they cite statistics about hunger, health, life expectancy, wars and much else. They argue that the media focus on the negative side because it sells better than the positive one, giving a distorted view of the world. Perhaps, but there is another option:

There are many powder kegs in the world, and we do not know which one will explode, what the consequences of the explosion will be, or whether it could set off new explosions. We should remember that, when WWI started, few people were aware of it, given the unlikely chain of events that followed an assassination in Sarajevo. Actually, the name WWI came only after the war itself.

To use the same term, we are running a farm of Black Swans, and we do not know what could come out of it. So, without denying that the material side of life could be better than in any other human age, the stability of that situation is very questionable.

Peter Turchin warned about this instability and why we should expect a major change in the world as we know it. Turchin is fond of mathematical models and has defined elegant algorithms and graphics. However, for those who think that numbers are a language and, as such, can mislead people under the flag of "objectivity", the real value of Turchin's model is not in the algorithms; it is in the dynamics. He was highly original, observing variables that went unnoticed by many other researchers and explaining why those variables could serve as indicators of a major trend.

Turchin expected a major change in the 2020s as a consequence of our being in the final stage of decomposition. Aware that I am simplifying his model perhaps beyond any legitimate limit: a major event, like a war, acts as a kind of vaccination that can last for two generations, leading to a period of prosperity. After that, decomposition starts, driven by people who did not live through that major event, leading to new major events and repeating the cycle.
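
Since the point is in the dynamics rather than the algorithms, a toy simulation may make the cycle easier to see. The sketch below illustrates only the simplified description above, not Turchin's actual model, and every constant in it is invented:

```python
# Toy sketch of the simplified cycle described above, NOT Turchin's
# actual model: a major event resets collective memory, memory fades
# over roughly two generations, and low memory lets instability build
# until a new major event resets the cycle. All constants are invented.

GENERATION_YEARS = 25
DECAY_PER_YEAR = 1.0 / (2 * GENERATION_YEARS)  # memory fades in ~2 generations
EVENT_THRESHOLD = 0.6  # instability level that triggers a new major event

def simulate(years: int = 200) -> list[int]:
    memory, instability = 1.0, 0.0   # state right after a major event
    events = []
    for year in range(1, years + 1):
        memory = max(0.0, memory - DECAY_PER_YEAR)
        instability += 0.02 * (1.0 - memory)  # grows as memory fades
        if instability >= EVENT_THRESHOLD:
            events.append(year)
            memory, instability = 1.0, 0.0    # the "vaccination" resets
    return events

print(simulate())  # major events spaced a bit over two generations apart
```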

COVID-19 and its consequences will be, for many people, a major event in their lives. The question, hard to answer, is whether it will accelerate the decomposition process or instead reset the model, starting a new positive cycle driven by people who suddenly discovered that there are real problems far more important than the ones invented by some political leaders.

Some changes during the outbreak, even in the language of politicians, and the way people rebuke those who try to draw attention back to the issues on their political agendas, are quite revealing. Something is already changing, and it is not the virus. It is the meaning of the change in living conditions for many of us, and the way this change has made many people build a new hierarchy of values.

Nietzsche said that what does not kill us makes us stronger. At this moment, we do not know whether we will die as a society because of this crisis or become stronger, at least for a time. One thing is sure: it will not be inconsequential.

Artificial Intelligence, Privacy and Google: An explosive mix

A few days ago, I learned about a big company using AI in its recruiting processes. Not a big deal, since it has become common practice; if anything can be criticized, it is the fact that AI commits the same error as human recruiters: the false-negative danger, a danger that goes far beyond recruiting.

The system can add more and more filters and learn from them, but what about the candidates who were eliminated despite the fact that they could have been successful? Usually, we will never know about them and, hence, we cannot learn from the error of rejecting good candidates. We know of cases like Clark Gable or Harrison Ford, who were rejected by Hollywood, but following rejected candidates is usually not feasible; the system, like the human recruiter, learns to improve its filters but cannot learn from wrong rejections.
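
A minimal sketch may make the asymmetry visible. The data and the score threshold below are invented; the point is only that the quality of rejected candidates never enters any feedback loop:

```python
# Minimal sketch of the "invisible false negative" problem in screening.
# Hypothetical data: each candidate has a true quality we never observe
# directly; the screen lets some through, and outcomes are only ever
# seen for those who were hired.

import random

random.seed(0)

candidates = [{"score": random.gauss(0, 1), "true_quality": random.gauss(0, 1)}
              for _ in range(1000)]
# Make the screening score weakly informative of true quality
for c in candidates:
    c["score"] += 0.5 * c["true_quality"]

hired = [c for c in candidates if c["score"] > 1.0]
rejected = [c for c in candidates if c["score"] <= 1.0]

good_rejected = sum(c["true_quality"] > 1.0 for c in rejected)
print(f"hired: {len(hired)}")
print(f"good candidates silently lost: {good_rejected}")
# The filter can be tuned on the hired group's outcomes, but the
# 'good_rejected' number never enters any feedback loop.
```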

That is a luxury companies can afford as long as it affects jobs where the supply of candidates clearly exceeds the demand. However, the error is much more serious if the same practice is applied to highly demanded profiles. Eliminating a good candidate is, in that case, much more expensive, but neither AI systems nor human recruiters get the opportunity to learn from this type of error.

Only companies like Google, Amazon or Facebook, with an impressive amount of information about many of us, could afford to learn from the failures of the system. It is true that many recruiters routinely "google" candidates' names to get more information, but these companies hold far more information about us than what appears when someone searches for us on Google.

So, since these companies have an asset that other companies, including big corporations, cannot match, we can expect them to be hired in the near future to perform recruiting, evaluate insurance or credit applications, assess potential business partners and much more.

These companies can share Artificial Intelligence practices with their potential clients but, at the same time, they keep a treasure of information that those clients cannot have.

Of course, they have this information because we voluntarily hand it over in exchange for some conveniences in our daily lives. At the end of the day, if we are not involved in illegitimate or dangerous activities and have no intention of being so, does it really matter that Google, Amazon, Facebook or whoever knows what we do and even listens to us through their "personal assistants"? Perhaps it does:

The list of companies sharing this feature, almost unlimited access to personal information, is very short. Hence, any output from them could affect us in many activities, since any company trying to gain an advantage from these data will not find many potential suppliers.

Now, suppose an opaque algorithm, whose workings nobody knows exactly, decides that you are not a good option for a job, insurance, credit or whatever. Since the list of information suppliers is very short, you will be rejected again and again, even though nobody will be able to give you a clear explanation of the reasons. The algorithm could have picked an irrelevant behavior that happens to be highly correlated with a potential problem and, hence, you would be rejected.
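
A small illustration of that failure mode, with every number and feature name invented:

```python
# Sketch of how an opaque filter can latch onto an irrelevant but
# correlated signal. All names and rates here are hypothetical.

import random

random.seed(1)

def person():
    risky = random.random() < 0.2
    # Irrelevant habit that happens to correlate with risk in the data
    late_night_browsing = random.random() < (0.7 if risky else 0.3)
    return {"risky": risky, "late_night_browsing": late_night_browsing}

population = [person() for _ in range(10000)]

# A naive "algorithm" that rejects anyone showing the correlated habit
rejected = [p for p in population if p["late_night_browsing"]]
wrongly_rejected = sum(not p["risky"] for p in rejected)
print(f"rejected: {len(rejected)}, of which harmless: {wrongly_rejected}")
# Most rejections hit harmless people, and none of them gets an
# explanation beyond "the model said no".
```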

Should this danger be disregarded? Companies like Amazon have had problems with their own recruiting systems after unintentionally introducing racial and gender biases. There is no valid reason to suppose that this cannot happen again, with much more data and affecting many more activities.

Let me share a recent personal experience related to this type of error: I was blacklisted by an information-security supplier. The apparent reason was a page that opened automatically every time I launched Chrome, with no further action on my part. That generated repeated accesses, which the website read as an attempt to spy, and they blacklisted me. The problem was that this security supplier had many clients: banks, transportation companies and even my own ISP. All of them denied me access to their websites based on the wrong information.
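
The logic that bit me can be sketched in a few lines; the threshold and the address below are, of course, made up:

```python
# Toy version of the blacklisting logic described above: a purely
# rate-based rule cannot tell a scraper from a browser start page
# that auto-loads the same URL on every launch. Thresholds invented.

from collections import Counter

REQUESTS_PER_DAY_LIMIT = 50

def update_blacklist(access_log, blacklist):
    """access_log: list of (ip, url) tuples for one day."""
    hits = Counter(ip for ip, _ in access_log)
    for ip, count in hits.items():
        if count > REQUESTS_PER_DAY_LIMIT:
            blacklist.add(ip)   # no context, no appeal path
    return blacklist

# A home IP whose browser reopens the same page at every start
log = [("203.0.113.7", "/home")] * 80
print(update_blacklist(log, set()))  # {'203.0.113.7'}
```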

This is, of course, a minor sample of what can happen once Artificial Intelligence is applied to vast amounts of data by the few companies that have access to them: for reasons nobody can explain, we may face rejection in many activities, a rejection that would be at once universal and unexplained.

Perhaps the future was defined not in Orwell's "1984" but in Kafka's "The Trial".

Against the current: Freedom of information or freedom to defame?

I believe that at certain moments it is necessary to make one's personal position clear to avoid confusion, so here it goes:

I do not like the way the PSOE leader, Pedro Sánchez, came to power in Spain; I do not like his handling of Spain's main issues; I do not like his partners; I do not like his foot-dragging over calling elections; and the concealment of his doctoral thesis strikes me as a suspicious affair about which time will soon tell whether there is something real or just a storm in a teacup. Clear enough, right?

However, I find it outrageous that the big issue of the day is none of that, but rather that he has dared to demand a retraction from some media outlets, and that this demand supposedly threatens freedom of information. Journalists and politicians have fallen, self-interestedly, into this trap and are trying to drag the rest of us in too. Well, no:

Freedom of information is not a journalist's right to say whatever they please; it is a citizen's right to receive truthful information, from whatever perspective that citizen chooses.

If a journalist defames, they are as subject to the law as anyone else and cannot invoke freedom of information as if it were a letter of marque.

If someone, in this case a head of Government, and regardless of one's opinion of him, believes he has been defamed by a media outlet, he is perfectly within his rights to demand a retraction.

Does he want to take the matter to court? Go ahead; it is his right, even if it is a risky move because, if he loses, there will be no earthly way for him to cling to a seat he reached in such an irregular fashion.

Certainly, the ones who cannot and must not try to stop him are the media, invoking a freedom of information that they apparently consider synonymous with the freedom to say whatever they like.

I insist: I am not defending the man but his full right to go to court if he believes he has been defamed. Whether he has been or not, we shall see, but there is no reason to cry foul over an attack on freedom of information when that right is exercised.

THE HARD LIFE OF AVIATION REGULATORS (especially regarding Human Factors)

There is a widespread mistake among Aviation professionals: the idea that regulations set a minimum level and, hence, that accepting a plane, a procedure or an organization means it barely reaches the minimum acceptable level.

The facts are very different: a plane able to pass all the requirements beyond any reasonable doubt and without further questions would not be merely an "acceptable" plane. It would be an almost perfect product.

So where is the trick, and why are things not as easy as they seem?

Basically, because rules are not as clear as claimed, leaving in some cases wide room for interpretation, and because, in their crafting, they mirror the systems they describe; hence, integration is lost along the way.

The acceptance of a new plane is such a long and complex process that some manufacturers give up. For instance, the Chinese COMAC built a first plane certified by the Chinese authorities to fly only in China, and the French Dassault decided to stop a program and jump directly to the second generation of a plane. That can be judged a failure, but it is always better than dragging design problems along until they prove they should have been dealt with. We cannot help recalling cases like the cargo door of the DC-10 and the consequences of a problem already known during the design phase.

The process is so long and expensive that some manufacturers stay attached to very old models with incremental improvements. The Boeing 737, which started flying in 1968, is a good example. Its brother, the B777, flying since 1995, still carries the old Intel 80486 processor inside; replacing it would be a major change, even though Intel stopped producing it in 1997.

The process is not linear, and many different tests and negotiations are required along the way. Statements of similarity with other planes are frequent, and the use of standards from the engineering or military fields is commonplace when something is not fully clear in the main regulation.

Of course, some guidelines can contradict others, since they address different uses. For instance, a good military standard widely used in Human Factors (MIL-STD-1472) includes a statement about required training, indicating that it should be as short as possible while maintaining full operating capability. That can be justified in environments where lack of resources, including knowledge, or even physical destruction can happen. It is harder to justify as a rule in passenger transportation.

Another standard may include a statement about the worst possible scenario for a specific parameter, but the parameter can be more elusive than that. The very idea of a worst possible scenario can be nonsense and, if the manufacturer asserts one and the regulator buys it, a plane could be flying legally but with serious design flaws.

Regulations about Human Factors were simply absent a few years ago; HF mentions were merely added to the technical blocks. That partially changed when a new rule for plane design appeared, addressing Human Factors as a block of its own. However, the first attempts went little further than collecting all the scattered HF mentions in a single place.

This has since been partially corrected in the Acceptable Means of Compliance, but the technical approach still prevails. Very often, manufacturers assemble HF teams of technical specialists in specific systems instead of attempting a global, transversal approach.

The regulators take their own precautions and repeat mantras such as avoiding fatigue levels beyond acceptability, or requiring that a plane not demand special alertness or special skill levels to manage a situation.

These conditions are, of course, good, but they are not enough. Compliance with a general condition like this one from EASA CS-25, "Each pilot compartment and its equipment must allow the minimum flight crew (established under CS 25.1523) to perform their duties without unreasonable concentration or fatigue", is quite difficult to demonstrate. If there is no visible mistake in the design, trying to meet this condition is more a matter of imagining potential situations than of analysis and, as might be expected, the whole process is driven by analysis, not by imagination.

Very often, designs and the rules governing them try to prevent yesterday's accident, but a strictly analytic approach makes it hard to anticipate the next one. Who could anticipate the importance of control feedback (present in every single old plane) until a related accident happened? Who could anticipate, before AA191, that removing the mechanical lock on the flaps/slats might not be such a sound idea? Who could imagine that different presentations of the artificial horizon could lead to an accident? What about different automation policies, and pilots disregarding a fact that could safely be disregarded in other planes but not in that one?

It is still in the news that a Boeing factory was attacked by the WannaCry virus, and the big question was whether it had affected the systems of the B777s manufactured there. The B787 is said to have 6.5 million lines of code. Even though the B777 is far below that number, checking it cannot be easy, and it is harder still if the computers calculating manufacturing parameters must also be checked.

That complexity in the product leads not only to invisible faults but to unexpected interactions between theoretically independent events. In some cases, the dependence is clear. Everyone knows that a stopped engine can mean hydraulic, electric, pressure and oxygen problems, and manufacturers try to design systems that point to the root problem instead of flagging every single failure. That is fine, but what if the interaction is unexpected? What if a secondary problem, oxygen scarcity for instance, is more important than the root problem that caused it? How are we going to define the right training level for operators when not a single person understands the full design?

In the technical parts, complexity is already a problem. When we add the human element, its features and what is demanded of operators, the answer is anything but easy. Claiming "lack of training" every time something serious happens and adding a patch to the current training is not enough.

A fuller approach, more integrated and less shy about using imagination throughout the process, has long been advisable; now it is a must. Operators do not manage systems. They manage situations and, in doing so, they may use several systems at the same time. Even if there is no unexpected technical interaction among those systems, there is one place where the interaction always happens: the operator working with all of them, and the concept of consistency is not enough to deal with that.


WHEN THE WORLD IS FASTER THAN ITS RULES

Anyone in touch with dynamic fields knows this phenomenon: things move faster than the rules intended to control them. Hence, if enforcement capacity is very strong, old rules can stop progress. By the same token, if that capacity is weak, rules are simply ignored, and the world evolves along different paths.

The same fact can be observed in many different fields:

Three months ago, an article appeared titled "POR QUÉ ALBERT EINSTEIN NO PODRÍA SER PROFESOR EN ESPAÑA" (Why Albert Einstein could not be a professor in Spain); basically, the reason lay in a bureaucratic model tailored to the "average" teacher. This average teacher, right after graduating, starts a doctorate and enters a career path that will end with retirement from the university. External experience is not required and, very often, is not welcome.

His age, his publications and the length of his doctoral dissertation (17 pages) would have made it impossible for Einstein to teach in Spain. In some environments, the "war for talent" means fighting talent wherever it can be found.

If we turn to specific, fast-evolving fields, things can be worse:

Cybersecurity is a good example. There is a clear shortage of professionals in the field, and it is worsening. Official curricula are approved so slowly that, by the time one is accepted, it is already out of date. A diploma is therefore not worth much; certification agencies are taking its place, enforcing up-to-date knowledge both for getting and for keeping the certification.

Financial regulators? Companies are faster than regulators, and a single practice can be packaged as a savings plan, an insurance product or many other options. In derivative markets, speed introduces parameters and practices, like high-frequency trading, that are hard to follow.

What about cryptocurrencies? They sidestep Government control and, worse still, they can break one of the easiest ways for States to raise funds. Governments would like to break them and, in a few weeks, the EU will have a new rule to "protect privacy" that could affect the blockchain process, which is key to the security of cryptocurrencies and of many banking operations.

Aviation? The best-selling airplane in aviation history, the Boeing 737, was designed in 1964 and started flying in 1968. The latest versions of this plane lack features that could be judged basic modifications, because the process is so long and expensive (and getting ever longer and more expensive) that Boeing prefers to stay attached to features designed more than 50 years ago.

In all of these fields, and many others that could be mentioned, the rules are not fulfilling their intended function, which is to preserve functionality and, where required, safety as part of that functionality. Whether the rule is ignored or becomes a dead weight dragged along through development, it does not work.

We can laugh at the old Locomotive Act of 1865, with delicious rules such as this: "The most draconian restrictions and speed limits were imposed by the 1865 act (the 'Red Flag Act'), which required all road locomotives, which included automobiles, to travel at a maximum of 4 mph (6.4 km/h) in the country and 2 mph (3.2 km/h) in the city, as well as requiring a man carrying a red flag to walk in front of road vehicles hauling multiple wagons" (Wikipedia).

However, in 1865 things evolved far more slowly than now. Non-functional rules like that could easily be identified and removed before they became a serious problem. That is no longer the case. We strive for more efficient organizations and more efficient technology, but the architecture of the rules needs to be re-engineered too.

Perhaps the next revolution is not technological, even if it is fueled by technology. It could be in the law: the governing rules, not the specific rules but the process for creating, modifying and canceling rules, should change. Rules valid for a world already gone are as useful as a weather forecast for the past week.

Useless diplomas, lost talent, uncontrolled or under-controlled new activities, product designs where adaptation to the rules is a major part of the development cost and time: all point to a single fact. The rules governing the world are unable to keep pace with the world itself.

The Twitter effect

Many people consider Twitter a frivolous place and therefore decide not to have a Twitter account. Big mistake:

Many well-known individuals and publications have accounts and regularly publish content. It is true that 140 characters do not allow for much, but things get more interesting when you consider that, within those 140 characters, there can be links to articles they have just published.

Following many people is maddening because, unless you live with your nose glued to the screen, you will miss information. But again there is a solution: pick the most interesting topics and prepare specialized lists on them. A daily or weekly review, depending on the level of activity, will be enough and, if the list members are chosen with care, you can stay up to date on any imaginable topic. Needless to say, members can be added to or removed from the lists.

In short, there are good reasons to recommend having a Twitter account: it is a valuable resource for staying informed on almost any subject. Now comes the harder part: what should interaction on Twitter look like?

Many people simply stay silent. They follow the sources they find interesting and that is all. It is a good option if there is no intention of sharing one's own content. Some people use their real names while others prefer to remain unidentified, especially if they intend to participate actively in discussions on controversial topics. And precisely there appears the dark side of Twitter, a dark side very hard to separate from the positive part.

Twitter is very fast. That is why traditional media such as radio and television use it to stay in touch with their followers, and it is common to see a ticker on television with a stream of Twitter messages. This gives programs a feeling of immediacy and, at the same time, gives Twitter relevance, in its positive aspects as well as its negative ones.

Once Twitter appears relevant, many people start using the network for their own ends. For instance, fake accounts run by bots are designed to turn any topic of their choice into a trending topic in a matter of minutes. When this happens, of course, the information about the real relevance of a topic is falsified, because there are people actively devoted to that falsification; moreover, no great social media expertise is needed for it.

That is one negative side, but there is something even worse: interaction between Twitter members is very lively. It is easy to identify groups, there are even applications that do it automatically, and there is strong pressure toward conformity within those groups. Their members, seeking the applause of their group mates, present ever more extreme views on any controversial topic, and the resulting discussions appear in the traditional media as trends, mistaking the Twitter caricature for the real picture of a society, a picture that is in turn affected by the spread of the caricature as reality.

There are many current examples, but the Spanish case and its political situation is paradigmatic. We have everything: bots turning anything into a trending topic, and people drifting toward ever more extreme political positions, especially in unidentified accounts or among opinion leaders who do not want to disappoint their audience. Moreover, this is not a specifically Spanish effect: if you follow the American campaign, you find exactly the same phenomena. The speed of interaction and the brevity of the messages, with little room for nuance, may be the determining factors of this behavior.

In short, Twitter is a valuable tool for staying up to date on any subject but, at the same time, it has very negative facets whose influence goes beyond Twitter. Being in is positive, but staying active is something to think twice about. Accepting the trends Twitter sets as real is something to avoid, not only because they are probably false but because, by legitimizing them, we can help them become real even if they originally were not. Perhaps we all share the task of preventing that because, given the pressure toward conformity, the winning position is usually claimed by precisely the most disgraceful trends and comments, regardless of ideological or any other affiliation.

Big Aviation is still a game of two players

And one of them, Airbus, is celebrating its birthday.

Years ago, three major players shared the market but, once McDonnell Douglas disappeared, big planes were made by one of the remaining two. Of course, we should not forget Antonov, whose An-225 is still the biggest plane in the world, some huge Tupolevs, or the Lockheed TriStar, but the first two never left their home markets, while the Lockheed TriStar can be seen as a failed experiment by its manufacturer.

Airbus emphasizes the milestones in its timeline but, behind them, there is a steady thread: efficiency through the use of I.T.

Airbus was the first civilian manufacturer to offer a big plane with a two-person cockpit (the A310), and the first to introduce fly-by-wire technology widely (the only previous exception being Concorde). Finally, Airbus introduced the commonality concept, allowing pilots to switch very quickly from one model to another while keeping the rating for both.

Boeing took a more conservative position: the B757 and B767 appeared with two-person cockpits only after being redesigned to compete with the A310. Despite Boeing's greater experience in military aviation and, hence, in fly-by-wire technology, it long deferred the decision to include fly-by-wire in civilian planes. Where Boeing finally lost the efficiency battle was in showing up with a portfolio of mainly unrelated products while Airbus was immersed in its commonality model.

The only point where Boeing arrived first was the use of twin-engine planes for transoceanic flights under the ETOPS policy. Paradoxically, those left in the worst position were not Airbus but the two American companies manufacturing three-engine planes, McDonnell Douglas and Lockheed. That was the exception; usually, Boeing was behind in the efficiency field.

Probably, and this is my personal bet, Boeing will try to build a family starting with the B787. This plane would be for Boeing the equivalent of the A320, that is, the starter of a new generation sharing many features.

As proof of that more conservative position, Boeing kept some feedback channels that Airbus simply removed, for instance, the feel of the flight controls or the feedback from the autopilot to the throttle levers. Nobody questioned whether this was the right thing to do, and it was offered as a commercial advantage rather than a safety feature, since it was not compulsory. Actually, the differences between the two manufacturers, accepted by the regulators as features independent of safety, have been at the root of some events.

Small-size aviation is much more crowded and, right now, we have two newcomers from Russia and China (Sukhoi and COMAC), including the possibility of an agreement between them to fight for the big-plane market.

Anyway, that is still in the future. Big Aviation remains a game of two contenders, and every single step in that game has been driven by efficiency. Some of us would like understandability, in normal and abnormal conditions, to be among the priorities in future designs, whether they come from the present contenders or from any newcomer.

Published in my Linkedin profile

Air Safety and Hacker Frame of Mind

If we ask anyone what a hacker is, we get answers ranging from cyberpiracy and cybercrime to cybersecurity and any other cyber-thing. However, it is much more than that.

Hackers are classified by the "color of their hats": a white-hat hacker is devoted to security, a black-hat hacker is a cybercriminal, and a grey-hat hacker is something in between. That is interesting as a curiosity, but what do they all have in common? Furthermore, what do they have in common that is relevant to Air Safety?

Simonyi, the creator of WYSIWYG, warned long ago about an abstraction ladder that keeps adding steps. In Information Technology, that means programmers do not program a machine; they instruct a program to make a program to be run by a machine. Higher programming levels mean a longer distance from the real thing and more steps between the human action and the machine action.
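
Python's standard dis module gives a small but concrete picture of that distance: one line of source is already a symbolic layer over bytecode, which is itself a layer over the interpreter and the machine below it. The function here is, of course, just an invented example:

```python
# A concrete illustration of Simonyi's point: even one line of Python
# is already several symbolic layers away from the machine.
import dis

def fuel_remaining(total, burned):
    return total - burned

dis.dis(fuel_remaining)   # the layer below the source: CPython bytecode
# Below the bytecode sit the interpreter loop, the OS, microcode...
# each layer assumed to represent the one beneath it faithfully.
```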

Of course, Simonyi flagged this as a potential problem within Information Technology, but Information Technology is now ubiquitous, and the problem can be found anywhere, including, of course, Aviation.

We could say that any IT-intensive system has different layers, and the number of layers defines how advanced the system is. So far so good, as long as we assume a perfect correspondence between layers, that is, that every layer is a faithful symbolic representation of the one below. That should be all, but it isn't.

Every information layer we put over the real thing is not a perfect copy (that would be pointless); instead, it tries to improve something in safety or efficiency or, very often, claims to improve both. However, avoiding flaws in that process is almost impossible. That is where problems start, and where hacker-type knowledge and frame of mind become highly desirable for a pilot.

The symbolic nature of IT-based systems makes their flaws hard to diagnose, since their behavior can be very different from that of mechanical or electrical systems. Hackers, good or bad, try to identify these flaws; that is, they are acutely aware of the symbolic-layer structure instead of assuming an enhanced but perfect representation of the reality below.

What does a hacker frame of mind mean as a way to improve safety? Let me show two examples:

  • From cinema: the movie "A Beautiful Mind", about John Nash and his mental health problems, shows at one point how and why he was able to control those problems: he confused reality and fiction until the moment he found something that did not fit. It happened to be a little girl who, after many years, was still a little girl instead of an adult woman. That gave him the clue to which parts of his life were created by his own brain.
  • From Air Safety: a reflection taken from the book "QF32" by Richard de Crespigny: "Engine 4 was mounted to our extreme right. The fuselage separated Engine 4 from Engines 1 and 2. So how could shrapnel pass over or under the fuselage, then travel all that way and damage Engine 4?" The answer is clear: it can't. And once that point is reached, a finding appears crystal-clear: information coming from the plane cannot be trusted, because somewhere in the IT layers the correspondence between reality and representation has been lost.

Detecting these problems is not easy. It requires much more than operating knowledge and, at the same time, we know that nobody has full knowledge of the whole system, only partial knowledge. That partial knowledge should be enough to define key indicators, as in the examples above, that tell us when we are working with information that should not be trusted.
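
As a hedged sketch of what such an indicator could look like in code, consider cross-checking reported damage against a physical fact known independently of any display; the engine layout and the rule below are simplified for illustration:

```python
# Sketch of a "hacker-style" cross-check: test reported data against a
# physical fact the pilot knows independently of the displays. The
# layout and the check itself are simplified for illustration.

ENGINE_SIDE = {1: "left", 2: "left", 3: "right", 4: "right"}

def report_is_plausible(failed_engine: int, reported_damaged: set) -> bool:
    """Shrapnel from an engine on one wing cannot cross the fuselage
    and reach the outboard engine on the opposite wing; a report that
    says otherwise discredits the data source, not the engine."""
    opposite_outboard = 4 if ENGINE_SIDE[failed_engine] == "left" else 1
    return opposite_outboard not in reported_damaged

# QF32-like situation: engine 2 bursts, displays also flag engine 4
print(report_is_plausible(2, {2, 4}))  # False -> distrust the IT layers
```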

The hard part: the indicators cannot be permanent; they must be adapted to each situation, that is, the pilot must decide which indicator to use in situations not covered by procedures. That brings us to another issue: if a hacker frame of mind is positive for Air Safety, how do we create, nurture and train it? Let's look again at the process a hacker follows to become a hacker:

First, hackers actively look for information. They do not attend formal courses expecting the information to be handed to them; instead, they seek out resources that let them raise their knowledge level. Applying this model to Aviation would mean wide access to information sources beyond what formal courses provide.

Second, hacker training is closer to military training than to academic training: they fight to break into or defend a system, and they prove their skills against an active adversary. To replicate such a model, simulators should include situations that trainers can imagine. The design should be much more flexible: instead of simulators behaving exactly as a plane is supposed to behave, they should leave room for potential situations arising from information misrepresentation or from automatic responses to defective sensors.

Asking for full knowledge of all the information layers and their potential pitfalls is utopian, since nobody has that kind of knowledge, designers and engineers included. Everybody's knowledge is partial. So how can we do our best with this partial knowledge? By looking for a different frame of mind in the people involved, mainly pilots, and providing the information and training resources that allow that frame of mind to be created and developed. That could mean a fully new training model.

Published originally in my Linkedin profile

Sterile discussions about competencies, Emotional Intelligence and others…

When the "Emotional Intelligence" fashion arrived with Daniel Goleman, I was among the discordant voices claiming that the concept and, especially, the use made of it, were nonsense. Nobody can seriously deny that personal traits are key to success or failure. If we want to call them Emotional Intelligence, fine. It is a marketing-born name, not very precise, but we can accept it.

However, losing focus is not acceptable, and some people lose it with statements like "80% of success is due to Emotional Intelligence", well above the percentage attributed to "classic" intelligence. We also lose focus with statements comparing competencies to academic degrees and the role of each in professional success. These problems should be analyzed in a different, simpler way: it is a matter of sequence, not of percentage.

An easy example: what matters more for a surgeon's success, the academic degree or the skills shown inside the OR? Of course, this is a tricky question, and the trick is highly visible. To enter the OR armed with a scalpel, the surgeon needs academic recognition and/or a specific license. Hence, the second filter, skills, is applied to those who already passed the first one, academic recognition, and we cannot compare skills and academic recognition in percentage terms.

Of course, this is an extreme case, but we can apply it to the concepts around which some sterile discussions revolve. Someone can perform well thanks to Emotional Intelligence, but entry to the field is granted by intelligence in its most common meaning. Could we say that, once past an IQ threshold, we are better off improving our interaction skills than, if it were possible, gaining 10 more IQ points? Possibly, but things do not work that way: we define the access level through a threshold value and performance through other criteria, always comparing people who share something, namely that they are all above the threshold. Then how can anyone say "Emotional Intelligence is at the root of 80% of success"? It is false as stated, but we can make it true by adding "if the comparison is made among people whose IQ is at least medium-high". The problem is that, with this addition, it is no longer false; it is merely trivial.
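
The statistical side of this argument is known as range restriction, and a short simulation shows it. All numbers below are invented; the point is that filtering on IQ, not any irrelevance of IQ, is what makes IQ look unimportant among those already admitted:

```python
# Sketch of the sequence-vs-percentage point via range restriction:
# once everyone has passed an IQ filter, IQ explains little of the
# remaining variance in success, even though it was decisive for access.

import random

random.seed(2)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

people = [(random.gauss(100, 15), random.gauss(0, 1)) for _ in range(20000)]
# success depends on both IQ and "emotional intelligence"
data = [(iq, ei, 0.6 * (iq - 100) / 15 + 0.4 * ei) for iq, ei in people]

whole = pearson([d[0] for d in data], [d[2] for d in data])
selected = [d for d in data if d[0] > 115]           # the access filter
within = pearson([d[0] for d in selected], [d[2] for d in selected])
print(f"IQ-success correlation, everyone: {whole:.2f}")
print(f"IQ-success correlation, above the filter: {within:.2f}")
# The second number is much lower: the filter, not irrelevance of IQ,
# is what makes EI look dominant among those already in.
```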

We cannot compare the relative importance of two factors when one refers to job access and the other to performance once in the job. It is like comparing bacon with speed, but using percentages to appear more "scientific".

Flight-Deck Automation: Something is wrong

Something is wrong with automation. If we can find diagnoses performed more than 20 years ago whose conclusions are still current, something is wrong.

Some examples:

Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, published by Rasmussen in 1986; Safeware, written by Leveson in 1995; Normal Accidents, by Perrow in 1999; The Human Interface, by Raskin in 2000; and many others.

None of these resources is new, but all of them remain relevant to anyone interested in what is happening NOW. Perhaps there is a problem in the basics that has still not been properly addressed.

Certainly, once a decision is made, going back is extremely expensive, and manufacturers will try to defend their solutions. An example I have used more than once: modern planes carry processors so old that the maker no longer produces them. Since the lifetime of a plane is longer than the lifetime of some key parts, operators have to stockpile those parts, because they cannot simply order more.

The obvious solution would be renewal, but that would be so expensive that they prefer brand-new planes with old-fashioned parts in order to avoid new certification processes. Nothing to object to in this practice. It is only a sample of a more general one: staying attached to a design and defending it against any doubt about its adequacy, even a reasonable doubt.

However, this rationale applies to products already on the market. What about new ones? Why do the same problems appear again and again instead of finally being solved?

Perhaps a Human Factors approach could help identify the root problem and fix it. Let's talk about Psychology:

The first psychologist to win a Nobel Prize was Daniel Kahneman, one of the founders of Behavioral Economics, who showed how we use heuristics that usually work but can mislead us in some situations. To show this, he and many followers designed clever experiments making clear that we all share some "software bugs" that can drive us to mistakes. In other words, heuristics are understood here as a quick-and-dirty approach, valid in many situations but useless, if not harmful, in others.

Many engineers and designers are willing to buy this approach and, accordingly, their products are designed to enforce a formal, rational model.

The most qualified opposition to this model comes from Gigerenzer. He explains that heuristics are not a quick-and-dirty approach but the only possible one under constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others are set in strange situations and designed to mislead the subject.

An example used by Kahneman and Tversky is this one:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

 Which is more probable?

  • Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The experiment illustrates the conjunction fallacy, that is, how many people choose the second alternative even though the first one is not only broader but actually includes the second.
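
The rule the experiment rests on fits in one line of probability:

$$P(\text{bank teller} \wedge \text{feminist}) \le P(\text{bank teller})$$

since every feminist bank teller is, necessarily, a bank teller.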

Gigerenzer's analysis is different: suppose all the information about Linda were the first sentence, "Linda is 31 years old". Or suppose no information were given at all and the questions were simply asked; we could expect the conjunction fallacy not to appear. It appears because the experimenter provides information and, since the subject is given information, the subject assumes it is RELEVANT; otherwise, why feed the subject this information?

In real life, relevance is a cue. If someone tells us something, we understand that it has meaning and that the information is not there to deceive us. That is why Gigerenzer criticizes the Behavioral Economics approach, an approach many designers share.

For Gigerenzer, we usually judge how good a model is by comparing it with an ideal, rational model; if, instead, we judge which model is best by looking at results, we may find surprises. That is what he did in Simple Heuristics That Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the "bad" model got better results than the sophisticated one.
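
One of those fast-and-frugal heuristics, take-the-best, is simple enough to sketch in a few lines. The cues below are invented; the original studies used tasks like judging which of two German cities is larger:

```python
# Minimal sketch of "take-the-best", one of the fast-and-frugal
# heuristics studied in Simple Heuristics That Make Us Smart: compare
# two options one cue at a time, in order of cue validity, and stop at
# the first cue that discriminates. Cues and their order are made up.

# Cues ordered from most to least valid for "which city is larger?"
CUES = ["has_intl_airport", "is_capital", "has_university"]

def take_the_best(city_a: dict, city_b: dict) -> str:
    for cue in CUES:
        if city_a[cue] != city_b[cue]:
            return city_a["name"] if city_a[cue] else city_b["name"]
    return "no decision"   # guess or fall back to another strategy

a = {"name": "A", "has_intl_airport": True,  "is_capital": False, "has_university": True}
b = {"name": "B", "has_intl_airport": False, "is_capital": True,  "has_university": True}
print(take_the_best(a, b))   # 'A': the first discriminating cue decides
```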

Let's go back to automation design. Perhaps we are asking the wrong questions from the start. Instead of "What information would you like to have?", which gets a letter to Santa Claus as an answer, we should ask: what are the cues you use to know that this specific event is happening?

The FAA, in its 1996 study, complained that major failures such as an engine stop can be masked by a flood of warnings about different systems failing, making it hard to discern that all of them stem from a common root, the stopped engine. What if we asked: "Tell me one fact (exceptionally, I would allow two) that would tell you, clearly and fast, that one of the engines has stopped"?
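
A hedged sketch of the display logic that question points to: collapse the warnings that are mere consequences into their root instead of listing them all at the same level. The dependency map and warning names below are invented:

```python
# Sketch of the masking problem the FAA study describes, and one way a
# display could collapse it: map derived warnings back to the single
# upstream failure instead of listing them all. Dependencies invented.

# Hypothetical dependency map: what a stopped engine takes down with it
CAUSED_BY_ENGINE_STOP = {"GEN 2 OFF", "HYD 2 LOW PRESS", "BLEED 2 FAULT"}

def summarize(warnings: set) -> list:
    if "ENG 2 FAIL" in warnings:
        secondary = warnings & CAUSED_BY_ENGINE_STOP
        rest = warnings - secondary - {"ENG 2 FAIL"}
        return [f"ENG 2 FAIL (+{len(secondary)} consequences)"] + sorted(rest)
    return sorted(warnings)

raw = {"ENG 2 FAIL", "GEN 2 OFF", "HYD 2 LOW PRESS", "BLEED 2 FAULT"}
print(summarize(raw))   # one root line instead of four competing alarms
```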

We have a nice example in the QF32 case. The pilots started to distrust the system when they received information that was clearly false. It was a single fact, but enough to trigger distrust. What if, instead of jumping to the conclusion from a single fact, they had been "rational" and tried to assign probabilities to different scenarios? The plane would probably not have had enough fuel to allow that approach.

Rasmussen suggested one approach, a good one, in which the operator should be able to run cognitively the program the system is executing. The approach is good, but something is still missing: how long would it take the operator to replicate the functional model of the system?

In real-life situations, especially when dealing with uncertainty rather than calculated risk, people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI-092 case. The pilots used an indicator to determine which engine had the problem; unfortunately, they came from an earlier generation of the B737 and did not know that the one they were flying bled air from both engines instead of only one. The cue they used, which pointed to the wrong engine, would have been correct in the older plane.

Knowing the cues pilots use, planes could be designed with a human-centered approach instead of creating an environment that does not fit the ways people perform real tasks in real environments.

When new flight-deck designs appeared, manufacturers and regulators were careful to keep the basic T, even if it appeared in electronic format, because that was how pilots were used to getting the basic information. Unfortunately, this care has disappeared elsewhere: the position of the power levers under autopilot, the position of sticks/yokes, whether they should transmit pressure or not, whether their position should be common to both pilots or not; all of these received a treatment very far from a human-centered approach. Instead, screen mania seems to be everywhere.

A good design starts with a good question and, perhaps, the questions are not yet good enough; that is why analyses and complaints 20 and 30 years old remain current.
