Category: Science and technology

The truncheon gang on Twitter: Are they just bots?

Just a few days ago, irrefutable evidence was published showing how the nazi-populist crowd was using bots, that is, a horde of fake Twitter accounts broadcasting the same messages at the same time. These accounts are used to push the hashtags carrying the slogans of the moment and turn them into the trending topic of the day.

This was already known, but I have just noticed something odd that could have the same origin, especially considering that I have always been openly critical of nazi-populism:

  1. A few days ago, someone started following my Twitter account. I was surprised that the account had thousands of followers and its messages were absolutely innocuous; I could not find a single one with the slightest connection to politics.
  2. Today I found another account that had started following me. The profile is identical: thousands of followers, and the messages are not only innocuous, THEY ARE THE SAME ONES.

Naturally, the whole thing smells bad and I have blocked both. Any ideas? Are they spammers, or is there some other goal?
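For what it's worth, the pattern itself (two accounts whose timelines are essentially the same text) can be checked mechanically. Below is a minimal sketch in Python of that kind of comparison; the account names, tweets and threshold are invented for illustration, and real timelines would have to come from the Twitter API.

```python
# A minimal sketch of the check described above: two accounts whose timelines
# are essentially the same text. Account names and tweets are hypothetical; in
# practice the timelines would come from the Twitter API.

def timeline_similarity(tweets_a, tweets_b):
    """Jaccard similarity between two sets of normalized tweet texts."""
    set_a = {t.strip().lower() for t in tweets_a}
    set_b = {t.strip().lower() for t in tweets_b}
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical timelines of the two suspicious followers.
follower_1 = ["Lovely sunset today", "My cat is the best", "Coffee time!"]
follower_2 = ["Coffee time!", "My cat is the best", "Lovely sunset today"]

similarity = timeline_similarity(follower_1, follower_2)
if similarity > 0.8:  # arbitrary threshold for "suspiciously identical"
    print(f"Likely coordinated accounts (similarity = {similarity:.2f})")
```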


A comment on a good read: Air Safety Investigators by Alan E. Diehl

Some books can be considered a privilege, since they are an opportunity to look into an interesting mind. In this case it is the mind of someone who was professionally involved in many of the air accidents regarded as Human Factors milestones.

The author, Alan Diehl, has worked with the NTSB, the FAA and the U.S. Air Force. Everywhere he went, he tried to show that Human Factors had something important to say in the investigations. Actually, I borrowed my first sentence from something he repeats again and again: the idea of trying to get into the mind of the pilot to know why a decision was made.

Probably, we should establish a working hypothesis about the people involved in an accident: they were not dumb, they were not crazy, and they were not trying to kill themselves. It would hold true almost always.

Very often, as the author shows, major design and organizational flaws lie beneath the bad decision that leads to an accident. He suffered some of these organizational flaws in his own career, being vetoed in places where he challenged the status quo.

One of the key cases, a turning point for his activity but, regretfully, not for aviation safety in military environments, happened during the Gulf War: two F-15s shot down two American helicopters. Before that, he had tried to implement CRM principles in the U.S. Air Force. The attempt was rejected by a high-ranking officer and, after the accident, they tried to avoid any mention of CRM issues.

Diehl suffered the consequences of disobeying the orders on that matter, as well as of blowing the whistle on some bad safety-related practices in the Air Force. Even though those practices carried a heavy death toll, that did not bring about a change.

As an interesting tip, almost at the end of the book there is a short analysis of the different reporting systems, how they were created and the relationships among them. Although it does not pretend to be a major part of the book, it can be very clarifying for the many people who get lost in the soup of acronyms.

However, the main and most important part of the book is CRM-related: Diehl fought hard to get CRM established after a very well-known accident. It involved a United DC-8 in Portland, which crashed because it ran out of fuel while the pilot was worried about the landing gear. That made him delay the landing beyond any reasonable expectation.

It’s true that the Portland case was important, just as the Los Rodeos and Staines cases were also very important as major events used as inputs for the definition of CRM practice. However, and this is a personal opinion, something could be missing here: when Diehl had problems with the Air Force, he defended CRM from a functional point of view. His point, in short, was that we cannot accept the death toll that its absence was causing, but… is the absence of CRM the real problem, or does the problem have much deeper roots?

CRM principles can be hard to apply in an environment where power distance is very high. Once there, you can decide that a plane is a kind of bubble where this high power distance does not exist, or accept that there is no such bubble and that, as someone once told me, as a pilot I am in charge of the flight, but in fact a plane is an extension of the barracks and the highest-ranking officer inside the plane is the real captain. Nothing to be surprised about if we look at the facts behind the air accident that beheaded the Polish State: "suggestions" from the Air Force chief are hard for a military pilot to ignore.

Diehl points out how, in many situations, pilots seem inclined to gamble with their lives instead of sticking to safety principles. Again, he is right, but it can easily be explained: suppose that the pilot of the flight that crashed with the Polish President and much of the State leadership on board had rejected the "suggestion" and diverted to the alternate airport. Nothing would have happened, except that the outcome of the other option is never visible, and everyone would have found reasons to explain why the pilot should have landed where he tried to land. His career would simply have been ruined, because nobody would admit the real danger of the other option.

Once you decide, it is impossible to know the outcome of the alternate decision, and that makes the pressure especially hard to resist. So, even when restricted to the cockpit or to a full plane, CRM principles can be hard to apply in some organizations. Furthermore, as Diehl suggests in the book, you can extend CRM concepts well beyond the cockpit, trying to turn it into a change management program.

CRM, in civilian and military organizations alike, means a way of working, but we can find incompatibilities between CRM principles and the principles of the organizational culture. Management has to deal with these contradictions but, if the organizational culture is very strong, it will prevail and management will not deal with them. They will simply settle for the status quo, ignoring any other option.

Would CRM have saved the many lives lost because of its absence? Perhaps not. There is a paradox in approaches like CRM or, more recently, SMS: they work fine in the places where they are least needed, and they don't work in the places where their implementation should be a matter of urgency. I am not trying to play with words but to state a single fact, and I would like to do so with an example:

Qantas, the Australian airline, has a highly regarded CRM program, and many people, inside and outside that company, would agree that CRM principles meant a real safety improvement for it. Nothing to object to, but let me show it in a different light:

Suppose for a moment that someone decides to remove all the CRM programs in the world because of… whatever. Once done, we can ask which companies would be the most affected. Would Qantas be among them? Hard to answer, but probably not. Why?

CRM principles work precisely in the places where those principles were already working in the background. There, CRM brings order and procedures to a pre-existing situation that we could call "CRM without a CRM program": for instance, a low power distance where the subordinate is willing to voice any safety concern. In that case, the improvement is clear. If we suddenly suppressed the activity, the culture would keep those principles alive, because they fitted that culture from the very first moment and before.

What happens when CRM principles run against the organizational culture? Let me put it shortly: make-up. They will accept CRM just as they accept SMS, since both are mandatory, but everyone inside the organization will know the truth. Will CRM save lives in these organizations, even if they are forced to implement it?

A recent event can answer that: the Asiana accident in San Francisco happened because a first officer did not dare to tell his captain that he was unable to land the plane manually (of course, as usual, many more factors were present, but this was one of them, and an extremely important one).

Diehl clearly advocates for CRM, and I believe he is right, backed by statistical information that speaks of a safety improvement. My point is that the improvement is not homogeneous: it happens mainly in places that were already willing to accept CRM principles and, in a non-structured way, were already working with them.

CRM by itself does not have the power to change the organizational culture in places that reject its principles, and there the approach should be different. A very good old book, The Critical Path to Corporate Renewal by Beer, Eisenstat and Spector, explains clearly why change programs don't work, and shows a different way to achieve change in organizations that reject it.

Anyone trying to make a real change should flee from change programs, even if we agree with their goals, because one-size-fits-all does not work. Some principles, like the ones behind CRM or SMS, are valid from a safety point of view but, even though everyone will pay lip service to the goals, many organizations won't accept the changes required to get there. That is still a hard challenge to be met.

Originally published on my LinkedIn profile

Air Safety and Hacker Frame of Mind

If we ask anyone what a hacker is, we could get answers ranging from cyberpiracy, cybercrime, cybersecurity… to any other cyber-thing. However, it is much more than that.

Hackers are classified depending on the "color of their hat". A white hat hacker is someone devoted to security, a black hat hacker is a cybercriminal, and a grey hat hacker is something in between. That can be interesting as a matter of curiosity but… what do they have in common? Furthermore, what do they have in common that can be relevant for air safety?

Simonyi, the creator of WYSIWYG, warned long ago about an abstraction scale that kept adding more and more steps. Speaking about information technology, that means that programmers don't program a machine: they instruct a program to make a program to be run by a machine. Higher programming levels mean a longer distance from the real thing and more steps between the human action and the machine action.
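As a small illustration of that stack of layers, the sketch below uses Python's standard dis module to expose the bytecode hiding under a trivial function: what the programmer writes is already one step removed from what the interpreter executes, which is in turn several steps removed from the machine.

```python
# The two-line function below is not what the machine runs: the interpreter
# first compiles it to bytecode (one layer down), and that bytecode is executed
# by the CPython virtual machine, itself a program running on the CPU.
import dis

def ground_speed(distance_nm, time_h):
    return distance_nm / time_h

dis.dis(ground_speed)  # prints the bytecode layer hiding under the source
```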

Of course, Simonyi flagged this as a potential problem while speaking about information technology, but information technology is now ubiquitous and the problem can be found anywhere, including, of course, aviation.

We could say that any IT-intensive system has different layers, and that the number of layers defines how advanced the system is. So far so good, if we assume a perfect correspondence between layers, that is, that every layer is a symbolic representation of the one below and that the representation is perfect. That should be all… but it isn't.

Every information layer that we put over the real thing is not a perfect copy (that would be pointless); instead, it tries to improve something in safety or efficiency or, very often, it claims to improve both. However, avoiding flaws in that process is almost impossible. That is where the problems start, and where hacker-type knowledge and a hacker frame of mind become highly desirable for a pilot.

The symbolic nature of IT-based systems makes their flaws hard to diagnose, since their behavior can be very different from that of mechanical or electrical systems. Hackers, good or bad, try to identify these flaws; that is, they are very conscious of this layered, symbolic approach instead of assuming an enhanced but perfect representation of the reality below.

What does a hacker frame of mind mean as a way to improve safety? Let me show two examples:

  • From cinema: the movie "A Beautiful Mind", devoted to John Nash and his mental health problems, shows at one point how and why he was able to control those problems: he kept confusing reality and fiction until he found something that did not fit. It happened to be a little girl who, after many years, was still a little girl instead of an adult woman. That gave him the clue to know which part of his life had been created by his own brain.
  • From air safety: a reflection taken from the book "QF32" by Richard de Crespigny: Engine 4 was mounted to our extreme right. The fuselage separated Engine 4 from Engines 1 and 2. So how could shrapnel pass over or under the fuselage, then travel all that way and damage Engine 4? The answer is clear. It can't. Once you arrive at that conclusion, one finding appears crystal-clear: the information coming from the plane cannot be trusted, because somewhere in the IT layers the correspondence between reality and representation has been lost.

Detecting these problems is not easy. It requires much more than operating knowledge and, at the same time, we know that nobody has full knowledge of the whole system, only partial knowledge. That partial knowledge should be enough to define key indicators, as in the examples above, that tell us when we are working with information that should not be trusted.
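To make the idea of a key indicator concrete, here is a hypothetical sketch in the spirit of the QF32 quote above: a plausibility rule that cross-checks a reported damage path against the physical layout of the aircraft. The layout table and the rule are simplifications invented for this example, not taken from any real system.

```python
# Hypothetical plausibility rule: shrapnel from an engine on one side of the
# fuselage is not expected to damage an engine on the other side. If the system
# reports exactly that, the cue is "do not trust the reported data blindly".
# The layout table is a simplification.

ENGINE_SIDE = {1: "left", 2: "left", 3: "right", 4: "right"}  # A380-like layout

def damage_report_is_plausible(source_engine, damaged_engine):
    """Treat a damage path crossing over or under the fuselage as implausible."""
    return ENGINE_SIDE[source_engine] == ENGINE_SIDE[damaged_engine]

# The system claims debris from engine 2 damaged engine 4, on the opposite side.
if not damage_report_is_plausible(source_engine=2, damaged_engine=4):
    print("Implausible damage path: treat the reported information with suspicion")
```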

The hard part: the indicators cannot be permanent; they have to be adapted to every situation, that is, the pilot has to decide which indicator to use in situations not covered by procedures. That brings us to another issue: if a hacker frame of mind is positive for air safety, how do we create, nurture and train it? Let's look again at the process a hacker follows to become one:

First, hackers look actively for information. They don't go to formal courses expecting the information to be handed to them. Instead, they look for resources that allow them to increase their knowledge level. Applying this model to aviation would mean wide access to information sources beyond what is provided in formal courses.

Second, hacker training is closer to military training than to academic training; that is, they fight to break into or to defend a system, and they show their skills against an active opponent. To replicate such a model, simulators should include situations that trainers can imagine. The design should therefore be much more flexible: instead of simulators behaving exactly as a plane is supposed to behave, they should leave room for potential situations arising from information misrepresentation or from automatic responses to defective sensors.

Asking for full knowledge of all the information layers and their potential pitfalls would be utopian, since nobody has that kind of knowledge, including designers and engineers. Everybody has partial knowledge. So how can we do our best with this partial knowledge? By looking for a different frame of mind in the people involved, mainly pilots, and by providing the information and training resources that allow that frame of mind to be created and developed. That could mean a fully new training model.

Originally published on my LinkedIn profile

The Internet and its presumed omniscience: The next war is over information quality

Ray Kurzweil said that the great revolution brought by advanced information systems lies in a simple fact: reproducing and transmitting information now has a cost of virtually zero. That supposedly breaks the classic divide between those who have access to information and those who do not since, according to Kurzweil, now everyone has it.

Purely by chance, over the last few days I have had to look up information on several topics and, of course, I resorted to more or less advanced searches on Google and on sites supposedly specialized in providing information. I should say up front that these were not philosophical or religious questions or anything of the kind, but questions with a clear answer. Reaching that answer through the tangle of false or outdated information is another matter. This happens even with subjects directly related to the Internet itself.

Example: a Nexus 5 phone, with Google's commitment to update it to the latest version of Android. What Google does not say is when that latest version will arrive, and those of us who are not very patient look for other routes, such as downloading the official update from Google's own sites. This requires some manipulation of the phone, such as unlocking the bootloader, rooting the phone, or other pieces of techno-jargon.

For any of these options, Google returns several pages of results, including YouTube videos. The problem comes when you try to put them into practice and find that the instructions may be outdated or incomplete, or that the phone simply does not do what, according to the instructions read on the Internet, it was supposed to do.

The curious thing is that, after trying several solutions, some of which left the phone bricked, a real solution appeared: a software tool called Nexus Root Toolkit that lets the user do whatever they want with the phone: root it, unroot it, lock or unlock the bootloader, change the operating system version… whatever.

Why does reaching this tool require a pilgrimage of trial and error through solutions that supposed or real experts keep posting on the Internet?

Another example, perhaps a little less scandalous because it does not touch the very field in which the Internet is supposed to hold first-rate information, is searching for design differences between two aircraft types on very specific points: the information exists, but finding it with a search engine or on a Q&A site like Quora is practically impossible, and in the end the most effective option is to phone someone you know has that information… just like in the old days.

Kurzweil is right: the great revolution of information technology is the disappearance of the cost of copying and transmitting information, but that virtually free distribution has brought a problem with it: everyone now has a megaphone on every topic, not just those who have something to say about it. Finding a valid signal in a growing mass of noise is harder and harder, and increasing the number of pages or the access speeds does not solve this problem; it makes it worse.

The Internet grows spectacularly, but the quality of the information it contains does not. Rather the opposite.

Flight-Deck Automation: Something is wrong

Something is wrong with automation. If we can find diagnoses performed more than 20 years ago whose conclusions are still current… something is wrong.

Some examples:

Of course, we could extend the examples to books like Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, published by Rasmussen in 1986; Safeware, written by Leveson in 1995; Normal Accidents, by Perrow in 1999; The Human Interface, by Raskin in 2000; and many others.

None of these resources is new, but all of them can be read with profit by someone interested in what is happening NOW. Perhaps there is a problem in the basics that has still not been properly addressed.

Certainly, once a design decision is made, going back is extremely expensive, and manufacturers will try to defend their solutions. An example I have used more than once: modern planes carry processors so old that the manufacturer no longer makes them. Since the lifetime of a plane is longer than the lifetime of some key parts, operators have to stockpile those parts, because they can no longer ask the manufacturers to supply them.

The obvious solution would be renewal, but that would be so expensive that they prefer having brand-new planes with old-fashioned parts rather than going through new certification processes. Nothing to object to in this practice. It is only one sample of a more general one: sticking to a design and defending it against any doubt, even a reasonable one, about its adequacy.

However, this rationale applies to products already in the market. What about the new ones? Why do the same problems appear again and again instead of being finally solved?

Perhaps a Human Factors approach could help identify the root problem and fix it. Let's talk about psychology:

The first psychologist to win a Nobel Prize was Daniel Kahneman. He was one of the founders of Behavioral Economics, showing how we use heuristics that usually work but can misguide us in some situations. To show that, he and many followers designed interesting experiments that make clear that we all share some "software bugs" that can drive us to make mistakes. In other words, heuristics would be a quick-and-dirty approach: valid for many situations but useless, if not harmful, in others.

Many engineers and designers would be willing to buy this approach and, of course, their products would then be designed in a way that enforces a formal rational model.

The most qualified opposition to this model comes from Gigerenzer. He explains that heuristics are not a quick-and-dirty approach but the only possible one when we face constraints of time or processing capacity. Furthermore, for Gigerenzer, people extract intelligence from context, while the experiments of Kahneman and others take place in strange situations designed to misguide the subject of the experiment.

An example, used by Kahneman and Tversky, is this one:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

 Which is more probable?

  • Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The experiment aims to show the conjunction fallacy, that is, how many people choose the second alternative even though the first one is not only wider but includes the second one.
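Underneath the fallacy there is a basic rule of probability: for any two events, P(A and B) can never exceed P(A). A tiny numeric check, with made-up numbers, shows the point:

```python
# The conjunction rule behind the Linda problem: P(A and B) can never exceed
# P(A). The numbers are made up purely to illustrate the inequality.

p_teller = 0.05                      # P(A): Linda is a bank teller
p_feminist_given_teller = 0.30       # P(B | A): active feminist, given she is a teller
p_teller_and_feminist = p_teller * p_feminist_given_teller  # P(A and B)

assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)  # 0.05 vs 0.015: the conjunction is less probable
```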

Gigerenzer's analysis is different: suppose that all the information about Linda were the first sentence, "Linda is 31 years old." Furthermore, suppose you give no further information and simply ask the question… we could expect that the conjunction fallacy would not appear. It appears because the experimenter provides information and, since the subject is given that information, he assumes it is RELEVANT… otherwise, why would he be fed with it?

In real life, relevance is a cue. If someone tells us something, we understand that it has a meaning and that the information was not included to deceive us. That is why Gigerenzer criticizes the Behavioral Economics approach, which may be shared by many designers.

For Gigerenzer, we usually judge how good a model is by comparing it with an ideal, rational one; but if, instead, we judge which model is best by looking at the results, we may find some surprises. That is what he did in Simple Heuristics That Make Us Smart: comparing complex decision models with others that, in theory, should perform worse, and finding that, in many cases, the "bad" model got better results than the sophisticated one.
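To make that comparison concrete, here is a minimal sketch of one of the heuristics Gigerenzer studies, take-the-best: go through the cues in order of validity and decide on the first one that discriminates. The cue values and weights are invented for illustration; the point is only that a one-cue decision can match the answer of a heavier weighted-sum model.

```python
# "Take-the-best": compare two options cue by cue, in order of cue validity,
# and decide on the first cue that discriminates. Cue values and weights are
# invented for illustration (the classic example: which of two cities is larger).

cue_order = ["is_capital", "has_airport", "has_university"]  # ordered by validity
city_a = {"is_capital": 0, "has_airport": 1, "has_university": 1}
city_b = {"is_capital": 0, "has_airport": 0, "has_university": 1}

def take_the_best(a, b, order):
    for cue in order:
        if a[cue] != b[cue]:
            return "A" if a[cue] > b[cue] else "B"
    return "guess"

def weighted_sum(a, b, weights):
    score_a = sum(weights[c] * a[c] for c in weights)
    score_b = sum(weights[c] * b[c] for c in weights)
    return "A" if score_a > score_b else "B" if score_b > score_a else "guess"

weights = {"is_capital": 0.9, "has_airport": 0.7, "has_university": 0.5}
print(take_the_best(city_a, city_b, cue_order))  # "A", decided on a single cue
print(weighted_sum(city_a, city_b, weights))     # "A", same answer with more computation
```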

Let's go back to automation design. Perhaps we are asking the wrong questions at the beginning. Instead of asking "What information would you like to have?" and getting a letter to Santa Claus as an answer, we should ask: "What are the cues you use to know that this specific event is happening?"

The FAA, in its 1996 study, complained that major failures such as an engine stop can be masked by a bunch of warnings about different systems failing, making it hard to discern that all of them come from a common root, namely the engine stop. What if we asked instead: "Tell me one fact (exceptionally I would admit two) that would tell you, clearly and fast, that one of the engines has stopped"?
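A hypothetical sketch can picture the masking problem: a stopped engine floods the crew with downstream warnings, while one direct upstream cue answers the question at a glance. The warning names, the dependency set and the threshold below are invented for illustration only.

```python
# Hypothetical sketch of the masking problem: an engine stop floods the crew
# with downstream warnings, while one direct upstream cue answers the question
# at a glance. Warning names, the dependency set and the N2 threshold are invented.

DOWNSTREAM_OF_ENGINE_2 = {"GEN 2 OFF", "HYD SYS 2 LOW PRESS", "BLEED 2 FAULT"}

active_warnings = {"GEN 2 OFF", "HYD SYS 2 LOW PRESS", "BLEED 2 FAULT"}
engine_2_n2_percent = 3.0  # the single direct cue: core speed far below idle

if engine_2_n2_percent < 10.0:
    print("Engine 2 stopped")  # one fact, clear and fast
elif active_warnings & DOWNSTREAM_OF_ENGINE_2:
    print("Several system warnings: the common root is not obvious from the list")
```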

We have a nice example in the QF32 case. The pilots started to distrust the system when they got information that was clearly false. It was a single fact, but enough to distrust it. What if, instead of jumping to that conclusion from a single fact, they had been "rational", trying to assign probabilities to different scenarios? The plane probably would not have had enough fuel to allow that approach.

Rasmussen suggested one approach, a good one, in which the operator should be able to cognitively run the program that the system is performing. The approach is good, but something is still missing: how long would it take the operator to replicate the functional model of the system?

In real-life situations, especially those involving uncertainty rather than calculated risk, people use very few indicators, ones that are easy and fast to obtain. Many of us remember the BMI-092 case. The pilots used an indicator to decide which engine had the problem… unfortunately, they had come from an earlier generation of the B737 and did not know that the one they were flying bled air from both engines instead of only one. The cue that led them to the wrong engine would have been correct in an older plane.

Knowing the cues used by pilots, planes could be designed with a human-centered approach instead of creating an environment that does not fit the ways people perform real tasks in real environments.

When new flight-deck designs appeared, manufacturers and regulators were careful enough to keep the basic T, even if it appeared in electronic format, because that was how pilots were used to getting the basic information. Unfortunately, this care has disappeared in many other things: the position of the power levers under autopilot, the position of sidesticks/yokes, whether they should transmit pressure or not, or whether their position should be common to both pilots… all got a treatment very far from a human-centered approach. Instead, screen-mania seems to be everywhere.

A good design starts with a good question and, perhaps, the questions are not good enough yet; that is why analyses and complaints 20 and 30 years old are still current.

"Cuarto Milenio", or the audacity of ignorance

A few days ago I stumbled upon the TV show "Cuarto Milenio" and stayed to watch when they started talking about the Third Reich's secret weapons during the final phase of World War II. They had a supposed expert in the studio who told two things, just two, about the technological wonders of Nazi Germany, and both contained glaring errors. Even so, out came the tones of astonishment, the "what would have happened if the war had lasted longer?", and so on.

First error: German scientists of the time had supposedly achieved an aircraft invisible to radar, which could therefore be considered a forerunner of today's stealth aircraft. Well, the aircraft in question, the Go229, was very commendable for its design (a flying wing), its performance and its light weight, but its low radar visibility was not achieved through sophisticated angular design or materials engineered to keep radar waves from bouncing off them. The aircraft was simply built mostly of wood. Without doubt a technological work of art, but gratuitously jumping seventy years ahead and talking about technologies enabling radar invisibility seemed to be stretching things a bit, doesn't it?

Second error: a German installation in the province of Lugo. Two adjacent antennas (note the detail, "adjacent") 150 metres tall, supposedly used by German submarines to determine their position by triangulation. Let's see: triangulation is a very old technique and was already used in World War II to locate clandestine transmitters. A device called a radio direction finder could determine the direction a transmission came from, but not the distance. With a second device at a different position, the point where the two lines cross is the location of the transmitter.

For a submarine to determine its own position by triangulation using two antennas, those antennas must be far apart, not adjacent, so that the submarine can draw two lines along the direction of each transmission and know that it sits where they cross. If the two antennas stand side by side, there is no triangulation to speak of.
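The geometry is easy to check with a short sketch (a hypothetical flat-plane calculation, with made-up numbers): intersecting two lines of position, each passing through a transmitter site along the measured bearing, gives a usable fix only when the sites are well separated.

```python
import math

# Each line of position passes through a transmitter site along the measured
# bearing; the fix is where the two lines cross. Flat-plane approximation,
# coordinates in kilometres, all numbers made up.

def fix_from_bearings(site_a, bearing_a_deg, site_b, bearing_b_deg):
    """Intersect two lines of position; returns None if they are (nearly) parallel."""
    ax, ay = site_a
    bx, by = site_b
    dax, day = math.sin(math.radians(bearing_a_deg)), math.cos(math.radians(bearing_a_deg))
    dbx, dby = math.sin(math.radians(bearing_b_deg)), math.cos(math.radians(bearing_b_deg))
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        return None  # parallel lines of position: no fix
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return ax + t * dax, ay + t * day

# Two well-separated sites give a usable fix...
print(fix_from_bearings((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))  # roughly (50.0, 50.0)
# ...but two adjacent sites are seen under essentially the same bearing,
# so the lines are parallel and there is no fix.
print(fix_from_bearings((0.0, 0.0), 45.0, (0.1, 0.0), 45.0))     # None
```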

Second problem: the Earth is round, which means the range of a ground-based transmitter is short. The problem is solved, to some extent, when the receiver is an aircraft, especially one flying ten kilometres above the ground, which compensates for the curvature of the Earth and extends the range. But a submarine? Besides needing the antennas to be far apart, a submarine is particularly affected by the shape of the Earth, and the range would be so short that it would have to be practically on the coast to detect the signals.

As I said: ignorance is bold and, when it comes with the smugness displayed by some television shows, it is also insufferable.

Social networks: The new censorship?

We have to be very careful to distinguish between public opinion and published opinion; failing to do so can have consequences that end up turning into a new form of censorship:

Nobody, absolutely nobody, behaves the same way at home, among friends, among a group of colleagues, or when speaking to a public medium. Social networks, when misused, invite us to confuse these spheres, so that we all end up behaving at all times as if there were a camera in front of us and our words were about to be published immediately.

Just yesterday I got an unpleasant surprise in this regard: at a conference, excellent in its organization and subject matter, some people devoted themselves to relaying on Twitter things that were being said there. Naturally, it was a supposedly professional forum, supposedly among colleagues, where the rules about what is said and how are not the same as when speaking in front of a camera. The Twitter mentions not only picked out the most scandalous bits, leaving out the qualifications made in the meeting, but even included private comments made outside the general session, over a cup of coffee.

If that is the use we can expect of networks like Twitter, nobody should be surprised if professional meetings become impoverished by distrust about who will use what is said, and how. Nor should those who act this way be surprised if, once identified, the others build a wall of silence around them and try to avoid them.

If this is what we should expect from the future, earlier versions of censorship will end up looking like a joke compared with what is coming.
