Category: Science and technology

BIG DATA: WILL IT DELIVER AS PROMISED?


The Big Data concept is still relatively new, but the idea behind it is very old: if you have enough data, you can eliminate ambiguity and rely less on hunches, since the data become self-explanatory.

It is an old dream among I.T. people. However, reality has always insisted on delaying the moment when it can be accomplished. There are two problems in getting there:

  1. As data grow, context analysis becomes more necessary in order to decide which data are relevant and which can be safely ignored.
  2. On the other side of the table, there may be people trying to mislead automatic decision-support systems. Actually, so-called SEO (Search Engine Optimization) could properly be renamed GAD (Google Algorithm Deception) to state more clearly what it is intended to do.

Perhaps, for now, Big Data is less prone to the second problem than anyone performing Web analytics. The Web has become the battlefield for a quiet fight:

On one side are those trying to get better positions for themselves and for the positive news about them. They are also the ones who try to bury negative news by producing positive stories and repeating them, making sure the bad ones stay hidden deep in the search results.

On the other side, we have the Masters of Measurement. They try to build magic algorithms able to defeat the tricks of the first group, unless the first group decides to pay for their services.

Big Data has an advantage over Web data: if a company has its own data sources, they can be more reliable and more expensive to deceive, and any attempt at deception can be spotted quite easily. Even so, this is not new: during World War II, knowing how successful a bombing raid had been was not a matter of reading German newspapers or listening to German radio stations.

The practice known as content analysis used indirect indicators, such as funeral and burial notices, which were informative if and only if the enemy did not know these data were being used as an intelligence source. In the same vein, before D-Day, heavily defended positions full of rubber tanks tried to fool reconnaissance planes about where the invasion would start. That practice endured for a long time: it was used even in the Gulf War, where the rubber tanks were fitted with heat sources intended to deceive infrared detectors, which would get a picture similar to the one produced by running engines.

Deceiving Big Data will be harder than deceiving Internet data but, once it is known who is using specific data and what they are doing with them, there will always be a way. An easy example: inflation indicators. A Government can change the weights of the different variables, or adjust Government-controlled prices, to get a favorable picture. In the same way, if Big Data is used to give information to external parties, we do not need someone outside trying to deceive the system; that can be done from the inside.
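A minimal sketch (hypothetical basket and figures, not any real CPI methodology) of how reweighting the same price data changes the headline figure:

    # Hypothetical basket: identical observed price changes, two sets of weights.
    price_change = {
        "food": 0.08,        # +8% year on year
        "energy": 0.12,      # +12%
        "regulated": 0.00,   # government-controlled prices, frozen by decree
    }
    honest_weights = {"food": 0.40, "energy": 0.30, "regulated": 0.30}
    friendly_weights = {"food": 0.25, "energy": 0.15, "regulated": 0.60}

    def headline_inflation(weights):
        # Weighted average of the observed price changes.
        return sum(w * price_change[k] for k, w in weights.items())

    print(f"{headline_inflation(honest_weights):.1%}")    # 6.8%
    print(f"{headline_inflation(friendly_weights):.1%}")  # 3.8%

Same data and same arithmetic; only the weights moved, and the picture suddenly becomes favorable.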

Anyway, the big problem is the first point: data without context are worthless… and the context can move faster than any algorithm designed to give meaning to the data. Many surprising outcomes have happened in places where all the information was available; however, that information was correctly read only after a major disaster. For instance, the emergence of new political parties could be seen in the data but, since the major players decided to dismiss them, it came as a surprise to them anyway. The problem lay in the decision about what deserved to be analyzed and how, not in the data themselves.

At other times, the problem comes from fast changes in the context that are not captured in the variables under analysis. In the case of Spain, we can point to the impact that 11-M, and the way the different players managed it, had on the election held three days later. In another election, everybody had a clear idea about who was going to get a position that required an alliance. Good reasons advised an agreement, and the data showed that everybody was sure the agreement was coming… but it never arrived. One of the players was so sure the deal was done that it tried to impose conditions the other players found unacceptable. The consequence: the desired position went into the hands of a third player. Very recently, two people, both considered rising stars, may have spoiled their chances through minor incidents.

In short, we can have a huge amount of data, but we can only analyze the part considered relevant. In doing so, no algorithm and no amount of data are a good replacement for analysis performed by human experts. An algorithm or automatic system can be fooled, even by another automatic system designed for that purpose; context analysis can drop important variables that have been misjudged; and sudden changes in the context cannot be anticipated by any automatic system.

Big Data can be helpful if used rationally. Otherwise, it will become another fad or, worse, a standard: nobody would dare decide against a machine armed with a lot of data and an algorithm, even when they are wrong.

«Phishing» networks are getting bolder and bolder

Someone looks at my Linkedin profile and, since I am an Open Networker, gets in touch and sends me a message: in it, they talk about the supposed establishment of a light-aviation company in Spain and the need for someone to advise them, during the start-up, in the Human Resources area.

Surprising: I ask for details -whether it is light or ultralight aviation, in which part of Spain they want to set up, and a few more basic facts- none of which he answers; instead he asks for a contact on Skype. No problem there either and, after giving him my username, I find someone requesting my contact with a very professional photograph of a Saxon-looking character in a tie.

When we hold the conversation, I get two surprises. The first is that the accent did not match the photograph: although his English was basically correct, he had a strong African accent, and I was unable to pin it down any further.

The second surprise was that they no longer wanted Human Resources advice but «accountants», because the company was not going to be in Spain after all; it would only have commercial activity there, and the «accountant» would be in charge of collecting the payments. Starting to sound familiar, right?

It seems social engineering knows no limits when it comes to hunting for the unwary through phishing.

Fake Internet leaders

In recent weeks, I have seen ads from companies claiming to be worldwide head-hunters, networks challenging Linkedin, and other would-be Internet giants. A few moments ago, I was looking at the «Linkedin made in Spain».

The homepage design may be carefully crafted, and someone might believe the site is as important as it claims to be. However, the imposture is so easy to uncover that it is surprising people still try to fake their real importance on the Internet.

An example: to test whether the «Linkedin made in Spain» is as important as they say, I typed its Web address into Alexa. These are the results, compared with the real Linkedin:

Quantitative Results:

[Screenshot: Gonway quantitative data in Alexa]

[Screenshot: Linkedin quantitative data in Alexa]

From the same source, Linkedin's rank in Spain is 9th.

Now, a qualitative view from Alexa:

Qualitative Results:

[Screenshot: Alexa qualitative data]

[Screenshot: Alexa qualitative data for Linkedin]

These data are easy to obtain. So, before playing Internet tycoon, please remember: fakes are easy to discover.

Artificial Intelligence (GOFAI): Story of an old mistake

Is there anything surprising in this picture? Anyone familiar with robot arms may be struck by the almost human look of this hand, quite uncommon in robots used in manufacturing and surgery. A robot hand does not try to imitate the external shape of a human hand; it usually has two or three mobile pieces working as fingers. Here, however, we have a copy of a human hand. Now, the second surprise: the picture shows a design in the M.I.T. museum, and its author is Marvin Minsky.

Anything wrong with that? Minsky reigned for years over A.I. activity at M.I.T., and one of his principles, as reported by one of his successors, Rodney Brooks, was this: when trying to advance the building of intelligent machines, we should not be interested at all in the human brain, which is the product of an evolution that probably left non-functional remains; therefore, it is better to start from scratch than to study how a human brain works. This is not a literal quotation but an interpretation, drawn from their own writings, of how Minsky and his team thought about Artificial Intelligence.

We cannot deny that, thinking this way, they had a point and, at the same time, an important confusion. Certainly, the human brain may have parts that are by-products of evolution, parts that add nothing to its proper working and may even contribute negatively. However, this does not apply only to the brain. The human eye has features that would get an engineer who designed a camera that way immediately fired. Even so, we do not complain about how the eye works, even when comparing it with almost all of the Animal Kingdom, although the comparison would be tricky: the human brain «fills in» many of the eye's deficiencies, such as the blind spot, and gives human sight features unrelated to the eye itself, such as a very advanced ability to detect movement. That ability owes much more to specialized neurons than to the lens (the eye). The paradox: the human hand comes from the same evolutionary process as the brain or the eye and, hence, it is surprising that someone who made a basic principle of dismissing human features designed a hand that imitates the human one far beyond any functional requirement.

The confusion in this old A.I. group is between shape and function. They are right that we may carry evolutionary remains, but there is a fact: neurons, far slower than electronic technology, get results that are hard to reach for advanced technology in fields such as shape recognition, or in tasks apparently as trivial as catching a ball in flight. Sportspeople are not usually seen as a paradigm of cerebral activity, but the movements required to catch a ball, let alone in a highly precise way, are out of reach for advanced technology. The principle based on «evolutionary remains» is clear but, if the results coming from this supposedly defective organ are, in some fields, much better than the ones we can reach through technology… is it not worth trying to learn how it works?

Waiting for more storage and more speed is a kind of «Waiting for Godot», or an excuse: present technology already works much faster than the humble neuron, and storage capacity has very high limits and is still growing. Do they need more speed to arrive before someone who is already much slower? Hard to explain.

The same M.I.T. museum that houses the hand has a recording in which a researcher working on robot learning surprises us with her humility: at the end of the recording, speaking about her work with a robot, she confesses that they are missing something beyond speed or storage capacity. Certainly, something is missing: they could be in the same situation as the drunk looking for his key under a streetlight, not because he lost it there but because that is the only place with enough light to search.

A.I. researchers did not stop to think in depth about the nature of intelligence or learning. Nevertheless, they tried to produce both in their technological creations, with quite poor results, both in the early attempts and in those featuring parallel processing, neural networks, or interaction among learning agents: nothing remotely similar to intelligent behavior, nor to learning deserving of the name.

Is it an impossible endeavor? Human essentialists would say so. Rodney Brooks, one of Minsky's successors, holds the opposite position based on a fact: human essentialists always said «there is a red line here impossible to cross», and technological progress again and again forced them to move the supposed limit further away. Brooks is right but… this fact does not show that a limit, wherever it may lie, does not exist, as Brooks tries to conclude. That would be a jump hard to justify, especially after series of experiments that never managed to show intelligent behavior, and given that accepted scientific knowledge has had to be replaced before. When scientists were convinced that the Earth was flat, navigation already existed, but its techniques had to change radically once the real shape of the Earth became common knowledge. Brooks could be in a similar situation: perhaps technological progress has no known limit, but that does not mean his particular line within it has a future. It could be one of the many culs-de-sac where science has ended up time and again.

As a personal opinion, I do not rule out the feasibility of intelligent machines. I do rule out, however, that they can be built without a clear idea of the real nature of intelligence and of the mechanisms of learning. The not-very-scientific attitude the A.I. group showed toward «dissidents» led them to dismiss people like Terry Winograd, once his findings made him uncomfortable, while others, like Jeff Hawkins, were rejected from the beginning because of their interest in how the human brain works. These people, together with others like Kurzweil, could open a much more productive way to study Artificial Intelligence than the old one.

The A.I. past exhibits too much arrogance and, as in many academic institutions, a working style based on loyalty to specific people and to a model that simply does not work. The hand in the picture shows something more: a contradiction with their own thinking. I do not know whether an intelligent machine will ever be real but, probably, it will not come from a factory working on the principle of dismissing whatever it does not know. Finding the right way requires more modesty and the ability to doubt the starting paradigms.

Lessons from 11S about Technology


A long time ago, machines became stronger and more precise than people. That is not new but… are they smarter too? Let us set aside developments close to SciFi, like artificial intelligence based on quantum computing or on interaction among simple agents. Instead, we are going to deal with present technology, its role in an event like 11S, and the conclusions we can draw from it.


Let's start with a piece of information: a first-generation B747 required three or four people in a cockpit with more than 900 elements. A latest-generation B747 requires only two pilots, and the number of elements inside the cockpit has decreased by two thirds. Of course, this has been possible through the introduction of I.T. and, as a by-product, through the automation of tasks that previously had to be performed manually. The new plane appears easier than the old one; however, the number of tasks the plane now performs on its own makes it a much more complex machine.


The planes used on 11S could be considered state-of-the-art at the time, and this technological level made the attack possible, together, of course, with a number of things far removed from technology. Something like 11S would have been hard with a less advanced plane: handling old planes is harder, and the collaboration of the pilots in a mass murder would have been required. Getting someone to collaborate in his own death under a death threat is not an easy task.


The solution was to make the pilot expendable and that, if the plane is to keep flying, requires another pilot willing to take his own life. What is the training cost for that pilot? In money terms, a figure of $120,000 could be more or less right for training a professional pilot; that would not have been hard to obtain for the people who organized and financed 11S. A harder barrier to pass is the time this training requires. Old planes were very complicated, and handling them took a good amount of training acquired over several years. Would terrorists be so patient? Could they trust the commitment of future suicide pilots over the years?


Both questions could have led the organizers to reject the plan as unfeasible. However, technology played its role in a very simple way: under normal conditions, modern planes are easier to handle and, hence, can be flown by less knowledgeable and less expert people. From this point, the situation appears in a different light: how long does it take a rookie pilot to acquire the dexterity needed to handle the plane at the level the objectives demand? The facts gave the answer: a technologically advanced passenger plane is easy to handle -at the level required- by a low-experience pilot after an adaptation through simulator training.


Let's go back to the starting question: machines are stronger and more precise than people, but are they smarter too? We could start discussing the different definitions of intelligence but, in any case, there is something machines can do: once a way to solve a problem is defined, it can be programmed into a machine so that the problem is solved automatically, again and again. As a consequence, complexity is displaced from the people to the machine, allowing modern, complex machines to be handled by people less able than those who operated older machines with more complex interfaces.
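To make that displacement concrete, here is a minimal sketch (hypothetical names and numbers, nothing like real avionics): a rule the operator once applied by hand, wrapped behind a one-call interface that is simpler to use and opaque the moment the inputs leave the foreseen envelope.

    # The rule the operator once applied by hand: descent rate (ft/min)
    # needed to reach the runway from the current altitude and distance.
    def required_descent_rate(altitude_ft, distance_nm, ground_speed_kt=120):
        minutes_to_go = distance_nm / ground_speed_kt * 60
        return altitude_ft / minutes_to_go

    # The automated version: one call, same rule inside. Easier under normal
    # conditions; an opaque fault when the design did not foresee the input.
    class AutoDescent:
        def engage(self, altitude_ft, distance_nm):
            if distance_nm <= 0:                  # outside the foreseen envelope
                raise RuntimeError("MODE FAULT")  # all the operator gets to see
            return required_descent_rate(altitude_ft, distance_nm)

    print(AutoDescent().engage(3000, 10))  # 600.0 ft/min, at the push of a button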


Of course, there is an economic issue here: a large investment in technological design can be recovered if the number of machines sharing the design is high enough. The investment in design is made only once, but it can drive major savings in the training of thousands of pilots. At this point the automation paradox appears: modern designs produce more complex machines with a good part of their tasks automated. Automation makes these machines easier to handle under normal conditions than their predecessors; hence, less trained people can operate machines that are internally very complex. Once complexity is hidden at the interface level, less trained people can drive more complex machines, and that is where the automation payback lies.


The scary question is this: what happens in unforeseen situations, the ones not included in the technological design? In high-risk activities, the manufacturer usually has two answers to this question: redundancy and manual handling. However, both require a prior condition: the problem has to be identified as such in a clear and visible way. If it is not, or if, even after being identified, the problem appears in a situation with no time available, the people trained to operate the machine can find that it “goes crazy” without any clue about the causes of the anomalous behavior.


Furthermore, if the operator receives full training, that is, training not only on the interface but also on the principles of the internal design, automation might not be justifiable due to the increased training costs. We already know the alternative: the capacity to respond to an unforeseen event is seriously jeopardized. 11S is one of the most dramatic tests of how people with little training can perform tasks that would previously have required much more. However, this situation is not uncommon, and it is closer to our daily life than we might suspect.


Every time we have a problem with the phone, an incident with the bank, or an administrative problem with the gas or electricity bill… we can start a process by calling Customer Service. How many times, after bouncing from one department to another, does someone tell us that we have to dial the very number we dialed at the beginning? Hidden behind these experiences there is a technological development model based on complex machines and simple people. Is this a sustainable model? Technological development produces machines harder and harder for their operators to understand. In this way, we get better and better at things we already knew how to do, and things that were already hard become harder still.


11S was possible, among other things, as a consequence of a model of technological evolution. This model is showing itself to be exhausted and in need of a course change. Rasmussen stated the requirement of this course change as a single condition: the operator has to be able to cognitively run the program the machine is performing. This condition is not met today and, if it were made mandatory, it could erase the economic viability of the model, leading to a double challenge: a technological one, making technology understandable to users beyond the operating level under known conditions, and an organizational one, avoiding the loss of the economic advantages.


Summarizing: performing better at things we already did well, at the price of performing worse at things we already did poorly, is not a valid option. People require answers always, not only when automation and I.T. allow it. Cost is the main driver of the situation: organizations do not respond to unforeseen external events and, even worse, complexity itself can produce events from the inside that, of course, have no answer either.


A technological model aimed at making the “what” easier while hiding the “why” is limited by its own complexity, and it is constraining in terms of human development. For a strictly economic vision, that is good news: we can work with fewer, less qualified, and cheaper people. For a vision more centered on human and organizational development, the results are not so clear. On one side, complexity raises a barrier that prevents a technological solution to the problems produced by technology. On the other, that complexity and the opacity of I.T. make operators slaves without the opportunity to be freed by learning.


Three myths in technology design and HCI: Back to basics

By a coincidence driven by the anniversary of the Spanair accident, for a few days comments about the train accident in Santiago de Compostela and about the Spanair accident appeared together. Both share a common feature beyond, of course, a high and deadly cost. The feature could be stated like this: «a lapse cannot lead to a major accident; if it does, something is wrong in the system as a whole».

The operator -pilot, train driver, or whoever- can be held responsible in cases of negligence or clear violation, but a lapse should be prevented by the environment and, where that is not possible, its consequences should be reduced or nullified by the system. Clearly, that did not happen in either of these cases but… what was the generic problem? There are some myths related to technology development that should be explicitly addressed and are not:

  • First myth: there is no intrinsic difference between open and closed systems; if a system is labeled as open, that only reflects ignorance, and technological development can turn it into a closed one. To be short and clear, a closed system is one where everything can be foreseen and, hence, it is possible to work with explicit instructions or procedures, while an open one has different sources of interaction from outside or inside, making it impossible to foresee every possible disturbance. If we accept the myth as true, no knowledge beyond the operating level is required of the operator once technology has reached the point where the system can be considered closed: a normative approach should be enough, since every disturbance can be foreseen.

Kim Vicente, in his Cognitive Work Analysis, used a good metaphor to attack this idea: is it better to have specific instructions to get to a place, or is it better to have a map? Specific instructions can be optimized, but they fail with closed streets, traffic jams, and many other situations. A map is not as optimized, but it provides resources in unforeseen situations, as the sketch below shows. What if the map is so complex that including it in the training program would be very expensive? What if the operator was used to a road map and now has to learn to read an aeronautical or topographic chart? If the myth holds, there is no problem: closed streets and traffic jams do not exist and, if they do, they always happen in specific places that can be foreseen.
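Vicente's metaphor fits in a few lines of code. A minimal sketch (hypothetical street grid, with breadth-first search standing in for «reading the map»):

    from collections import deque

    # The "map": every street we know about, as an adjacency list.
    city = {
        "home": ["A", "B"],
        "A": ["home", "work"],
        "B": ["home", "C"],
        "C": ["B", "work"],
        "work": ["A", "C"],
    }
    instructions = ["home", "A", "work"]   # the optimized, procedure-like route

    def follow_instructions(route, closed):
        # The closed-system answer: apply the procedure or give up.
        for a, b in zip(route, route[1:]):
            if {a, b} == set(closed):
                return None                # no procedure for this disturbance
        return route

    def replan(graph, start, goal, closed):
        # The open-system answer: search the whole map for a workable path.
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            for nxt in graph[path[-1]]:
                if {path[-1], nxt} == set(closed) or nxt in path:
                    continue
                if nxt == goal:
                    return path + [nxt]
                queue.append(path + [nxt])
        return None

    closed_street = ("A", "work")
    print(follow_instructions(instructions, closed_street))  # None: stuck
    print(replan(city, "home", "work", closed_street))       # home-B-C-work

The instructions are cheaper and faster while everything goes as planned; only the map survives the closed street.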

  • Second myth: a system where the operator has a passive role can be designed in a way that still enables situation awareness. To address this myth properly, we should perhaps go back to a classic experiment in Psychology (http://bit.ly/175gKIc) in which one cat transports another in a cart. Supposedly, the visual learning of both cats should be the same, since they receive the same information. However, the results say otherwise: the transporting cat achieves much better visual learning than the transported one. We do not really need the cats or the experiment to know that. Many of us have been driven to a place many times by another person. What happens when we are asked to go there alone? Probably, we never learned the way. If this happens with cats and with many of us… is it reasonable to believe that an operator will be able to solve an unplanned situation after having been fully out of the loop? Some designs may be removing continuous-feedback features because they are hard and expensive to keep and, supposedly, add nothing to the system. Some time ago, a pilot of a highly automated plane told me: «Before, I flew the plane; now the plane flies me»… which is another way to describe the present situation.
  • Third myth: availability bias, or «we are going to do our best with the resources we have». This is a common approach among designers: what can we offer with what we already have, or can develop at a reasonable cost? Perhaps that is not the right question. Many things we do in our daily life could be packed into an algorithm and, hence, automated. Are we stealing pieces of situation awareness by doing so? Are we converting the «map» into «instructions», leaving no resources when those instructions cannot be applied? Yet for the last few decades designers have behaved exactly like that: providing an output in the shape of a light, a screen, or a sound is quite easy, while handles, hydraulic lines working under -and transmitting- pressure, and many other mechanical devices are harder and more expensive to include.

Perhaps we should remember «our cat» again, and how visual and auditory cues may not be enough. The right question is never about what technology can provide, but about what situation awareness the operator has at any moment and what capabilities and resources he has to solve an unplanned problem. Once we answer that question, some surprises may appear. For instance, we could learn that not everything that can be done has to be done and, by the same token, that some things that should be done have no cheap and reliable technology available. Starting a design by trying to provide everything technology can provide is a mistake and, sometimes, the mistake is subtle enough to go undetected for years.

Many recent accidents point to these design flaws, not only the Spanair and Renfe ones: autopilots that take data from faulty sensors (Turkish Airlines, AF447, or Birgenair), stick-shakers that can be programmed -instead of behaving as the natural reaction of a plane near stall- and provoke an over-reaction from fatigued pilots (Colgan), indicators where a single value can mean opposite things (Three Mile Island), and many others.

It is clear that we live in a technological civilization. That means assuming some risks, even catastrophic ones, like an EMP or a massive solar storm. However, there are other, smaller and more everyday risks that should be controlled. Asking people to solve our problems while, at the same time, stealing from them the resources they would need to do so is unrealistic. If, driven by cost-consciousness, we assume that unforeseen situations happen less than one time in a thousand million and are, hence, an acceptable risk, let us be coherent: eliminate the human operator. If, on the other hand, we think unforeseen situations can appear and have to be managed, we have to provide people with the right means to do so. Both are valid and legitimate ways to behave. Removing resources -including the ones that enable situation awareness- and then, once the unforeseen situation appears, using the operator as a fuse to burn, speaking of «lack of training», «inadequate procedure compliance», and other common labels, is neither a right nor a legitimate way. Of course, accidents will happen even if everything is properly done but, at least, the accidents waiting to happen should be removed.

The Santiago accident: the train driver's statements

After the Santiago de Compostela accident, I must be among the few Spaniards who are not train specialists. However, I am familiar enough with human factors and safety to find surprising things in the published statements of the train driver involved in the accident:

  • When the judge asks him whether one can cover four kilometres while distracted, the driver's answer is pure common sense: “At 200 km/h, four kilometres go by very fast.” True. At 240 km/h, four kilometres go by in exactly one minute.

After this answer, someone might think that a driver who is completely distracted for a whole minute will surely end up off the road, and they would be right. But that is what happens on the road; it does not happen with trains, just as it does not happen with ships or planes, and that should give us a first clue:

The road demands from the driver greater attention to the surroundings and a certain level of activity: following the route, anticipating traffic situations, and so on. In other means of transport, however, what is demanded, except at certain moments, is staying alert to supervise the working of the vehicle under control. In aviation, highly automated systems have often been criticized for keeping the pilot out of the control loop or, in other words, the plane flies itself but, when something happens, urgent intervention is required from someone whose role was to be there “just in case”.

If there is one thing on which all of us who work in human factors, in one way or another, agree, it is the fact that we are terrible supervisors. We are not “designed” to supervise but to do. When someone puts us in a supervisory role, distractions and attention failures inevitably appear; this is a well-known fact in aviation, but its application is not exclusive to that field.

  • When the judge asks him whether there is an automatic braking system, the driver answers that in that zone it is he who brakes, not an automatic system. When the question is repeated, pressing to know what would happen if the driver failed to brake at the point where he should, the surprise arrives: above 200 km/h, the train would automatically trigger every braking device until coming to a stop; below 200 km/h, nothing happens. The curve where the accident happened is limited to 80 km/h. What good is an automatic protection that only works above 200 km/h? We have the answer in the newspapers of the past few days and in the video on Youtube.
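Reduced to code, the protection the driver describes looks like this (a deliberately simplified, hypothetical sketch, not the actual ASFA or ERTMS implementation):

    def automatic_protection(speed_kmh):
        TRIGGER_KMH = 200        # the only threshold the automation watches
        if speed_kmh > TRIGGER_KMH:
            return "full automatic braking until the train stops"
        return "no action"       # even at 190 km/h approaching an 80 km/h curve

    print(automatic_protection(210))   # protected
    print(automatic_protection(190))   # unprotected, far above the curve limit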

There is no valid middle ground here. If we assume the driver is a mere watchman, then equip the line and the train with automatic systems that prevent situations like the one that produced the Santiago accident, and let us accept that the driver will get distracted, as we all do in surveillance tasks because, once again, we are not made to watch but to do.

If, instead, we assume the driver has a more active role, then design trains that demand that more active role. Pilot-proof planes and driver-proof trains bring their own problems too, sometimes in the form of a lack of realism about which things we do well and which we do badly.

Normal Accidents, Blackened Swans and Human Error

Long ago, in 1984, Charles Perrow published the book destined to be his most important: Normal Accidents. In it, Perrow argued that accidents happen due to increasing complexity combined with tight coupling in organizations: snowball effects can arise through unexpected interactions among parts of the system.

Perrow had many followers -Hollnagel is perhaps one of the most brilliant- who voiced their concern about the rationale behind technological improvement. Accidents amplify their outcomes through the same channels organizations use for their normal activity: accidents in efficient organizations are efficient too. The risk concept, understood as the product of impact and probability, is changed by complexity: probability decreases while potential impact increases.
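In those terms, with risk as probability times impact (the orders of magnitude below are illustrative, not figures from Perrow):

    R = p \cdot I, \qquad
    R_{\text{simple}} = 10^{-3} \cdot 10^{2} = 10^{-1}, \qquad
    R_{\text{complex}} = 10^{-5} \cdot 10^{4} = 10^{-1}

The nominal risk R stays the same while the worst single event grows a hundredfold: that is the trade complexity makes.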

A few years ago, Nassim Taleb used an interesting concept, the Black Swan, to speak about situations that, simply, were supposed not to happen; therefore, nobody had provided resources for the possibility. Once the accident happened, we could watch an exhibition of incompetence in the management of the event, because everyone had been convinced it could not happen. We can try to reason in a fine and honest way but, even so, we will not avoid the existence of black swans; that is, we will always run into fully unexpected situations. However, there is another variety of situation, one that can pass undetected: the blackened swan.

What is a blackened swan? Something we have actively chosen not to see. Perrow, Hollnagel, and others describe a dynamic that many others do not want to see: if an airline runs into serious financial trouble or works with a very thin profit margin, we could reasonably suspect it is saving money in the less visible areas -Maintenance is one of them- but the aviation regulator knows nothing about financial issues, and the financial specialists know nothing about aviation. As on the Titanic, both work as watertight compartments; and, as on the Titanic, the compartments are not fully watertight and the vessel can sink. When a pilot is taught that advanced electronic systems guarantee the plane cannot stall… he will pull back on the stick, he will do it still more whole-heartedly if a synthetic voice encourages him to, and he will do it without any feeling that something is wrong if the stick gives no feedback through pressure. If regulations allow a twin-engine plane to fly for more than five hours over oceanic routes without an available airport… sooner or later a plane will have to go into the water with a full load of passengers. If a company loads the minimum fuel required by law, and has had problems because of it in the past, the moment will come when a plane goes down from fuel starvation… These are the blackened swans: risk situations that everybody knows about, but where the accountable people look the other way.

When these blackened-swan situations produce their expected outcomes, we will always find people telling us it was a black swan. Of course, to do that, they will carefully hide the fact that they painted it black themselves so as not to see the risk and, hence, to be able to claim they were ignorant of it before the event. And there is still another resource to justify why a situation never changes: call it Human Error.

Politicians do not get along with Linkedin

A few days ago, a friend told me he had sent a Linkedin contact request to Mariano Rajoy, and that it had been accepted. My curiosity piqued, I started looking for public profiles of politicians on Linkedin, starting with Rajoy himself. The first thing we find, among people who supposedly live public lives, is that none of the best known appear as Open Networkers, and reaching them requires complicated balancing acts, making it easy for some journalist, in a fit of creativity, to publish something like this, which as bait is not bad at all:

[Screenshot: invitation to Rajoy]

With that caveat, Rajoy's profile is not among the worst, although it has an amusing detail: the period between 1986 and 1989, when the PP was founded, appears in his profile as «member of the Congreso de los Diputados» without any reference to Alianza Popular, no doubt because of those centrist hang-ups.

Rosa Díez has a measly 135 contacts in her profile, which makes it clear she does not use Linkedin as a communication channel; even so, she still beats, and by far, her most direct competitor, Albert Rivera, who has all of 2 (!!!) contacts, does not even have a personalized URL, and whose profile wording would not pass a primary-school exam, with gems like this one (sic): «Habla catalán y castellano y tiene el nivel Advanced de inglés obtenido a la escuela de idiomas de Esade». Well, even Albert Rivera beats Esperanza Aguirre, who has ZERO contacts. Why is she on Linkedin, then?

María Dolores de Cospedal's profile is somewhat more polished, although her 81 contacts suggest that Linkedin is not a preferred communication channel in her case either. By the way, an amusing detail: when viewing these profiles, do not expect the «People also viewed» box to match them with foreign politicians; instead, it brings up characters like Cristiano Ronaldo, Lionel Messi, or Risto Mejide. Either Linkedin is failing, or they do not have a very clearly defined profile, giving rise to this kind of comparison.

The most obvious conclusion is that, for someone seeking to make a profession out of political activity, Linkedin does not seem the best route, at least in Spain. Abroad, with variations, things change. Angela Merkel's profile, also without contacts, has no travel companions from the showbiz world:

[Screenshot: Angela Merkel's Linkedin profile]

Also beyond our borders, Obama is another world altogether. Besides his almost 900,000 followers on Linkedin, people who viewed his profile also viewed these others:

[Screenshot: Obama's Linkedin profile]

Not the same thing, right? Let us add that every move Linkedin is making brings it closer to being a general-purpose network, while keeping a clear orientation toward the professional sphere or, put another way, it is neither a job board nor a place inhabited only by job seekers. Understood, dear politicians? Come on, then… do a little work; it would not hurt.

Online scams

An auction site, heavily advertised in various media, which aims to compete with Ebay, announces discounts of up to 80% on brand-new items, and claims that the fun of shopping never ends, hides a little detail in its fine print: YOU PAY FOR EACH BID. This explains why the bidding starts at zero and rises by a few cents every time someone places a new bid; the item sold at 20% of its price has left a fortune behind, in the form of the roughly fifty cents paid each time the price goes up four cents.

I learned this the hard way: after bidding several times on an item, I was told I could not keep bidding because I had «no credits left»… and I had none because every bid I placed consumed them, until my initial ten euros were gone, not on buying any product whose auction I won, but on the bidding itself!!! At the official price of a credit, each bid costs roughly 50 cents and, of course, entering the «black market» for credits, which they sell as just another item, seemed a step too far.

Trying to get a 700-euro mobile phone for 140 may look like a ruinous business but, bearing in mind that, by decree, the bidding started at zero and, to reach 140, there must have been about 2,800 bids, each leaving behind about fifty cents… it turns out the phone brought in… 1,540 euros plus shipping costs!!!
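The arithmetic of the scam, as a minimal sketch (a five-cent step is the value consistent with the roughly 2,800 bids cited above; the fee per bid is the roughly fifty cents mentioned):

    final_price = 140.00   # euros: the "80% off" price of a 700-euro phone
    increment = 0.05       # euros the displayed price rises with each bid
    fee_per_bid = 0.50     # euros charged just for the right to bid

    bids = final_price / increment            # 2,800 bids to climb from zero
    revenue = bids * fee_per_bid + final_price
    print(f"{bids:.0f} bids -> {revenue:.2f} euros")  # 2800 bids -> 1540.00 euros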

I consider myself scammed out of ten euros, and I hope this post helps others avoid the same. Didn't I name the site? I know; putting the word «scam», which is the one that best describes its way of operating, next to the company's name is a way of guaranteeing trouble, and I prefer, given the choice, that it is the idea of the scam that stands exposed. In any case, there are enough breadcrumbs here to follow the trail and find the site.