Category: Knowledge management

WHEN THE WORLD IS FASTER THAN ITS RULES

Anyone in touch with a dynamic field will recognize this phenomenon: things move faster than the rules intended to control them. If the capacity to enforce those rules is strong, outdated rules can stop progress. If that capacity is weak, the rules are simply ignored and the world evolves along different paths.

The same fact can be observed in many different fields:

Three months ago, an article appeared under the title “POR QUÉ ALBERT EINSTEIN NO PODRÍA SER PROFESOR EN ESPAÑA” (Why Albert Einstein could not be a professor in Spain). Basically, the reason lay in a bureaucratic model tailored to the “average” professor: someone who, right after finishing a Bachelor’s degree, starts a doctorate and enters a career path that ends with retirement from the university. External experience is not required and, very often, is not welcome.

His age, his publication record and the length of his doctoral dissertation (17 pages) would have made it impossible for Einstein to teach in Spain. In some environments, the war for talent seems to mean fighting talent wherever it can be found.

If we move to specific, fast-evolving fields, things can be worse:

Cybersecurity is a good example. There is a clear shortage of professionals in the field, and it is worsening. Official curricula are approved so slowly that, by the time a curriculum is accepted, it is already out of date. A diploma is therefore worth little, and certification agencies are taking its place, demanding up-to-date knowledge both to obtain and to keep the certification.

Financial regulators? Companies are faster than regulators, and a single practice can be packaged as a savings plan, an insurance product or many other options. In derivative markets, sheer speed introduces practices such as high-frequency trading that are hard to follow.

What about cryptocurrencies? They sidestep government control and, worse still, they can break one of the easiest ways for states to raise funds. Governments would like to break them and, in a few weeks, the EU will have a new rule to “protect privacy” that could affect the blockchain process, which is key to the security of cryptocurrencies and…to many banking operations.

Aviation? The best-selling airplane in aviation history -the Boeing 737- was designed in 1964 and entered service in 1968. The latest versions of this plane still lack features that could be considered basic modifications, because the approval process is so long and expensive (and becoming ever longer and more expensive) that Boeing prefers to stay attached to features designed more than 50 years ago.

In these fields, and in many others that could be mentioned, the rules are not fulfilling their intended function: to preserve functionality and, where required, safety as a part of that functionality. Whether the rule is simply ignored or becomes a heavy load dragged through every development, it does not work.

We can laugh at the old 1865 Locomotive Act, with delicious rules such as this: “The most draconian restrictions and speed limits were imposed by the 1865 act (the ‘Red Flag Act’), which required all road locomotives, which included automobiles, to travel at a maximum of 4 mph (6.4 km/h) in the country and 2 mph (3.2 km/h) in the city, as well as requiring a man carrying a red flag to walk in front of road vehicles hauling multiple wagons” (Wikipedia).

However, things evolved far more slowly in 1865 than they do now. Non-functional rules like that one could be identified and removed before they became a serious problem. That no longer happens. We strive for more efficient organizations and more efficient technology, but the architecture of the rules should be re-engineered too.

Perhaps the next revolution is not technological, even if it is fueled by technology. It could be in the law: the governing rules -not the specific rules but the process used to create, modify or cancel them- should change. Rules valid for a world already gone are as useful as a weather forecast for the past week.

Useless diplomas, lost talent, uncontrolled or under-controlled new activities, and product designs where adaptation to the rules accounts for a major part of the development cost and time all point to a single fact: the rules governing the world are unable to keep pace with the world itself.


Air Safety and Hacker Frame of Mind

If we ask anyone what a hacker is, we get answers ranging from cyberpiracy and cybercrime to cybersecurity…and any other cyber-thing. However, it’s much more than that.

Hackers are classified by the “color of their hats”: a white-hat hacker is devoted to security, a black-hat hacker is a cybercriminal, and a grey-hat hacker is something in between. That is interesting as a curiosity but…what do they have in common? Furthermore, what do they have in common that can be relevant to air safety?

Charles Simonyi, the creator of WYSIWYG, warned long ago about an abstraction scale that keeps adding more and more steps. In Information Technology terms, that means programmers don’t program a machine: they instruct a program to make a program to be run by a machine. Higher programming levels mean a longer distance from the real thing and more steps between the human action and the machine action.

Of course, Simonyi raised this as a potential problem within Information Technology but…Information Technology is now ubiquitous, and the problem can be found anywhere, including, of course, aviation.

We could say that any IT-intensive system has different layers, and the number of layers defines how advanced the system is. So far so good, as long as we assume a perfect correspondence between layers, that is, that every layer is a faithful symbolic representation of the one below it. That should be all…but it isn’t.

Every information layer that we put over the real thing is not a perfect copy -that would be pointless- but instead tries to improve something in safety, efficiency or, very often, claims to improve both. However, avoiding flaws in that process is almost impossible. That is where the problems start, and where hacker-type knowledge and a hacker frame of mind become highly desirable for a pilot.

The symbolic nature of IT-based systems makes their flaws hard to diagnose, since their failure behavior can be very different from that of mechanical or electrical systems. Hackers, good or bad, try to identify these flaws; that is, they are very conscious of this layered, symbolic approach instead of assuming an enhanced but perfect representation of the reality below.
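As a toy illustration of that loss of correspondence, here is a minimal sketch -my own, not taken from any real avionics design, with every name and threshold invented- of three symbolic layers over a single sensor, where a fault at the bottom produces a perfectly plausible message at the top:

```python
# Minimal sketch (illustrative names only): a fault in the lowest layer
# propagates upward, and the top layer still shows a plausible indication,
# which is why flaws in layered IT systems are hard to diagnose.

def raw_sensor(fault: bool = False) -> float:
    """Layer 0: the physical reading (say, engine vibration)."""
    return 999.0 if fault else 1.2   # a defective sensor returns garbage

def signal_conditioning(raw: float) -> float:
    """Layer 1: filtering clips out-of-range values, hiding that they were garbage."""
    return min(raw, 5.0)

def crew_display(conditioned: float) -> str:
    """Layer 2: the symbolic representation shown to the pilot."""
    return "VIB NORMAL" if conditioned < 3.0 else "VIB HIGH"

if __name__ == "__main__":
    print(crew_display(signal_conditioning(raw_sensor(fault=False))))  # VIB NORMAL
    print(crew_display(signal_conditioning(raw_sensor(fault=True))))   # VIB HIGH
    # The second message is internally consistent but no longer corresponds to
    # physical reality: the cause is a dead sensor, not high vibration.
```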

What does a hacker frame of mind mean as a way to improve safety? Let me show two examples:

  • From cinema: The movie “A Beautiful Mind”, about John Nash and his mental health problems, shows at one point how and why he was able to control those problems: he kept confusing reality and fiction until he found something that did not fit. It happened to be a little girl who, after many years, was still a little girl instead of an adult woman. That gave him the clue he needed to know which part of his life was created by his own brain.
  • From air safety: a reflection taken from the book “QF32” by Richard de Crespigny: “Engine 4 was mounted to our extreme right. The fuselage separated Engine 4 from Engines 1 and 2. So how could shrapnel pass over or under the fuselage, then travel all that way and damage Engine 4? The answer is clear. It can’t.” Once that point is reached, a finding appears crystal-clear: the information coming from the plane cannot be trusted, because somewhere in the IT layers the correspondence between reality and representation has been lost.

Detecting these problems is not easy. It requires much more than operating knowledge and, at the same time, we know that nobody has full knowledge of the whole system, only partial knowledge. That partial knowledge should be enough to define key indicators -as in the examples above- that tell us when we are working with information that should not be trusted.

The hard part: the indicators cannot be permanent but must be adapted to each situation, that is, the pilot has to decide which indicator to use in situations not covered by procedures. That brings us to another issue: if a hacker frame of mind is good for air safety, how do we create, nurture and train it? Let’s look again at the process a hacker follows to become a hacker:

First, hackers actively look for information. They don’t attend formal courses expecting the information to be handed to them; instead, they look for resources that allow them to increase their knowledge. Applying this model to aviation would mean wide access to information sources beyond what formal courses provide.

Second, hacker training is closer to military training than to academic training: they fight to break into or to defend a system, and they show their skills by opposing an active adversary. To reproduce such a model, simulators should include situations that trainers can imagine. The design should be much more flexible: instead of simulators behaving only as a plane is supposed to behave, they should leave room for situations arising from information misrepresentation or from automatic responses to defective sensors.

Asking for full knowledge of all the information layers and their potential pitfalls is utopian, since nobody has that kind of knowledge, not even designers and engineers. Everybody has partial knowledge. How, then, do we make the best of that partial knowledge? By fostering a different frame of mind in the people involved -mainly pilots- and providing the information and training resources that allow that frame of mind to be created and developed. That could mean a completely new training model.

Originally published on my LinkedIn profile

Sterile discussions about competencies, Emotional Intelligence and others…

When the “Emotional Intelligence” fashion arrived with Daniel Goleman, I was among the discordant voices claiming that the concept and, especially, the use made of it were nonsense. Nobody can seriously deny that personal traits are key to success or failure. If we want to call them Emotional Intelligence, that’s fine: it’s a marketing-born, not very precise name but, anyway, we can accept it.

However, losing focus is not acceptable…and some people lose it with statements like “80% of success is due to Emotional Intelligence, well above the percentage due to ‘classic’ intelligence”. We also lose focus with statements comparing competencies with academic degrees and the role of each in professional success. These problems should be analyzed in a different and simpler way: it is a matter of sequence, not of percentage.

An easy example: what matters more for a surgeon’s success, the academic degree or the skills shown inside the OR? Of course, this is a tricky question where the trick is plainly visible. To enter the OR armed with a scalpel, the surgeon needs academic recognition and/or a specific license. Hence, the second filter -skills- is applied only to those who passed the first one -academic recognition- and we cannot compare skills and academic recognition in percentage terms.

Of course, this is an extreme case, but we can apply it to the concepts around which some sterile discussions appear. Someone may perform well thanks to Emotional Intelligence, but entry to the field is granted by intelligence in its most commonly used sense. Could we say that, once past an IQ threshold, we are better off improving our interaction skills than gaining -if that were possible- 10 more IQ points? Possibly…but things don’t work that way: we define access through a threshold value and performance through other criteria, always comparing people who share something, namely that they are all above the threshold. How, then, can anyone say “Emotional Intelligence is at the root of 80% of success”? As stated, it is false, but we can make it true by adding “if the comparison is made among people whose IQ is at least medium-high”. The problem is that, with this addition, the statement is no longer false but it becomes a proof of simple-mindedness.
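A minimal sketch of this sequence effect -my own toy simulation, with invented weights and thresholds- shows why a percentage measured after a threshold filter says little about the whole population:

```python
# Toy model (invented numbers): in the raw population, "IQ" is given more
# weight than "EI" in success, but once an IQ threshold filters access, the
# remaining variation in success is explained mostly by the other factor.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
iq = [random.gauss(100, 15) for _ in range(100_000)]
ei = [random.gauss(100, 15) for _ in range(100_000)]
success = [0.7 * a + 0.3 * b + random.gauss(0, 10) for a, b in zip(iq, ei)]

print("whole population:   r(IQ)=%.2f  r(EI)=%.2f"
      % (pearson(iq, success), pearson(ei, success)))

# Keep only the people who passed the access filter (IQ above a threshold).
selected = [(a, b, s) for a, b, s in zip(iq, ei, success) if a >= 120]
iq_s, ei_s, su_s = (list(t) for t in zip(*selected))
print("above IQ threshold: r(IQ)=%.2f  r(EI)=%.2f"
      % (pearson(iq_s, su_s), pearson(ei_s, su_s)))
# Within the filtered group, IQ's correlation with success shrinks, so the
# other factor looks dominant even though the underlying weights never changed.
```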

We cannot compare the relative importance of two factors when one refers to job access and the other to performance once in the job. It’s like comparing bacon with speed, but using percentages to look more “scientific”.

The Internet and its alleged omniscience: the next war will be over the quality of information

Ray Kurzweil used to say that the great revolution brought by advanced information systems lies in one simple fact: reproducing and transmitting information now costs virtually nothing. That is supposed to erase a classic divide between those who have access to information and those who do not since, according to Kurzweil, now everyone has it.

By pure chance, over the last few days I have had to look for information on several different topics and, of course, I resorted to more or less advanced searches on Google and on sites supposedly specialized in providing information. I should say up front that these were not philosophical or religious questions, but questions with a clear answer. Getting to that answer through the tangle of false or outdated information is another matter. This happens even with subjects directly related to the Internet itself.

An example: a Nexus 5 phone, which Google committed to updating to the latest Android version. What Google does not say is when that latest version will arrive, and those of us who are short on patience look for other routes, such as downloading the official update from Google’s own sites. That requires a certain amount of tinkering with the phone: unlocking the bootloader, rooting the phone and other pieces of techno-jargon.

For any of these options, Google returns several pages of results, including YouTube videos. The problem appears when you try to put them into practice and discover that the instructions may be outdated or incomplete, or that the phone simply does not do what, according to the instructions read on the Internet, it should do.
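For the record, here is a minimal sketch -my own, not taken from any of those pages, and wrapped in Python purely for illustration- of the kind of recipe involved. The adb/fastboot tools are real, but the exact sequence and image names vary by device and Android release, which is precisely why so many of the posted instructions are already stale:

```python
# Illustrative sketch only: prints the typical Nexus-era steps instead of
# running them. Uncomment the subprocess call at your own risk.
import subprocess

STEPS = [
    ["adb", "reboot", "bootloader"],   # reboot the phone into its bootloader
    ["fastboot", "devices"],           # check the phone is visible over USB
    ["fastboot", "oem", "unlock"],     # unlock the bootloader (wipes the phone)
    # Flashing the factory image itself is normally done with the flash-all
    # script shipped inside Google's factory image package.
]

for step in STEPS:
    print("would run:", " ".join(step))
    # subprocess.run(step, check=True)
```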

The curious thing is that, after trying several solutions, some of which ended up bricking the phone, a solution finally appeared: a software tool called Nexus Root Toolkit that lets the user do whatever they want with the phone: root it, unroot it, lock or unlock the bootloader, change the operating system version…anything.

Why does getting to this tool require a pilgrimage and a trial-and-error run through the solutions that supposed or genuine experts keep posting on the Internet?

Another example, perhaps somewhat less scandalous because it does not touch the very field where the Internet is supposed to hold first-class information: looking for design differences between two aircraft types on very specific points. The information exists, but finding it with a search engine or on a Q&A site like Quora is practically impossible, and in the end the most effective route is to phone someone you know has that information…just like in the old days.

Kurzweil is right: the great revolution of information technology is the disappearance of the cost of copying and transmitting information, but that virtually free distribution has brought a problem with it: everyone now has a loudspeaker on every topic, not only those who have something to say about it. Finding a valid signal in an ever-growing mass of noise is harder and harder, and adding more pages or faster access speeds does not fix the problem; it makes it worse.

The Internet is growing spectacularly, but the quality of the information it contains is not. Rather the opposite.

Windows Knowledge

At a time when Microsoft is not living its best moment, speaking about Windows may seem a paradox. However, Windows itself is a good illustration of the kind of knowledge that many companies are prizing right now.

Why Windows? You turn on your computer and find a desktop. The desktop has folders and documents, and there is even a trash bin. This environment lets us work in the old-fashioned way: we move documents from one folder to another and, when a document is no longer useful, we drop it in the trash bin. Everything seems perfect…as long as everything works as it is supposed to. What happens when we get a blue screen or the computer simply does not start?

At that moment we discover that it was all an illusion. Desktop, folders, documents…? All of it is part of a complex metaphor, useful while everything works as expected but completely useless once something fails. What kind of real knowledge does the Windows user have? Operating knowledge, which is enough in more than 90% of cases. That is fine as long as the remaining 10% cannot bring unexpected and severe consequences, but we see this kind of knowledge in more and more places, including critical ones.

When the 2008 crisis started, some said that many banks and financial institutions had been behaving like casinos. Others, more statistically seasoned, denied it: had they been behaving like casinos, the situation would never have become what it was, because casinos have the odds in their favor. Other environments do not have that advantage, yet they behave as if they did, with unforeseeable consequences.

BIG DATA: WILL IT DELIVER AS PROMISED?

 

The Big Data label is still relatively new, but the idea inside it is very old: if you have more and more data, you can eliminate ambiguity and there is less need for hunches, since the data become self-explanatory.

It is a long-standing idea among I.T. people. However, reality keeps postponing the moment when it can be accomplished. There are two problems in getting there:

  1. As data grow, context analysis becomes more necessary to decide which data are relevant and which can be safely ignored (see the sketch after this list).
  2. On the other side of the table, there may be people trying to mislead automatic decision-support systems. In fact, so-called SEO (Search Engine Optimization) could fairly be renamed GAD (Google Algorithm Deception) to describe more clearly what it is meant to do.
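A minimal sketch of the first problem, and of how it invites the second: a toy keyword-count ranking of my own invention, not any real search algorithm. Without context, more data of the same kind cannot tell relevance from repetition, and anyone who knows the scoring rule can game it.

```python
# Toy relevance scoring by keyword counting (illustrative only).
QUERY = {"engine", "failure", "warning"}

DOCS = {
    "maintenance report": "engine failure traced to a faulty sensor warning "
                          "triggered during climb",
    "stuffed page":       "engine engine engine failure failure warning warning "
                          "warning best engine failure warning site",
}

def naive_score(text: str) -> int:
    # Counts query-word occurrences: no context, no notion of trustworthiness.
    words = text.lower().split()
    return sum(words.count(q) for q in QUERY)

for name, text in DOCS.items():
    print(f"{name}: score={naive_score(text)}")
# The stuffed page wins even though it carries no useful information:
# deciding what is relevant requires context, not just more of the same data.
```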

Perhaps, for now, Big Data is less exposed to the second problem than anyone doing Web analytics. The Web has become the battlefield of a quiet fight:

On one side, those trying to win better positions for themselves and for the positive news about them. They are also the ones who try to bury negative news by publishing positive items and repeating them, making sure the bad ones stay hidden deep in the search results.

On the other side, we have the masters of measurement. They try to build magic algorithms able to resist the tricks of the first group, unless that group decides to pay for their services.

Big Data has an advantage over Web data: if a company has its own data sources, they can be more reliable and more expensive to deceive, and any attempt would be fairly visible. Even so, this is not new: during World War II, knowing how successful a bombing raid had been was not a matter of reading German newspapers or listening to German radio stations.

The practice known as content analysis used indirect indicators such as funeral or burial notices, which were informative only as long as the enemy did not know those data were being used as intelligence. In the same vein, before D-Day, heavily defended sites full of rubber tanks tried to fool reconnaissance planes about where the invasion would start. The practice has lasted a long time: it was used even in the Gulf War, with heat sources added to the rubber tanks to deceive infrared detectors, which would then get a picture similar to the one produced by running engines.

Deceiving Big Data will be harder than deceiving Internet data but, once it is known who uses specific data and what they do with them, there will always be a way. An easy example: inflation indicators. A government can change the weights of the different items in the basket, or change government-controlled prices, to get a more favorable picture. Likewise, if Big Data is used to feed information to external parties, nobody from outside needs to deceive the system: that can be done from the inside.
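A minimal worked example of the re-weighting move, with numbers invented purely for illustration: the individual price changes are identical, but shifting weight away from the item that rose the most produces a noticeably lower headline figure.

```python
# Toy consumer-price basket (invented numbers): same price changes,
# different weights, different "official" inflation.
price_change = {"food": 0.08, "energy": 0.12, "rent": 0.05, "leisure": 0.01}

original_weights = {"food": 0.30, "energy": 0.20, "rent": 0.35, "leisure": 0.15}
adjusted_weights = {"food": 0.30, "energy": 0.10, "rent": 0.35, "leisure": 0.25}

def headline_inflation(weights):
    # Weighted average of item price changes (weights sum to 1).
    return sum(weights[item] * price_change[item] for item in price_change)

print(f"original basket: {headline_inflation(original_weights):.1%}")  # 6.7%
print(f"adjusted basket: {headline_inflation(adjusted_weights):.1%}")  # 5.6%
```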

Anyway, the big problem is the first point: data without context are worthless…and the context can move faster than any algorithm designed to give meaning to the data. Many surprising outcomes have happened in places where all the information was available, yet that information was read correctly only after a major disaster. For instance, the emergence of new political parties could be seen coming but, because the major players decided to dismiss them, it came as a surprise to them even though the data were available. The problem lay in deciding what deserved to be analyzed and how, not in the data themselves.

At other times, the problem comes from fast changes in the context that are not captured by the variables being analyzed. In the case of Spain, think of the impact that the 11-M attacks, and the way the different players managed them, had on the election held three days later. In another election, everybody had a clear idea of who was going to obtain a position that required an alliance. Good reasons advised an agreement, and the data showed everybody was sure the agreement was coming…but it wasn’t. One of the players was so sure the deal was done that it tried to impose conditions the other players found unacceptable. The consequence: the desired position went to a third player. Very recently, two people, both considered rising stars, may have spoiled their chances in minor incidents.

In short, we may have a huge amount of data, but we cannot analyze all of it, only what is considered relevant. In doing that, no algorithm and no amount of data is a good replacement for analysis by human experts. An algorithm or automatic system can be fooled, even by another automatic system designed to do exactly that; context analysis can drop important variables that have been misjudged; and sudden changes in the context cannot be anticipated by any automatic system.

Big Data can be helpful if used rationally. Otherwise, it will become another fad or something worse: it could become a standard, and nobody would dare decide against a machine armed with a lot of data and an algorithm, even when they are wrong.

Frederick W. Taylor: XXI Century Release

Every motivation expert, from time to time, devotes part of their time to throwing stones at Frederick W. Taylor. From today’s viewpoint, there seem to be good reasons for the stoning: a strict split between planning and execution goes against any idea of human beings as something more than faulty mechanisms.

However, if we try to adopt the perspective Taylor could have had a century ago, things change: Taylor made unqualified workers able to manufacture complex products, products far beyond the understanding of those manufacturing them.

From that point of view, we could say that Taylor and his scientific organization of work (SWO) were a clear advance, and Taylor cannot be dismissed with a high-level theoretical approach taken out of context.

Many things have happened since Taylor that explain such a different approach: the education of the average worker, at least in advanced societies, has grown in an amazing way. The strict division between design and execution could be plainly justified in Taylor’s time, but it may be nonsense right now.

Technology, especially information technology, has not merely advanced; we could say it was born in the second half of the last century, well after Taylor. Advances have been so fast that it is hard to find a fixed point or a context from which to evaluate their contribution: when something evolves this fast, it modifies the initial context and removes the reference point needed to judge its real value.

At the risk of being simplistic, we could say that technology gives us “If…Then” solutions. As the power of technology increases, the situations that can be handled with an “If…Then” solution become more and more complex. Some time ago, I received this splendid call-center parody that shows clearly what can happen when people work only with “If…Then” recipes coming, in this case, from a screen:

http://www.youtube.com/watch?v=GMt1ULYna4o
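In the same spirit, here is a minimal sketch -my own toy example, unrelated to the video- of an “If…Then” script like the ones a call-center screen provides: perfectly adequate for the anticipated cases, useless the moment the caller’s situation falls outside the stored recipes.

```python
# Toy "If...Then" support script (illustrative only).
SCRIPT = {
    "no power":        "Ask the customer to check the power cable.",
    "no internet":     "Ask the customer to restart the router.",
    "slow connection": "Ask the customer to close other applications.",
}

def respond(reported_problem: str) -> str:
    # The operator only relays what the screen says; there is no room for
    # judgment when the case is not in the table.
    return SCRIPT.get(reported_problem,
                      "No matching entry. Escalate or improvise; the script ends here.")

print(respond("no internet"))
print(respond("router is on fire"))   # anything unforeseen falls through
```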

Technological evolution once again puts the worker -now with an education level far superior to that of Taylor’s age- in the role of performer of routines and instructions. We could ask why such an old model is still in use, and we can find some answers:

  • Economics: less qualified people using technology can perform more complex tasks. That means savings in training costs and makes turnover cheaper, since people are easier to replace.
  • Knowledge ownership: people have a brain that can store knowledge. Regrettably, from the company’s perspective, they also have feet that can carry that brain elsewhere. In other words, knowledge stored in people is not owned by companies, so they may prefer to store knowledge in processes and in the Information Systems that manage them.
  • Functionality: people make more mistakes, especially on issues that are hard to convert into routines and that require going beyond stored knowledge.

These points are true but, seen that way, one thing is clear: the relationship between a company and the people working there is strictly economic. Arie de Geus, in The Living Company, said that the relationship between a person and a company is economic, but that considering it ONLY economic is a big mistake.

Actually, using the If…Then model as a way to make people expendable can guarantee a more comfortable present…at the price of calling the future into question. Let’s see why:

  • If…Then recipes are supplied by a small number of vendors working across every market and, of course, serving clients that compete with one another. Once the human factor is reduced to a minimum…where is the difference going to come from among companies sharing the same Information Systems model?
  • If people are given strictly operative knowledge…how can that knowledge advance? Companies outsource their ability to create new knowledge, which again remains in the hands of their Information Systems suppliers and their ability to store more “If…Then” solutions.
  • What is the organization’s real capacity to manage contingencies that were not anticipated in the system design or, even worse, contingencies arising from the growing complexity of the system itself?

This is the overview. Taylorism without Taylor is much worse than the original model, since it is no longer justified by the context. Companies get better and better at things they already knew how to manage while finding it harder and harder to improve at things they used to do poorly. Under this model, people cannot act as an emergency resource. To do that, they need knowledge far beyond the operative level and the capacity to act without being tightly constrained by the system. Very often they lack both.

Jens Rasmussen, an expert in organization and safety, gave a golden rule that, regrettably, is not met in many places: the operator has to be able to run cognitively the program that the system is performing. The features of present Information Systems would allow us to work with deliberately sub-optimized environments: instead of an internal logic that only the designer can understand -and not always- a system that runs while respecting Rasmussen’s rule would be very different.
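A minimal sketch of what respecting that rule can look like in code -my own illustration, not from Rasmussen, with every threshold and name invented- is a system that exposes the same rules and the same trace the operator would reason through, instead of hiding them behind an opaque output:

```python
# Illustrative decision function that exposes every rule it fired, so an
# operator can mentally re-run the same program and notice when the system's
# conclusion no longer matches reality. All numbers are invented.
def assess(fuel_kg: float, leak_rate_kg_min: float, minutes_to_airport: float):
    trace = []                                        # operator-readable trace
    expected_loss = leak_rate_kg_min * minutes_to_airport
    trace.append(f"fuel expected to be lost en route: {expected_loss:.0f} kg")
    margin = fuel_kg - expected_loss
    trace.append(f"margin on arrival: {margin:.0f} kg")
    decision = "CONTINUE" if margin > 1000 else "DIVERT"
    trace.append(f"rule applied: margin > 1000 kg -> {decision}")
    return decision, trace

decision, trace = assess(fuel_kg=9000, leak_rate_kg_min=50, minutes_to_airport=120)
print(decision)
for line in trace:
    print(" ", line)
```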

The rationale about training and turnover costs would still hold, but the advantages of looking beyond it are too important to dismiss. De Geus’s remark is true and, furthermore, it has a very serious bearing on what our organizations are going to look like in the near future.

 
