Category: Intellectual capital

Artificial intelligence and Kasparov's error

Some people place the renaissance of artificial intelligence in 1997, when a machine called Deep Blue managed to defeat the world chess champion, Kasparov.

It is not so; Deep Blue can be described as an incomplete human or, if you prefer, as an idiot savant, that is, something able to process in very little time a huge amount of information it had been fed about chess games and their results. Put another way, Deep Blue's learning is too human for it to be seriously classed as artificial intelligence.

Some years later, AlphaZero appeared with a completely different approach: it was given the rules of the game and the instruction to win and, from there, it entered a learning process playing against itself. In all likelihood, the chances Kasparov, or any other player of his level, would have had against AlphaZero would have been nil and yet, in 1997, poor handling by Kasparov of his own options led him to a defeat that, at that time, could have been avoided.

Kasparov was not aware of an important weakness of artificial intelligence, then and now: the learning phase requires spectacular processing power applied, alternatively, to databases of content (the Deep Blue case) or to self-generated content (the AlphaZero case); the learning process produces a much simpler program, with far lower power requirements, and that program is the one that, in our case, faced Kasparov.
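A minimal sketch may make this split concrete. The game, the numbers and the code below are invented purely for illustration (this is not how Deep Blue or AlphaZero were actually built): an expensive offline search phase distills everything it finds into a small lookup table, and the program that actually sits in front of the opponent only consults that table.

    # Toy game: Nim-like, a player removes 1-3 stones and whoever takes the last stone wins.
    from functools import lru_cache

    MOVES = (1, 2, 3)

    def learn(max_stones=50):
        """'Learning' phase: exhaustive search over every covered position (the costly part)."""
        @lru_cache(maxsize=None)
        def wins(stones):
            # A position is winning if some move leaves the opponent in a losing position.
            return any(m <= stones and not wins(stones - m) for m in MOVES)

        policy = {}
        for stones in range(1, max_stones + 1):
            best = next((m for m in MOVES if m <= stones and not wins(stones - m)), 1)
            policy[stones] = best            # keep only the distilled answer
        return policy                        # a small table; the search machinery is thrown away

    def play(policy, stones):
        """Deployed phase: a cheap lookup, with no capacity to learn anything new."""
        return policy.get(stones)            # None for positions the learning phase never covered

    policy = learn(max_stones=50)
    print(play(policy, 17))                  # 1: this position was covered during learning
    print(play(policy, 500))                 # None: an unforeseen position gets no answer at all

All the computing effort lives in learn(); play() is light enough to face the opponent, and it is the only part the opponent ever meets.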

This detail may seem irrelevant, but it is not at all: Kasparov was playing against a sophisticated program, able to generate a level of play equivalent to a first-rate player but… with no capacity to learn. The learning capacity sat somewhere else, and Kasparov had in his hands the possibility of allowing or forbidding its use.

At the start of the competition its rules were established and, among them, was the possibility of admitting external advisers on both sides. Kasparov could have done without them and, in doing so, he would have forced Deep Blue to do without them too; that is, he would have faced only the operational part of Deep Blue which, while playing at a good level, would probably have been insufficient to defeat him.

However, he accepted that there would be external advisers and, who could Deep Blue's external advisers be? Information technology specialists able to act as the link between the part that generates the learning and the part that uses that learning. In this way, situations with no easy way out, because they had not been foreseen, could be handled by calling in the heavyweight backup, that is, the system capable of learning. That access would not have existed if the possibility of having advisers had been closed to both sides, and Kasparov might well have spared himself a defeat against a program far less evolved than those that came later.

In practical terms the matter is of little importance. Perhaps Kasparov would have won in 1997, but he would have been defeated a few years later with the arrival of systems able to learn in a "non-human" way, such as the aforementioned AlphaZero.

However, despite that practical irrelevance, what Kasparov failed to exploit is a weakness of artificial intelligence that still exists today, namely the strict separation between the part of the system that learns and the part of the system that uses what has been learned.

Gary Klein pointed out how people, in the process of making a decision, can run mental simulations that tell them whether an option would work or not. It can be said that, in people, there is no separation between the learning process and the execution process; the two are inseparably linked.

In artificial intelligence things are different: the system has a huge number of response options available but, if the situation it faces is not covered by them (something that cannot be ruled out in open environments), it lacks that capacity for mental simulation that would let it learn and, at the same time, carry out a task that was not in its previous catalogue of solutions.

This difference, vital but little known outside specialist circles, means that in certain environments, where the consequences of an error can be very serious, artificial intelligence can be questionable except in a complementary role.

Kasparov did not know, or did not give due weight to, this factor, and it cost him a defeat that might have been avoidable. Today it is still very often ignored and, as a result, artificial intelligence could end up being used in environments for which it is not prepared.

The demands of a system capable of learning, in time, processing power and access to huge databases, make its inclusion in the system that executes the task, be it playing chess or flying a plane, unfeasible. The system that uses the product of the learning, much lighter, can have an answer ready for an enormous number of situations but, if the one that actually arises is not among them, no answer can be expected.

This aspect, frequently ignored, should be very much present in many of the decisions about where to use artificial intelligence and where not to.


Artificial intelligence as a vehicle for the degradation of knowledge

Long before the explosion of artificial intelligence, it was easy to see that, at the very moment when the accumulation of knowledge had reached its highest levels, a degradation of human knowledge was taking place in many fields.

In a first phase, the degradation comes through specialization taken to the extreme and, in a second, through the shift from deep knowledge to purely procedural knowledge.

Gary Klein warned us against the "second singularity", that is, the possibility that, before a computer is able to surpass a human being (the so-called "singularity"), the human will have undergone such a degradation of knowledge that the arrival of the original singularity is made easier.

In the acquisition of knowledge, a law well known to quality experts applies:

The cost of knowledge is subject to a law of diminishing marginal returns.
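One conventional way to picture that law (the functional form and the constant κ are illustrative assumptions, not something stated in the text) is to treat proficiency p as a saturating function of the cumulative cost c invested in acquiring it:

    p(c) = 1 - e^{-c/\kappa}, \qquad c(p) = -\kappa \,\ln(1 - p), \qquad \frac{dc}{dp} = \frac{\kappa}{1 - p}

The marginal cost dc/dp grows without bound as p approaches full mastery: each additional increment of knowledge costs more than the previous one, which is exactly the setting in which cheap procedural substitutes become tempting.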

Anyone who has ever called a customer service line knows very well what it is to talk to a human robot, that is, someone equipped with strictly procedural knowledge who, whenever a situation exceeds their narrow scope of competence, will transfer the request to an expert or, mysteriously, the call will drop.

A parody of this lack of knowledge and its effects, not without a good dose of reality, can be found here:

The combination of strictly procedural knowledge with the use of virtual reality could also be found, before the popularization of artificial intelligence, in examples like this one:

Of course, travelling the path in the opposite direction, from procedural knowledge to deep knowledge and to the skills that go with it, is a much more costly step, and this is where artificial intelligence comes in, because it offers an alternative designed precisely to avoid it:

It is a lower-quality option, but also a much cheaper one, and it has every chance of deepening a degradation process that started long ago.

A world dominated by artificial intelligence would give rise to several different kinds of situations:

  • Situations in which we could solve problems quickly, saving work and doing it brilliantly.
  • Situations in which the answer obtained would be mediocre, but a cost-benefit analysis would make it acceptable.
  • Situations in which it would make impossible demands on memory or processing capacity, or would enter an endless loop without ever reaching a solution.

These three kinds of situations open three questions derived from them, all highly relevant for the evolution of future knowledge:

  • Who is going to bother to learn, and what for, something that an artificial intelligence will always do better?
  • Who is going to bother to learn something that an artificial intelligence will do just as well or worse, but with a much faster and cheaper learning process?
  • Given that acquiring deep knowledge is expensive in time and money, who will find it worthwhile to acquire the knowledge needed to solve the situations for which artificial intelligence has no answer?

By any of the three paths we arrive at the same place: degradation of human knowledge. However, since artificial intelligence is not able to generate its own objectives, human knowledge of a high level, and not only about how artificial intelligence works, is needed to provide the right objectives.

Someone might think that setting the objectives is the easy part and that the hard part is precisely what the artificial intelligence does, that is, learning; reality is very different:

Asimov wrote an amusing book, "Azazel", in which a demon interpreted literally everything he was asked to do; unforeseen situations followed in which, usually, everything started well but, thanks to the literal interpretation, whoever had given the demon the objective ended up wishing never to have done so.

Asimov poses the problem in an amusing way, but real life has given us plenty of examples; there have been cases such as the accusations of discrimination levelled at Amazon over the system it used for recruitment: apparently, Amazon had identified its highest-performing workers in order to look for similar profiles and, in doing so, had unintentionally introduced several biases that became the subject of complaints. In a very different activity, San Francisco's driverless taxis have been drawing numerous complaints, especially from the emergency services, because of increasingly aggressive driving.

There are different types of learning algorithms and, in the absence of a "Master Algorithm", as Pedro Domingos called it, each type of problem has one algorithm that suits it better than the rest; but, assuming the algorithm has been correctly selected, it is the definition of the objectives and of the limits placed on action that determines whether what is learned, and the behaviour that results, is right.

However, defining objectives and their limits requires deep knowledge, neither accessible to the system nor exclusively about the system; that knowledge is costly to acquire and that cost is hard to accept, especially when lower-grade substitutes are available that can give an answer, however mediocre.

It could be argued that one machine could set objectives for another, much as artificial intelligence systems are already used to interpret other systems and so make their results understandable to the person who has to make sure the system is learning the right thing.

Perhaps, in a way, having systems set objectives for other systems would be good news, but any error in the formulation of objectives would be dragged along through successive generations of artificial intelligence and, probably, nobody would be left who could correct it: the discussion would already be at a level out of reach for a human whose knowledge had been degraded.

The situation, as it stands today, presents two problems:

  • Who, why and for what purpose is going to try to acquire deep knowledge of any subject when superficial knowledge is available infinitely more cheaply and quickly?
  • Supposing we wanted to acquire that level of knowledge, will we have the chance to do so or, with its acquisition mediated by artificial intelligence systems, will we never reach the depth of knowledge needed to challenge and improve the systems?

The problem is not artificial intelligence; this is a process that has been under way for a long time. Sowell identified it correctly, as the opening quotation shows, but he stopped at specialization and did not go deeper into the spiral of knowledge degradation and its effects.

Artificial intelligence has only supplied the technological element that was needed to make the process official and, perhaps, to make it irreversible.

Sterile discussions about competencies, Emotional Intelligence and others…

When the «Emotional Intelligence» fashion arrived with Daniel Goleman, I was among the discordant voices affirming that the concept and, especially, the use made of it, were nonsense. Nobody can seriously deny that personal features are key to success or failure. If we want to call them Emotional Intelligence, that's fine. It's a marketing-born name, not very precise, but we can accept it.

However, losing the focus is not acceptable… and some people lose the focus with statements like «80% of success is due to Emotional Intelligence», well above the percentage due to «classic» intelligence. We lose focus too with statements comparing competencies with academic degrees and the role of each in professional success. These problems should be analyzed in a different and simpler way: it's a matter of sequence, not of percentage.

An easy example: what is more important for a surgeon to be successful, the academic degree or the skills shown inside the OR? Of course, this is a tricky question where the trick is highly visible. To enter the OR armed with a scalpel, the surgeon needs an academic qualification and/or a specific license. Hence, the second filter (skills) is applied to those who already passed the first one (academic qualification), and we cannot compare skills and academic qualification in percentage terms.

Of course, this is an extreme situation, but we can apply it to the concepts around which some sterile discussions appear. Someone can perform well thanks to Emotional Intelligence, but the entrance to the field is earned with intelligence in the most commonly used meaning of the word. Could we say that, once past an IQ threshold, we would do better improving our interaction skills than adding (if that were possible) 10 more IQ points? Possibly… but things don't work that way: we define access through a threshold value and performance through other criteria, always comparing people who share something, namely that they are all above the threshold. Then… how can anyone say that «Emotional Intelligence is at the root of 80% of success»? It would be false, but we can turn it into something true by adding «if the comparison is made among people whose IQ is at least medium-high». The problem is that, with this addition, it stops being false only to become a platitude.
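A toy simulation can make the sequence-versus-percentage point visible. Every number below is invented; the only assumptions are that success depends equally on both factors and that access is granted only above an IQ threshold:

    import random
    from statistics import correlation   # available from Python 3.10

    random.seed(0)
    n = 100_000
    iq = [random.gauss(100, 15) for _ in range(n)]
    ei = [random.gauss(0, 1) for _ in range(n)]
    # Success is built with EQUAL true weights for both factors, plus some noise.
    success = [0.5 * (q - 100) / 15 + 0.5 * e + random.gauss(0, 0.3)
               for q, e in zip(iq, ei)]

    admitted = [i for i, q in enumerate(iq) if q >= 120]          # the access filter
    iq_a = [iq[i] for i in admitted]
    ei_a = [ei[i] for i in admitted]
    success_a = [success[i] for i in admitted]

    print(correlation(iq, success), correlation(ei, success))          # similar in the whole population
    print(correlation(iq_a, success_a), correlation(ei_a, success_a))  # "EI" dominates among the admitted

Within the admitted group the IQ range is compressed, so the other factor appears to explain most of the remaining differences; that is an artifact of the sequence of filters, not evidence of an 80% share.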

We cannot compare the relative importance of two factors if one of them refers to access to the job while the other refers to performance once in the job. It's like comparing bacon with speed, but using percentages to look more «scientific».

Windows Knowledge

Perhaps, at a moment when Microsoft is not going through its best days, speaking about Windows could look like a paradox. However, Windows itself is a good example of the kind of knowledge that many companies prize right now.

Why Windows? You switch on your computer and find a desktop. The desktop has folders and documents, and you even have a trash bin. This environment allows you to work in the old-fashioned way: we move documents from one folder to another and, when a document is no longer useful, we put it in the trash bin, and everything seems perfect… as long as everything works the way it is supposed to work. What happens when we get a blue screen or, simply, the computer does not start?

At that moment we find that everything was false. Desktop, folders, documents…? Everything is false. Everything is part of a complex metaphor, good while things work as expected but completely useless once something fails. What kind of real knowledge does the Windows user have? An operating knowledge that can be enough in more than 90% of cases. That's fine as long as the remaining cases cannot bring unexpected and severe consequences, but we see this kind of knowledge in more and more places, including critical ones.

When the 2008 crisis started, someone said that many banks and financial institutions had been behaving like casinos. Other people, more statistically seasoned, denied it, pointing out that, had they been behaving like casinos, the situation would never have become what it was, because casinos have the odds in their favor. Other environments do not have this advantage, yet they behave as if they had it, with unforeseeable consequences.

Human Resources and Mathematical Fictions

It is hard to find issues more discussed and less solved than how to quantify Human Resources. We have looked for tools to evaluate jobs, to evaluate performance and to measure at what percentage objectives were met. Some people have tried to quantify, in percentage terms, how well an individual and a job fit each other and many have even tried to obtain the ROI of training. Someone recovered the Q index, originally meant to quantify speculative investments, and turned it into the main variable for Intellectual Capital measurement, and so on.

Trying to get everything quantified is as absurd as denying a priori any possibility of quantification. However, some points deserve to be clarified:

The new economy is the new motto, but measurement and control instruments and, above all, business mentality are defined by engineers and economists; hence, organizations are conceived as machines that have to be designed, adjusted, repaired and measured. However, it is common to see that the rigor demanded when meeting objectives is not applied to the definition of the indicators themselves. That has produced something that is called here Mathematical Fictions.

A basic design principle should be that no indicator can be more precise than the thing it tries to indicate, whatever the number of decimal digits we use. When someone insists on keeping a wrong indicator, consequences appear, and they are never good:

  • Management behavior is driven by an indicator that can be misguided because of the sneaky nature of the variable supposedly indicated. It is worth remembering what happened when some governments decided that the main priority in public healthcare was reducing the number of days on waiting lists instead of the fluffy "improving the Public Health System". A common misbehavior was to give priority to the less time-consuming interventions, so as to reduce the number of citizens waiting, while delaying the most important ones.
  • Measurement systems are developed whose costs are not covered by the improvement supposedly obtained from them. In other words, control becomes an objective instead of a vehicle, since the advantages of control do not cover the costs of building and maintaining it. For instance, some companies, trying to control the abuse of photocopies, demand a form for every single photocopy, making the control much more expensive than the controlled resource.
  • Mathematical fictions appear when we weight variables that, at best, are only useful in one situation and lose their value if the situation changes. The attempts around Intellectual Capital are a good example, but we commit the same error if we try to obtain percentages of person-job fit in order to use them to predict success in a recruiting process.
  • Above all, numbers are a language that is valid for some terrains but not for others. Written information is commonly rejected with "smart talk trap" arguments, but the fact is that we perceive fake arguments more easily in written or verbal statements than when they come wrapped in numbers. People tend to be far less demanding about the design of indicators than about written reports.
  • Even though we always try to use numbers as "objective" indicators, the ability of many people to handle those numbers is surprisingly low. We do not need to mention the journalist who wrote that the Galapagos Islands are hundreds of thousands of kilometers away from the coast of Ecuador, or the common confusion between the American billion and the European billion. Two easy examples show how numbers can lose any objectivity through bad use:

After the Concorde accident in Paris in 2000, the media reported that it was the safest plane in the world. If we consider that, at that time, only fourteen planes of the type were flying, instead of the thousands of not-so-exclusive planes, it is not surprising that no accident had happened before and, hence, nobody could claim it was the safest plane. The sample was far too small to say that.

Another example: in a public statement, the airline Iberia said that travelling by plane is 22 times safer than doing it by car. Does that mean that a minute spent in a plane is 22 times safer than a minute spent in a car? Far from it. The statement can be true or false depending on another variable: exposure time. A Madrid-Barcelona flight lasts about seven times less than the same trip by car. However, if we compare one hour inside a plane with one hour inside a car, the result could be very far from those 22 times.
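The arithmetic is simple enough to write down. The 22:1 and 7:1 ratios come from the text; the absolute risk figure is invented just to have something to divide:

    risk_per_car_trip = 22.0e-6                       # hypothetical absolute figure
    risk_per_plane_trip = risk_per_car_trip / 22      # "22 times safer" per trip

    car_hours, plane_hours = 7.0, 1.0                 # the car trip lasts about 7 times longer

    risk_per_car_hour = risk_per_car_trip / car_hours
    risk_per_plane_hour = risk_per_plane_trip / plane_hours

    print(risk_per_car_hour / risk_per_plane_hour)    # ~3.1: per hour of exposure, not 22

Per trip the plane wins by a factor of 22; per hour of exposure the advantage shrinks to roughly 22/7, about three, which is the whole point about exposure time.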

The only objective of these examples is to show how numbers can mislead too, and that we are less prepared to detect the trick than when we are dealing with written language.

These are old problems but, we have to insist, that does not mean they are solved and, perhaps, we should arrive at Savater's idea that we are not dealing with problems but with questions. Hence, we cannot expect a "solution" but a contingent answer that will never close the question for good.

If we work with this in mind, measurement acquires a new meaning. If we have contingent measurements and we are willing to build them seriously and to change them when they become useless, we can solve some (not all) of the problems linked to measurement. However, problems arise when measurement is used to inform third parties, because that can limit the possibility of change.

An example from the Human Resources field can clarify this idea:

Some years ago, job evaluation systems went through a real crisis. Competency models came out of that crisis, but they have problems of their own when it comes to measurement. However, knowing why job evaluation systems started to be displaced is very revealing:

Even though there are no big differences among the most popular job evaluation systems (we will use Know-How, Problem Solving and Accountability), using a single table to compare different jobs on these three factors is brilliant. However, it has some problems that are hard to avoid:

  • Reducing all the ratings coming from the three factors to a single currency, the point, implies the existence of a "mathematical artifact" to weight the ratings and, hence, to prime some factors over others.
  • If, after that, there are gross deviations from market levels, exceptions are required, and these go directly against one of the main values that justified the system: fairness.

Despite these problems, job evaluation systems left an interesting and little-used legacy: before converting ratings into points, that is, before the mathematical fictions start, every single factor has to be rated. That is high-quality information, useful, for instance, to plan professional paths. A 13-point difference does not explain anything, but a difference between D and E, if they are clearly defined, is a good index for a Human Resources manager.
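A hypothetical sketch of that "mathematical artifact" (the letter-to-point scale and the weights are invented, not those of any real system) shows why the points say less than the letters:

    POINTS = {"A": 50, "B": 100, "C": 175, "D": 260, "E": 350}                    # assumed point scale
    WEIGHTS = {"know_how": 0.6, "problem_solving": 0.25, "accountability": 0.15}  # assumed weights

    def total_points(profile, weights=WEIGHTS):
        return sum(weights[factor] * POINTS[rating] for factor, rating in profile.items())

    job_a = {"know_how": "D", "problem_solving": "C", "accountability": "C"}
    job_b = {"know_how": "C", "problem_solving": "D", "accountability": "D"}

    print(total_points(job_a), total_points(job_b))                  # 226.0 209.0: job_a ranks higher
    other = {"know_how": 0.2, "problem_solving": 0.4, "accountability": 0.4}
    print(total_points(job_a, other), total_points(job_b, other))    # 192.0 243.0: the ranking flips

The letter profile is stable information; the single figure of merit depends entirely on weights that someone chose.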

If that is so… why is this potential of the system left unused? There is an easy answer: because job evaluation systems have been used as a salary negotiation tool, and that brings another problem: the quantifiers are badly designed and, furthermore, they have been used for goals different from the original one.

The use of joint committees for salary bargaining, among other factors, has nullified the analytical potential of job evaluation systems. Once a job has been rated in a certain way, it is hard to know whether that rating is real or comes from the vagaries of the bargaining process.

While job evaluation remained an internal tool of the Human Resources area, it worked fine. If a system started to work poorly, it could be ignored or changed. However, once it becomes a main piece in the bargaining, it loses those features and, hence, its usefulness as a Human Resources tool disappears.

Something similar happens if we speak about the Balanced Scorecard or Intellectual Capital. If we analyze both models, we find that there is only one different variable and a different emphasis: we could say, without bending the concepts too much, that the Kaplan and Norton model equals Intellectual Capital plus the financial side; but there is another, more relevant difference:

The Balanced Scorecard is conceived as a tool for internal control. That implies that changes are easy, while Intellectual Capital was created to give information to third parties. Hence, its measurements have to be more permanent, less flexible and… less useful.

Actually, there are many examples where the double use of a tool nullifies at least one of its uses. The very idea of "double accounting" implies criticism. However, pretending that a system designed to give information to third parties can be, at the same time and with the same criteria, an effective tool for control is quite close to science fiction.

Competency systems have their own share of mathematical fiction too. It is hard to create a system able to capture all the competencies and to avoid overlaps among them. If that is already hard… how is it possible to weight the variables that define job-occupant fit? How many times are we evaluating the same thing under different names? How can we weight a competency? Is that weight absolute, or should it depend on contingencies? Summarizing… is it not a mathematical nonsense aimed at getting a look of objectivity and, just in case, at justifying a mistake?

This is not a declaration against measurement and, even less, against mathematics, but against their simplistic use. "Make it as simple as possible, but not simpler" is a good idea that is often forgotten.

Many of the figures we use, not only in Human Resources, are real fictions dressed up with a supposed objectivity that comes from using a numeric language whose drawbacks are quite serious. Numeric language can be useful to write a symphony, but nobody would use it to compose poetry (unless someone decided to use the cheap trick of converting letters into numbers); and yet there is a widespread opinion of numbers as a universal language or, as the Intellectual Capital pioneers put it, "numbers are the commonly accepted currency of business language".

We need to show not only momentary situations but dynamics and how to explain them. That requires written explanations which, certainly, can mislead but, at least, we are better equipped to detect it than when the deception comes wrapped in numbers.

Lessons from 9/11 about Technology

A long time ago, machines started to be stronger and more precise than people. That is not new but… are they smarter too? We can set aside developments close to SciFi, like artificial intelligence based on quantum computing or on interaction among simple agents. Instead, we are going to deal with present technology, its role in an event like 9/11 and the conclusions we can draw from it.

Let's start with a piece of information: a first-generation B747 required three or four people in a cockpit with more than 900 elements. A last-generation B747 only requires two pilots, and the number of elements inside the cockpit has decreased by two thirds. Of course, this has been possible through the introduction of I.T. and, as a by-product, through the automation of tasks that previously had to be performed manually. The new plane appears easier than the old one. However, the amount of tasks that the plane now performs on its own makes it a much more complex machine.

The planes used on 9/11 could be considered state-of-the-art planes at that time, and this technological level made the attack possible, together, of course, with a number of things far removed from technology. Something like 9/11 would have been hard with a less advanced plane. Handling old planes is harder, and the collaboration of the pilots in a mass murder would have been required. Not an easy task, getting someone to collaborate in his own death under a death threat.

The solution was making the pilot expendable and that, if the plane is flying, requires another pilot willing to take his own life. What is the training cost for that pilot? In money terms, a figure around $120,000 could be more or less right if we speak about training a professional pilot. However, that would not have been hard to get for the people who organized and financed 9/11. A harder barrier to pass is the time required for this training. Old planes were very complicated and their handling required a good amount of training, acquired over several years. Would the terrorists have been so patient? Could they trust the commitment of the future self-killers over all those years?

Both questions could have led the organizers to reject the plan as unfeasible. However, technology played its role in a very simple way: under normal conditions, modern planes are easier to handle and, hence, they can be flown by less knowledgeable and less expert people. From this starting point, the situation appears in a different light: how long does it take for a rookie pilot to acquire the dexterity required to handle the plane at the level demanded by the objectives? The facts showed the answer: a technologically advanced passenger plane is easy to handle, at the level required, by a low-experience pilot after an adaptation through simulator training.

Let's go back to the starting question: machines are stronger and more precise than people; are they smarter too? We could start discussing the different definitions of intelligence but, anyway, there is something machines can do: once a way to solve a problem has been defined, that way can be programmed into a machine so that the problem is solved automatically, again and again. As a consequence, there is a displacement of complexity from people to the machine, allowing modern and complex machines to be handled by people less able than those who operated the former machines with their more complex interfaces.

Of course, there is an economic issue here: a large investment in technological design can be recovered if the number of machines sharing the design is high enough. The investment in design is made only once, but it can drive important savings in the training of thousands of pilots. At this point the automation paradox appears: modern designs produce more complex machines with a good part of their tasks automated. Automation makes these machines easier to handle under normal conditions than the previous ones. Hence, less-trained people can operate machines that, internally, are very complex. Once complexity is hidden at the interface level, less-trained people can drive more complex machines, and that is where the payback of automation lies.
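A back-of-the-envelope version of that trade-off, with purely invented figures: if D is the extra investment in design and automation, s the training saving per pilot and N the number of pilots who will fly the resulting fleet, the automation pays off when

    N \cdot s > D, \qquad \text{e.g.}\quad 10\,000 \times \$50\,000 = \$5 \times 10^{8} > D

so even a design bill of several hundred million dollars is recovered, provided the fleet, and the population of cheaper-to-train pilots, is large enough.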

 

The scary question is this one: what happens in situations that were unforeseen and, hence, not included in the technological design? In high-risk activities, the manufacturer usually has two answers to this question: redundancy and manual handling. However, both possibilities require a previous condition: the problem has to be identified as such in a clear and visible way. If it is not, or if, even after being identified, it appears in a situation where no time is available, the people trained to operate the machine can find that the machine "goes crazy" without any clue about the causes of the anomalous behavior.

Furthermore, if the operator receives full training, that is, training related not only to the interface but to the principles of the internal design, automation could no longer be justified, because of the increased training costs. We already know the alternative: the capacity to respond to an unforeseen event is seriously jeopardized. 9/11 is one of the most dramatic tests of how people with little training can perform tasks that, before, would have required much more. However, this is not an uncommon situation, and it is nearer to our daily life than we might suspect.

Every time we have a problem with the phone, an incident with the bank or an administrative problem with the gas or electricity bill, we can start a process by calling Customer Service. How many times, after bouncing from one department to another, does someone tell us to dial the very number we dialed at the beginning? Hidden under these experiences there is a technological development model based on complex machines and simple people. Is this a sustainable model? Technological development produces machines harder and harder for their operators to understand. In this way, we do better and better the things we already knew how to do, while the things that were already hard become harder and harder.

9/11 was possible, among other things, as a consequence of a technological evolution model. This model is showing itself to be exhausted and in need of a course change. Rasmussen stated the requirement for this course change as a single condition: the operator has to be able to run cognitively the program that the machine is performing. This condition is not met and, were it made mandatory, it could erase the economic viability, leading to a double challenge: a technological one, making technology understandable to users beyond the operating level under known conditions, and an organizational one, avoiding the loss of the economic advantages.

Summarizing: performing better at the things we already performed well and, to achieve that, performing worse at the things we were already performing poorly is not a valid option. People require answers always, not only when automation and I.T. allow it. Cost is the main driver of the situation. Organizations do not answer unforeseen external events and, even worse, complexity itself can produce events from inside that, of course, have no answer either.

A technological model aimed at making the "what" easier while hiding the "why" is limited by its own complexity, and it is constraining in terms of human development. For a strictly economic vision, that is good news: we can work with fewer, less qualified and cheaper people. For a vision more centered on human and organizational development, the results are not so clear. On one side, complexity puts up a barrier that prevents the technological solution of problems produced by technology. On the other, that complexity and the opacity of I.T. turn the operators into slaves without the opportunity to be freed by learning.

Byzantine discussions about competencies, emotional intelligences, degrees and other matters

When the emotional intelligence fashion arrived, hand in hand with Daniel Goleman, I was among the discordant voices claiming that the concept and, above all, the use being made of it made no sense. That there are personal characteristics which are decisive for success or failure is clear; if we want to call them emotional intelligence, so be it. It is a name more marketing-driven than precise, but it can be accepted.

What is not acceptable is talking nonsense… and nonsense is what we get with statements such as that 80% of success is attributable to emotional intelligence, above intelligence in the sense we have always understood it. It is also nonsense to say that competencies are far more important than degrees for professional success, without intending, in either case, to deny the importance of the factors being given such primacy. The problem is much simpler: it is a question of sequence, not of percentage.

Let us explain it with a simple example: what is more important for a surgeon's success, the academic degree or the skill and abilities shown in the operating room? Of course, the question contains a trap, and a very visible one: to enter the operating room armed with a scalpel, the surgeon needs an academic degree; therefore, the second filter (skill and abilities) is applied to those who have passed the first one (the degree) and, therefore, we cannot establish a percentage comparison between degree and abilities: I choose people for their abilities from among those who hold the degree. How, then, can I talk about the relative percentage importance of the one and the other?

Of course, it is an extreme case, but it applies to the concepts I had in mind with the idea of byzantine discussions: someone may perform splendidly thanks to their emotional intelligence, but entry to the playing field is earned with intelligence, in the terms we have all known all our lives. Can we say that, for many jobs, once a certain IQ threshold has been passed (to use a metric everyone knows), it is more beneficial to improve one's ability to interact with others than to add 10 more IQ points? Probably yes; but if that is how things work, that is, if access is defined by a threshold value and performance as a selection by other criteria among those who have passed that threshold, can I say things like "emotional intelligence is responsible for 80% of success"? It remains nonsense unless I add "among those whose intellectual level is at least medium-high" and, if I make that addition, it stops being nonsense only to become a platitude.

We can apply identical reasoning to the debate about the relative importance of competencies and degrees: how many job offers appear (or appeared, when there were any) asking simply for "a university degree"? Note that it makes no difference to them whether it is civil engineering or a degree in advanced philately, should such a thing exist. There are professions that demand a specific degree for access (doctor, pharmacist, train driver…) and there the discussion is simply absurd: selection is made by competencies among those who hold the degree and, therefore, it makes no sense to compare the relative importance of the two factors. Even in the cases where no degree or professional license is explicitly demanded, there is usually a certain bias that makes the conditions of access unequal.

In conclusion, we cannot compare the relative importance of two factors when one of them refers to access to the job while the other refers to performance once in the job. It is like comparing bacon with speed… and, on top of that, using percentages to make it look more "scientific".

Why isn't the TV weather forecast given by a meteorologist?

In the great majority of cases this has been so for quite a few years, as can be seen in this amusing video. From the days of Mariano Medina and his "barco K" in the Azores to the present, when stations look for presenters and we even get cases like Minerva Piquero, who started by presenting the weather forecast and went on to a career as a presenter, a lot of rain has fallen… and the profile of those who tell us it is going to rain has kept changing.

Actually, not only on television but in the very organizations devoted to weather forecasting, meteorologists have been losing ground to systems whose results they can neither reproduce nor question. The growth in observation sources and types, and the processing of all of them, has given rise to a system that is materially impossible to replicate… even if the meteorologist knew how the computer works and knew the algorithms it uses to prepare the forecast.

He simply cannot and, if the meteorologist is being displaced in his own fiefdom, how could he not be displaced on television? Experience shows that weather forecasts, when made for short time spans, are right far more often than before (the story is well known of a weatherman, Eugenio Martín Rubio, who bet his moustache on a forecast and lost it), but those forecasts cannot be reproduced and, from time to time, the computers do silly things: when an exceptional, unforeseen situation occurs, it may turn out that the algorithm in use works well almost always but not in that precise situation and, unfortunately, there is no real way to validate or invalidate a forecast.

The consequence is that meteorologists have been pushed out of television, but that is not the most serious part: they have also been pushed out, or are being pushed out, of the places that specialize in weather forecasting.

Web 3.0: Very high expectations

Web 2.0 can be defined as the explosion of social networks on the Internet. Google itself has been displaced by Facebook in the United States as the most visited site. Little more can be asked of the development of the networks. However, when people began to talk about Web 2.0, some anticipated the idea of the "semantic web", an idea that was gradually shelved and is now re-emerging as the central feature of the coming Web 3.0.

A brief historical review can help us see where we come from, where Web 3.0 might be heading and whether we are really in a position for the concept of "semantic web" to be a promise that can be kept, and under what conditions.

After a visit to the M.I.T. robotics museum, the causes of the failure of artificial intelligence were discussed right here, causes that are inseparably linked to the concept of semantics: Searle, with his "Chinese room" experiment, exposed the enormous difference between operating a system and understanding a system, and even today some of the old glories of Artificial Intelligence have not realized it:

The "Chinese room" was a reply to Alan Turing showing how it is possible to fool an external Chinese-speaking observer into believing that one knows Chinese when one is simply following instructions without knowing what the conversation is about.

There lies the great failure of artificial intelligence, or GOFAI, as many know it. This does not mean that it is impossible to develop a genuine artificial intelligence, one that goes beyond number-crunchers in the style of the machine that defeated Kasparov at chess without knowing what chess is; but artificial intelligence as conceived first by Turing and later by Minsky and his companions is more than probably a dead end.

I will not go deeply now into the reasons why I believe that model of artificial intelligence is a dead end, but a summary of those reasons run through Winzip would leave a single word: meaning.

Can we add meaning to the web when we have not been able to add it to the way computers operate? The answer is yes, and there are two ways: one immediate and another that may become possible in a more or less distant future.

The immediate way of adding meaning or, if you prefer, intelligence: using the intelligence of the users. Many readers of this article may be users of Alexa and, if they are not, they are missing out. Alexa introduced a change that Google is also introducing step by step: using the users to add to searches the criterion that a machine may lack, however powerful its algorithm.

Something as apparently simple as showing, in the Alexa toolbar, which pages are usually visited by the people who visit the one we are on right now requires a major social engineering effort, similar to the one made by Amazon (one of the originators of Alexa) which, when you buy a book on its website, shows which other books are usually bought by the customers who bought that one and, based on previous purchases, makes its recommendations whenever something appears that is being bought by people with a similar purchase history.

A computer capable of working with meanings? Not at all: an intelligent use of the information of the users, who are the ones providing the meaning. This gives us a clue as to where Web 3.0 might go: applying Alexa- or Amazon-style models to social networks, profiling their users and, instead of LinkedIn's "people you may know", showing "people with interests similar to yours". Well implemented, it could be an important step and, certainly, very far from the direction in which the famous GOFAI (the acronym for Good, Old-Fashioned Artificial Intelligence) was moving.
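A minimal sketch of that co-occurrence idea, with invented baskets (nothing here is Amazon's or Alexa's actual algorithm): the machine never knows what the items mean, it only counts which ones appear together in the histories the users provide.

    from collections import Counter
    from itertools import combinations

    baskets = [                                   # hypothetical purchase or visit histories
        {"on_intelligence", "the_master_algorithm"},
        {"on_intelligence", "the_master_algorithm", "godel_escher_bach"},
        {"the_master_algorithm", "deep_learning"},
        {"on_intelligence", "godel_escher_bach"},
    ]

    co_occurrence = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            co_occurrence[(a, b)] += 1
            co_occurrence[(b, a)] += 1

    def also_taken(item, top=3):
        """'Users who took this item also took...' ranked by co-occurrence count."""
        pairs = [(other, n) for (first, other), n in co_occurrence.items() if first == item]
        return sorted(pairs, key=lambda p: -p[1])[:top]

    print(also_taken("on_intelligence"))

Replace books with profiles or pages and the same counting yields "people with interests similar to yours"; the intelligence is in the aggregated behaviour of the users, not in the machine.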

A semantic web, then, would lean heavily on Web 2.0, and the fact that computers remained blind to meaning would say little about its potential in the semantic field, since the meaning would be provided by the users.

Thinking long term, this might not be the only solution, but I very much doubt that any other would go in the direction of GOFAI and its particular "Waiting for Godot", in which faster computers with more capacity will someday manage to behave intelligently… even though physical devices much slower than today's computers, such as humble neurons, beat them hands down in some fields.

Possibly the author who saw most clearly the future evolution of artificial intelligence was Jeff Hawkins, and the best recommendation is to read his book "On Intelligence". Web 3.0 cannot be expected to reach the levels Hawkins sets for an intelligent machine but, even so, and even taking its blindness to meaning into account, it could live up to the name "semantic web".

The immediate future will tell.

Business Schools: Guilty or innocent?

NOTE: This article was submitted for publication in a special issue at the express request of a business school in India. Given the time elapsed without receiving a copy of the publication, I take the liberty of publishing it on the blog:

It is not the first time that the usefulness of business schools has been questioned. Perhaps the pioneer was McCormack with his "What They Don't Teach You at Harvard Business School". Nowadays many people, as a consequence of the global financial crisis, focus mainly on ethical issues, but the real, old problem is to what extent an MBA can substitute for years of practice.

Before dealing with the big issue, we should start with two minor ones: case-based teaching, and big-company versus small-company management. Case-based teaching is often presented as a way to acquire condensed experience. A problem is presented in a few pages and the discussion in the classroom is expected to provide the learning to the attendees. I cannot say that this process does not produce learning. However, a good part of the acquired learning can be about how to argue or how to present one's own reasoning in the best possible light. A few years ago, Pfeffer and Sutton published in the Harvard Business Review an article called "The Smart-Talk Trap". They said that a particular type of talk is an especially insidious inhibitor of action: "smart talk". The seeds of this talk are often sown in business schools and corporate life, where leadership potential is equated with the ability to speak intelligently.

Again, this is not new. In "Lord Chesterfield's Letters", Lord Chesterfield explains how he was asked to speak about something of which he was fully ignorant. On the other side, the speaker had deep knowledge, but Chesterfield's "smart talk" made him win the debate, regardless of his lack of knowledge. There is a fact: case-based teaching starts with a case written in a few pages. Many managers would like to have their everyday problems presented like that: a paper omitting the unnecessary, focusing on the important variables and explaining in clear terms the positions and the motivations of every one of the actors. Obviously, that would mean having more than half of the problem already solved.

Corporate life implies political activity: who can be trustworthy and useful allies? What are the positions? Are they rock-solid, or are there divisions? Whom do we have to attract to our positions? What are the rationale and the interests behind every position? This is a hard challenge for any kind of training. However, case-based teaching, if its limitations are not clearly explained, can provide the self-confidence but not the abilities needed to navigate these complex issues.

Another point is the management of big businesses and whether its rules are the same for small businesses. Usually, the approach in MBAs could be defined as "if you can do something complex, for sure you can do something easier". Therefore, programs are designed around big corporations, since small companies are considered easier to manage. Unfortunately, this is not true. Big and small companies run on different sets of management rules. The situation can be explained through a metaphor from the aviation field:

Perhaps not many people today know who Otto Lilienthal was. Lilienthal was one of the pioneers of aviation, and he built gliders that were controlled by displacing the body to change the center of gravity. As he gained more and more experience, he built faster and heavier gliders. His final moment arrived when he built a glider so heavy and so fast that his body weight was not enough to control it. Of course, that did not stop the progress of aviation. Simply, people learnt that heavier and faster planes required controls and indicators.

I suppose it is already clear where the metaphor is pointing: to become bigger, we need to manage through formal controls and indicators but… what if we are not bigger and do not plan to become bigger? What if our size is the right one for our business? Should the lack of formal controls and indicators be taken as proof of a lack of professionalism? Probably, the answer coming from many business schools and many MBA graduates would be an emphatic "yes". However, we do not control for the sake of control itself. Control is a cost, as Fukuyama established in "Trust: The Social Virtues and the Creation of Prosperity" and, before him, Luhmann in "Vertrauen". We cannot accept as a self-evident truth the need for formal controls to know what is happening in our company. Millions of small companies could say so, and they are not willing to incur an avoidable cost.

The wise behavior is to adapt the level of control and formality to the requirements of the situation. Many small companies are unable to adapt to a situation that requires formal controls. They finish as Otto Lilienthal did. Many others can fall in love with the self-confidence of an MBA graduate and, after the insistence on formal controls, they can end up discovering that the business model the MBA brings does not fit their needs. These two problems are shared by many MBA programs, including the best among them. However, when someone tries to criticize business schools as a whole, there is something that could be deeply unfair, and it lies inside the learning mechanisms themselves:

The ancient Greeks said that no pupil could be better than his master. The facts show that this is not true, but they do not show why. Actually, nobody can teach anybody. We can try to transmit contents or facts but, to be learnt, these contents have to be adapted to the mental structure of the pupil, and this adaptation is far from being a mere copy. If something is very far from that mental structure, it may not even be perceived. If it is a little nearer, it may be rejected and, of course, if it is something very familiar, or perceived as such, nothing happens: if we already know something, we cannot consider it information.

The miracle happens when something is near the border between the known and the unknown. There is an adaptation of the content, of the mental structure, or of both. At that moment, since the new contents do not have the same value for pupil and for master, it is clear that some pupils can go far beyond their masters. Since this is a universal phenomenon linked to human beings, it does not seem fair to accuse business schools of failing to provide the creativity or, furthermore, the ethical behavior that should be expected from a top manager. Nobody, including business schools, has a recipe to create innovative and ethical people. It is true that the best of them can recruit people with higher potential and, since the raw material is better, no one should be surprised that the final results are better too.

A highly demanded B-school has a recruiting base that is not accessible to minor schools. The concept of recruiting base and its effects applies both to teachers and to attendees. If they have more people to choose from, it can be expected that the best will be chosen and, hence, that the results will be better. That can be a problem for the validity of rankings: is the ranking validating the program contents or the recruiting process? From a scientific point of view, the variables used are not valid, since the recruiting process has modified the conditions. If someone wanted a valid experiment, the conditions would have to be very different (a sketch of the final comparison follows the list):

  1. Select 20 candidates that you consider suitable for your MBA program.
  2. Reject 10 of them at random.
  3. Follow the 20 (rejected and accepted) over the years to know how they are doing professionally.
  4. Look for differences between both groups. If you find them, these are the differences that could be attributed to the B-school teaching.
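Step 4 is the only statistical part and it is simple; a minimal sketch under invented outcome scores (these are not real data) could look like this:

    import random
    from statistics import mean

    random.seed(1)
    accepted = [random.gauss(7.0, 1.5) for _ in range(10)]   # hypothetical career-outcome scores
    rejected = [random.gauss(6.5, 1.5) for _ in range(10)]   # the group rejected at random

    observed_gap = mean(accepted) - mean(rejected)

    # Permutation test: how often would pure chance produce a gap at least this large?
    pooled, n = accepted + rejected, len(accepted)
    extreme = 0
    for _ in range(10_000):
        random.shuffle(pooled)
        if mean(pooled[:n]) - mean(pooled[n:]) >= observed_gap:
            extreme += 1

    print(observed_gap, extreme / 10_000)   # the gap attributable to the school, and its p-value

With only ten people per group the test will rarely be conclusive, which is part of the reason nobody runs it; but this is the comparison that would separate the effect of the program from the effect of the recruiting filter.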

Of course, nobody is going to perform an experiment like this, because both commercial interest and ethical behavior would speak against it. However, if someone wants to validate the program while removing the impact of the recruiting process, it is hard to find another possibility. Going back to the idea of knowledge acquisition and the value of knowledge, it is interesting to reflect on what the right moment for an MBA is. It is common practice to turn the MBA into the "last year of college", and the results are very often disappointing. The same happens with many doctorates.

Anyone who has tried to learn a language or a new computer program knows something: we are never able to learn it until the moment that language or program becomes necessary. Furthermore, the "if-I-had-known" syndrome, which appears when we suddenly find that our problem had a straightforward and easy solution, is a very good help for learning and for converting theory into practice. In other words, the mental structure of a brilliant student without experience may not make him suitable for an MBA. Again, the B-school cannot teach practice. The jump from theory to practice has to be made by the student and, to make it, the perception of a need arising from previous practice may be a must.

Of course, B-schools deal with their own necessities as businesses, and they try to exploit their knowledge as much as possible. That means reaching segments of the population that are wide but perhaps not fully suitable for an MBA program. B-schools are subject to criticism like anything else. However, if we criticize the school because of the distance between teaching and reality, or because of the ethical behavior of some graduates, we could be losing the focus. Psychologists know that every general theory about how the mind works has always failed at the same point: the range of convenience. Nobody could say that behaviorism did not have a good point with the stimulus-response theory. However, when behaviorists tried to explain the whole mind with that mechanism alone, they failed. The same can be said of Gestalt, psychoanalysis, cognitive psychology and so on. All of them were good until the moment they went outside their range of convenience. The same can be applied to B-schools. They cannot provide real experience (the case-teaching method is not real experience), but they can help to build knowledge on top of the real experience of the attendees… if that experience exists. If not, criticism of B-schools could be legitimate, but it would be better aimed at the business approach and at the temptation to go outside their range of convenience as a way to grow.

Perhaps they should narrow the focus or, alternatively, diversify their products to adapt them to different publics: students, experienced executives, big companies, small companies, public administrations and so on. Each one has specific requirements, and the "one size fits all" policy may not be valid anymore.

Ethics is another issue. There is a book published by HBS whose title is very expressive: "Can Ethics Be Taught?". My personal answer would be: no, not if the teachings go against the environment. Making the B-school responsible for the ethical behavior of its graduates is a clear sign of optimism. Does anyone, inside or outside a B-school, really think that the contents taught are powerful enough to alter the ethical behavior of the students? If those contents are aligned with the reward structure that the student is going to find outside, yes, it could be. If not, any call to ethical behavior will be taken as the sermon of a failed preacher.

There is a fact in business life: every organization rewards something and, even when it claims it doesn't, it does anyway. We can try to help the student understand this simple fact and why many organizations, consciously or not, reward unethical behavior. Can we tell someone to manage with the long term in mind if the rewards are linked to short-term results? Perhaps it would be more useful to tell the top manager what the unintended effects of the reward system are but… what if the top manager is subject to the same short-term-centered model?

One of my first assignments as a junior consultant was to be part of a team assisting in the reorganization of a company. Its managers wanted to be prepared for a free-market environment after coming from a monopoly. The project started with a president, two general managers and five functional managers. After the project, which was considered a success, they had five general managers and 27 functional managers. It was a surprising idea of the meaning of "success", even for a junior like me. What happened, and why did they consider the result a success? They started from the bottom, and every supervisor was aware that asking for more resources could mean raising his own level in the company. His boss could have stopped the process, but the boss was subject to the same rationale. The process was repeated until it reached the top, and everyone was happy with the results… everybody except the shareholders but, at that moment, it was a State-owned company and nobody foresaw problems about its future viability.

What would have been the right advice from a B-school to a member of this company? Clearly, if that advice had gone against the reward system, it would have been ignored. We can ask people for behavior aligned with the interest of their company and against their own interest; however, if we have to do that, it seems quite clear that the reward structure is wrong. Cases like Enron and others cannot be used to criticize B-schools because of the behavior of their graduates. The right point to criticize is the reward system: who is the client of someone who has to write a report about the behavior of the very party that is hiring him? That is the real problem, not the ethical teaching received by B-school graduates.

Summarizing: criticizing B-schools can be legitimate, but not all the criticisms are. A business policy driven to accept the wrong people in order to increase market share can be criticized. A poor adaptation to different markets can be criticized too. Criticizing B-schools because of the ethical behavior of their graduates is unfair. The same can be said about the distance between theory and practice: it is there but, perhaps, we should focus more on the general learning process than on the contents of the MBA programs. The problem is deeper and wider and, even though it is a very old one, it still seems not to be fully understood.

Dr. José Sánchez-Alarcos

http://es.linkedin.com/in/sanchezalarcos

http://tinyurl.com/organizational-learning