Category: Intellectual Capital

Sterile discussions about competencies, Emotional Intelligence and others…

When the “Emotional Intelligence” fashion arrived with Daniel Goleman, I was among the discordant voices affirming that the concept and, especially, the use made of it, were nonsense. Nobody can seriously deny that personal traits are key to success or failure. If we want to call them Emotional Intelligence, that’s fine. It is a marketing-born name, not very precise, but we can accept it.

However, losing focus is not acceptable, and some people lose it with statements like “80% of success is due to Emotional Intelligence, well above the percentage due to ‘classic’ intelligence”. We lose focus too with statements comparing competencies with academic degrees and the role of each in professional success. These problems should be analyzed in a different and simpler way: it is a matter of sequence, not of percentage.

An easy example: what is more important for a surgeon to be successful, the academic degree or the skills shown inside the OR? Of course, this is a tricky question where the trick is highly visible. To enter the OR armed with a scalpel, the surgeon needs an academic qualification and/or a specific license. Hence, the second filter (skills) is applied to those who passed the first one (academic qualification), and we cannot compare skills and academic qualifications in percentage terms.

Of course, this is an extreme situation, but we can apply it to the concepts where these sterile discussions appear. Someone can perform well thanks to Emotional Intelligence, but entrance to the field is earned with intelligence in its most commonly used meaning. Could we say that, once past an IQ threshold, we would do better to improve our interaction skills than to gain, if possible, 10 more IQ points? Possibly. But things do not work that way: we define the access level through a threshold value and performance through other criteria, always comparing people who share something, namely being above the threshold. Then how can anyone say “Emotional Intelligence is at the root of 80% of success”? Without the addition “if the comparison is made among people whose IQ is at least medium-high”, it is nonsense; with that addition, it stops being nonsense and becomes a triviality.
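The threshold-then-selection argument above is the classic restriction-of-range effect, and a minimal numpy sketch can show it. All the numbers here are invented for illustration (the coefficients, the threshold, and the assumption that IQ and “EI” are independent are mine, not data): in the full population the model gives IQ the larger weight, yet once everyone compared has passed the IQ threshold, the apparent correlation of IQ with success collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented population: IQ and "emotional intelligence" (EI), independent here.
iq = rng.normal(100, 15, n)
ei = rng.normal(100, 15, n)

# Invented success model: IQ deliberately matters more than EI overall.
success = 0.7 * iq + 0.3 * ei + rng.normal(0, 10, n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print("Full population:  r(IQ) =", round(corr(iq, success), 2),
      "  r(EI) =", round(corr(ei, success), 2))

# Apply the access threshold first: only people with IQ >= 120 enter the field.
mask = iq >= 120
print("Above threshold:  r(IQ) =", round(corr(iq[mask], success[mask]), 2),
      "  r(EI) =", round(corr(ei[mask], success[mask]), 2))
```

Among those above the threshold, IQ varies little, so its correlation with success drops sharply while EI’s barely moves; a naive “percentage of success” computed inside that group would crown EI even though the generating model says otherwise.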

Identical reasoning applies to the disquisition about the relative importance of competencies and degrees. How many job offers ask (or asked, when there were any) simply for “a university degree”? Note that it makes no difference to them whether it is civil engineering or a degree in advanced philately, should such a thing exist. Some professions require a specific degree for access (physician, pharmacist, train driver…) and there the discussion is plainly absurd: we select by competencies among those who hold the degree, so there is no point in comparing the relative importance of the two factors. Even where no specific degree or license is explicitly required, a certain bias usually makes the access conditions unequal.

In conclusion, we cannot compare the relative importance of two factors when one of them refers to access to the job while the other refers to performance once in the job. It is like comparing bacon with speed and, moreover, using percentages to look more “scientific”.

Windows Knowledge

Perhaps at a time when Microsoft is not living its best moment, speaking about Windows could be seen as a paradox. However, Windows itself is a paradigm of the kind of knowledge that many companies are prizing right now.

Why Windows? You turn on your computer and find a desktop. The desktop has folders and documents; there is even a trash bin. This environment allows working in the old-fashioned way: we move documents from one folder to another and, when a document is no longer useful, we drop it into the trash bin. Everything seems perfect… as long as everything works as it is supposed to. What happens when we get a blue screen or the computer simply does not start?

At that moment, we discover that everything was false. Desktop, folders, documents? All of it is part of a complex metaphor, good while everything works as expected but fully useless once something fails. What kind of real knowledge does the Windows user have? An operating knowledge that can be enough in more than 90% of cases. That is fine as long as the remaining cases cannot bring unexpected and severe consequences, but we see this kind of knowledge in more and more places, including critical ones.

When the 2008 crisis started, some said that many banks and financial institutions had been behaving like casinos. Others, more statistically seasoned, denied it, pointing out that, had they been behaving like casinos, the situation would never have become what it was, because casinos have the probabilities in their favor. Other environments do not have this advantage, yet they behave as if they had it, with unforeseeable consequences.

Human Resources and Mathematical Fictions

It is hard to find issues more discussed and less solved than how to quantify Human Resources. We have looked for tools to evaluate jobs, to evaluate performance, and to measure at what percentage objectives were met. Some people tried to quantify in percentage terms how well an individual fits a job, and many even tried to obtain the ROI of training. Someone recovered the Q index, aimed at quantifying speculative investments, to convert it into the main variable for Intellectual Capital measurement, and so on.

Trying to get everything quantified is as absurd as denying a priori any possibility of quantification. However, some points deserve clarification:

The New Economy is the new motto, but measurement and control instruments and, above all, business mentality are defined by engineers and economists; hence, organizations are conceived as machines that have to be designed, adjusted, repaired and measured. However, it is common for the rigor demanded in meeting objectives not to be applied to the definition of the indicators themselves. That has brought something that will be called here Mathematical Fictions.

A basic design principle should be that no indicator can be more precise than the thing it tries to indicate, whatever the number of decimal digits we use. When someone insists on keeping a wrong indicator, consequences appear, and they are never good:

  • Management behavior is driven by an indicator that can be misguided due to the slippery nature of the variable supposedly indicated. It is worth remembering what happened when some governments decided that the main priority in public healthcare was reducing the number of days on waiting lists instead of the fluffy “improving the Public Health System”. A predictable misbehavior is to give priority to the least time-consuming interventions, reducing the number of waiting citizens while delaying the most important operations.
  • Measurement systems are developed whose costs are not covered by the improvements supposedly obtained from them. In other words, control becomes an objective instead of a vehicle, since the advantages of control do not cover the costs of building and maintaining it. For instance, some companies, trying to control the abuse of photocopies, require a form for every single photocopy, making the control much more expensive than the controlled resource.
  • Mathematical fictions appear when we weight variables that, in the best case, are only useful for one situation and lose their value if the situation changes. Attempts related to Intellectual Capital are a good example, but we commit the same error if we try to obtain percentages of person-job fit to foresee success in a recruiting process.
  • Above all, numbers are a language that is valid for some terrains but not for others. Written information is commonly rejected with “smart-talk trap” arguments, but the fact is that we perceive fake arguments more easily in written or verbal statements than when they come wrapped in numbers. People tend to be far less demanding about indicator design than about written reports.
  • Even though we always try to use numbers as “objective” indicators, many people’s ability to handle them is surprisingly low. We do not need to mention the journalist who wrote that the Galapagos Islands are hundreds of thousands of kilometers off the Ecuadorian coast, or the common confusion between the American billion (10^9) and the European billion (10^12). Two easy examples show how numbers can lose any objectivity through bad use:

After the Concorde accident in Paris in 2000, media reported that it had been the safest plane in the world. If we consider that, at that time, only fourteen planes of the type were flying, against thousands of less exclusive planes, it is not surprising that no accident had happened before; hence nobody could claim from that record that it was the safest plane. The sample was far too small to say so.
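The sample-size point can be made concrete with a back-of-the-envelope sketch. Every number below is invented for illustration (the per-flight rate, fleet sizes and flight frequencies are not real aviation statistics): under an identical accident rate per flight, a fourteen-plane fleet would be expected to show zero accidents for decades, so its clean record proves very little.

```python
# All numbers invented for illustration; not real aviation statistics.
rate_per_flight = 1e-6                  # assume the same risk for both types

small_fleet_flights = 14 * 500 * 25     # 14 planes, ~500 flights/year, 25 years
large_fleet_flights = 3000 * 1000 * 25  # a mass-market type, same period

# Expected accident counts under identical per-flight risk.
expected_small = rate_per_flight * small_fleet_flights
expected_large = rate_per_flight * large_fleet_flights

print(f"small fleet: {expected_small:.2f} expected accidents")  # well below 1
print(f"large fleet: {expected_large:.0f} expected accidents")
```

With the same underlying risk, the small fleet’s most likely outcome is “no accidents yet”, which is exactly why its record cannot support a “safest plane” headline.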

Another example: in a public statement, the airline Iberia said that travelling by plane is 22 times safer than doing it by car. Does it mean that a minute spent in a plane is 22 times safer than a minute spent in a car? Far from it. The statement can be true or false depending on another variable: exposure time. A Madrid-Barcelona flight lasts about seven times less than the same trip by car. If we compare one hour inside a plane with one hour inside a car, the result could be very far from those 22 times.
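The exposure-time argument can be sketched with invented numbers (nothing below comes from Iberia or from real accident data; the per-hour rates and the 3x factor are assumptions for illustration): a ratio computed per trip silently multiplies the per-hour safety advantage by the difference in trip duration.

```python
# Invented illustrative numbers, not real safety statistics.
car_risk_per_hour = 3e-7
plane_risk_per_hour = car_risk_per_hour / 3   # suppose a plane hour is 3x safer

car_trip_hours, plane_trip_hours = 7.0, 1.0   # Madrid-Barcelona, road vs air

car_trip_risk = car_risk_per_hour * car_trip_hours
plane_trip_risk = plane_risk_per_hour * plane_trip_hours

# Per trip the plane looks 21x safer; per hour it is only 3x safer.
print(f"per trip: {car_trip_risk / plane_trip_risk:.0f}x safer")
print(f"per hour: {car_risk_per_hour / plane_risk_per_hour:.0f}x safer")
```

A headline “22 times safer” therefore mixes two different quantities, risk per hour and hours of exposure, and says nothing by itself about which one drives the figure.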

The only objective of these examples is to show that numbers can mislead too, and that we are less prepared to detect the trick than when we deal with written language.

These are old problems but, we have to insist, that does not mean they are solved. Perhaps we should accept Savater’s idea that we are not dealing with problems but with questions; hence, we cannot expect a “solution”, only contingent answers that will never close the question forever.

If we work with this in mind, measurement acquires a new meaning. If we accept contingent measurements, build them seriously and change them when they become useless, we could solve some (not all) of the problems linked to measurement. However, problems arise when measurement is used to inform third parties, since that limits the possibility of change.

An example from the Human Resources field can clarify this idea:

Some years ago, job evaluation systems went through a real crisis. Competency models emerged from that crisis, but they too have measurement problems. However, knowing why job evaluation systems started to be displaced is very revealing:

Even though there are no big differences among the most popular job evaluation systems, we will use the factors Know-How, Problem Solving and Accountability. Using a single table to compare different jobs on these three factors is brilliant; however, it has some problems that are hard to avoid:

  • Reducing all the ratings from the three factors to a single currency, the point, implies a “mathematical artifact” to weight the ratings and, hence, to prime some factors over others.
  • If, after that, there are gross deviations from market salary levels, exceptions are required, and these go directly against one of the main values that justified the system: fairness.

Despite these problems, job evaluation systems left an interesting and little-used legacy: before converting ratings into points, that is, before the mathematical fictions start, every single factor has to be rated. That is high-quality information, for instance, for planning professional paths. A 13-point difference does not explain anything, but a difference between D and E, if they are clearly defined, is a good index for a Human Resources manager.
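The “single currency” problem can be shown with a toy sketch. The weights and ratings below are hypothetical, not any real scheme: two jobs with very different factor profiles can collapse into exactly the same point total, which is why the points explain less than the factor ratings behind them.

```python
# Hypothetical weighting table: the "mathematical artifact" that primes
# some factors over others. Ratings are 1-5 per factor.
weights = {"know_how": 5, "problem_solving": 3, "accountability": 2}

def to_points(ratings):
    """Collapse three factor ratings into a single point score."""
    return sum(weights[f] * r for f, r in ratings.items())

job_a = {"know_how": 4, "problem_solving": 2, "accountability": 3}
job_b = {"know_how": 2, "problem_solving": 4, "accountability": 5}

# Same total, very different jobs: the profile information is gone.
print(to_points(job_a), to_points(job_b))  # → 32 32
```

A 32-versus-32 comparison hides everything a Human Resources manager would actually want to know about how the two jobs differ; the D-versus-E factor ratings retain it.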

If that is so, why is this potential of the system unused? There is an easy answer: because job evaluation systems have been used as a salary negotiation tool, and that brings another problem: the quantifiers have a bad design and, furthermore, they have been used for goals different from the original one.

The use of joint committees for salary bargaining, among other factors, has nullified the analytical potential of job evaluation systems. Once a job is rated in a certain way, it is hard to know whether the rating is real or comes from the vagaries of the bargaining process.

While job evaluation remained an internal tool of the Human Resources area, it worked fine: if a system started to work poorly, it could be ignored or changed. However, once the system becomes a central piece in bargaining, it loses these features and, hence, its usefulness as a Human Resources tool disappears.

Something similar happens with the Balanced Scorecard and Intellectual Capital. If we analyze both models, we find only one different variable and a different emphasis: we could say, without bending the concepts too much, that the Kaplan and Norton model equals Intellectual Capital plus the financial side. But there is another, more relevant difference:

The Balanced Scorecard is conceived as a tool for internal control, which makes changes easy, while Intellectual Capital was created to give information to third parties. Hence, its measurements have to be more permanent, less flexible and… less useful.

Actually, there are many examples where the double use of a tool nullifies at least one of its uses. The very idea of “double accounting” implies criticism. However, pretending that a system designed to give information to third parties can be, at the same time and with the same criteria, an effective control tool is quite close to science fiction.

Competency systems have their own share of mathematical fiction too. It is hard to create a system able to capture all the competencies while avoiding overlap among them. If this is already hard, how is it possible to weight the variables that define job-occupant fit? How many times are we evaluating the same thing under different names? How can we weight a competency? Is that value absolute, or should it depend on contingencies? Summarizing: is it not a mathematical nonsense aimed at getting a look of objectivity and, just in case, at justifying a mistake?

This is not a declaration against measurement and, even less, against mathematics, but against its simplistic use. “Make it as simple as possible, but not simpler” is a good idea that is often forgotten.

Many of the figures we use, not only in Human Resources, are real fictions adorned with a supposed objectivity coming from the use of a numeric language whose drawbacks are quite serious. Numeric language can be useful to write a symphony, but nobody would use it to compose poetry (unless someone resorts to the cheap trick of converting letters into numbers). Nevertheless, there is a general opinion of numbers as a universal language or, as the promoters of Intellectual Capital said, “numbers are the commonly accepted currency in the business language”.

We need to show not only snapshots but dynamics, and to explain them. That requires written explanations which, certainly, can also misguide; but at least we are better equipped to detect it than when the misguidance comes wrapped in numbers.

Lessons from 9/11 about Technology


A long time ago, machines became stronger and more precise than people. That is not new, but are they smarter too? Let us forget developments close to SciFi, like artificial intelligence based on quantum computing or on interaction among simple agents. Instead, we will deal with present technology, its role in an event like 9/11, and the conclusions we can draw from that.


Let’s start with a piece of information: a first-generation B747 required three or four people in a cockpit with more than 900 elements. A last-generation B747 requires only two pilots, and the number of elements inside the cockpit has decreased by two thirds. Of course, this has been possible through the introduction of information technology and, as a by-product, through the automation of tasks that previously had to be performed manually. The new plane appears easier than the old one. However, the amount of tasks the plane now performs on its own makes it a much more complex machine.


The planes used on 9/11 could be considered state-of-the-art at that time, and this technological level is part of what made the attack possible, of course together with a number of things far from technology. Something like 9/11 would have been hard with a less advanced plane: handling old planes is harder, and the collaboration of the pilots in a mass murder would have been required. Getting the collaboration of someone in his own death, under death threat, is not an easy task.


The solution was making the pilot expendable and that, if the plane is flying, requires another pilot willing to take his own life. What is the training cost for that pilot? In money terms, a figure around $120,000 could be roughly right for training a professional pilot, and that could not be hard to get for the people who organized and financed 9/11. A harder barrier is the time required for this training. Old planes were very complicated, and handling them required an amount of training acquired over several years. Would the terrorists be so patient? Could they trust the commitment of future suicide pilots over all those years?


Both questions could have led the organizers to reject the plan as unfeasible. However, technology played its role in a very easy way: under normal conditions, modern planes are easier to handle and, hence, they can be flown by less knowledgeable and less expert people. From this point, the situation appears in a different light: how long does it take for a rookie pilot to get the dexterity required to handle the plane at the level required by the objectives? Facts showed the answer: a technologically advanced passenger plane is easy to handle, at the level required, by a low-experience pilot after an adaptation through simulator training.


Let’s go back to the starting question: machines are stronger and more precise than people, but are they smarter too? We could start discussing the different definitions of intelligence but, anyway, there is something machines can do: once a way to solve a problem is defined, it can be programmed into a machine so the problem is solved automatically, again and again. As a consequence, complexity is displaced from people to the machine, allowing modern and complex machines to be handled by people less able than those who ran former machines with their more complex interfaces.


Of course, there is an economic issue here: an important investment in technological design can be recovered if the number of machines sharing the design is high enough. The investment in design is made only once, but it can drive important savings in the training of thousands of pilots. At this moment, the automation paradox appears: modern designs produce more complex machines with a good part of their tasks automated. Automation makes these machines easier to handle under normal conditions than the previous ones; hence, less trained people can operate machines that, internally, are very complex. Once complexity is hidden at the interface level, less trained people can drive more complex machines, and that is where the automation payback lies.


The scary question is this one: what happens in situations that are unforeseen and, hence, not included in the technological design? In high-risk activities, the manufacturer usually has two answers to this question: redundancy and manual handling. However, both possibilities require a previous condition: the problem has to be identified as such in a clear and visible way. If it is not, or if, even after being identified, it appears in a situation where there is no time available, the people trained to operate the machine can find that the machine “goes crazy” without any clue about the causes of the anomalous behavior.


Furthermore, if the operator receives full training, that is, not only on the interface but on the principles of the internal design, automation could no longer be justified, due to the increased training costs. We already know the alternative: the capacity to respond to an unforeseen event is seriously jeopardized. 9/11 is one of the most dramatic tests of how people with little training can perform tasks that would previously have required much more training. However, this is not an uncommon situation, and it is nearer to our daily life than we might suspect.


Every time we have a problem with the phone, an incident with the bank, or an administrative problem with the gas or electricity bill, we start a process by calling Customer Service. How many times, after bouncing from one department to another, does someone tell us to dial the very number we dialed at the beginning? Hidden under these experiences there is a technological development model based on complex machines and simple people. Is this a sustainable model? Technological development produces machines harder and harder for their operators to understand. In that way, we do better and better the things we already knew how to do, while the things that were already hard become harder and harder.


9/11 was possible, among other things, as a consequence of a technological evolution model. This model is showing itself to be exhausted and to require a course change. Rasmussen stated the requirement of this course change as a single condition: the operator has to be able to run cognitively the program that the machine is performing. This condition is not met and, if it became mandatory, it could erase economic viability, driving to a double challenge: a technological one, making technology understandable to users beyond the operating level under known conditions, and an organizational one, avoiding the loss of the economic advantages.


Summarizing: performing better at the things we already performed well and, to do that, performing worse at the things we were already performing poorly is not a valid option. People require answers always, not only when automation and IT allow it. Cost is the main driver of the situation: organizations do not answer unforeseen external events and, even worse, complexity itself can produce events from the inside that, of course, do not have an answer either.


A technological model aimed at making the “what” easier while hiding the “why” is limited by its own complexity, and it is constraining in terms of human development. For a strictly economic vision, that is good news: we can work with fewer, less qualified and cheaper people. For a vision more centered on human and organizational development, the results are not so clear. On one side, complexity puts up a barrier preventing the technological solution of problems produced by technology. On the other, that complexity and the opacity of IT make operators slaves without the opportunity to be freed by learning.



Why doesn’t a meteorologist present the TV weather forecast?

In the vast majority of cases it has been this way for quite a few years, as can be seen in this amusing video. From the times of Mariano Medina and his weather ship “K” in the Azores to today, when the search is for presenters and we even see cases like Minerva Piquero, who started by presenting the weather forecast and went on to a career as a TV host, much rain has fallen… and the profile of those who tell us it will rain has kept changing.

Actually, not only on television but in the very organizations devoted to weather forecasting, meteorologists have been losing ground to systems whose results they can neither reproduce nor question. The growth in observation sources and types, and the processing of all of them, has produced a system that is materially impossible to replicate, even if the meteorologist knew how the computer works and knew the algorithms it uses to prepare the forecast.

He simply cannot; and if the meteorologist is displaced in his own fiefdom, how could he not be displaced on television? Experience shows that weather forecasts, when made for short time spans, are right much more often than before (the story is well known of a weatherman, Eugenio Martín Rubio, who bet his moustache on a forecast and lost it), but those forecasts cannot be reproduced and, from time to time, the computers do silly things: when an exceptional, unforeseen situation arises, it may turn out that the algorithm works well almost always but not in that precise situation and, unfortunately, there is no real way to validate or invalidate a forecast.

The consequence is that meteorologists have been pushed out of television, but that is not the worst part: they have also been pushed, or are being pushed, out of the very places that specialize in weather forecasting.

Web 3.0: Very high expectations

Web 2.0 can be defined as the explosion of social networks on the Internet. Google itself has been displaced by Facebook in the United States as the most visited site; little more can be asked of the development of the networks. However, when people began to speak about Web 2.0, some anticipated the idea of the “semantic web”, an idea that was gradually parked and now re-emerges as the central feature of the coming Web 3.0.

A brief historical review can help us see where we come from, where Web 3.0 might be heading and whether we are really in a position for the “semantic web” concept to be a promise that can be kept, and under what conditions.

After a visit to the M.I.T. robotics museum, the causes of the failure of artificial intelligence were discussed right here, causes inseparably linked to the concept of semantics: Searle, with his “Chinese room” thought experiment, exposed the enormous difference between operating a system and understanding a system, and even today some of the old glories of Artificial Intelligence have not realized it:

The “Chinese room” was an answer to Alan Turing, showing how it is possible to deceive an external Chinese-speaking observer into believing that one knows Chinese when one is simply following instructions without knowing what the conversation is about.

There lies the great failure of artificial intelligence, or GOFAI as many know it. This does not mean that a genuine artificial intelligence going beyond number-crunchers, in the style of the machine that defeated Kasparov at chess despite not knowing what chess is, is impossible; but artificial intelligence as conceived first by Turing and later by Minsky and his companions is more than probably a dead end.

I will not go deep now into the reasons why I believe that model of artificial intelligence is a dead end, but a summary of those reasons passed through WinZip would yield a single word: meaning.

Can we add meaning to the web when we have not been able to add it to the way computers operate? The answer is yes, and there are two ways: one immediate, and another possible in a more or less distant future.

The immediate way to add meaning or, if you prefer, intelligence: use the intelligence of the users. Many readers of this article may be users of Alexa and, if they are not, they should be. Alexa introduced a change that Google is also adopting step by step: using the users to add to searches the judgment that a machine lacks, however powerful its algorithm.

Something apparently as simple as showing in the Alexa toolbar which pages are usually visited by the people who visit the one we are on right now requires an important social engineering effort, similar to the one made by Amazon (one of the initiators of Alexa) which, when you buy a book on its site, shows which other books are usually bought by the customers who bought that one and, based on previous purchases, recommends new items that are being bought by others with a similar purchase history.
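The Amazon-style mechanism can be sketched as simple co-occurrence counting. The baskets and item names below are invented toy data, and real systems add normalization, similarity weighting and much more; but the core signal is just “count what else appears alongside this item”.

```python
from collections import Counter

# Toy purchase histories: one set of items per customer (invented data).
baskets = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "D"},
    {"B", "C"},
]

def also_bought(item, baskets, top=2):
    """Items most often bought together with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})   # count co-purchased items
    return [i for i, _ in counts.most_common(top)]

print(also_bought("A", baskets))   # "B" ranks first: bought twice with "A"
```

Note that the computer never knows what “A” or “B” are; the meaning is entirely supplied by the users’ behavior, which is exactly the point of the paragraph above.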

A computer able to work with meanings? Not at all: an intelligent use of the information of the users, who are the ones providing the meaning. This gives us a hint of where Web 3.0 could be heading: applying Alexa- or Amazon-style models to social networks, profiling their users and, instead of LinkedIn’s “people you may know”, showing “people with interests similar to yours”. Well implemented, it can be an important step and, certainly, very far from the direction in which the celebrated GOFAI, acronym of Good Old-Fashioned Artificial Intelligence, was moving.

A semantic web, therefore, would lean heavily on Web 2.0, and the fact that computers remained blind to meaning would say little about its potential in the semantic terrain, since the meaning would be provided by the users.

Thinking long term, this might not be the only solution, but I doubt very much that any other would go in the direction of GOFAI and its particular “Waiting for Godot”, in which faster computers with more capacity will one day behave intelligently… despite the fact that physical devices much slower than current computers, such as humble neurons, beat them hands down in some terrains.

Possibly the author who saw most clearly the future evolution of artificial intelligence is Jeff Hawkins, and the best recommendation is to read his book “On Intelligence”. Web 3.0 should not be expected to reach the levels Hawkins establishes for an intelligent machine but, even so, and even taking blindness to meaning into account, it could live up to the name “semantic web”.

The immediate future will tell.