Category: Science and technology

“Cuarto Milenio,” or the audacity of ignorance

A few days ago I stumbled upon the TV show “Cuarto Milenio” and stayed to watch when they started talking about the Third Reich's secret weapons during the final phase of the Second World War. They had a supposed expert in the studio who mentioned exactly two facts about the technological marvels of Nazi Germany, and both contained glaring errors. Even so, they were followed by tones of amazement, cries of “what would have happened if the war had lasted longer?”, and so on.

First error: German scientists of the era allegedly built an aircraft invisible to radar, which could therefore be considered a forerunner of today's stealth aircraft. Well, the aircraft in question, the Go 229, was remarkable for its design (a flying wing), its performance and its light weight, but its low radar visibility was not achieved through sophisticated angular geometry or materials engineered to keep radar waves from bouncing off it. Quite simply, the aircraft was built mostly of wood. A technological work of art, no doubt, but gratuitously leaping seventy years forward and talking about technologies enabling radar invisibility seemed to stretch things a bit, don't you think?

Second error: A German installation in the province of Lugo. Two adjacent antennas (and note the detail: “adjacent”) 150 metres tall, supposedly used by German submarines to determine their position by triangulation. Let's see: triangulation is a very old technique and was already used in the Second World War to find out where clandestine transmitters were located. A device called a radiogoniometer (a direction finder) could determine the direction a transmission came from, but not the distance. With a second device at a different position, the point where the two lines cross is where the transmitter is.

The use of two antennas so that a submarine could fix its position by triangulation implies that those antennas are far apart, not adjacent, so that the submarine can draw two lines along the bearing of each transmission and know that it sits at their crossing point. If the two antennas stand together, there is no triangulation to speak of.
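The geometry can be sketched in a few lines of code. This is a minimal flat-plane illustration (real direction finding works on a curved Earth, with bearing errors); the station positions and bearings below are invented for the example:

```python
import math

def fix_position(p1, brg1, p2, brg2):
    """Intersect two bearing lines on a flat plane.

    p1, p2: (x, y) positions of the two transmitters, in km.
    brg1, brg2: bearings, in radians from the +x axis, of the signal lines.
    Returns the (x, y) crossing point, or None if the lines are parallel
    (i.e. no triangulation is possible).
    """
    d1 = (math.cos(brg1), math.sin(brg1))
    d2 = (math.cos(brg2), math.sin(brg2))
    # Solve p1 + t*d1 = p2 + s*d2 for t with the 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel lines: no crossing point to fix on
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Stations 100 km apart: the two bearing lines cross at roughly (50, 50).
print(fix_position((0, 0), math.radians(45), (100, 0), math.radians(135)))

# Stations at the same spot ("adjacent antennas"): parallel lines, no fix.
print(fix_position((0, 0), math.radians(45), (0, 0), math.radians(45)))  # None
```

With the stations far apart the determinant is large and the fix is stable; as they move together the lines become nearly parallel and the intersection point blows up, which is the argument made above in code form.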

Second problem: the Earth is round, and that means the range of a ground-based transmitter is short. The problem is partly solved when the receiver is an aircraft, especially one flying ten kilometres above the ground, which compensates for the Earth's curvature and gains range. But a submarine? Besides needing the antennas to be far apart, a submarine is particularly affected by the shape of the Earth; the range would be so short that the boat would have to be practically on the coast to pick up the signals.
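The curvature effect can be quantified with the standard geometric-horizon approximation d ≈ √(2Rh). This sketch ignores radio refraction and ground-wave or ionospheric propagation, which can extend real ranges considerably, but it shows the order of magnitude the argument relies on; the 5 m submarine antenna height is an assumption:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; plain geometric horizon

def horizon_km(height_m: float) -> float:
    """Distance to the geometric horizon from height_m above the surface."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m) / 1000

def max_line_of_sight_km(tx_height_m: float, rx_height_m: float) -> float:
    """Longest range at which transmitter and receiver still 'see' each other."""
    return horizon_km(tx_height_m) + horizon_km(rx_height_m)

# A 150 m mast and a submarine antenna ~5 m above the water:
print(round(max_line_of_sight_km(150, 5), 1))       # ≈ 51.7 km
# The same mast and an aircraft flying at 10 km of altitude:
print(round(max_line_of_sight_km(150, 10_000), 1))  # ≈ 400.7 km
```

Roughly 50 km of line-of-sight range for the submarine versus 400 km for the aircraft, which is why the receiver's altitude matters so much.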

As I said: ignorance is bold and, when it comes with the smugness displayed by some TV shows, it is tiresome as well.

Social networks: the new censorship?

We must be very careful to distinguish between public opinion and published opinion; failing to do so can have consequences that end up amounting to a new form of censorship:

Nobody, absolutely nobody, behaves the same way at home, among friends, among a group of colleagues, or when speaking to a public medium. Social networks, when misused, invite us to confuse those spheres and to end up always behaving as if a camera were in front of us and our words were about to be published immediately.

Just yesterday I got an unpleasant surprise in that regard: at a conference, excellent in its organization and subject matter, some attendees devoted themselves to broadcasting on Twitter things that were being said there. It was, of course, a supposedly professional forum among supposed colleagues, where the rules about what is said and how are not the same as when speaking in front of a camera. The Twitter mentions not only picked out the most scandalous bits, stripped of the qualifications made in the meeting, but even included private comments made outside the general session, over a cup of coffee.

If that is the use to be expected of networks like Twitter, nobody should be surprised if professional meetings grow poorer out of distrust about who will use what is said, and how. Nor should those who act this way be surprised if, once identified, the others build a wall of silence around them and try to avoid them.

If this is what we must expect from the future, earlier versions of censorship will end up looking like a joke compared with what is coming.

BIG DATA: WILL IT DELIVER AS PROMISED?

 

The Big Data concept is still relatively new, but the idea inside it is very old: if you have more and more data, you can eliminate ambiguity, and there is less need for hunches, since the data become self-explanatory.

That is a very old idea among I.T. people. However, reality has always insisted on delaying the moment when it can be accomplished. There are two problems on the way there:

  1. As data grow, context analysis becomes more necessary to decide which data are relevant and which can be safely ignored.
  2. On the other side of the table, there may be people trying to mislead automated decision-support systems. Actually, so-called SEO (Search Engine Optimization) could properly be renamed GAD (Google Algorithm Deception) to explain more clearly what it is intended to do.

Perhaps, for now, Big Data is less exposed to the second problem than anyone doing Web analytics. The Web has become the battlefield of a quiet fight:

On one side, those trying to get better positions for themselves and for positive news about themselves. These are also the ones who try to bury negative news by flooding it with positive items and repeating them, to make sure the bad ones stay hidden in search results.

On the other side, we have the Masters of Measurement. They try to build magic algorithms able to defeat the tricks of the first group, unless those players decide to pay for their services.

Big Data has an advantage over Web data: if a company has its own data sources, they can be more reliable and more expensive to deceive, and any attempt would be fairly easy to spot. Even so, this is not new: during the Second World War, knowing how successful a bombing raid had been was not a matter of reading German newspapers or listening to German radio stations.

The practice known as content analysis used indirect indicators, such as funeral and burial notices, which could be informative if and only if the enemy did not know those data were being used to extract information. In the same vein, before D-Day, some heavily defended sites full of rubber tanks tried to fool reconnaissance planes about where the invasion would start. That practice lasted a long time; it was used even in the Gulf War, with heat sources added to the rubber tanks to deceive infrared detectors, which would then register a picture similar to that of running engines.

Deceiving Big Data will be harder than deceiving Internet data but, once it is known who is using specific data and what they are doing with them, there will always be a way. An easy example: inflation indicators. A government can change the weights of the different variables, or change government-controlled prices, to obtain a favorable picture. Likewise, if Big Data is used to feed information to external parties, we would not even need an outsider trying to deceive the system; that could be done from the inside.
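A toy version of the reweighting trick: the same underlying price changes yield a very different headline figure once the weights are shifted toward the stable, government-controlled category. The categories, weights and numbers are invented for illustration:

```python
def price_index(weights, price_changes):
    """Weighted average of per-category price changes (in %).

    weights must sum to 1; price_changes are year-on-year percentage
    changes for each category.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, price_changes))

# Hypothetical basket: [food, energy, government-controlled services]
changes = [8.0, 12.0, 1.0]

# Original weighting vs. a revision that downplays the rising categories.
print(round(price_index([0.4, 0.3, 0.3], changes), 2))  # 7.1
print(round(price_index([0.2, 0.2, 0.6], changes), 2))  # 4.6
```

Nothing in the raw data changed; only the decision about how to aggregate it did, which is exactly the kind of inside manipulation described above.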

Anyway, the big problem is the first point: data without a context are worthless… and the context can move faster than any algorithm designed to give meaning to the data. Many surprising outcomes have happened in places where all the information was available, yet it was read correctly only after a major disaster. For instance, the emergence of new political parties could be seen coming but, when the major players decided to dismiss them, it came as a surprise to them, even though the data were available. The problem lay in deciding what deserved to be analyzed and how, not in the data themselves.

Other times, the problem comes from fast changes in the context that are not among the variables being analyzed. In the case of Spain, we can point to the 11-M Madrid bombings, and the way the different players handled them, and what that meant for the election held three days later. In another election, everybody had a clear idea about who was going to win a position that required an alliance. Good reasons advised an agreement, and the data showed that everybody was sure the agreement was coming… but it never arrived. One of the players was so sure the deal was done that it tried to impose conditions the other players found unacceptable. The consequence: the coveted position went to a third player. Very recently, two people, both considered rising stars, may have spoiled their chances in minor incidents.

In short, we can have a huge amount of data, but we cannot analyze all of them, only those considered relevant. In doing that, no algorithm and no amount of data is a good replacement for analysis performed by human experts. An algorithm or automatic system can be fooled, even by another automatic system designed to do exactly that; context analysis can drop important variables that were misjudged; and sudden changes in the context cannot be anticipated by any automatic system.

Big Data can be helpful if used rationally. Otherwise, it will become another fad or worse: it could become a standard, and nobody would dare to decide against a machine with a lot of data and an algorithm, even when they are wrong.

Phishing networks are getting bolder

Someone looks at my LinkedIn profile and, since I am an Open Networker, gets in touch and sends me a message: it talks about the supposed establishment of a light-aviation company in Spain and the need for someone to advise them, during the start-up, in the area of Human Resources.

Surprising: I ask for details, such as whether it is light or ultralight aviation, in which part of Spain they want to set up, and a few other basics. They answer none of it and ask for a Skype contact instead. No problem either; after giving them my username, I find someone requesting my contact, with a very professional photograph of a besuited, Anglo-Saxon-looking character.

When we hold the conversation, I get two surprises. The first is that the accent did not match the photograph: although the English was basically correct, it carried a strong African accent, and I could not pin it down much further.

The second surprise was that they no longer wanted Human Resources advice but an “accountant,” because the company was not going to be in Spain after all; it only had commercial activity there, and the “accountant” would be in charge of collecting payments. Starting to sound familiar, isn't it?

It seems social engineering knows no limits when it comes to hunting the unwary through phishing.

Fake Internet leaders

In recent weeks I have seen ads by companies claiming to be worldwide head-hunters, networks challenging LinkedIn, and other would-be Internet giants. A few moments ago, I was looking at the “LinkedIn made in Spain.”

The homepage design may be carefully crafted, and someone might believe the site is as important as it claims to be. However, the imposture is so easy to uncover that it is surprising people still try to fake their real importance on the Internet.

An example: to test whether the “LinkedIn made in Spain” is as important as it claims, I typed its Web address into Alexa. These are the results, compared with the real LinkedIn:

Quantitative Results:

[Screenshot: Gonway quantitative data]

[Screenshot: LinkedIn quantitative data]

According to the same source, LinkedIn ranks 9th in Spain.

Now, a qualitative view from Alexa:

Qualitative Results:

[Screenshot: Alexa data]

[Screenshot: Alexa data for LinkedIn]

These are the easy-to-obtain data. So, before playing Internet tycoon, please remember: fakes are easy to uncover.

Artificial Intelligence (GOFAI): The story of an old mistake

Is there something surprising in this picture? Anyone familiar with robot arms may be surprised by the almost human look of this hand, quite uncommon in robots used in manufacturing and surgery. A robot hand does not usually try to imitate the external shape of a human hand; it typically has two or three mobile pieces working as fingers. In this picture, however, we have a copy of a human hand. Now, the second surprise: the picture shows a design held in the M.I.T. Museum whose author is Marvin Minsky.

Anything wrong with that? Minsky reigned for years over A.I. work at M.I.T., and one of his principles, as told by one of his successors, Rodney Brooks, was this: when we try to advance in building intelligent machines, we are not interested at all in the human brain; it is the product of an evolution that probably left non-functional remains, so it is better to start from scratch than to study how a human brain works. This is not a literal sentence, but an interpretation that can be drawn, from their own writings, of how Minsky and his team thought about Artificial Intelligence.

We cannot deny that, thinking this way, they have a point and, at the same time, an important confusion. Certainly, the human brain may have parts that are by-products of evolution and add nothing to its proper working; those parts might even make a negative contribution. However, this does not apply only to the brain. The human eye has features that would get an engineer who designed a camera along its lines fired on the spot. Even so, we do not complain about how the eye works, even compared with almost all of the animal kingdom, though the comparison would be tricky: the human brain “fills in” many of the eye's deficiencies, such as the blind spot, and gives human sight features that do not come from the eye itself, such as a very advanced ability to detect movement. That ability owes far more to specialized neurons than to the lens (the eye). The paradox: the human hand comes from the same evolutionary process as the brain or the eye and, hence, it is surprising that someone who made a basic principle of dismissing human features designed a hand that imitates a human one far beyond any functional requirement.

The confusion in this old A.I. group is between shape and function. They are right that we may carry evolutionary remains, but there is a fact: neurons, far slower than electronic technology, get results that advanced technology finds hard to reach in fields like shape recognition, or in tasks apparently as trivial as catching a ball in flight. Sportspeople are not usually seen as paradigms of cerebral activity, but the movements required to catch a ball, let alone with high precision, are out of reach for advanced technology. The principle based on “evolutionary remains” is clear but, if the results of this supposedly defective organ are, in some fields, much better than anything we can reach through technology… is it not worth trying to learn how it works?

Waiting for more storage and more speed is a kind of “Waiting for Godot,” or an excuse, since present technology is already much faster than the humble neuron, and storage capacity has very high limits and is still growing. Do they need more speed to overtake something that is already much slower? Hard to explain.

The same M.I.T. Museum that holds the hand has a recording in which a researcher working on robot learning surprises by her humility: at the end, speaking about her work with a robot, she confesses that they are missing something beyond speed or storage capacity. Certainly, something is missing: they could be in the situation of the drunk looking for his key under a streetlight, not because he lost it there, but because that is the only place with enough light to search.

A.I. researchers did not stop to think in depth about the nature of intelligence or learning. They tried to produce them in their technological creations anyway, getting quite poor results, both in the early releases and in those with features like parallel processing, neural networks or interaction among learning agents: nothing remotely similar to intelligent behavior, nor learning deserving of the name.

Is it an impossible attempt? Human essentialists would say so. Rodney Brooks, one of Minsky's successors, holds the opposite position, based on a fact: human essentialists have always said “there is a red line here that cannot be crossed,” and technological progress has forced them, again and again, to move the supposed limit further out. Brooks is right about that, but this fact does not show that a limit, wherever it may lie, does not exist, as Brooks tries to conclude; that would be a hard jump to justify, especially after series of experiments that never managed to show intelligent behavior, and given that established scientific knowledge has had to be replaced before. When learned people were convinced the Earth was flat, navigation already existed, but its techniques had to change radically once the real shape of the Earth became common knowledge. Brooks could be in a similar situation: perhaps technological progress has no known limit, but that does not mean his particular line within it has a future. It could be one of the many dead ends science has walked into, time and again.

As a personal opinion, I do not rule out the feasibility of intelligent machines. I do rule out that they can be built without a clear idea of the real nature of intelligence and of what the learning mechanisms are. The not-very-scientific attitudes the A.I. group showed toward “dissidents” led them to dismiss people like Terry Winograd, once his findings made him uncomfortable, while others, like Jeff Hawkins, were rejected from the beginning because of his interest in how the human brain works. These people, along with others like Kurzweil, could open a much more productive way to study Artificial Intelligence than the old one.

The A.I. past exhibits too much arrogance and, as in many academic institutions, a working style based on loyalty to specific persons and to a model that simply does not work. The hand in the picture shows something more: contradiction with their own thinking. I do not know whether an intelligent machine will ever be real but, probably, it will not come from a factory working on the principle of dismissing everything it ignores. Finding the right way requires being more modest and being able to doubt the starting paradigms.

Lessons from 9/11 about Technology

 

Long ago, machines became stronger and more precise than people. That is not new but… are they smarter too? Let us set aside developments close to science fiction, like artificial intelligence based on quantum computing or interaction among simple agents. Instead, we will deal with present technology, its role in an event like 9/11, and the conclusions we can draw from it.

 

Let's start with a piece of information: a first-generation B747 required three or four people in a cockpit with more than 900 elements. A last-generation B747 requires only two pilots, and the number of elements inside the cockpit has fallen by two thirds. Of course, this was made possible by the introduction of I.T. and, as a by-product, by the automation of tasks that previously had to be performed manually. The new plane appears simpler than the old one. However, the number of tasks the plane now performs on its own makes it a much more complex machine.

 

The planes used on 9/11 could be considered state-of-the-art at the time, and that technological level made the attack possible, together, of course, with many things far removed from technology. Something like 9/11 would have been hard with a less advanced plane: handling old planes is harder, and the cooperation of the pilots in a mass murder would have been required. Getting someone to collaborate in his own death under a death threat is no easy task.

 

The solution was making the pilot expendable and that, if the plane is to keep flying, requires another pilot willing to take his own life. What is the training cost of that pilot? In money terms, a figure of $120,000 could be roughly right for training a professional pilot, and that would not have been hard to obtain for the people who organized and financed 9/11. A harder barrier to pass is the time the training takes: old planes were very complicated, and handling them demanded skills acquired over several years. Would terrorists be so patient? Could they trust the commitment of future suicide pilots across the years?

 

Both questions could have led the organizers to reject the plan as unfeasible. However, technology played its role in a very simple way: under normal conditions, modern planes are easier to handle and, hence, can be flown by less knowledgeable and less experienced people. Seen from this angle, the situation appears in a different light: how long does it take a rookie pilot to acquire the skill needed to handle the plane at the level his objectives require? The facts gave the answer: a technologically advanced passenger plane is easy to handle, at the level required, by a low-experience pilot after an adaptation through simulator training.

 

Let's go back to the starting question: machines are stronger and more precise than people, but are they smarter too? We could start discussing the different definitions of intelligence but, in any case, there is something machines can do: once a way to solve a problem is defined, it can be programmed into a machine so the problem is solved automatically, again and again. The consequence is a displacement of complexity from people to the machine, allowing modern, complex machines to be handled by less capable people than the older machines with their more complex interfaces demanded.

 

Of course, there is an economic issue here: a heavy investment in technological design can be recovered if enough machines share that design. The investment in design is made only once, but it can drive major savings in the training of thousands of pilots. Here the automation paradox appears: modern designs produce more complex machines with a good part of their tasks automated; automation makes these machines easier to handle, under normal conditions, than their predecessors; hence, less trained people can operate machines that are internally very complex. Once complexity is hidden at the interface level, less trained people can drive more complex machines, and that is where the payback of automation lies.

 

The scary question is this: what happens in situations that were unforeseen and, hence, not included in the technological design? In high-risk activities, the manufacturer usually has two answers to this question: redundancy and manual handling. Both possibilities, however, require a prior condition: the problem has to be identified as such, clearly and visibly. If it is not, or if it appears when there is no time available, the people trained to operate the machine can find that it “goes crazy” without any clue about the causes of the anomalous behavior.

 

Furthermore, if the operator received full training, that is, training covering not only the interface but the principles of the internal design, automation could no longer be justified, because of the increased training costs. We already know the alternative: the capacity to respond to an unforeseen event is seriously jeopardized. 9/11 is one of the most dramatic demonstrations of how people with little training can perform tasks that used to require much more. But this situation is not uncommon, and it is closer to our daily life than we might suspect.

 

Every time we have a problem with the phone, an incident with the bank, or an administrative problem with the gas or electricity bill, we start a process by calling Customer Service. How many times, after bouncing from one department to another, does someone tell us to dial the very number we dialed at the beginning? Hidden under these experiences is a model of technological development based on complex machines and simple people. Is it sustainable? Technological development produces machines harder and harder for their operators to understand. In this way, we do better and better the things we already knew how to do, while the things that were already hard become harder still.

 

9/11 was possible, among other things, as a consequence of a model of technological evolution. That model is showing itself to be exhausted and in need of a course change. Rasmussen stated the requirement of that change as a single condition: the operator has to be able to run cognitively the program the machine is performing. This condition is not met and, were it made mandatory, it could erase the economic viability, leading to a double challenge: a technological one, making technology understandable to users beyond the level of operating it under known conditions, and an organizational one, avoiding the loss of the economic advantages.

 

Summing up: performing better at things we already did well, at the price of performing worse at things we already did poorly, is not a valid option. People require answers always, not only when automation and I.T. allow it. Cost is the main driver of the situation. Organizations fail to answer unforeseen external events and, even worse, complexity itself can produce events from the inside that, of course, have no answer either.

 

A technological model aimed at making the “what” easier while hiding the “why” is limited by its own complexity and is constraining in terms of human development. For a strictly economic vision, that is good news: we can work with fewer, less qualified and cheaper people. For a vision more centered on human and organizational development, the results are not so clear. On one side, complexity puts up a barrier preventing the technological solution of problems produced by technology. On the other, that complexity and the opacity of I.T. make operators slaves, without the opportunity to be freed by learning.