Artificial Intelligence (GOFAI): Story of an old mistake

Is there anything surprising in this picture? Anyone familiar with robot arms may be surprised by the almost human appearance of this hand, quite uncommon in the robots used in manufacturing and surgery. A robot hand does not usually try to imitate the external shape of a human hand; it typically has two or three mobile pieces working as fingers. Yet the picture shows a copy of a human hand. Now, the second surprise: the picture comes from a design exhibited in the M.I.T. museum, and its author is Marvin Minsky.

Is anything wrong with that? Minsky reigned for years over A.I. activity at M.I.T., and one of his principles, as recounted by one of his successors, Rodney Brooks, was this: when we try to advance in building intelligent machines, we are not interested at all in the human brain. The brain is the product of an evolution that probably left non-functional remains; therefore, it is better to start from scratch than to study how a human brain works. This is not a literal quotation but an interpretation, drawn from their own writings, of how Minsky and his team thought about Artificial Intelligence.

We cannot deny that, thinking this way, they have a point and, at the same time, an important confusion. Certainly, the human brain may have parts that are mere by-products of evolution, parts that add nothing to its proper performance or even make a negative contribution. However, this does not apply only to the brain. The human eye has features that would get an engineer who designed a camera with the eye as a guide immediately fired. Even so, we do not complain about how the eye works, even when comparing it with almost the whole Animal Kingdom, but the comparison would be tricky: the human brain “fills in” many of the eye's deficiencies, such as the blind spot, and gives human sight features that are not related to the eye itself, for instance a very advanced ability to detect movement. That ability depends on specialized neurons far more than on the lens (the eye). The paradox: the human hand comes from the same evolutionary process as the brain or the eye and, hence, it is surprising that someone who made a basic principle of dismissing human features should design a hand that imitates a human one far beyond any functional requirement.

The confusion in this old A.I. group is between shape and function. They are right that we may carry evolutionary remains, but there is a fact: neurons, far slower than electronic technology, achieve results that advanced technology finds hard to reach in fields such as shape recognition, or in tasks apparently as trivial as catching a ball in flight. Athletes are not usually seen as a paradigm of cerebral activity, but the movements required to catch a ball, let alone with high precision, remain out of reach for advanced technology. The principle based on “evolutionary remains” is clear but, if the results coming from this supposedly defective organ are, in some fields, much better than anything we can reach through technology… is it not worth trying to learn how it works?

Waiting for more storage and more speed is a kind of “Waiting for Godot”, or an excuse, since present technology already delivers results much faster than the humble neurons, and storage capacity has very high limits and is still growing. Do they need more speed to arrive before a competitor that is already much slower? That is hard to explain.

The same M.I.T. museum that houses the hand has a recording in which a researcher working on robot learning surprises us with her humility: at the end of the recording, speaking about her work with a robot, she confesses that they are missing something beyond speed or storage capacity. Certainly, something is missing: they could be in the same situation as the drunk looking for his key under a streetlight, not because he lost it there but because that is the only place with enough light to search.

A.I. researchers did not stop to think in depth about the nature of intelligence or learning. They tried to produce both in their technological creations, with quite poor results, in their early attempts as well as in later ones featuring parallel processing, neural networks or interaction among learning agents. Nothing remotely similar to intelligent behavior, nor to learning deserving that name.

Is this an impossible attempt? Human essentialists would say so. Rodney Brooks, one of Minsky's successors, holds the opposite position based on a fact: human essentialists always said, “There is a red line here impossible to cross”, and technological progress again and again forced them to move the supposed limit further away. Brooks is right about that, but this fact does not show that a limit, wherever it might lie, does not exist, as Brooks tries to conclude. That would be a jump hard to justify, especially after series of experiments that never managed to show intelligent behavior, and after so much scientific knowledge of the past had to be replaced by new knowledge. When scientists were convinced that the Earth was flat, navigation already existed, but its techniques had to change radically once the real shape of the Earth became common knowledge. Brooks could be in a similar situation: perhaps technological progress has no known limit, but that does not mean his particular line within that progress has a future. It could be one of the many cul-de-sacs where science has ended up again and again.

As a personal opinion, I do not rule out the feasibility of intelligent machines. However, I do rule out that they can be built without a clear idea of the real nature of intelligence and of what the learning mechanisms are. The not-very-scientific attitudes that the A.I. group showed toward “dissidents” led them to dismiss people like Terry Winograd, once his findings made him uncomfortable, while others, like Jeff Hawkins, were rejected from the beginning because of their interest in how the human brain works. These people, along with others like Kurzweil, could open a much more productive way to study Artificial Intelligence than the old one.

The A.I. past exhibits too much arrogance and, as happens in many academic institutions, a working style based on loyalty to specific people and to a model that simply does not work. The hand in the picture shows something more: a contradiction with their own thinking. I do not know whether an intelligent machine will ever be real but, probably, it will not come from a factory working on the principle of dismissing anything its builders do not know. Finding the right way requires being more modest and being able to doubt the starting paradigms.
