I confess it: whenever "teamwork" comes up, I reach for my wallet because, far too often, instead of trying to get the best out of every member, the real goal is "approval by acclamation", as used to happen in Spain in Franco's Cortes and, to a good extent, still happens now.
A simple example already discussed in this blog: the Nokia case. Any interested observer, without access to privileged information or to costly consultancies producing laborious studies with a great statistical apparatus, could see that the alliance with Microsoft, without leaving even a door open to Android as other brands did, was a colossal mistake… anyone, of course, except the Nokia executives who made the decision. Were they stupid or incompetent? I honestly do not think so; however, it is very likely that the decision-making dynamics of their organization made them behave as if they were.
It is not the only case of a mistake easily recognizable at the very moment it is made (afterwards it is much easier, and we all recognize it), and it should be an invitation to review the decision mechanisms of many organizations: dissent is frowned upon and, given that, many "prudent" executives prefer to settle into their deck chair on the Titanic rather than run the risk of losing it by stating their disagreement clearly.
If the expected behavior in an Executive Committee is that of a herd, and this behavior is reproduced at the lower levels of the organization, we had better not call that teamwork, much less sing its praises. Something is being done very badly. "We were on the edge of the abyss, but we have taken a decisive step forward" seems to be the norm in many organizations… little reflection and, if it has to be collective, none at all; plenty of decisiveness, though, even if it lands us all at the bottom of the abyss.
Is there something surprising in this picture? Anyone familiar with robot arms may be surprised by the almost human appearance of this hand, quite uncommon in robots used in manufacturing and surgery. A robot hand does not usually try to imitate the external shape of a human hand; it typically has two or three mobile pieces working as fingers. However, what we see in the picture is a copy of a human hand. Now, the second surprise: the picture shows a design held in the M.I.T. Museum, and its author is Marvin Minsky.
Is anything wrong with that? Minsky reigned for years over A.I. work at M.I.T., and one of his principles, as told by one of his successors, Rodney Brooks, was this: when we try to advance in building intelligent machines, we are not interested at all in the human brain. The brain is the product of an evolution that probably left non-functional remains; therefore, it is better to start from scratch than to study how a human brain works. This is not a literal quotation but an interpretation, drawn from their own writings, of how Minsky and his team may have thought about Artificial Intelligence.
We cannot deny that, thinking this way, they have a point and, at the same time, an important confusion. Certainly, the human brain may contain parts that are mere by-products of evolution, parts that add nothing to its proper performance and may even make a negative contribution. However, this does not apply only to the brain. The human eye has features that would get an engineer who designed a camera using it as a guide immediately fired. Even so, we do not complain about how the eye works, even when we compare it with almost all of the Animal Kingdom, although the comparison would be tricky: the human brain "fills in" many of the eye's deficiencies, such as the blind spot, and gives human sight features that do not come from the eye itself, for instance a very advanced ability to detect movement. That ability owes much more to specialized neurons than to the lens. Hence the paradox: the human hand comes from the same evolutionary process as the brain or the eye, and so it is surprising that someone who made a basic principle of dismissing human features designed a hand that imitates a human one far beyond any functional requirement.
The confusion in this old A.I. group is between shape and function. They are right that we may carry evolutionary remains, but there is a fact: neurons, far slower than electronic technology, get results that are hard to reach for advanced technology in fields such as shape recognition or others apparently as trivial as catching a ball in flight. Sportspeople are not usually seen as a paradigm of cerebral activity, but the movements required to catch a ball, let alone in a highly precise way, are out of reach for advanced technology. The principle based on "evolutionary remains" is clear but, if the results coming from this supposedly defective organ are, in some fields, much better than the ones we can reach through technology… is it not worth trying to learn how it works?
Waiting for more storage room and more speed is a kind of "Waiting for Godot", or an excuse: present technology is already able to deliver products much faster than the humble neuron, and storage capacity has very high limits and is still growing. Do they need more speed to arrive before someone who is already much slower? Hard to explain.
The same M.I.T. Museum that houses the hand has a recording in which a researcher working on robot learning surprises us with her humility: at the end of the recording, speaking about her work with a robot, she confesses that they are missing something beyond speed or storage capacity. Certainly, something is missing: they could be in the same situation as the drunk looking for his key under a streetlight, not because he lost it there but because it is the only place with enough light to search.
A.I. researchers did not stop to think in depth about the nature of intelligence or learning. They nevertheless tried to produce both in their technological creations, with quite poor results, both in the early releases and in the ones featuring parallel processing, neural networks or interaction among learning agents: nothing remotely similar to intelligent behavior, nor any learning deserving that name.
Is it an impossible attempt? Human essentialists would say so. Rodney Brooks, one of Minsky's successors, defends the opposite position based on a fact: human essentialists have always said "there is a red line here, impossible to cross", and technological progress has again and again forced them to push the supposed limit further. Brooks is right but… this fact does not show that a limit, wherever it may be, does not exist, as Brooks tries to conclude; that would be a jump hard to justify, especially after series of experiments that never managed to show intelligent behavior, and after so much scientific knowledge of the past had to be replaced by new knowledge. When scientists were still convinced that the Earth was flat, navigation already existed, but its techniques had to change radically once the real shape of the Earth became common knowledge. Brooks could be in a similar situation: perhaps technological progress has no known limit, but that does not mean that his particular line within that progress has a future. It could be one of the many culs-de-sac where science has ended up again and again.
As a personal opinion, I do not rule out the feasibility of intelligent machines. However, I do rule out that they can be built without a clear idea of the real nature of intelligence and of what the learning mechanisms are. The not-very-scientific attitude that the A.I. group showed towards "dissidents" led to the dismissal of people like Terry Winograd, once his findings made them uncomfortable, while others, like Jeff Hawkins, were rejected from the beginning because of his interest in how the human brain works. These people, together with others like Kurzweil, could start a much more productive way to study Artificial Intelligence than the old one.
The A.I. past exhibits too much arrogance and, as happens in many academic institutions, a working style based on loyalty to specific persons and to a model that simply does not work. The hand in the picture shows something more: contradictions with their own thinking. I do not know whether an intelligent machine will ever be real but, probably, it will not come from a factory working on the principle of dismissing anything it ignores. Finding the right way requires being more modest and being able to doubt the starting paradigms.
It is hard to find issues more discussed and less solved than how to quantify Human Resources. We have looked for tools to evaluate jobs, to evaluate performance and to measure the percentage at which objectives were met. Some people have tried to quantify, in percentage terms, how well an individual and a job fit, and many have even tried to obtain the ROI of training. Someone recovered the Q index, originally aimed at quantifying speculative investments, to convert it into the main variable for Intellectual Capital measurement, and so on.
Trying to quantify everything is as absurd as denying a priori any possibility of quantification. However, some points deserve to be clarified:
The new economy is the new motto, but measurement and control instruments and, above all, business mentality are still defined by engineers and economists; hence, organizations are conceived as machines that have to be designed, adjusted, repaired and measured. However, the rigor demanded in meeting objectives is commonly absent from the definition of the indicators themselves. That has brought about something that will be called here Mathematical Fictions.
A basic design principle should be that no indicator can be more precise than the thing it tries to indicate, whatever the number of decimal digits we use. When someone insists on keeping a wrong indicator, consequences appear, and they are never good:
- Management behavior is driven by an indicator that can be misguided due to the elusive nature of the variable supposedly indicated. It is worth remembering what happened when some governments decided that the main priority in Social Security was reducing the number of days on waiting lists instead of the fuzzy "improving the Public Health System". A common misbehavior would be to give priority to the less time-consuming interventions in order to shorten the list of waiting citizens, thereby delaying the most important ones.
- Measurement systems are developed whose costs are not covered by the improvement supposedly obtained from them. In other words, control becomes an objective instead of a vehicle, since the advantages of control do not cover the costs of building and maintaining it. For instance, some companies trying to control the abuse of photocopies require a form for every single photocopy, making the control far more expensive than the controlled resource.
- Mathematical fictions appear when variables are weighted in ways that, at best, are only useful in one situation and lose their value when the situation changes. The attempts around Intellectual Capital are a good example, but we commit the same error if we try to obtain percentages of person-job fit and use them to forecast success in a recruiting process.
- Above all, numbers are a language that is valid for some terrains but not for others. Written information is commonly rejected with "smart talk trap" arguments, but the fact is that we perceive fake arguments more easily in written or verbal statements than when they come wrapped in numbers. People tend to be far less demanding about the design of indicators than about written reports.
- Even though we always try to use numbers as "objective" indicators, the ability of many people to handle those numbers is surprisingly low. We do not even need to mention the journalist who wrote that the Galapagos Islands are hundreds of thousands of kilometers off the coast of Ecuador, or the common confusion between the American billion and the European billion. Two easy examples show how numbers can lose any objectivity through bad use:
After the Concorde accident in Paris in 2000, media reported that it had been the safest plane in the world. If we consider that, at the time, only fourteen planes of the type were flying, against the thousands of units of less exclusive types, it is not surprising that an accident had never happened before; nobody can conclude from that record that it was the safest plane. The sample was far too small.
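The sample-size point can be made concrete with a rough sketch. All rates below are invented for illustration: even for a type with a mediocre accident rate, a tiny fleet has a high chance of showing zero accidents, simply because exposure is small.

```python
import math

def p_zero_accidents(rate_per_flight: float, flights: int) -> float:
    """Probability of observing no accidents over `flights` flights,
    assuming independent accidents at `rate_per_flight` (Poisson model)."""
    return math.exp(-rate_per_flight * flights)

# Hypothetical figures: 14 aircraft flying ~5,000 flights each,
# versus a mainstream fleet of ~5,000 aircraft with ~20,000 flights each.
small_fleet = p_zero_accidents(1e-6, 14 * 5_000)
large_fleet = p_zero_accidents(1e-6, 5_000 * 20_000)

# The small fleet shows a clean record ~93% of the time at the SAME
# accident rate; the large fleet essentially never does.
print(small_fleet, large_fleet)
```

In other words, a spotless record over a small sample is weak evidence of safety; it is mostly evidence of low exposure.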
Another example: in a public statement, the airline Iberia said that travelling by plane is 22 times safer than doing it by car. Does that mean that a minute spent in a plane is 22 times safer than a minute spent in a car? Far from it. The statement can be true or false depending on another variable: exposure time. A Madrid-Barcelona flight takes about seven times less than the same trip by car. If, instead, we compare one hour inside a plane with one hour inside a car, the result could be very far from those 22 times.
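The exposure-time argument can be sketched in a few lines. The numbers are illustrative only: the plane's per-trip risk is normalized to 1, the car trip is taken as 22 times riskier per trip, and the drive as roughly 7 times longer, as in the Madrid-Barcelona comparison above.

```python
def risk_per_hour(risk_per_trip: float, trip_hours: float) -> float:
    """Convert a per-trip risk into a per-hour (exposure-based) risk."""
    return risk_per_trip / trip_hours

plane_hourly = risk_per_hour(1.0, 1.0)    # plane: risk 1 over ~1 hour
car_hourly = risk_per_hour(22.0, 7.0)     # car: 22x risk over ~7 hours

# Per trip the car is 22 times riskier; per hour of exposure, only ~3x.
print(round(car_hourly / plane_hourly, 2))  # → 3.14
```

The headline ratio shrinks by a factor of seven the moment the denominator changes from trips to hours, which is exactly the ambiguity the Iberia statement glosses over.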
The only objective of these examples is to show that numbers can mislead too, and that we are less prepared to detect the trick than when dealing with written language.
These are old problems but, we have to insist, that does not mean they are solved; perhaps we should adopt Savater's idea that we are not dealing with problems but with questions. Hence, we cannot expect a "solution", only contingent answers that will never close the question forever.
If we work with this in mind, measurement acquires a new meaning. If we accept contingent measurements, build them seriously and are willing to change them when they become useless, we could solve some, though not all, of the problems linked to measurement. Problems will arise, however, when measurement is used to inform third parties, since that can limit the possibility of change.
An example from the Human Resources field can clarify this idea:
Some years ago, job evaluation systems went through a real crisis. Competency models came out of that crisis, but they too have measurement problems. However, knowing why job evaluation systems started to be displaced is very revealing:
Since there are no big differences among the most popular job evaluation systems, we will take one based on Know-How, Problem Solving and Accountability. Using a single table to compare different jobs on these three factors is brilliant; however, it has some problems that are hard to avoid:
- Reducing all the ratings coming from the three factors to a single currency, the point, implies a "mathematical artifact" to weight the ratings and, hence, to prime some factors over others.
- If, after that, there are gross deviations from market salary levels, exceptions are required, and these go directly against one of the main values that justified the system: fairness.
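The "mathematical artifact" in the first point can be shown with a toy example. Job names, ratings and weights below are all hypothetical; the point is only that the same factor ratings can produce opposite rankings depending on the weights chosen.

```python
# Factor ratings on 1-10 scales: (Know-How, Problem Solving, Accountability)
ratings = {
    "Job A": (8, 4, 6),
    "Job B": (5, 9, 5),
}

def points(job: str, weights: tuple) -> float:
    """Collapse the three factor ratings into a single point score."""
    return sum(r * w for r, w in zip(ratings[job], weights))

w_knowhow = (0.5, 0.2, 0.3)   # weighting that primes Know-How
w_solving = (0.2, 0.5, 0.3)   # weighting that primes Problem Solving

# Same ratings, opposite rankings: the point total is an artifact
# of the weighting, not an objective property of the jobs.
print(points("Job A", w_knowhow) > points("Job B", w_knowhow))  # True
print(points("Job A", w_solving) > points("Job B", w_solving))  # False
```

This is why the raw factor ratings (the D vs. E distinction mentioned below) carry more usable information than the point totals built on top of them.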
Despite these problems, job evaluation systems left an interesting and underused legacy: before converting ratings into points, that is, before the mathematical fictions start, every single factor has to be rated. That rating is high-quality information, useful, for instance, to plan professional paths. A 13-point difference does not explain anything, but a difference between D and E, if both are clearly defined, is a good index for a Human Resources manager.
If that is so… why does this potential of the system go unused? There is an easy answer: because job evaluation systems have been used as a salary negotiation tool, and that brings another problem: the quantifiers are badly designed and, furthermore, they have been used for goals different from the original one.
The use of mixed committees for salary bargaining, among other factors, has nullified the analytical potential of job evaluation systems. Once a job is rated in a certain way, it is hard to know whether the rating is real or comes from the vagaries of the bargaining process.
While job evaluation remained an internal tool of the Human Resources area, it worked fine. If a system started to work poorly, it could be ignored or changed. However, once the system becomes a key piece in the bargaining, it loses these features and, hence, its usefulness as a Human Resources tool disappears.
Something similar happens with the Balanced Scorecard and Intellectual Capital. If we analyze both models, we find only one different variable and a different emphasis: we could say, without bending the concepts too much, that the Kaplan and Norton model equals Intellectual Capital plus the financial side. There is, however, another, more relevant difference:
The Balanced Scorecard is conceived as a tool for internal control, which means that changes are easy, while Intellectual Capital was created to give information to third parties. Hence, its measurements have to be more permanent, less flexible and… less useful.
Actually, there are many examples where the double use of a tool nullifies at least one of its uses. The very idea of "double accounting" implies criticism. Pretending that a system designed to give information to third parties can be, at the same time and with the same criteria, an effective control tool is quite close to science fiction.
Competency systems have their own share of mathematical fiction too. It is hard to create a system able to capture all the competencies while avoiding overlaps among them. If that is already hard… how can we weight the variables that define job-occupant fit? How many times are we evaluating the same thing under different names? When can we weight a competency? Is its value absolute, or should it depend on contingencies? Summarizing… is it not a mathematical nonsense aimed at a look of objectivity and, just in case, at justifying a mistake?
This is not a declaration against measurement and, even less, against mathematics, but against their simplistic use. "Make it as simple as possible, but not simpler" is a good idea that is too often forgotten.
Many of the figures we use, not only in Human Resources, are real fictions adorned with a supposed objectivity coming from the use of a numeric language whose drawbacks are quite serious. Numeric language can be useful to write a symphony, but nobody would use it to compose poetry (unless someone resorts to the cheap trick of converting letters into numbers); nevertheless, there is a general opinion of numbers as a universal language or, as the Intellectual Capital pioneers said, "numbers are the commonly accepted currency in the business language".
We need to show not only snapshots but dynamics, and how to explain them. That requires written explanations which, certainly, can mislead but, at least, we are better equipped to detect it than when it comes wrapped in numbers.
A long time ago, machines became stronger and more precise than people. That is not new but… are they smarter too? We can forget developments close to science fiction, like artificial intelligence based on quantum computing or on interaction among simple agents. Instead, we are going to deal with present technology, its role in an event like 9/11 and the conclusions we can draw from it.
Let's start with a piece of information: a first-generation B747 required three or four people in a cockpit with more than 900 elements. A last-generation B747 requires only two pilots, and the number of elements inside the cockpit has decreased by two thirds. Of course, this has been possible through the introduction of I.T. and, as a by-product, through the automation of tasks that previously had to be performed manually. The new plane appears easier than the old one; however, the number of tasks the plane now performs on its own makes it a much more complex machine.
The planes used on 9/11 could be considered state-of-the-art at that time, and this technological level made the attack possible, together, of course, with a number of things far from technology. Something like 9/11 would have been hard with a less advanced plane: handling old planes is harder, and the collaboration of the pilots in a mass murder would have been required. Getting someone to collaborate in his own death under a death threat is not an easy task.
The solution was making the pilot expendable and that, if the plane is flying, requires another pilot willing to take his own life. What is the training cost of that pilot? In money terms, a figure of $120,000 could be roughly right for training a professional pilot, and that would not have been hard to get for the people who organized and financed 9/11. A harder barrier is the time required for the training: old planes were very complicated, and handling them demanded training acquired over several years. Would terrorists be so patient? Could they trust the commitment of future suicide pilots over all those years?
Both questions could have led the organizers to reject the plan as unfeasible. However, technology played its role in a very simple way: under normal conditions, modern planes are easier to handle and, hence, they can be flown by less knowledgeable and less expert people. From this point, the situation appears under a different light: how long does it take a rookie pilot to acquire the dexterity required to handle the plane at the level demanded by the objectives? Facts gave the answer: a technologically advanced passenger plane is easy to handle, at the level required, by a low-experience pilot after an adaptation through simulator training.
Let's go back to the starting question: machines are stronger and more precise than people, but are they smarter too? We could start discussing the different definitions of intelligence but, anyway, there is something machines can do: once a way to solve a problem is defined, that way can be programmed into a machine so the problem is automatically solved again and again. As a consequence, complexity is displaced from people to the machine, allowing modern and complex machines to be handled by less able people than the former machines with their more complex interfaces.
Of course, there is an economic issue here: an important investment in technological design can be recovered if the number of machines sharing the design is high enough. The investment in design is made only once, but it can drive important savings in the training of thousands of pilots. At this point, the automation paradox appears: modern designs produce more complex machines with a good part of their tasks automated. Automation makes these machines easier to handle under normal conditions than the previous ones; hence, less trained people can operate machines that, internally, are very complex. Once complexity is hidden at the interface level, less trained people can drive more complex machines, and that is where the automation payback lies.
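The economics sketched above amount to simple arithmetic. Every figure below is hypothetical (the text only mentions a $120,000 training cost as a rough reference), but the back-of-the-envelope structure is the point: a one-off design investment against per-pilot savings multiplied across the fleet's life.

```python
design_cost = 500_000_000           # hypothetical one-off extra cost of automation
training_saving_per_pilot = 60_000  # e.g. $120k classic training vs $60k simplified
pilots_trained = 20_000             # hypothetical pilots over the type's lifetime

total_saving = training_saving_per_pilot * pilots_trained  # spread over the fleet
net_payback = total_saving - design_cost

# Positive net payback: the design investment is recovered many times
# over, which is exactly why the model is so commercially attractive.
print(net_payback)  # → 700000000
```

The same arithmetic also explains why the hidden cost, degraded response to unforeseen events, is so easy to leave out: it does not appear in any of these lines.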
The scary question is this one: what happens in situations that are unforeseen and, hence, not included in the technological design? In high-risk activities, the manufacturer usually has two answers to this question: redundancy and manual handling. However, both possibilities require a previous condition: the problem has to be identified as such in a clear and visible way. If it is not, or if, even after being identified, it appears in a situation where no time is available, the people trained to operate the machine can find that the machine "goes crazy" without any clue about the causes of the anomalous behavior.
Furthermore, if the operator received full training, that is, training not only on the interface but on the principles of the internal design, automation could not be justified because of the increased training costs. We already know the alternative: the capacity to answer an unforeseen event is seriously jeopardized. 9/11 is one of the most dramatic tests of how people with little training can perform tasks that previously would have required much more. However, this situation is not uncommon, and it is nearer to our daily life than we might suspect.
Every time we have a problem with the phone, an incident with the bank or an administrative problem in the gas or electricity bill, we can start a process by calling Customer Service. How many times, after bouncing from one department to another, does someone tell us to dial the very number we dialed at the beginning? Hidden under these experiences there is a technological development model based on complex machines and simple people. Is this a sustainable model? Technological development produces machines harder and harder for their operators to understand. In this way, we do better and better the things we already knew how to do, while things that were already hard become harder and harder.
9/11 was possible, among other things, as a consequence of a technological evolution model. This model is showing itself to be exhausted and in need of a course change. Rasmussen stated the requirement of this course change as a single condition: the operator has to be able to run cognitively the program that the machine is performing. This condition is not met today and, were it made mandatory, it could erase the economic viability, driving us to a double challenge: a technological one, making technology understandable to users beyond the operating level under known conditions, and an organizational one, avoiding the loss of the economic advantages.
Summarizing: performing better at things we already performed well, at the price of performing worse at things we were already performing poorly, is not a valid option. People require answers always, not only when automation and I.T. allow it. Cost is the main driver of the situation. Organizations cannot answer unforeseen external events and, even worse, complexity itself can produce events from inside that, of course, have no answer either.
A technological model aimed at making the "what" easier while hiding the "why" is limited by its own complexity, and it is constraining in terms of human development. For a strictly economic vision, that is good news: we can work with fewer, less qualified and cheaper people. For a vision more centered on human and organizational development, the results are not so clear. On one side, complexity puts up a barrier preventing the technological solution of problems produced by technology. On the other side, that complexity and the opacity of I.T. make the operators slaves without the opportunity to be freed by learning.
Once upon a time, all planes had four people in the flight deck: captain, first officer, flight engineer and radio operator. We could go further back in history to find the navigator, but this is enough to make the point. Once pilots became proficient enough in English, the radio operator became redundant. Actually, the operator was transcribing from pilots to ATC and from ATC to pilots, and every transcription is an opportunity for a mistake. With all my professional respect for radio operators, their absence from the flight deck should not be a big issue… assuming that the pilots are proficient enough in the language used to communicate with ATC.
The situation of the flight engineer was not the same. All long-haul planes had one, and his missions were very well defined: the surveillance of the engines, fuel consumption, the change of fuel tanks to keep the center of gravity within the right values and, of course, the analysis of any technical problem in flight. The first big plane without a flight engineer was the Airbus A310. Boeing complained heavily but, once they saw it would be hard to reverse this practice, they launched their 757/767 without a flight engineer too. From that moment, every single plane from Airbus or Boeing, whatever its size and range, came without a place for a flight engineer.
Of course, both manufacturers promised that this was not against safety, because all the functions of the flight engineer would be performed by automatic systems, with warnings for any abnormal situation. Everyone now takes for granted that flight decks are designed for two people and, if an accident happens, nobody is going to trace it back to the absence of a flight engineer. Proof?
Let's speak about Swissair 111: an uncontrolled fire in the flight deck ended with the crash of an MD-11 fully loaded with passengers. The MD-11 was the successor of the DC-10 and, except for the winglets of the former, they were not easy to distinguish at first glance. However, there was an important difference between them: the DC-10 had a flight engineer and the MD-11 did not. At the beginning of the emergency, the pilots of Swissair 111 had to look for radio frequencies, runway orientation and length, and all the information they needed to land the plane at a completely unfamiliar airport. At the same time, of course, they had to keep an eye on the emergency and the development of the fire on board.
Had the same situation happened in a DC-10, one pilot could have flown the plane, the other could have navigated and communicated, and the flight engineer could have been in charge of the emergency. Nobody can be sure it would have been enough to save the plane, but it seems a much more rational way to manage both the emergency and the plane. It is surprising, then, that nobody addressed this issue in the investigation of the accident; the absence of a flight engineer was considered part of the environment and passed unquestioned… even though, only a few years before, the standard crew had been three people in the flight deck.
It is not the only case, even if it may be the clearest. Another plane, an Airbus A330 of Air Transat, landed in the Azores Islands without fuel on board and with both engines stopped. We can speak of human error by the pilots but if, instead of an automatic system transferring fuel to keep the center of gravity in the right place, they had had a flight engineer… would the outcome have been the same? Again, nobody tried to trace this accident back to the absence of a flight engineer.
Of course, present designs make a flight engineer fully useless, so nobody can expect the situation to be solved by including one in modern planes. There is simply no place for him: no physical place and, more important, no role in the design. Probably that is why nobody tried to raise the issue: giving the flight engineer a place, a useful one, would mean a radical change in the design of modern planes. It seems that nobody is willing to take that step for the sake of safety and, when an accident happens, it seems better to look elsewhere than to raise uncomfortable issues.
Many people warned years ago about a wrong technological development model; you can find an example in the frame of this blog under the name "Improving Air Safety through Organizational Learning". However, the development model did not change at all and nobody paid attention. Statistical information, in a high-level analysis, could justify this behavior but, in recent years, many things have happened that can be seen as a serious warning, hard to ignore. The so-called "black swans", meaning facts impossible to forecast, are starting to become a full flock. Some cases:
- Economic crisis: we can speak about greed, opportunism and many other things. The fact is that the financial system was so complex that real surveillance was impossible; many people understood parts of it but ignored the side effects in other parts of the same system. Perhaps the best explanation of what happened is a humorous one: http://youtu.be/mzJmTCYmo9g Opening any newspaper shows how hard agreement is on the diagnosis and, hence, on the solutions. Depending on which expert one asks, the forecasts will be different. Even with emergent issues like Bitcoin, there is no agreement among experts about what the consequences are going to be.
- AF447: this air accident should have marked a change. Experts with an interest in the market rushed to show that, in the end, everything came down to a sensor and a human error. Is that right? If a faulty sensor and a tired pilot are enough to crash a plane… something is seriously wrong with air safety. Journalists ready to help manufacturers and regulators emphasized the time the pilot needed to get back to the cockpit. Have you ever seen the layout of a long-haul plane? Perhaps you have noticed a door in the middle of the plane through which crewmembers come and go; that door leads to the bunks downstairs. Now, watch in hand, check how long it takes to get from the bunk to the cockpit. Why were the bunks moved away from a position near the cockpit? The reality is that the crewmembers got confused, and a system that does not give enough information about what is happening is, at least, questionable, because under unplanned events it can produce confusion and the inability to find the right action to perform.
- Cyberattacks: About two years ago, Iran claimed to have taken control of a U.S. drone and forced it to land. U.S. officials rejected the feasibility of that claim but, only days ago, something happened that suggests it was a real possibility: http://www.businessweek.com/articles/2013-04-12/hacking-an-airplane-with-only-an-android-phone If a mobile phone is enough to take control of a manned plane, can we seriously claim that it is not possible, with more advanced technology, over an unmanned one? Aviation is not the only activity where things like this can happen: http://arstechnica.com/information-technology/2013/04/the-spammer-who-logged-into-my-pc-and-installed-microsoft-office/ Months ago it was said that something bigger than 9/11 could come from powerful cyberattacks capable of disabling vital installations. Even some technology managers worry about the possibility of an undetected cyberattack rendering a sophisticated weapon useless at the critical moment. Beyond cyberattacks, the feasibility of an EMP (http://en.wikipedia.org/wiki/Electromagnetic_pulse ), with similar but longer-term effects, is another real and present danger.
- Food fraud: Is it really so hard to identify the kind of meat, if any, that hamburgers contain? Perhaps it is hard for consumers, but also for regulators? Why does a new scandal appear every other day involving products containing something that is not in the declared composition? Not only horse or moose meat but even dog. What next?
- Aviation safety: Beyond AF447, is the average passenger informed and concerned about the real safety level? Does the passenger know that, when crossing an ocean in a twin-engine plane, if one engine fails, the plane is certified to fly for hours on the remaining engine? Are the actual manufacturing processes the ones that were certified or, as some people claim, different ones? http://www.thedailybeast.com/newsweek/2012/03/19/is-boeing-s-737-an-airplane-prone-to-problems.html What about the air quality on board? http://www.achooallergy.com/air-quality-airplane-cabins.asp What about the practice in some airlines of putting a flight student in the first officer's seat, paying for the privilege of flying a plane with passengers?
In short, technology has kept on the same track and is harder and harder to understand and check. Regulators cannot be trusted, independently of their knowledge or professional attitude, if the final user cannot check their work. If users cannot verify how regulators protect their interests, the expected outcome is that regulators will look after their own interests, which are not always the same as the users'.
The old Rasmussen rule, “the operator has to be able to run cognitively the program that the system is running”, has not been followed for a long time. Moreover, as time goes by, we find that the problem goes beyond operators. Designers themselves understand specific parts without a clear understanding of the full product and are hence unable to foresee the consequences of interactions among different parts of the system. AF447 was a big warning light, but it is not the only one and perhaps not even the worst. This should be the right time to reassess the technology development model. Otherwise, the consequences will only get worse.
From one visit to the next, I always forget how easy it is in the United States to make anyone who does not live there feel like an idiot over the things that should be the simplest in life:
Buying a metro ticket in an unfamiliar city is an adventure, and entering a station and then looking for the right direction can be a serious mistake because, in many places, you have to decide on the direction before entering, since each entrance serves only ONE of them. Starting a microwave should be very easy, but it turns out that if the clock is not set, it will not run. Of course, this detail is not mentioned anywhere; there are, however, websites specialized in hoarding user manuals which, for the modest price of 12 dollars, will gladly supply the corresponding PDF.
A simple fan can be a real challenge for anyone looking for an on switch or a simple way to run it with the speed control. Getting a decent coffee is practically impossible: there are so many brands, all claiming to be the best, that finding the right one by trial and error could take 100 years of swallowing dreadful coffee, so in the end you do the seemingly provincial thing and bring your coffee from Spain in your suitcase. And what can be said of those faucets the English and Americans love so much, where showering at the right temperature and flow borders on the miraculous for the first-time user?
I find only one exception to this general rule: traffic. A long time ago I came to the conclusion that in the United States traffic signs are made for idiots, while in Spain they are made by idiots and, from the user's point of view, I much prefer the first option. Even so, given how many people live in the middle of the woods, it is impossible to survive without a GPS; otherwise, once lost in the forest, you will not even find Little Red Riding Hood's wolf to ask for directions.