Artificial Intelligence, Privacy and Google: An explosive mix

A few days ago, I learned about a big company using AI in its recruiting processes. That is not a big deal in itself, since it has become common practice; if anything can be criticized, it is the fact that AI commits the same error as human recruiters: the false-negative danger, a danger that goes far beyond recruiting.

The system can add more and more filters and learn from them, but what about the candidates who were eliminated even though they could have been successful? Usually, we never hear about them again and, hence, we cannot learn from the error of rejecting good candidates. We know of cases like Clark Gable or Harrison Ford, who were rejected by Hollywood, but tracking rejected candidates is usually not feasible. The system, like the human recruiter, learns to improve its filters, yet it cannot learn from the wrong rejections.
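To make that asymmetry concrete, here is a minimal, purely illustrative Python sketch (the scoring model, thresholds and numbers are all my own invention, not any real recruiting system): outcomes are only ever observed for accepted candidates, so every false negative disappears without producing any feedback the filter could learn from.

```python
import random

# Toy illustration: a screening filter only observes outcomes for the
# candidates it accepts, so its false negatives never generate feedback.
random.seed(42)

def true_quality():
    """Hidden ground truth: would this candidate actually succeed?"""
    return random.random() < 0.5

def filter_score(is_good):
    """Imperfect screening score: correlated with quality, but noisy."""
    return (1.0 if is_good else 0.0) + random.gauss(0, 0.3)

observed_outcomes = []   # feedback the system can actually learn from
false_negatives = 0      # good candidates rejected: invisible in practice

for _ in range(10_000):
    is_good = true_quality()
    if filter_score(is_good) > 0.7:          # accepted
        observed_outcomes.append(is_good)    # hire -> outcome observed
    elif is_good:                            # rejected although good
        false_negatives += 1                 # -> never observed, never learned

print(f"Outcomes available for learning: {len(observed_outcomes)}")
print(f"Good candidates silently lost:   {false_negatives}")
```

Running the sketch, the second counter is far from zero, yet none of those cases ever enters the training data: the filter can only ever get better at justifying the candidates it already accepts.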

That is a luxury companies can afford as long as it affects jobs where the supply of candidates clearly exceeds the demand. However, the error is much more serious if the same practice is applied to highly sought-after profiles. Eliminating a good candidate is, in that case, much more expensive, but neither the AI system nor the human recruiter gets the opportunity to learn from this type of error.

Only companies like Google, Amazon or Facebook, with their impressive amount of information about many of us, could afford to learn from the failures of the system. It is true that many recruiters routinely "google" the names of candidates to get more information, but these companies keep far more information about us than what is offered when someone searches for us on Google.

Since these companies have an asset that other companies, including big corporations, cannot match, we can expect them to be hired in the near future to perform recruiting, evaluate insurance or credit applications, assess potential business partners and much more.

These companies, holding so much information about us, can share Artificial Intelligence practices with their potential clients while keeping a treasure of information that those clients cannot have.

Of course, they have this information because we voluntarily give it away in exchange for some advantages in our daily life. At the end of the day, if we are not involved in illegitimate or dangerous activities and have no intention of being so, does it really matter that Google, Amazon, Facebook or whoever knows what we do and even listens to us through their "personal assistants"? Perhaps it does:

The list of companies sharing this feature (almost unlimited access to personal information) is very short. Hence, any outcome they produce could affect us in many activities, since any company trying to gain an advantage from these data will not find many alternative suppliers.

Now, suppose an opaque algorithm, whose inner workings nobody knows exactly, decides that you are not a good option for a job, an insurance policy, a credit line…whatever. Since the list of information suppliers is very short, you will be rejected again and again, and nobody will be able to give you a clear explanation of the reasons for the rejection. The algorithm could have latched onto an irrelevant behavior that happens to be highly correlated with a potential problem and, hence, you would be rejected.
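As a hedged illustration of how that can happen, here is a small Python sketch on synthetic data (the feature names, probabilities and thresholds are invented for the example): a harmless habit that merely correlates with risk in the training sample ends up driving the model's rejections.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration: an irrelevant habit that happens to correlate with
# risk in this particular sample becomes a decisive feature.
rng = np.random.default_rng(0)
n = 5_000

risky = rng.random(n) < 0.2                     # hidden, truly risky applicants
income = rng.normal(50, 10, n) - 5 * risky      # weak legitimate signal
# Invented spurious feature ("opens a certain page at night"), common
# among risky applicants in this sample but causally irrelevant:
night_page = (rng.random(n) < (0.9 * risky + 0.05)).astype(float)

X = np.column_stack([income, night_page])
model = LogisticRegression(max_iter=1000).fit(X, risky.astype(int))
print("Learned weights [income, night_page]:", model.coef_[0])

# An applicant with average income who merely shares the irrelevant habit:
applicant = np.array([[50.0, 1.0]])
print("Predicted risk probability:", model.predict_proba(applicant)[0, 1])
print("Rejected?", bool(model.predict(applicant)[0]))
```

The applicant is rejected with high confidence, and since every decision-maker would be buying the same scores from the same short list of suppliers, the same spurious feature would reject them everywhere.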

Should this danger be disregarded? Companies like Amazon have had their own problems with recruiting systems that unintentionally introduced racial and gender biases. There is no valid reason to suppose that this cannot happen again, with much more data and affecting many more activities.

Let me share a recent personal experience related to this type of error: I was blacklisted by an information-security provider. The apparent reason was a page that opened automatically every time I launched Chrome, with no further action on my part. That generated repeated accesses, which the website read as an attempt to spy, and they blacklisted me. The problem was that this security provider had many clients: banks, transportation companies…and even my own ISP. All of them denied me access to their websites based on that wrong information.
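For illustration, a rule of the kind that seems to have caught me could be as simple as this Python sketch (the threshold, log format and IP are assumptions of mine): it counts repeated identical requests and flags the client, with no notion of intent.

```python
from collections import Counter

# Assumed threshold for this sketch: identical requests before flagging.
BLACKLIST_THRESHOLD = 100

def scan_access_log(entries):
    """entries: iterable of (client_ip, url) pairs from an access log."""
    counts = Counter(entries)
    # Flag any client that hits the same URL too many times.
    return {ip for (ip, url), n in counts.items() if n >= BLACKLIST_THRESHOLD}

# A browser that opens the same page on every launch looks, to this rule,
# exactly like a scraper probing the site:
log = [("203.0.113.7", "https://example.com/home")] * 150
print(scan_access_log(log))  # -> {'203.0.113.7'}: blacklisted, wrongly
```

Once such a list is shared with every client of the provider, a single false positive propagates into rejection by banks, transport companies and ISPs alike.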

This is, of course, a minor sample of what can happen once Artificial Intelligence is applied to the vast amounts of data held by the few companies that have access to them: for reasons nobody can explain, we could face rejection in many activities, a rejection that would be at once universal and unexplained.

Perhaps the future was foretold not in Orwell's "1984" but in Kafka's "The Trial".
