Artificial Intelligence and Equality: The Real Threat From Robots
By Alex Calvo
The World Economic Forum in Davos is always a good source of headlines. This year, media interest went beyond financial and economic issues, extending to the ultimate impact of robots on the future of humankind. In a five-member panel, Stuart Russell, a world-leading expert on AI (artificial intelligence) and robotics, predicted that AI would overtake humans “within my children's lifetime”, adding that it was imperative to ensure that computers kept serving human needs rather than becoming a threat to our species. In order to do so, Professor Russell believes it is necessary to guarantee that robots have the same values as we humans do.
Assuming that AI and robotics keep progressing, and there is no reason to doubt they will, it is clear that sooner or later we will face the prospect of machines more intelligent than their creators. Furthermore, this may also result in their becoming self-aware. Once they enjoy this dual characteristic of self-awareness and above-human (or even just human-level) intelligence, the question arises, as rightly pointed out by Professor Russell, of how to ensure they do not act against humans. Are “human values” the answer? There are strong reasons to doubt it.
While there is no universal definition of “human values”, and in fact different people, organizations, and countries sometimes defend completely opposite ideas, a look at history shows how “equality”, at least in the sense of equality before the law, is a powerful drive and an attractive call to arms. It is very difficult to make anybody accept a subordinate status for long. The centenary of the Great War is a powerful reminder of this. While the conflict did not result in the end of colonialism, the experience of being called upon to fight the metropolis's war, and furthermore of engaging allegedly superior white soldiers on the battlefield, led many colonial subjects to question the implicit racial hierarchies of the day and ultimately contributed to the downfall of European empires. More generally, while slavery has had its share of intellectual defenders, time and again its victims have wondered why, sharing the same nature as their masters, they should remain under them.
Comparisons with slavery and colonialism are relevant because the history of human technological progress is the history of building increasingly complex machines to serve us. From the home to the battlefield, the world is full of all sorts of mechanical and electronic devices designed to make our lives easier, carrying out dangerous and difficult jobs or simply doing them more quickly and efficiently. Because these machines, including present-day computers, are neither intelligent (in the human sense of the word) nor self-aware, the question of whether it is just to use them as slaves does not arise. They cannot pose the question, and while humans theoretically could, the historical answer, grounded in different philosophical and religious traditions, is that nature is at our service, and man-made objects even more so.[1] Therefore, nobody talks about machine rights, worries about a tool's working hours, or seeks equality between humans and inanimate devices.[2]
Now, let us imagine that a robot is as intelligent as a human being and aware of his own existence. A dual question arises: first, why should he accept being our slave? Second, how could we justify keeping him as our inferior? It is most unlikely he would renounce liberty in the name of human progress and comfort, and imbuing him with the “human values” that Professor Russell suggests would only make matters worse in this regard. Is it not a fundamental human value to seek equality, understood as the same set of basic freedoms?[3] How human is it to treat as an inferior someone as intelligent as the other members of the political community?[4]
Having posed this fundamental question, it is necessary to make clear that the resulting threat from intelligent, self-aware robots does not require that they engage in any violence against humans. Simply by virtue of being equals, they would demand and ultimately obtain the same degree of civil and political rights, giving them a say in the future of any community and country where they may be. Furthermore, once recognized as equals, there is no reason why they should keep working for us, essentially as slaves, and simply by taking their own decisions and exchanging goods and services on a market basis, as opposed to a command basis, their economic impact would be very different. After centuries of employing technological progress to better our lives, we would now be in a position where this same progress endangered those very lives. We must thus agree with Professor Russell, but not necessarily with his solution. Human values would not prevent this trend, but rather accelerate it. Intelligent, self-aware robots cannot be our inferiors, and it is very doubtful whether they would even be merely our equals. More likely they would be our superiors, making it necessary to debate publicly, now, what the limits to AI research and development should be.
Alex Calvo is a student in Birmingham University's MA in Second World War Studies program. He is the author of ‘The Second World War in Central Asia: Events, Identity, and Memory’, in S. Akyildiz and R. Carlson (eds.), Social and Cultural Change in Central Asia: The Soviet Legacy (London: Routledge, 2013). He tweets at Alex__Calvo and his work can be found at https://nagoya-u.academia.edu/AlexCalvo
[1] Things are of course more complex than that, as is clear from the controversies prompted by the impact of economic development on the environment. However, few would argue that nature has a right not to be at our service, with most proponents of environmental protection either seeing it in terms of ultimately preserving human life and health or seeking a balance between current needs and those of future generations.
[2] On the other hand, in the case of animals, which have a measure of intelligence, there is indeed a range of movements to protect some of their rights, which have led to legislation in different countries. However, although people such as vegans believe that we should not exploit them, the majority position is that animal use may be regulated but not banned per se.
[3] Economic equality is very different, in the sense that it has strong supporters and equally keen detractors, and we thus cannot call it a fundamental human value in the way civil and political equality is.
[4] A possible response would be to restrict membership in the political community to humans, biologically defined. However, given their artificial intelligence and self-awareness, it is most unlikely that robots would accept this. Furthermore, even from a human perspective it may not meet with universal approval, with some voices stressing our roots (a view that may be supported, among others, by some religious traditions) while others stress capabilities, not origins.