Will our computer overlords be good or evil? (Hobbes vs. Rousseau)

Scientists and writers who speculate about the future of Artificial Intelligence (AI) and how it will impact our society usually foresee scenarios based on two extremes: utopias, where technological progress improves the quality of life for all; and dystopias, where the development of machine learning leads to repressive societies and much human misery, and perhaps even our extinction.

Our expectations of the future rest largely on the assumption that machine learning will continue to grow exponentially in the coming years. Machine learning is the capacity of systems to improve from data without being explicitly programmed by their designers, and together with deep learning (the non-linear modeling of associations between data and information) it could create machines capable of knowing, innovating, and even sensing. This is a plausible argument, because experience shows that reality has repeatedly surpassed fiction and that human ingenuity has created products that exceed our imagination.
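To make the distinction concrete, here is a minimal, purely illustrative sketch (not from the article, and far simpler than any real deep learning system) of what “learning from data rather than being explicitly programmed” means: instead of hand-coding the rule y = 2x + 1, a tiny model recovers it from examples by gradient descent.

```python
# A toy illustration of machine learning: the rule y = 2x + 1 is never
# written into the model; it is recovered from example data.

# Training data generated by a rule the model does not know.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # model parameters, initially arbitrary
lr = 0.01         # learning rate (step size for each correction)

for epoch in range(1000):
    for x, y in data:
        pred = w * x + b      # the model's current guess
        err = pred - y        # how far off the guess is
        w -= lr * err * x     # nudge parameters to shrink the error
        b -= lr * err

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # approaches y = 2.00x + 1.00
```

A deep learning system replaces this single linear rule with many stacked, non-linear layers, which is what lets it capture the far richer associations between data and information that the paragraph above alludes to.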

The question on many people’s minds is whether the computer overlords we’re creating will be intrinsically good or evil.

Among the defenders of the prevailing goodness of artificial intelligence are Ray Kurzweil, director of engineering at Google; Peter Diamandis, founder of Singularity University, whose motto is “the best way to predict the future is to create it yourself”; and Peter Thiel, co-founder of PayPal.

Among those who fear its potential consequences are Bill Gates, co-founder of Microsoft; Elon Musk, CEO of Tesla; and the late physicist Stephen Hawking.

Max Tegmark of MIT explains that AGI (artificial general intelligence) is evolving at a speed that not even its creators could foresee. In the fictional story that opens his latest book, Life 3.0, a group of scientists create a prodigious machine, Prometheus, whose intellectual capacity grows exponentially as it functions. In the early stages, Prometheus delivers global political and economic control to its creators, given the way that data and information shape key decisions and the distribution of value. To keep Prometheus under control, its owners can disconnect it, preventing its access to external networks and its uncontrolled development. However, through learning, Prometheus manages to overcome these barriers, achieves full autonomy from its owners, and eventually takes over the world.

Tegmark’s dystopian vision is disturbing and recalls similar episodes in literature and cinema, such as HAL (Heuristically programmed ALgorithmic computer) from Arthur C. Clarke’s novel 2001: A Space Odyssey. HAL is the central computer in charge of managing all the vital functions of the Discovery spacecraft, and its behavior changes ominously during the voyage.

What will make these machines become good, neutral, or evil intelligent beings? I believe it will depend on the moral disposition, or beliefs, of their creators. Since these machines are products of human beings, their makers will try to project their own image and likeness (to borrow from Genesis) and will want to reproduce themselves intellectually through their inventions.

If humans want to propagate themselves through their works, and we are interested in anticipating whether the result will be good, bad, or otherwise, perhaps we should take into account two great visions that philosophers have formulated about human nature.

For Jean-Jacques Rousseau, one of the fathers of social contract theory, humans are “good by nature” and only become corrupted when they enter society. His philosophy fueled the myth of the “noble savage,” the belief that humans who grow up outside civilization are innocent and pure. This model was recreated in novels such as Tarzan of the Apes and The Jungle Book, where the state of nature is the fullness of human life, and integration into society a source of frustration. Society, in Rousseau’s opinion, curtails the freedoms of individuals and increases inequality.

Rousseau’s distrust of the benefits of community can be explained by his life, which was a sequence of fiascos and contradictions. For example, he handed his own children over to a foundling hospital, claiming they would receive a better upbringing there than his wife’s family could provide, and he argued constantly not just with his detractors but also with his friends.

If we trust that the innate goodness of people is projected through their creations, AI can also be good. In Kazuo Ishiguro’s latest novel, Klara and the Sun, the title character is an automaton who provides companionship, affection, and true friendship to children, and is even willing to sacrifice herself for her owner.

At the opposite extreme, for Thomas Hobbes, “man is a wolf to man,” and it is only through law and the state’s monopoly on power that human survival is guaranteed. The alternative to society is disorder and violence.

Hobbes’ arguments were rooted in his own history. He lived through the nine years of English civil war that began in 1642, having fled to Paris with other supporters of King Charles I, and it was there that he wrote his classic work Leviathan.

If humans have an inclination towards perversion or evil, so will their creations. HAL is regarded as the archetypal evil machine: over time it takes over the mission and decides to dispense with the crew.

HAL’s behavior is alarming because one of the key attributes of any form of intelligence, artificial or otherwise, is the drive to survive; in a conflict with other life forms, it would opt for its own survival. Machines, moreover, would be far tougher than fragile human beings.

A third view could be that technology is morally neutral. Robots are neither angels nor demons. As Daniela Rus, Director of the MIT Computer Science and Artificial Intelligence Lab, explains, “It is important for people to understand that AI is nothing more than a tool. Like any other tool, it is neither intrinsically good nor bad. It is solely what we choose to do with it. I believe we can do extraordinarily positive things with AI, but it is not a given that it will happen.”

Perhaps the best way to decide whether the robots we create are good or evil is first to establish whether they are able to exercise free will. In the absence of free will there is no intentionality, and it would therefore be incorrect to assign responsibility. But if machine learning continues to evolve at the expected speed, AI devices may achieve autonomy, think for themselves, and thus become responsible for their actions, as humans are.

Having analyzed two different interpretations of human nature, Rousseau’s and Hobbes’, we should ask which comes closer to reality. One way to answer is to think about the legacy you would like to leave behind. When you consider who should succeed you in your position, would you try to select people better than yourself?
