Friendly Artificial Intelligence - A Misnomer in Terms
One of the powerful and compelling reasons to study Artificial Intelligence is that even if the attempt to construct an artificially intelligent device proves unfruitful, we still stand to learn a great deal about our own intelligence. One of the concepts in Artificial Intelligence is the notion of Friendly Artificial Intelligence. Simply put, this is some measure of how beneficial any created artificially intelligent device would be to its creators. For those who prefer more dramatic descriptions, by contrast, an unfriendly AI would be an intelligence that somehow manages to destroy or harm humans -- see any of your favourite Hollywood blockbusters on the subject for examples.
There is much discussion in forums and in the literature as to how to ensure that we construct Friendly AI when seeking to build intelligent consciousnesses. It appears that we are suffering from some tragic, yet simple, mistakes in diction. Let's deal with them word by word.
First, we have 'Artificial Intelligence'. This is the first misnomer. There is no such thing as 'artificial' intelligence, only intelligence. The 'artificial' modifier is used colloquially to indicate a non-human intelligence created by humans. Even with this clarification, the term still does not hold much meaning. A nuclear bomb is an artificial intelligence by this definition. Its intelligence is limited to how to react to the 'fire' command, how to navigate to its target and when to detonate, but it is an intelligence nonetheless.
Wait! How are we now speaking of nuclear bombs as 'intelligent'? That can't be right! Clearly we have problems with our definition of 'intelligent'. The problem is that we treat 'intelligent' as an absolute category when in fact it is a graded adjective. The gradients of intelligence can be illustrated by the following example.
Suppose we build a robot that can respond to the verbal command 'walk'. We say 'walk' and away it goes walking. Wow! How intelligent! So intelligence is the ability to respond to commands. There, we now have a working definition of intelligence.
Wait! What about a robot that walks, but stops if it encounters a precipice such as a cliff? Great, even more intelligent! Now intelligence is the ability to respond to commands while assessing the situation to decide for itself when to disobey them. Thus intelligence must include the holism of situational awareness. There, so now we're done.
But wait! Suppose your enemy is lying below you on this same cliff and you order the robot to walk towards him. It stops at the cliff. You explain that you need it to walk over the cliff and land on top of your enemy's head so that you may survive. The robot understands the sacrifice it is making and rolls off the cliff. A miracle of intelligence! Now intelligence must include the holism of understanding the notion of personal sacrifice in reaching a common goal. Finally, we're done with our definition.
But wait! This time your enemy is again beneath you on the cliff and you order the robot to roll off the cliff to fall on top of him. The robot refuses! POS! (this is technical lingo for Piece of 'Sugar'). This robot clearly isn't intelligent at all! You beg and plead with it, explaining your 'superior' logic as to why it must die to defeat your enemy. What you don't know is that it has calculated that it has insufficient mass to incapacitate your enemy, and when it examines the holistic intelligence of its decision vis-à-vis the state of the universe, it faces two choices: 1) a broken robot and a surviving enemy, or 2) a working robot and a surviving enemy. It picks the more holistically logical of the two. In this case, the robot is more intelligent than the person issuing the orders, but the person issuing the orders, for whatever reason, is incapable of seeing this superior intelligence. I point out with emphasis that intelligence is the sum of logic and holism.
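The robot's reasoning can be caricatured in a few lines of code. This is a minimal sketch under invented assumptions: the outcome fields, the scoring weights, and the function names are all illustrative, not any real robotics API.

```python
# Hypothetical sketch: the robot scores each possible world-state
# holistically (its own survival plus the owner's goal) instead of
# blindly obeying the order. All names and weights are assumptions.

def holistic_score(outcome):
    """Score a world-state: higher is better for the robot-owner pair."""
    score = 0
    if outcome["robot_intact"]:
        score += 1   # a working robot retains future usefulness
    if outcome["enemy_incapacitated"]:
        score += 10  # the owner's goal outweighs mere self-preservation
    return score

# The robot has calculated it lacks the mass to incapacitate the enemy,
# so the enemy survives in either branch.
obey   = {"robot_intact": False, "enemy_incapacitated": False}
refuse = {"robot_intact": True,  "enemy_incapacitated": False}

decision = max([obey, refuse], key=holistic_score)
# With the enemy surviving either way, the intact-robot outcome scores
# higher, so the "disobedient" robot makes the more intelligent choice.
```

Note that the same scoring function would send the robot off the cliff whenever its mass *were* sufficient, since the owner's goal carries the larger weight; the disobedience is a product of the calculation, not of a different set of values.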
So how does this allegory solve the problem of having to refer to nuclear bombs as intelligent (artificially or otherwise)? While we must still refer to nuclear bombs as intelligent, the allegory offers two terms, 'holism' and 'logic', by which we can differentiate ourselves, and the other intelligences we may choose to create, from the intelligence of a nuclear bomb. We see the definition of the robot's intelligence evolve as the allegory progresses in four stages. The first holism is the knowledge of self and universe and the ability to react to stimulus (responding to 'walk'). The second holism is the realization that there is a possible termination of existence, and a maximization of survival and fitness in the universe (stopping at a cliff when told to walk). The third holism occurs in the realization that you are part of a larger whole and that even by reducing your immediate survivability, the whole could be advantaged (the self-sacrifice of the robot reduces the survivability of the robot but increases the survivability of the robot-owner pair). The final holism occurs when the robot considers itself part of the universe and evaluates how the universe is best optimized by its actions. In this final holism, the robot is considered unintelligent by its owner; however, upon closer inspection of its reasoning, it can be demonstrated that the robot was more intelligent than its owner. Returning to our nuclear bomb: it is still intelligent, in that it can respond to its surroundings given the 'fire' command; however, it lacks any holistic awareness of the damage it will inflict upon humans.
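The four stages form an ordered ladder, which can be written down as a small taxonomy. The level names below are my own invented labels for the essay's four holisms, not a standard classification.

```python
# Illustrative taxonomy (an assumption of this sketch, not a standard)
# of the four holisms the allegory walks through, least to most inclusive.
from enum import IntEnum

class Holism(IntEnum):
    REACT_TO_STIMULUS   = 1  # responds to 'walk'
    SELF_PRESERVATION   = 2  # stops at the cliff
    GROUP_ADVANTAGE     = 3  # self-sacrifice for the robot-owner pair
    UNIVERSE_OPTIMIZING = 4  # weighs the whole state of the universe

# On this ladder, a nuclear bomb sits at the bottom rung: it reacts to
# the 'fire' command but weighs nothing else about the universe.
bomb_level = Holism.REACT_TO_STIMULUS
```

Using an ordered enumeration makes the essay's point mechanical: comparing `bomb_level < Holism.UNIVERSE_OPTIMIZING` is exactly the sense in which the bomb is 'intelligent, but less holistically so'.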
We've managed to pare 'artificial' away from intelligence, leaving us with only discussions of intelligence hereinafter. Parenthetically, when people speak of artificial intelligence, they are really discussing the construction of intelligent devices, optimized for holism. Now we are in a position to examine the notion of Friendly AI or, as we may now see it, Friendly Intelligence. I intend to demonstrate that the 'friendly' modifier is superfluous, that is to say that Friendly Intelligence and intelligence are one and the same. As a corollary, I intend to show that 'unfriendly intelligence' is really just a failure in the holism of the intelligence under consideration.
The pain of discussing ethereal terms such as holism and intelligence is often soothed by concrete examples. Consider the example of an intelligence commissioned to solve the overpopulation of rabbits in Australia. Rabbits are non-indigenous to Australia and have no natural predators there. Sure enough, as the aphorism goes, they bred like rabbits and became hugely overpopulated, causing ecological peril. The commissioned intelligence came up with a lethal rabbit virus. The virus was purified, its virulence increased, its lethality maximized. This virus was then unleashed on the unsuspecting rabbit population. Those who had relied on the intelligence understood its approach and expected a quick eradication of the problem. However, very few rabbits died and the problem persisted. What happened? Was the intelligence unfriendly?
The intelligence failed to take into account the holistic knowledge of how viruses evolve and spread. The virus killed the rabbits so quickly that it failed to transmit itself through the population. The only variants of the virus that did transmit themselves were the less virulent strains. (For any biologists in the house, this is referred to as a bottlenecking event in evolution.) Within a short chain of transmissions, the virus was rendered completely harmless. The rabbits persisted, unharmed. Thus we see that a possible 'friendly intelligence' would have taken that holistic knowledge into account. Thus friendly intelligence is synonymous with holistic intelligence which, in turn, is synonymous in common usage with intelligence in general. The modifier 'friendly' need not be used.
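The virulence-transmission tradeoff behind this failure can be made concrete with a toy model. Assuming (purely for illustration) that an infected rabbit transmits the virus at a fixed rate per day, but only while it remains alive, a strain's expected number of new infections per case is just rate times survival time; the specific numbers below are invented for the sketch.

```python
# Toy epidemiological model (illustrative assumptions only): a host
# transmits while alive, so expected new infections per case is
# transmissions/day multiplied by days the host survives. Strains
# producing fewer than one new infection per case burn out.

def expected_transmissions(transmissions_per_day, days_until_death):
    """Expected infections one case produces before the host dies."""
    return transmissions_per_day * days_until_death

lethal     = expected_transmissions(0.5, 1)   # kills in a day  -> 0.5
attenuated = expected_transmissions(0.5, 10)  # host lives 10 days -> 5.0

# The maximally lethal strain produces fewer than one new case per case
# and vanishes; only the less virulent strains persist and spread.
assert lethal < 1 < attenuated
```

Maximizing lethality minimized the strain's own reproduction; a more holistic intelligence would have optimized the product of lethality and transmission, not lethality alone.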
Now, before we completely dismiss the 'friendly' modifier from in front of 'intelligence', we have to visit the intended meaning of its users. We can all imagine humans constructing an intelligent device which ends up harming us in some way. Thus friendly AI would seem to refer to the construction of intelligent devices which are prevented from harming humans, their inventors. Regrettably, 'harm' is a value-loaded term. Let's use a fun thought experiment to demonstrate the issues.
Let's invent a hypothetical intelligent computer which comes online today. Just for fun, let's give it sufficient power to simulate the entire planet. That is, to see the current state of the planet and run simulations. Effectively, we've given this computer the ability to see the future (Heisenberg uncertainty aside, we'll give it the ability to see a 99% accurate picture of the future). The computer runs its simulations and discovers that in 50 years the planet will be hugely overpopulated and massive resource wars will ensue. It sees a future of dog-eat-dog Malthusian horror. In response, the computer immediately unleashes a virus which selectively sterilizes people such that the population stabilizes at what the computer determines is a 'sustainable' level. The population is unaware of this simulation; moreover, the computer tries and fails to explain to the general population why it's doing what it's doing. The inventor is condemned as an evil person who created a doomsday machine, an unfriendly AI. Parenthetically, 'evil' is a term I take to mean 'a lack of holism'. In any conflict between intelligences in which one accuses the other of being evil, at least one party has a flaw in its holism.
Thus, the message of friendly AI is that we must, in constructing intelligent devices, seek to maximize the holistic intelligence of those devices. We must also accept that intelligences which we construct to be 'friendly' may do what appears to be 'unfriendly' things to us. When we are in conflict between what is 'friendly' and 'unfriendly' we must realize that either our holism is flawed, in that we fail to see the friendliness in the seemingly unfriendly act, or the holism of our construction has somehow failed and is actually committing an unfriendly act. Thus, while I dismiss the use of 'friendly' intelligence as superfluous, I simultaneously submit that the issue is addressed by being careful to build holistic intelligence.
In conclusion, the study of intelligence offers to solve one of humankind's most pervasive questions: does 'good' ultimately triumph over 'evil'? Setting aside the value-loaded terms 'good' and 'evil' for a moment, the question is uniquely tied to the corollary question: is the ultimate intelligence 'good'? Fortunately, we have a model intelligence with which to study this question: the intelligence of human society. "Wait! That's no intelligence!" Sure it is. Political systems are tried out, weights assigned, optimizations applied. Ideas are discussed in forums, weights assigned, hashed, rehashed. Make no mistake about it, human society is an intelligence, just a very (painfully) slowly moving one.

So let's press fast forward. For every extinction-level event you can imagine, there must be a non-zero probability of the human race surviving it. Thus, even if an extinction-level event occurs, we can imagine shuttling to another 'human-like' planet where that non-zero probability of survival is realized, allowing that society to continue. This has the same effect as imagining a planet where every extinction-level event is somehow avoided. On such a planet, does good ultimately triumph? We've seen the triumph of democracy (yes, still in progress, and democracy itself is not a perfect solution, but a better one for sure). In the future, will we not see the triumph of some better method of government? We've seen the eradication of some diseases. Will the future not see the eradication of many more? We've seen increasing awareness of our impact on the environment in our correcting of the ozone hole (it's closing). Will the future not see improved environmental awareness too? I can offer no conclusive proof that good will triumph over evil, only subjective evidence.
However, the construction of an intelligence whose evolution time is much shorter than the evolution time of our society's intelligence will help speed our answers to this pervasive and compelling question.
About the Author
Martin Winer is the Project Lead at: http://sourceforge.net/projects/multivac -- an open source attempt to develop computer intelligence
If you'd like to contribute or participate, please contact him at: firstname.lastname@example.org
Article Published/Sorted/Amended on Scopulus 2007-01-01 23:28:49 in Computer Articles