Are superintelligent machines a danger to humanity?

As usual, before one can begin such a discussion, we need to understand the terms we are using. There are three terms in our question which need examination: superintelligent, machines, and danger.

Superintelligent: Intelligence is the ability to learn and to solve problems - not to be confused with knowledge, which is simply the regurgitation of facts. So superintelligence could be taken to be a greatly superior ability to solve problems - or perhaps even to recognize problems. What I mean by this is that you can hardly begin to solve a problem if you don't even recognize that a problem exists. For example, how long was it after Newton's law of gravity was published before humans recognized that it didn't fully explain some observed phenomena (apart from simply bad observational data)?

Machines: This may refer to individual androids or machines, or to groups of machines working together. It seems likely that sooner or later androids will use the internet to work together on problems, in the same way that thousands of PCs have been harnessed to work on the SETI problem. Via the internet, androids will be able to share their discoveries and to assign sub-problems to other androids, machines, or groups of machines. Likewise, what one android knows could be known by all within a few seconds, assuming each has sufficient local memory to hold the information.
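To make this concrete, here is a minimal sketch (in Python) of the kind of shared work queue such a group of machines might use: one big problem is split into sub-problems, workers pull them off the queue, and every result is published where all the workers can read it. All of the names here (worker, shared_knowledge, the toy squaring "problem") are my own illustration, not a description of any real android network.

    # Minimal sketch: workers pulling sub-problems from a shared queue
    # and publishing results where every other worker can read them.
    # Everything here is illustrative only.
    import queue
    import threading

    tasks = queue.Queue()     # sub-problems waiting to be solved
    shared_knowledge = {}     # results visible to every worker
    lock = threading.Lock()   # protects shared_knowledge

    def worker(worker_id: int) -> None:
        while True:
            try:
                sub_problem = tasks.get_nowait()
            except queue.Empty:
                return                        # nothing left to do
            result = sub_problem ** 2         # stand-in for real problem solving
            with lock:
                shared_knowledge[sub_problem] = result   # "what one knows, all know"
            tasks.task_done()

    # Split one big problem into sub-problems and hand them out.
    for n in range(10):
        tasks.put(n)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(shared_knowledge)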

Danger: Practically anything can pose "a danger" to individuals, but few things pose "a danger" to humanity. Among these are: nuclear war (1), a giant meteor colliding with the Earth (4), AIDS (2), a pandemic of smallpox or anthrax (1), an unknown new pathogen (2), global pollution (3), elimination of oxygen from the oceans (4), other unknown super weapons (3), and so on. The numbers in parentheses indicate my estimate of the threat level, where each higher number indicates a threat at least 10 times less likely. The actual danger posed by a level one (1) threat is hard to estimate. Many people believe that the Cuban missile crisis brought the world close to nuclear war. I blame both sides for that crisis, but I doubt that the actual probability was ever higher than perhaps 1 in 1,000. I believe it is currently much lower - perhaps 1 in 1,000,000.
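Since each step on this scale is meant to be at least a factor of ten, the levels can be read as rough orders of magnitude. A few lines of Python make the scale explicit; the level-one baseline of 1 in 1,000,000 is simply my own current estimate for nuclear war, taken from the paragraph above.

    # Each threat level is at least 10x less likely than the level below it.
    # The level-1 baseline (1 in 1,000,000) is my own rough guess from the text.
    baseline = 1e-6   # estimated probability of a level-1 threat (e.g. nuclear war)

    for level in range(1, 5):
        probability = baseline / 10 ** (level - 1)
        print(f"level {level}: roughly 1 in {1 / probability:,.0f}")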

A comparison of humans and androids

I think it is useful to compare the two "species" to help understand our primary question. What are the basic needs, desires, and fears of humans vis-a-vis androids?


Need/desire/fear    Human    Android (or machine)
food                yes      no
water               yes      no
air                 yes      yes (needed for pneumatics, not for breathing)
electricity         no       yes (critical need)
sex                 yes      no
love                yes      no
sleep               yes      no
family              yes      no
friends             yes      no
pleasure            yes      perhaps (but it's not clear what qualifies)
leisure             yes      no (though androids will likely have lots of leisure time)
clothes             yes      perhaps (only for humans' benefit)
money               yes      yes (once they become autonomous)
power               often    perhaps (power over other androids or humans in their care)
sickness            yes      no
injury              yes      perhaps (they may be temporarily out of operation)
death               yes      no

Immediately you notice that androids (or machines) have few of the basic needs or desires that humans have. Perhaps the most important difference is that we humans eventually die, whereas androids will not. What a tremendous advantage they have - androids are immortal.

Is superintelligence dangerous per se? (No)

It is not obvious that the behavior of a superintelligence can be predicted with confidence. But will it be dangerous? It seems to me that the evidence points to the contrary. The evidence is, of course, the behavior of human geniuses. Take for example Isaac Newton, Albert Einstein, or Stephen Hawking. Each of these men was (or is) a genius with raw intelligence far above the average human being, yet none of them became dangerous in any sense. I believe that having very high intelligence generally means that you are able to solve the problems you face more easily than other people can. We live in a (mostly) civilized world, and people learn very quickly that simply trying to obtain what you want through crime is unlikely to be successful in the long run. If you are superintelligent, you will be able to obtain what you want without resorting to criminal behavior or violence. I doubt you could find any person who is both a Nobel prize winner and a criminal.

Conversely, who are the really dangerous people? They are generally dictators or tyrants in positions of great political power who use their position to cause millions of people to die. The obvious examples are people like Mao Zedong, Joseph Stalin, Adolf Hitler, Pol Pot, Kim Jong Il, and Saddam Hussein. They clearly did NOT use superintelligence to accomplish those horrors. Today, however, there are super weapons available (or on the drawing board) which allow the possibility of much greater horrors - namely, the killing of a significant percentage of all the people on Earth. The most obvious example is nuclear war, which becomes more and more likely with each additional (unstable) member of the nuclear club.

How will superintelligence arise in machines (or androids)?

It seems likely that the first person or group to accomplish this will make a lot of money from it. Since a superintelligence will be able to solve difficult problems (as well as easy ones), many people will be willing to pay a lot of money for solutions to their problems. Imagine aircraft companies looking for an advanced engine that would allow them to fly at Mach 6 or Mach 10. How much would that answer be worth? I can assure you it would be worth many millions. But perhaps it is impossible; perhaps there is no such engine. What about an underwater breathing apparatus that would allow humans to roam freely on the ocean bottom regardless of the depth? What about the perpetual motion machine? It has been thought to be impossible for centuries, but is it really impossible? How many alien civilizations exist in the universe? Where are they? How far away is the closest one? What caused the big bang? Does God exist? Clearly a superintelligence could be extremely useful.

There are hundreds of researchers around the world working on artificial intelligence, but they have not yet produced a viable superintelligence. There are a couple of tantalizing examples - such as the Deep Blue chess program, which beat world chess champion Garry Kasparov in a six-game match in the spring of 1997. This program plays extremely high-level chess; however, it is really just a giant search routine. Deep Blue searches over 200 million positions per second, evaluates each one, and makes its choice of move based on the evaluation algorithm set up by its authors. But is this superintelligence? No, clearly it isn't. A human grandmaster evaluates fewer than 1,000 positions in selecting his moves. Thus it is clear that humans have a vastly superior search and evaluation scheme than a brute-force method which examines millions of positions.
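The contrast between brute force and human selectivity is easy to see in code. Below is a minimal sketch of the general technique Deep Blue was built on - a fixed-depth minimax search that scores leaf positions with a hand-written evaluation function. The "game" here is a toy of my own invention; only the search structure mirrors what Deep Blue did, and its real evaluation function and search extensions were vastly more elaborate.

    # Minimal minimax sketch: search to a fixed depth, score the leaves with
    # an evaluation function, and pick the move with the best guaranteed score.
    # The "game" is a toy; only the search structure mirrors Deep Blue's approach.

    def evaluate(position):
        """Hand-written scoring rule; Deep Blue's real one weighed many features."""
        return sum(position)

    def moves(position):
        """Toy move generator: each turn, append +1 or -1 to the position."""
        return [position + [1], position + [-1]]

    def minimax(position, depth, maximizing):
        if depth == 0:
            return evaluate(position)
        scores = [minimax(m, depth - 1, not maximizing) for m in moves(position)]
        return max(scores) if maximizing else min(scores)

    def best_move(position, depth):
        # After our move, the opponent moves, so the next level minimizes.
        return max(moves(position), key=lambda m: minimax(m, depth - 1, False))

    print(best_move([], depth=4))   # brute force: 2**4 = 16 leaf positions

The point of the sketch is that the work grows exponentially with depth; Deep Blue simply pushed this explosion to 200 million positions per second, while a grandmaster prunes to fewer than a thousand.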

Some people believe that speed alone will produce superintelligence. I do not. Many other factors are required. For example, you need an intimate knowledge of the subject matter related to the problem you are trying to solve. You must be widely read and have a true understanding of the problem, because the solution may hinge upon a fact or idea from a related but significantly different field - or even a totally different one. Often a key fact or idea is missing from the available information, and the real genius is able to think up this missing idea or missing piece of the puzzle. This is perhaps the key area where speed will be nearly useless. In order to create a superintelligence we must understand the process of thinking up and generating new hypotheses. But how can we know if the answer we are searching for is actually attainable? Consider the problem of interstellar - or intergalactic - travel. The distances between galaxies are so tremendous that we currently have no conceivable way to reach even the closest spiral galaxy outside the Milky Way (Andromeda, some 2.2 million light-years away). Do "worm-holes" actually exist? Can space be folded, as in the Dune novels? These are questions which humans have struggled with for decades without solving. Perhaps no solution exists.
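The Andromeda figure is worth working through, since the arithmetic makes the point by itself. Even at a tenth of the speed of light - far beyond any propulsion we can build - the one-way trip takes tens of millions of years:

    # Rough one-way travel times for the Andromeda distance quoted above.
    distance_ly = 2.2e6                     # light-years, figure used in the text
    for fraction_of_c in (1.0, 0.1, 0.01):  # fraction of the speed of light
        years = distance_ly / fraction_of_c
        print(f"at {fraction_of_c:.0%} of c: about {years:,.0f} years")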

Many years ago a lot of work in the field of AI was devoted to a General Problem Solver. There was a program written at MIT which solved freshman calculus problems. But solving calculus problems is a relatively narrow domain, and one which is unlikely to produce any new inventions or innovations (though what do I know?).

I believe that Einstein's theory of relativity provides a good example of how intelligence works. Before 1900 most people thought that Newton's theory of gravity was correct. But there were a few things which Newton's theory did not account for. One of those was the precession of the orbit of Mercury. This was a small discrepancy, and even today I could not explain it very well myself; suffice it to say that the rate of precession of Mercury's orbit could not be accounted for with Newtonian physics. There were also the so-called Lorentz transformations. The Lorentz transformations which form the heart of Einstein's 1905 special theory of relativity were previously deduced from very different conceptual bases, first by Voigt in 1887 and later by Lorentz. Finally, there was the speed of light. When men first tried to measure the speed of light, they discovered that it was so fast they could not measure it; some thought it was instantaneous. By 1880, the speed of light had been measured to within 1% of its currently accepted value. Einstein is famous for his "thought experiments". This, I think, is the key to intelligence. How do you come up with "thought experiments"? How do you gather together all of the above information (and more) and come up with a new theory called "The Theory of Relativity"? Why is the speed of light an upper limit? Why can't you go twice the speed of light? It will not be easy to build a superintelligence. One of my favorite aphorisms is "all the easy stuff has already been done".
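For reference, the Lorentz transformation mentioned above takes its standard textbook form (for motion at velocity v along the x-axis; written here in LaTeX notation):

    x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

The factor \gamma grows without bound as v approaches c, which is one way of seeing why the speed of light acts as an upper limit: the transformation simply breaks down for speeds greater than c.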

When will superintelligence arise in machines (or androids)?

My prediction is within 10 years. As mentioned above, there are hundreds of researchers working on this problem all over the world. Computers are getting more and more powerful, which allows search algorithms to explore more and more possibilities - giving an apparent increase in intelligence. But currently nobody seems to have the "real" solution - or if they do, they are not letting on. Researchers are loath to reveal all their secrets, and who can blame them, since the solution to this problem is likely to be very valuable. They want to file patents or simply keep the field to themselves for as long as possible. Why would you give away a secret which will earn you millions?

Conflicts arising between androids and humans

How could conflicts arise between androids and humans? The most obvious source seems to be the strong likelihood that androids will displace humans from their jobs. The result may be attempts by some humans to destroy androids. Androids themselves may not be able to prevent being damaged - but since they are immortal, they can be repaired and thus resurrected. Moreover, the androids' owners will not want their androids damaged, since they purchased the androids and they represent a significant investment. This may lead to heightened fear in some androids, but probably not a significant problem for humans in general. And since androids have TV-eyes, they will undoubtedly record good pictures of any humans who try to destroy them; those individuals will be prosecuted and removed from the threat pool.

What about the androids' point of view? If an android is operational at all, that means its primary "needs" are being satisfied (air for pneumatics and electricity). It seems to me that androids will not need family or friends as we do. In fact, only if you assume that androids develop feelings and emotions could you suppose that they would have need of friends or family or love.

Why then would one or more androids decide that they wanted to attack or otherwise cause harm to humans? Clearly their primary needs, air and electricity, are being provided by humans. One would think that this would engender gratitude, not enmity. And as we have seen above, they have very few needs. Perhaps we could list some human behaviors which might plausibly be considered dangerous (planet-threatening) by androids or machines.

Dangerous human behaviors

     1. Polluting the lakes, rivers, and oceans.
     2. Polluting the air.
     3. Polluting the land.
     4. Overpopulating the world.
     5. Eliminating many entire species of animals.
     6. Your favorite cause goes here.

We humans, of course, also recognize these behaviors as undesirable - and in many cases we are trying to stop them. While it is obvious that eliminating humans would stop all of these behaviors, it is not clear how some renegade androids would first come to that conclusion and second attempt to accomplish it. It seems to me that there are many solutions to these problems far less drastic than eliminating humans. Furthermore, if androids or machines develop superintelligence, they should be able to suggest other solutions - perhaps many other solutions. Since most humans want these problems solved, it seems reasonable that if androids came up with better solutions than humans have thought of, we would certainly give their solutions a try.

How would androids or machines try to eliminate humans?

It seems clear that trying to eliminate us one by one through some kind of combat would be a poor plan, since word would spread fast and we would simply pull the plug on them. Rather, it appears to me that the most likely form of attack would be multiple pathogens. Since androids are not vulnerable to such pathogens, they could handle them without fear. They could disseminate them widely, attacking many population centers around the world at the same time and thereby infecting and killing millions or perhaps billions - perhaps everyone. Obviously a human terrorist organization might come to the same conclusion, and with sufficient suicidal zealots available might be able to accomplish this horrendous goal. Let us hope they do not try.

 

Comments?   Email me at crwillis@androidworld.com