Isaac Asimov Page


All you ever wanted to know about Isaac Asimov: find it on the Isaac Asimov Home Page.
 
(Photo: Isaac Asimov, 1920 - 1992)
(* NEW *) Here is another excellent Asimov page by Roland Saekow, called "The Asimov Vault".


Some of Isaac Asimov's most popular books


Remarks on Asimov's Three Laws of Robotics

Isaac Asimov, in my opinion the greatest writer of all time, introduced his three laws in the short story "Runaround", published by Street and Smith Publications, Inc. in 1942. The three laws were stated as follows:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Several readers have asked about Asimov's Zeroth Law, and I have finally found time to include it. The following paragraph is quoted from Roger Clarke's page on Asimov's laws:

Asimov detected as early as 1950, a need to extend the first law, which protected individual humans, so that it would protect humanity as a whole. Thus, his calculating machines "have the good of humanity at heart through the overwhelming force of the First Law of Robotics" (emphasis added). In 1985 he developed this idea further by postulating a "zeroth" law that placed humanity's interests above those of any individual while retaining a high value on individual human life.

Zeroth law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
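With the Zeroth Law added, the four laws form a strict priority hierarchy: each law holds only so far as it does not conflict with the laws above it. Purely as an illustration, here is a minimal Python sketch of such an ordering; every name in it (Action, LAWS, choose_action) and the boolean flags are my own hypothetical inventions, since Asimov never specified any implementation.

```python
# A minimal, illustrative sketch of the four laws as a strict
# priority ordering. Every name here (Action, LAWS, choose_action)
# is hypothetical; Asimov never specified an implementation.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    endangers_self: bool = False   # Third Law concern

# Earlier entries outrank later ones; each law yields to those above it.
LAWS = [
    lambda a: not a.harms_humanity,  # Zeroth
    lambda a: not a.harms_human,     # First
    lambda a: not a.disobeys_order,  # Second
    lambda a: not a.endangers_self,  # Third
]

def choose_action(candidates):
    """Pick the candidate satisfying the highest-priority laws,
    comparing law-by-law from the Zeroth Law down."""
    return max(candidates, key=lambda a: tuple(law(a) for law in LAWS))

# Obeying an order that harms a human loses to refusing the order:
options = [Action("obey order", harms_human=True),
           Action("refuse order", disobeys_order=True)]
print(choose_action(options).name)  # -> refuse order
```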


Notwithstanding my great admiration for Asimov, I do not agree with his laws, and neither do many other roboticists and sci-fi writers, such as Robert J. Sawyer.

Consider the first law. It precludes several very important "careers" for which the android is well suited, namely soldier, policeman, and security guard.

It is clear that the military will be quite interested in android soldiers in order to spare the lives of human soldiers, and very likely to reduce the number of human soldiers needed, thus reducing the national defense budget.

Androids would also be well suited as policemen. My major complaint against policemen is that they shoot first and ask questions later. Hardly a year goes by in which we don't hear of some cop killing a child who threatened him with a toy gun; I find that behavior unacceptable. We also very often hear of policemen who kill unarmed people simply because the cop "thought he was reaching for a gun"; I find that behavior unacceptable too. Rarely are the cops prosecuted, and who knows whether the cop simply wanted to kill that person for some reason, such as his skin color. Since an android is not alive, he can't be killed, and therefore he NEED NOT shoot first.

Androids would also make good security guards. Again, they need not shoot first. In addition, they don't need sleep and won't get tired or hungry. They will always attend to their duties and not "goof off." Androids don't need to eat so you will never find them sitting in a doughnut shop. They can't be distracted by common events which might affect humans - such as sporting events or pretty girls.

Consider now the second law. While I generally agree with it, I believe it needs to be reworded, perhaps as: a robot must obey the orders given to it by its owner, or by other humans or androids designated by the owner. In its original form we would encounter bizarre situations such as humans hijacking androids from other projects and putting them to work on their own projects; you could never count on your android finishing his assigned tasks, because any other human could order him to do something else.

Another consideration, for which I currently have no solution, is whether the android should follow orders that are immoral, unethical, or illegal. Of course I don't want androids to engage in illegal activities, but it will be very difficult to prevent that from happening. Violence (as in the first law) should be fairly easy to avoid, but immoral or unethical activities will be much harder to prevent. Perhaps these questions are best left to a special conference. I once had a friend who thought that as long as something was legal, it was acceptable behavior; I do not subscribe to that view.
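Purely as an illustration of the owner-designation rewording proposed above, here is a minimal Python sketch; the class and method names (Android, designate, obey) are hypothetical, not any real system.

```python
# A minimal sketch of the reworded second law: obey only the owner
# and the owner's designees. The class and method names (Android,
# designate, obey) are hypothetical, not any real API.

class Android:
    def __init__(self, owner: str):
        self.owner = owner
        self.designees: set[str] = set()

    def designate(self, requester: str, person: str) -> None:
        # Only the owner may extend the set of authorized operators.
        if requester != self.owner:
            raise PermissionError("only the owner may designate operators")
        self.designees.add(person)

    def obey(self, requester: str, order: str) -> bool:
        # Orders from anyone outside the owner/designee set are refused,
        # which prevents the "hijacking" problem described above.
        if requester == self.owner or requester in self.designees:
            print(f"executing order from {requester}: {order}")
            return True
        print(f"refusing order from unauthorized human: {requester}")
        return False

valet = Android(owner="Alice")
valet.designate("Alice", "Bob")
valet.obey("Bob", "finish painting the fence")    # obeyed: Bob is designated
valet.obey("Mallory", "come work on my project")  # refused: not authorized
```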

And finally, let's consider the third law. This is clearly the weakest of the three. We humans, of course, protect our existence as our number-one priority, but for the vast majority of us this simply amounts to avoiding the many ways we can injure ourselves, be injured by others, or get into traffic accidents. Common sense dictates that androids would do the same. The third law is therefore not really needed: androids will never make the careless mistakes humans make, so their existence will never be in danger. However, emergency behavior must be planned for; these are situations where the property or family of the owner is in danger. They will be covered in the "startup procedure" of every android: the owner will simply define a list of people and/or property for which the android is responsible, and in an emergency the android will attempt to save those people and/or property.
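As a hypothetical illustration of such a startup list, here is a minimal Python sketch; the names (Charge, StartupConfig, rescue_order) and the people-before-property ranking are my own assumptions.

```python
# A minimal sketch of the "startup procedure" described above: the
# owner registers the people and property the android is responsible
# for. All names (Charge, StartupConfig, rescue_order) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Charge:
    name: str
    kind: str        # "person" or "property"
    priority: int    # lower number = higher rescue priority

@dataclass
class StartupConfig:
    owner: str
    charges: list = field(default_factory=list)

    def register(self, name: str, kind: str, priority: int) -> None:
        self.charges.append(Charge(name, kind, priority))

    def rescue_order(self) -> list:
        # In an emergency, people outrank property; ties are broken
        # by the priority the owner assigned at startup.
        ranked = sorted(self.charges,
                        key=lambda c: (c.kind != "person", c.priority))
        return [c.name for c in ranked]

config = StartupConfig(owner="Alice")
config.register("Alice's children", "person", priority=1)
config.register("Alice's spouse", "person", priority=2)
config.register("the house", "property", priority=1)
print(config.rescue_order())
# -> ["Alice's children", "Alice's spouse", "the house"]
```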

There will no doubt be many people who hate androids because androids will displace them from their jobs, and some will hate them enough to try to destroy them. What, then, will an android do when a human being sets out to destroy him? It appears that, according to the first law, the android will not be able to defend himself, because he may "injure" the human who is trying to destroy him. If I were an android owner, I would not want anti-android people destroying my android: first, the android is expensive; second, he will be doing useful work for me; and I may have become emotionally attached to him, like a member of the family.

In short, I believe the android should be allowed to defend itself against anyone other than the owner and any other humans designated by the owner. On the other hand, I do not think it would be a good idea for the android to become a "hit man" or to otherwise attack humans on the orders of its owner or his designated operators, unless, of course, the android is a soldier or the other human is attacking the owner or one of his designated operators. It may be difficult, however, for the android to determine whether it is being asked to attack someone (see the first thought experiment below).

Finally, let me leave the reader with a few thought experiments that would give the original three laws difficulty.

  • Suppose that you have packed a briefcase with explosives and a remote-controlled detonator. You call your android, give him the briefcase and a sealed letter, and direct him to walk across the street and hand the letter to Mr. Jones, who is standing there. The android follows your orders and walks across the street. As he hands the letter to Mr. Jones, you detonate the bomb by remote control.
  • An android comes upon two humans fighting. What does he do? By interfering, he may injure one of them; by not interfering, he allows them to injure each other.
  • Several androids are working around a downtown building when the fire alarm sounds. There are people in the building who may burn to death. Do the androids all run into the building to try to save the humans, and end up destroyed in the fire themselves? How will the androids know whether they have rescued all the occupants? Do they extract dead bodies before those, too, are burned?
  • Several androids are walking across a bridge when they see a human jump off into the water below. Do all the androids jump off the bridge to try to rescue the human? What if the androids themselves can't swim, or are too heavy to float?

I would be interested in your comments. Please send email to:

crwillis@androidworld.com