Mar 31, 2014
Why Asimov’s Three Laws Of Robotics Can’t Protect Us
Posted by Seb in category: robotics/AI
George Dvorsky, io9
It’s been more than 70 years since Isaac Asimov devised his famous Three Laws of Robotics, a set of rules designed to ensure friendly robot behavior. Though intended as a literary device, these laws are heralded by some as a ready-made prescription for avoiding the robopocalypse. We spoke to the experts to find out whether Asimov’s safeguards have stood the test of time. They haven’t.
First, a quick overview of the Three Laws. As stated by Asimov in his 1942 short story “Runaround”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
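Read literally, the laws form a strict priority ordering: the First Law overrides the Second, and the Second overrides the Third. Purely as an illustration (this sketch is not from Asimov or from the experts quoted in the piece, and the Action fields are hypothetical stand-ins for judgments that are anything but trivial to compute), a naive encoding of that ordering might look like this:

```python
# A minimal, purely illustrative sketch (not from Asimov or this article):
# the Three Laws treated as a strict priority ordering over candidate actions.
# The Action fields are hypothetical stand-ins for judgments that are,
# in practice, extremely hard to compute, which is the article's point.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    injures_human: bool = False      # would the action harm a human?
    allows_human_harm: bool = False  # would it let harm occur through inaction?
    obeys_order: bool = True         # does it follow the human's order?
    preserves_self: bool = True      # does the robot survive it?


def choose(actions):
    """Pick an action lexicographically: First Law > Second Law > Third Law."""
    def score(a):
        first = not (a.injures_human or a.allows_human_harm)
        second = a.obeys_order
        third = a.preserves_self
        return (first, second, third)  # True ranks above False
    return max(actions, key=score)


if __name__ == "__main__":
    options = [
        Action("follow the order into danger", preserves_self=False),
        Action("refuse the order; a human gets hurt",
               obeys_order=False, allows_human_harm=True),
        Action("stand by and protect itself", obeys_order=False),
    ]
    # Obedience (Second Law) outranks self-preservation (Third Law),
    # so the robot follows the order even at the cost of its own existence.
    print(choose(options).name)
```

Even this toy version dodges the hard part: deciding whether an action actually injures a human in the open world, which is where the experts quoted in the piece argue the laws break down.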