3 Laws Safe? Even Robot Laws Have Problems

Anyone who has cable has seen the movie I, Robot, and some of us have actually read Asimov's stories. A recurring theme is the Three Laws of Robotics, meant to keep sentient robots from turning on us humans. In other words, in the future Asimov foresaw, people were smart enough to predict the SkyNet problem and take steps to prevent it. People continue to laud Asimov for these three laws:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The problem is that the First Law doesn't work. The "through inaction" clause means a robot can't just avoid hurting us; it must actively prevent any harm from ever reaching us. Carried to its logical conclusion, that allows and even commands robots to do exactly what they tried to do in the movie: herd us all into safe areas and never let any harm come to us (we'd basically become pets).

So, what's the solution? Anyone?
