Archive for the ‘supercomputing’ category: Page 94

Dec 22, 2006

UK Government Report Talks Robot Rights

Posted in categories: robotics/AI, supercomputing

In an important step toward acknowledging the possibility of real AI in our immediate future, a report commissioned by the UK government says robots may one day have the same rights and responsibilities as human citizens. The Financial Times reports:

The next time you beat your keyboard in frustration, think of a day when it may be able to sue you for assault. Within 50 years we might even find ourselves standing next to the next generation of vacuum cleaners in the voting booth. Far from being extracts from the extreme end of science fiction, the idea that we may one day give sentient machines the kind of rights traditionally reserved for humans is raised in a British government-commissioned report which claims to be an extensive look into the future. Visions of the status of robots around 2056 have emerged from one of 270 forward-looking papers sponsored by Sir David King, the UK government’s chief scientist.

The paper covering robots’ rights was written by a UK partnership of Outsights, the management consultancy, and Ipsos Mori, the opinion research organisation. “If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. The idea will not surprise science fiction aficionados.

It was widely explored by Dr Isaac Asimov, one of the foremost science fiction writers of the 20th century. He wrote of a society where robots were fully integrated and essential in day-to-day life. In his system, the ‘three laws of robotics’ governed machine life. They decreed that robots could not injure humans, must obey orders and protect their own existence – in that order.

Robots and machines are now classed as inanimate objects without rights or duties but if artificial intelligence becomes ubiquitous, the report argues, there may be calls for humans’ rights to be extended to them. It is also logical that such rights are meted out with citizens’ duties, including voting, paying tax and compulsory military service.

Mr Christensen said: “Would it be acceptable to kick a robotic dog even though we shouldn’t kick a normal one? There will be people who can’t distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner so we do not move our boundaries of what is acceptable.”

The Horizon Scan report argues that if ‘correctly managed’, this new world of robots’ rights could lead to increased labour output and greater prosperity. “If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time,” it says.

But it points out that the process would have casualties, and the first may be the environment, especially in the areas of energy and waste.

Human-level AI could be invented within 50 years, if not much sooner. Our supercomputers are already approaching the computing power of the human brain, and the software end of things is starting to progress steadily. It’s time for us to start thinking about AI as a positive and negative factor in global risk.
