Mar 13, 2009
Q&A: The robot wars have arrived
Posted by Seb in categories: defense, engineering, futurism, military, robotics/AI
Jetfuel powerpack, armour… shoulder turret?
By Lewis Page
Posted in Science, 27th February 2009 12:18 GMT
Continue reading “Lockheed offers ready-to-go supersoldier exoskeleton” »
NewScientist — March 10, 2009, by A. C. Grayling
IN THIS age of super-rapid technological advance, we do well to obey the Boy Scout injunction: “Be prepared”. That requires nimbleness of mind, given that the ever accelerating power of computers is being applied across such a wide range of applications, making it hard to keep track of everything that is happening. The danger is that we only wake up to the need for forethought when in the midst of a storm created by innovations that have already overtaken us.
We are on the brink, and perhaps to some degree already over the edge, in one hugely important area: robotics. Robot sentries patrol the borders of South Korea and Israel. Remote-controlled aircraft mount missile attacks on enemy positions. Other military robots are already in service, and not just for defusing bombs or detecting landmines: a coming generation of autonomous combat robots capable of deep penetration into enemy territory raises questions about whether they will be able to discriminate between soldiers and innocent civilians. Police forces are looking to acquire miniature Taser-firing robot helicopters. In South Korea and Japan the development of robots for feeding and bathing the elderly and children is already advanced. Even in a robot-backward country like the UK, some vacuum cleaners autonomously sense their way around furniture. A driverless car has already negotiated its way through Los Angeles traffic.
Continue reading “How long do we have? — Regulate armed robots before it's too late” »
How About You?
I’ve just finished reading Cormac McCarthy’s The Road at the recommendation of my cousin Marie-Eve. The setting is a post-apocalyptic world and the main protagonists — a father and son — basically spend all their time looking for food and shelter, and try to avoid being robbed or killed by other starving survivors.
It very much makes me not want to live in such a world. Everybody would probably agree. Yet few people actually do much to reduce the chances of such a scenario happening. In fact, it’s worse than that; few people even seriously entertain the possibility that such a scenario could happen.
People don’t think about such things because they are unpleasant and they don’t feel they can do anything about them, but if more people actually did think about them, we could do something. We might never be completely safe, but we could significantly improve our odds over the status quo.
Continue reading “I Don't Want To Live in a Post-Apocalyptic World” »
November 14, 2008
Computer History Museum, Mountain View, CA
http://ieet.org/index.php/IEET/eventinfo/ieet20081114/
Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation
A day-long seminar on threats to the future of humanity, natural and man-made, and the pro-active steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume edited by Nick Bostrom and Milan Cirkovic, and contributed to by some of the faculty for this seminar.
Continue reading “Global Catastrophic Risks: Building a Resilient Civilization” »
The Singularity Institute for Artificial Intelligence has announced the details of The Singularity Summit 2008. The event will be held October 25, 2008 at the Montgomery Theater in San Jose, California. Previous summits have featured Nick Bostrom, Eric Drexler, Douglas Hofstadter, Ray Kurzweil, and Peter Thiel.
Keynote speakers include Ray Kurzweil, author of The Singularity is Near, and Justin Rattner, CTO of Intel. At the Intel Developer Forum on August 21, 2008, Rattner explained why he thinks the gap between humans and machines will close by 2050. “Rather than look back, we’re going to look forward 40 years,” said Rattner. “It’s in that future where many people think that machine intelligence will surpass human intelligence.”
Other featured speakers include:
You can find a comprehensive list of other upcoming Singularity and Artificial Intelligence events here.
Newsweek is reporting the results of a scientific study by researchers at Carnegie Mellon who used MRI technology to scan the brains of human subjects. The subjects were shown a series of images of various tools (hammer, drill, pliers, etc.) and asked to think about the properties of each tool, and a computer was tasked with determining which item the subject was thinking about. To make the task more challenging, the researchers excluded information from the brain’s visual cortex, which would have reduced the problem to a simpler pattern recognition exercise for which decoding techniques are already known. Instead, they focused the scanning on higher-level cognitive areas.
The computer was able to determine with 78 percent accuracy when a subject was thinking about a hammer, say, instead of a pair of pliers. With one particular subject, the accuracy reached 94 percent.
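To make the decoding setup concrete, here is a minimal sketch of the kind of classification task involved: a classifier is trained on per-trial activation patterns and its accuracy is estimated by cross-validation. The data below is synthetic noise around made-up "prototype" patterns, not real fMRI, and the choice of logistic regression is purely illustrative; the article does not specify the Carnegie Mellon team's actual pipeline.

```python
# Illustrative sketch only: decode which tool a subject is "thinking about"
# from synthetic voxel activation patterns. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
tools = ["hammer", "drill", "pliers"]
n_trials_per_tool, n_voxels = 40, 500

# Fabricate one activation prototype per tool, then add noise per trial.
prototypes = rng.normal(size=(len(tools), n_voxels))
X = np.vstack([
    proto + rng.normal(scale=2.0, size=(n_trials_per_tool, n_voxels))
    for proto in prototypes
])
y = np.repeat(np.arange(len(tools)), n_trials_per_tool)

# Cross-validated decoding accuracy, the kind of figure reported as ~78%.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In a real study the columns of X would be voxel responses from the selected brain regions for each stimulus presentation, and accuracy would be reported per subject, which is why one subject can reach 94 percent while the average sits lower.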
Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.
While an exact date has not yet been set, we intend to offer you an exciting line up of speakers on a day in the late spring or early summer of 2008.
Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.
If you’re interested in speaking, want to help, or you just want to learn more, please contact me at [email protected].
DARPA (the Defense Advanced Research Projects Agency) is the R&D arm of the US military for far-reaching future technology. What most people do not realize is how much revolutionary medical technology comes out of this agency’s military R&D programs. For those in need of background, you can read about the Army & DARPA’s future soldier Landwarrior program and its medtech offshoots, as well as why DARPA does medical research and development that industry won’t. Fear of these future military technologies runs high, with a push towards neural activation as a weapon, direct brain-computer interfaces, and drones. However, the new program has enormous potential for revolutionary medical progress as well.
It has been said that technology is neutral; it is the application that is either good or evil. (It is worth a side track to read a discussion of this concept.)
DARPA’s areas of focus for 2007 and beyond are:
The potential for the destructive use of these technologies is obvious. For a complete review of these projects and the beneficial medical applications of each, visit docinthemachine.com.
In an important step forward for acknowledging the possibility of real AI in our immediate future, a report commissioned by the UK government suggests that robots could one day have the same rights and responsibilities as human citizens. The Financial Times reports:
The next time you beat your keyboard in frustration, think of a day when it may be able to sue you for assault. Within 50 years we might even find ourselves standing next to the next generation of vacuum cleaners in the voting booth. Far from being extracts from the extreme end of science fiction, the idea that we may one day give sentient machines the kind of rights traditionally reserved for humans is raised in a British government-commissioned report which claims to be an extensive look into the future.
Visions of the status of robots around 2056 have emerged from one of 270 forward-looking papers sponsored by Sir David King, the UK government’s chief scientist. The paper covering robots’ rights was written by a UK partnership of Outsights, the management consultancy, and Ipsos Mori, the opinion research organisation. “If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. The idea will not surprise science fiction aficionados.
It was widely explored by Dr Isaac Asimov, one of the foremost science fiction writers of the 20th century. He wrote of a society where robots were fully integrated and essential in day-to-day life. In his system, the ‘three laws of robotics’ governed machine life. They decreed that robots could not injure humans, must obey orders and protect their own existence – in that order.
Continue reading “UK Government Report Talks Robot Rights” »