Archive for the ‘robotics/AI’ category: Page 1721
Mar 21, 2019
Without Humans, A.I. Can Wreak Havoc
Posted by Derick Lee in categories: government, robotics/AI
As the World Wide Web marks its 30th birthday on Tuesday, public discourse is dominated by alarm about Big Tech, data privacy and viral disinformation. Tech executives have been called to testify before Congress, a popular campaign dissuaded Amazon from opening a second headquarters in New York and the United Kingdom is going after social media companies that it calls “digital gangsters.” Implicit in this tech-lash is nostalgia for a more innocent online era.
Let’s not let artificial intelligence put society on autopilot.
Mar 20, 2019
Robotic ‘gray goo’
Posted by Caycee Dee Neely in categories: engineering, particle physics, robotics/AI
Up until now, the ability to make gray goo has been theoretical. However, the scientists at the Columbia University School of Engineering and Applied Science have made a significant breakthrough. The individual components are computationally simple but can exhibit complex behavior.
Current robots are usually self-contained entities made of interdependent subcomponents, each with a specific function. If one part fails, the robot stops working. In robotic swarms, each robot is an independently functioning machine.
In a new study published today in Nature, researchers at Columbia Engineering and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) demonstrate for the first time a way to make a robot composed of many loosely coupled components, or “particles.” Unlike swarm or modular robots, each component is simple and has no individual address or identity. In their system, which the researchers call a “particle robot,” each particle can perform only uniform volumetric oscillations (slightly expanding and contracting) and cannot move independently.
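To see how components that can only expand and contract could move anywhere at all, consider a toy model (my own illustrative sketch, not the authors’ code or physics): a 1-D chain of particles whose radii oscillate with a phase offset from one end to the other, where at each instant the currently largest particle “grips” the ground and the rest rearrange around it. The traveling expansion wave then crawls the chain along, peristalsis-style.

```python
import math

N = 8              # particles in a 1-D chain (toy model)
R0, A = 1.0, 0.4   # rest radius and oscillation amplitude
STEPS, DT = 200, 0.1

def radii(t, phases):
    """Each particle only oscillates its radius -- no independent motion."""
    return [R0 + A * math.sin(t + p) for p in phases]

def chain_positions(anchor_idx, anchor_pos, r):
    """Place particles so each neighbouring pair just touches
    (spacing = sum of the two radii), keeping the anchor fixed."""
    pos = [0.0] * len(r)
    pos[anchor_idx] = anchor_pos
    for i in range(anchor_idx - 1, -1, -1):
        pos[i] = pos[i + 1] - (r[i] + r[i + 1])
    for i in range(anchor_idx + 1, len(r)):
        pos[i] = pos[i - 1] + (r[i - 1] + r[i])
    return pos

# Phase gradient: a wave of expansion travels along the chain.
phases = [-0.6 * i for i in range(N)]
pos = [2 * R0 * i for i in range(N)]   # initial layout
centroids = []
for step in range(STEPS):
    t = step * DT
    r = radii(t, phases)
    # Toy friction rule: the currently fattest particle grips the ground.
    anchor_idx = max(range(N), key=lambda i: r[i])
    pos = chain_positions(anchor_idx, pos[anchor_idx], r)
    centroids.append(sum(pos) / N)
```

Tracking `centroids` over time shows the aggregate shifting even though no single particle ever propels itself, which is the qualitative point of the particle-robot design; the paper’s actual 2-D robots couple many such particles through magnets and friction.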
Mar 19, 2019
The implausibility of intelligence explosion
Posted by Pat Maechler in categories: business, climatology, existential risks, robotics/AI, sustainability
In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility.
Continue reading “The implausibility of intelligence explosion” »
Mar 19, 2019
Online polygraph separates truth from lies using just text-based cues
Posted by Genevieve Klien in categories: information science, robotics/AI
Imagine a future where electronic text messaging is screened by an intelligent algorithm that can distinguish truth from lies. A new study from two US researchers suggests this kind of online polygraph is entirely possible: in early experiments, a machine learning algorithm separated truth from lies more than 85 percent of the time using text-based cues alone.
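The general recipe behind such a system can be sketched as: extract simple linguistic cues from each message, then train a classifier on labelled examples. The sketch below is a minimal illustration of that recipe only — the cues, the tiny made-up dataset, and the logistic model are my assumptions, not the features or algorithm used in the study.

```python
import math
import re

# Toy cue features loosely inspired by deception-detection literature
# (illustrative choices, not the study's actual feature set).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIONS = {"no", "not", "never", "none", "cannot", "don't", "didn't"}

def features(text):
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return [
        len(words) / 20.0,                          # normalised message length
        sum(w in FIRST_PERSON for w in words) / n,  # self-reference rate
        sum(w in NEGATIONS for w in words) / n,     # negation rate
    ]

def predict(weights, bias, x):
    """Logistic model: returns P(message is deceptive)."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical labelled messages (1 = deceptive, 0 = truthful).
data = [
    ("I was at home all evening, I promise", 0),
    ("no I never saw that message, not once, never", 1),
    ("I finished the report and sent it to you", 0),
    ("that was not me, I don't know anything, never happened", 1),
]

# Plain logistic-regression training by stochastic gradient descent.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for text, label in data:
        x = features(text)
        err = predict(weights, bias, x) - label
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
```

On this toy data the negation-rate cue dominates; a real system would use far richer features and far more data, and the 85 percent figure reported by the study applies to its own experimental setup, not to a sketch like this.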
Mar 18, 2019
A.I.-generated text is supercharging fake news. This is how we fight back
Posted by Quinn Sena in categories: futurism, robotics/AI
Last month, OpenAI announced a text-generating A.I. so capable that the company deemed it too dangerous to release. Now researchers have developed a tool to help spot text written by bots. Here’s what it means for the future of fake news in an age of smart machines.
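Detection tools in this space (GLTR is a well-known example; whether this particular tool works the same way is my assumption) typically exploit a statistical tell: generated text tends to pick words a language model ranks as highly probable, while human text uses more surprising words. A toy version of that idea, using a bigram model over a made-up corpus in place of a real language model:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for a language model's training data (assumption).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog").split()

# Bigram model: next-word frequency given the previous word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def top_k_fraction(text, k=2):
    """Fraction of words that fall in the model's top-k predictions.
    The detection heuristic: generated text scores high, human text lower."""
    words = text.split()
    hits, total = 0, 0
    for prev, nxt in zip(words, words[1:]):
        if prev in bigrams:           # skip contexts the model never saw
            total += 1
            top = [w for w, _ in bigrams[prev].most_common(k)]
            hits += nxt in top
    return hits / total if total else 0.0
```

A “predictable” sentence like `"the cat sat on the mat"` scores near the top of the scale, while one with off-model word choices scores lower — the same contrast, at toy scale, that such detectors surface over a large neural language model’s token probabilities.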
Mar 18, 2019
Stanford University launches the Institute for Human-Centered Artificial Intelligence
Posted by Victoria Generao in category: robotics/AI
Three fundamental beliefs guide Stanford’s new Institute for Human-Centered Artificial Intelligence, co-directed by John Etchemendy and Fei-Fei Li: AI technology should be inspired by human intelligence; the development of AI must be guided by its human impact; and applications of AI should enhance and augment humans, not replace them.
The new institute will focus on guiding artificial intelligence to benefit humanity.
Mar 18, 2019
Water-resistant electronic skin with self-healing abilities created
Posted by Caycee Dee Neely in categories: biological, robotics/AI
Another step towards organic ships?
Inspired by jellyfish, researchers have created an electronic skin that is transparent, stretchable, touch-sensitive, and repairs itself in both wet and dry conditions. The novel material has wide-ranging uses, from water-resistant touch screens to soft robots aimed at mimicking biological tissues.
Mar 17, 2019
Artificial Intelligence Creates a New Generation of Machine Learning
Posted by James Christian Smith in categories: employment, robotics/AI
Yiwen Huang, founder and CEO of R2ai, talks to Interesting Engineering in an exclusive interview about how he went from a research lab to building an AI that creates machine learning models, and why he believes AI will augment rather than replace human jobs in the future.
Mar 16, 2019
Japan to back int’l efforts to regulate AI-equipped ‘killer robots’
Posted by Michael Lance in categories: government, policy, robotics/AI
Japan is hoping to play a lead role in crafting international rules on what are known as lethal autonomous weapons systems, or LAWS.
Japan is planning to give its backing to international efforts to regulate the development of lethal weapons controlled by artificial intelligence at a UN conference in Geneva late this month, government sources said Saturday.
The move would mark a departure from Japan’s current stance. The government already opposed the development of so-called killer robots capable of killing without human involvement, but it had urged careful discussion of any rules to ensure that commercial development of AI would not be hampered.
Continue reading “Japan to back int’l efforts to regulate AI-equipped ‘killer robots’” »