Archive for the ‘robotics/AI’ category: Page 1729
Feb 26, 2019
Alphabet invests in a start-up using beams of light on chips for super-fast A.I.
Posted by Klaus Baldauf in category: robotics/AI
A start-up called Lightmatter is working on a chip for artificial intelligence that will draw on optical technology that has previously been used to quickly send information around data centers.
Feb 25, 2019
Superintelligence as a Service is Coming and It Can Be Safe AGI
Posted by Quinn Sena in categories: materials, robotics/AI
Drexler and the Oxford Future of Humanity Institute propose that artificial intelligence is mainly emerging as cloud-based AI services, and a 210-page paper analyzes how AI is developing today.
AI development is automating many tasks, and automating AI research and development itself will accelerate AI improvement.
Accelerated AI improvement would mean the emergence of asymptotically comprehensive, superintelligent-level AI services that—crucially—can include the service of developing new services, both narrow and broad, guided by concrete human goals and informed by strong models of human (dis)approval. The concept of comprehensive AI services (CAIS) provides a model of flexible, general intelligence in which agents are a class of service-providing products, rather than a natural or necessary engine of progress in themselves.
Continue reading “Superintelligence as a Service is Coming and It Can Be Safe AGI” »
Feb 25, 2019
Reconstructing meaning from bits of information
Posted by Xavier Rosseel in categories: biotech/medical, robotics/AI
Modern theories of semantics posit that the meaning of words can be decomposed into a unique combination of semantic features (e.g., “dog” would include “barks”). Here, we demonstrate using functional MRI (fMRI) that the brain combines bits of information into meaningful object representations. Participants received clues about individual objects in the form of three isolated semantic features, given as verbal descriptions. We used machine-learning-based neural decoding to learn a mapping between individual semantic features and BOLD activation patterns. The recorded brain patterns were best decoded using not only the three semantic features that were in fact presented as clues, but a far richer set of semantic features typically linked to the target object. We conclude that our experimental protocol allowed us to demonstrate that fragmented information is combined into a complete semantic representation of an object, and to identify brain regions associated with object meaning.
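The decoding logic described above can be illustrated with a minimal sketch. This is not the study's actual pipeline; it assumes synthetic data, a simple least-squares mapping from semantic-feature vectors to voxel patterns, and correlation-based scoring of candidate feature sets — all dimensions and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 40 training objects, 12 semantic features, 200 voxels.
n_objects, n_features, n_voxels = 40, 12, 200

# Synthetic stand-in for the experiment: binary semantic-feature vectors per
# object, and BOLD-like patterns generated by a linear mapping plus noise.
features = rng.integers(0, 2, size=(n_objects, n_features)).astype(float)
true_W = rng.normal(size=(n_features, n_voxels))
patterns = features @ true_W + 0.5 * rng.normal(size=(n_objects, n_voxels))

# Learn the feature -> voxel-pattern mapping with ordinary least squares.
W_hat, *_ = np.linalg.lstsq(features, patterns, rcond=None)

def decode_score(candidate_features, observed_pattern):
    """Score a candidate feature vector by how well its predicted
    activation pattern correlates with the observed pattern."""
    predicted = candidate_features @ W_hat
    return np.corrcoef(predicted, observed_pattern)[0, 1]

# Pick a target object with several active features, then compare decoding
# with its full ("rich") feature set versus only three isolated cue features.
target_idx = int(np.argmax(features.sum(axis=1)))
target = features[target_idx]
cues_only = np.zeros(n_features)
cues_only[np.flatnonzero(target)[:3]] = 1.0

observed = patterns[target_idx]
rich_score = decode_score(target, observed)
cue_score = decode_score(cues_only, observed)
```

On this synthetic data the richer feature set fits the observed pattern better than the three cues alone, mirroring the paper's finding that decoding improves when the full set of object-linked features is included.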
Feb 25, 2019
Robot ‘GOD’: AI version of Buddhist deity to preach in Japanese temple
Posted by Genevieve Klien in category: robotics/AI
A Japanese robot has been created to preach the teachings of Buddha in colloquial language at the Kodaiji Temple in the ancient city of Kyoto.
Feb 24, 2019
Self-Driving Cars Might Kill Auto Insurance as We Know It
Posted by Genevieve Klien in categories: robotics/AI, transportation
Without humans to cause accidents, 90% of risk is removed. Insurers are scrambling to prepare.
Feb 24, 2019
My robotics team at the CRC competition (~$600 budget)
Posted by Genevieve Klien in category: robotics/AI
Feb 24, 2019
NASA greenlights SpaceX crew capsule test to ISS
Posted by Genevieve Klien in categories: robotics/AI, space travel
NASA on Friday gave SpaceX the green light to test a new crew capsule by first sending an unmanned craft with a life-sized mannequin to the International Space Station.
“We’re go for launch, we’re go for docking,” said William Gerstenmaier, the associate administrator with NASA Human Exploration and Operations.
A Falcon 9 rocket from the private US-based SpaceX is scheduled to lift off, weather permitting, on March 2 to take the Crew Dragon test capsule to the ISS.
Continue reading “NASA greenlights SpaceX crew capsule test to ISS” »
Artificial intelligence experts, ethicists and diplomats debated autonomous weapons. Christopher Intagliata reports.
Think killer robots. What comes to mind? Maybe…this guy?