
Last summer, the National Security Commission on Artificial Intelligence asked to hear original, creative ideas about how the United States could maintain global leadership in a future enabled by artificial intelligence. RAND researchers stepped up to the challenge.


“Send us your ideas!” That was the open call for submissions about emerging technology’s role in global order put out last summer by the National Security Commission on Artificial Intelligence (NSCAI). RAND researchers stepped up to the challenge, submitting a wide range of ideas; ten essays were ultimately accepted for publication.

The NSCAI, co-chaired by Eric Schmidt, the former chief executive of Alphabet (Google’s parent company), and Robert Work, the former deputy secretary of defense, is a congressionally mandated, independent federal commission set up last year “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.”

The commission’s ultimate role is to elevate awareness and to inform better legislation. As part of its mission, the commission is tasked with helping the Department of Defense better understand and prepare for a world where AI might impact national security in unexpected ways.

Rice University computer scientists have overcome a major obstacle in the burgeoning artificial intelligence industry by showing it is possible to speed up deep learning technology without specialized acceleration hardware like graphics processing units (GPUs).

Computer scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as part of MLSys, the machine learning systems conference.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth quarter revenues compared with the previous year.
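The article does not detail how the Rice researchers avoid GPUs, but one well-known idea for speeding up deep learning on ordinary CPUs is to use locality-sensitive hashing to compute only a small subset of a layer's neurons per input, instead of every one. The sketch below is a toy illustration of that general idea under assumed sizes and random data, not a description of the Rice system:

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(v, planes):
    """Sign-bit hash: which side of each random hyperplane v falls on."""
    return tuple((planes @ v) > 0)

# Toy layer: 1000 neurons over a 64-dimensional input (sizes are arbitrary).
n_neurons, dim, n_planes = 1000, 64, 8
weights = rng.standard_normal((n_neurons, dim))
planes = rng.standard_normal((n_planes, dim))

# Pre-bucket neurons by the hash of their weight vectors.
buckets = {}
for i, w in enumerate(weights):
    buckets.setdefault(simhash(w, planes), []).append(i)

def sparse_forward(x):
    """Compute activations only for neurons whose hash matches the input's,
    i.e. neurons likely to have a large dot product with x."""
    candidates = buckets.get(simhash(x, planes), [])
    return {i: float(weights[i] @ x) for i in candidates}

x = rng.standard_normal(dim)
acts = sparse_forward(x)
# Only the candidate neurons' dot products are evaluated, not all 1000.
```

The trade-off is approximation: neurons outside the input's bucket are treated as inactive, which works when only a few activations matter per input.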

Google has released a neural-network-powered chatbot called Meena that it claims is better than any other chatbot out there.

Data slurp: Meena was trained on a whopping 341 gigabytes of public social-media chatter—8.5 times as much data as OpenAI’s GPT-2. Google says Meena can talk about pretty much anything, and can even make up (bad) jokes.

Why it matters: Open-ended conversation that covers a wide range of topics is hard, and most chatbots can’t keep up. At some point most say things that make no sense or reveal a lack of basic knowledge about the world. A chatbot that avoids such mistakes will go a long way toward making AIs feel more human, and make characters in video games more lifelike.

Some autonomous vehicles watch the road ahead using built-in cameras. In such systems, maintaining an accurate estimate of the camera's orientation while driving is key to letting these vehicles out on roads. Now, scientists from Korea have developed what they say is an accurate and efficient camera-orientation estimation method to help such vehicles navigate safely.


A fast camera-orientation estimation algorithm that pinpoints vanishing points could make self-driving cars safer.
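The researchers' exact algorithm is not given here, but the basic geometry of why a vanishing point pins down camera orientation can be sketched: back-projecting the road's vanishing point through a pinhole camera model yields the road direction in camera coordinates, from which yaw and pitch follow. All intrinsic parameters below are assumed example values:

```python
import math

def orientation_from_vanishing_point(u, v, fx, fy, cx, cy):
    """Back-project a vanishing point (u, v) through a pinhole model
    (focal lengths fx, fy; principal point cx, cy) and read off the
    camera's yaw and pitch relative to the road direction, in degrees."""
    # Ray direction in camera coordinates (x right, y down, z forward).
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    dz = 1.0
    yaw = math.atan2(dx, dz)                     # left/right rotation
    pitch = math.atan2(-dy, math.hypot(dx, dz))  # up/down rotation
    return math.degrees(yaw), math.degrees(pitch)

# A vanishing point exactly at the principal point means the camera
# looks straight down the road: zero yaw and zero pitch.
yaw, pitch = orientation_from_vanishing_point(640, 360, 1000, 1000, 640, 360)
```

Because this needs only one image point and the camera intrinsics, orientation estimates of this kind can run at frame rate, which is what makes the approach attractive for driving.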

John Wallace


Over the last century, scientists have developed methods to map the structures within the Earth’s crust, in order to identify resources such as oil reserves, geothermal sources, and, more recently, reservoirs where excess carbon dioxide could potentially be sequestered. They do so by tracking seismic waves that are produced naturally by earthquakes or artificially via explosives or underwater air guns. The way these waves bounce and scatter through the Earth can give scientists an idea of the type of structures that lie beneath the surface.
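The timing of these bounced waves is what carries the structural information. As a toy illustration, assuming a single flat reflector and constant wave speed (a textbook simplification, not the mapping methods described above), the two-way travel time follows from the straight-ray path length:

```python
import math

def reflection_travel_time(offset_m, depth_m, velocity_mps):
    """Two-way travel time of a wave that leaves a source, reflects off a
    flat interface at depth_m, and is recorded offset_m away. The down-and-up
    path unfolds into a straight ray of length sqrt(offset^2 + (2*depth)^2)."""
    path = math.hypot(offset_m, 2.0 * depth_m)
    return path / velocity_mps

# Recorded directly above a 1 km deep reflector in 2 km/s rock:
# the wave travels 2 km round trip, arriving after 1 second.
t0 = reflection_travel_time(0.0, 1000.0, 2000.0)
```

In practice, inverting many such arrival times recorded at many offsets is what lets geophysicists recover the depths and velocities of the layers below.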

The photo series by Vintner and Fletcher illustrates three gradual stages of transhumanism, from “testing ground” and “patient zero” to “humanity 2.0.” At the lowest tier, “testing ground” looks at individuals who have created wearable technology to expand their human abilities, improving everything from concentration to mental health. “Patient zero” studies those who have taken permanent action to become half human and half robot. In the final chapter, “humanity 2.0,” the transhumanist subjects focus on life extension and immortality.