Mar 11, 2021
DeepMind is building a team of A.I. researchers in New York
Posted by Genevieve Klien in category: robotics/AI
The co-founder of Facebook AI Research is now helping rival DeepMind to build a team in New York.
Circa 2010
About 48 kilometers off the eastern coast of the United States, scientists from Rutgers, the State University of New Jersey, peered over the side of a small research vessel, the Arabella. They had just launched RU27, a 2-meter-long oceanographic probe shaped like a torpedo with wings. Although it sported a bright yellow paint job for good visibility, it was unclear whether anyone would ever see this underwater robot again. Its mission, simply put, was to cross the Atlantic before its batteries gave out.
Unlike other underwater drones, RU27 and its kin are able to travel without the aid of a propeller. Instead, they move up and down through the top 100 to 200 meters of seawater by adjusting their buoyancy while gliding forward using their swept-back wings. With this strategy, they can go a remarkably long way on a remarkably small amount of energy.
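The saw-tooth "yo-yo" strategy described above can be sketched with a toy calculation. The glide ratio below (horizontal distance covered per metre of depth change) is an assumed, illustrative value, not a figure from the article:

```python
def horizontal_distance_per_cycle(depth_m: float, glide_ratio: float) -> float:
    """Horizontal distance covered in one down-and-up buoyancy cycle.

    depth_m: vertical excursion of a single descent (and matching ascent).
    glide_ratio: horizontal metres travelled per metre of depth change
                 (assumed value for illustration -- real gliders vary).
    """
    return 2 * depth_m * glide_ratio  # descent leg + ascent leg

# Cycling through the top 200 m with an assumed glide ratio of 3:
print(horizontal_distance_per_cycle(200, 3))  # 1200.0 metres per cycle
```

Because each cycle is powered only by small buoyancy adjustments rather than a continuously running propeller, thousands of such cycles add up to an ocean crossing on very little energy.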
Geoscientists at Sandia National Laboratories used 3D-printed rocks and an advanced, large-scale computer model of past earthquakes to understand and prevent earthquakes triggered by energy exploration.
Injecting water underground after unconventional oil and gas extraction (commonly known as fracking), as well as geothermal energy stimulation and carbon dioxide sequestration, can all trigger earthquakes. Of course, energy companies do their due diligence to check for faults—breaks in the earth’s upper crust that are prone to earthquakes—but sometimes earthquakes, even swarms of earthquakes, strike unexpectedly.
Researchers have published a study revealing their successful approach to designing much quieter propellers.
The Australian research team used machine learning to design their propellers, then 3D printed several of the most promising prototypes for experimental acoustic testing at the Commonwealth Scientific and Industrial Research Organisation’s specialized ‘echo-free’ chamber.
Results now published in Aerospace Research Central show the prototypes made around 15 dB less noise than commercially available propellers, validating the team’s design methodology.
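A 15 dB reduction is substantial: decibels are logarithmic, so the corresponding drop in radiated acoustic power can be computed directly. A quick sketch:

```python
def db_to_power_ratio(delta_db: float) -> float:
    """Convert a decibel difference to a ratio of acoustic powers.

    Each 10 dB corresponds to a factor of 10 in power, so the ratio
    is 10 raised to (delta_db / 10).
    """
    return 10 ** (delta_db / 10)

# A 15 dB reduction corresponds to roughly a 31.6x drop in acoustic power.
print(round(db_to_power_ratio(15), 1))  # 31.6
```

In other words, the quieter prototypes radiate only about 3% of the acoustic power of the commercial propellers they were compared against.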
From Cerebras Systems’ AI supercomputer to OpenAI’s natural language processor GPT-3, these are the companies pushing machine learning to the edge.
Imagine this: In the far, far future, long after you’ve died, you’ll eventually come back to life. So will everyone else who ever had a hand in the history of human civilization. But in this scenario, returning from the dead is the relatively normal part. The journey home will be a hell of a lot weirder than the destination.
Here’s how it will go down: A megastructure called a Dyson Sphere will provide a superintelligent artificial intelligence (AI) with the enormous amounts of power it needs to collect as much historical and personal data about you as possible, so it can rebuild your exact digital copy. Once it’s finished, you’ll live your whole life (again) in a simulated reality, and when the time comes for you to die (again), you’ll be transported into a simulated afterlife, à la Black Mirror’s “San Junipero,” where you’ll get to hang out with your friends, family, and favorite celebrities forever.
OpenAI, the research company co-founded by Elon Musk, has just discovered that its artificial neural network CLIP shows behavior strikingly similar to a human brain. This finding has scientists hopeful for the future of AI networks’ ability to identify images in a symbolic, conceptual and literal capacity.
While the human brain processes visual imagery by correlating a series of abstract concepts to an overarching theme, the first biological neuron recorded to operate in a similar fashion was the “Halle Berry” neuron. This neuron proved capable of recognizing photographs and sketches of the actress and connecting those images with the name “Halle Berry.”
Now, OpenAI’s multimodal vision system continues to outperform existing systems, notably with traits such as the “Spider-Man” neuron, an artificial neuron that responds not only to the text “spider” but also to the comic book character in both illustrated and live-action form. This ability to recognize a single concept represented in various contexts demonstrates CLIP’s abstraction capabilities. Similar to a human brain, the capacity for abstraction allows a vision system to tie a series of images and text to a central theme.
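The mechanism behind this multimodal behavior is that CLIP maps images and captions into one shared embedding space, where recognition becomes a nearest-neighbor comparison. The sketch below uses tiny made-up 3-dimensional vectors purely for illustration (real CLIP embeddings are hundreds of dimensions and learned from data):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-in embeddings -- purely illustrative, not real CLIP outputs.
image_embedding = [0.9, 0.1, 0.2]                      # e.g. a Spider-Man drawing
text_embeddings = {
    "a photo of Spider-Man": [0.8, 0.2, 0.1],
    "a photo of a dog":      [0.1, 0.9, 0.3],
}

# Zero-shot "classification": pick the caption whose embedding lies closest
# to the image embedding -- the same test works for drawings, photos, or text.
best = max(text_embeddings, key=lambda t: cosine(image_embedding, text_embeddings[t]))
print(best)  # a photo of Spider-Man
```

Because a drawing, a photo, and the written word “spider” all land near the same region of the shared space, one “neuron” (direction in that space) can respond to all of them, which is exactly the cross-context abstraction described above.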
Scientists have taken a major step forward in harnessing machine learning to accelerate the design for better batteries: Instead of using it just to speed up scientific analysis by looking for patterns in data, as researchers generally do, they combined it with knowledge gained from experiments and equations guided by physics to discover and explain a process that shortens the lifetimes of fast-charging lithium-ion batteries.
It was the first time this approach, known as “scientific machine learning,” has been applied to battery cycling, said Will Chueh, an associate professor at Stanford University and investigator with the Department of Energy’s SLAC National Accelerator Laboratory who led the study. He said the results overturn long-held assumptions about how lithium-ion batteries charge and discharge and give researchers a new set of rules for engineering longer-lasting batteries.
The research, reported today in Nature Materials, is the latest result from a collaboration between Stanford, SLAC, the Massachusetts Institute of Technology and Toyota Research Institute (TRI). The goal is to bring together foundational research and industry know-how to develop a long-lived electric vehicle battery that can be charged in 10 minutes.
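The core idea of “scientific machine learning”—constraining a data-driven fit with physics—can be illustrated with a deliberately simple toy. The square-root-of-cycles fade law below is a common physics-motivated form for degradation driven by interface-layer growth, and the numbers are synthetic; neither is taken from the study itself:

```python
import math

# Synthetic capacity-fade data (cycle number, remaining capacity fraction),
# generated from Q = 1.0 - 0.004 * sqrt(n). Not data from the study.
cycles = [0, 100, 400, 900, 1600]
capacity = [1.0 - 0.004 * math.sqrt(n) for n in cycles]

# Physics-guided model: degradation arguments suggest fade grows like
# sqrt(cycles), so instead of a generic black-box fit we regress capacity
# on sqrt(n), keeping the model interpretable.
xs = [math.sqrt(n) for n in cycles]
x_mean = sum(xs) / len(xs)
y_mean = sum(capacity) / len(capacity)
k = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, capacity)) / \
    sum((x - x_mean) ** 2 for x in xs)
q0 = y_mean - k * x_mean

# The fit recovers the initial capacity and the fade-rate coefficient.
print(round(q0, 3), round(k, 4))  # 1.0 -0.004
```

The payoff of baking the physics into the model form is that the fitted coefficients have physical meaning (initial capacity, fade rate), which is what lets this style of analysis explain a degradation process rather than merely predict it.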
Non-rigid point set registration is the process of finding a spatial transformation that aligns two shapes represented as a set of data points. It has extensive applications in areas such as autonomous driving, medical imaging, and robotic manipulation. Now, a method has been developed to speed up this procedure.
In a study published in IEEE Transactions on Pattern Analysis and Machine Intelligence, a researcher from Kanazawa University has demonstrated a technique that reduces the computing time for non-rigid point set registration relative to other approaches.
Previous methods to accelerate this process have been computationally efficient only for shapes described by small point sets (containing fewer than 100,000 points). Consequently, the use of such approaches in applications has been limited. This latest research aimed to address this drawback.
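To make the registration problem concrete, the sketch below shows only its simplest building block: rigid alignment of two point sets with known correspondences, via the classic Kabsch/Procrustes solution. Real non-rigid registration must additionally estimate the correspondences and a smooth deformation field, which is where the computational cost discussed above comes from:

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation and translation mapping source points onto target
    points, given known one-to-one correspondences (Kabsch algorithm).

    This is only the rigid special case -- a non-rigid method must also
    infer correspondences and a per-point deformation.
    """
    src_c = source - source.mean(axis=0)          # center both clouds
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)     # cross-covariance SVD
    d = np.sign(np.linalg.det(u @ vt))            # guard against reflections
    rot = (u @ np.diag([1, d]) @ vt).T
    trans = target.mean(axis=0) - rot @ source.mean(axis=0)
    return rot, trans

# Toy 2-D example: the target is the source rotated by 90 degrees.
source = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
theta = np.pi / 2
true_rot = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
target = source @ true_rot.T

rot, trans = rigid_align(source, target)
print(np.allclose(rot, true_rot))  # True
```

A non-rigid solver effectively repeats a step like this inside a loop while also updating soft correspondences and a deformation model for every point, which is why point sets much beyond 100,000 points become expensive without the kind of acceleration the study proposes.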
In recent years, videogame developers and computer scientists have been trying to devise techniques that can make gaming experiences increasingly immersive, engaging and realistic. These include methods to automatically create videogame characters inspired by real people.
Most existing methods to create and customize videogame characters require players to adjust the features of their character’s face manually, in order to recreate their own face or the faces of other people. More recently, some developers have introduced methods that automatically customize a character’s face by analyzing images of real people’s faces. However, these methods are not always effective, and the faces they produce are not always realistic.
Researchers at Netease Fuxi AI Lab and the University of Michigan have recently created MeInGame, a deep learning technique that can automatically generate character faces by analyzing a single portrait of a person’s face. This technique, presented in a paper pre-published on arXiv, can be easily integrated into most existing 3D videogames.