
In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed journal Artificial Intelligence, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but from adhering to a simple yet powerful principle: reward maximization.

Titled “Reward is Enough,” the paper, which is still in pre-proof as of this writing, draws inspiration from the evolution of natural intelligence as well as from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kinds of abilities associated with intelligence. From this, they conclude that reinforcement learning, a branch of AI based on reward maximization, can lead to the development of artificial general intelligence.
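To make the reward-maximization idea concrete, here is a minimal, hypothetical sketch of tabular Q-learning, the textbook reinforcement learning algorithm. The toy corridor environment and every name in it are illustrative assumptions, not anything from the DeepMind paper; the point is only that the agent receives nothing but a scalar reward, and goal-seeking behavior emerges from trial and error:

```python
# A minimal, illustrative sketch (not from the paper): tabular Q-learning
# on a toy 1-D corridor. The agent is given nothing but a scalar reward
# at the goal, and goal-reaching behavior emerges from trial and error.
import random

N_STATES = 6          # states 0..5; state 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # pick the highest-valued action, breaking ties at random
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update: nudge Q toward reward + discounted future value
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

print([greedy(s) for s in range(N_STATES - 1)])  # expected: [1, 1, 1, 1, 1]
```

After a few hundred episodes, the greedy policy steps toward the goal in every state, purely because doing so maximizes reward.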

A team at the Viterbi School of Engineering at the University of Southern California has created something that could turn the tide in how fast vaccines come into existence.

They created an AI framework that can significantly speed up the analysis of COVID vaccine candidates and identify the best preventive medical therapies. This comes at a time when more and more COVID mutations are emerging, calling the efficacy of existing vaccines into question.

Virologists are concerned that the mutations will evolve past the first vaccines. The UK has even set up a genomics consortium solely to track where these mutations are cropping up. In the global picture, while some poorer countries wait for access to the vaccine, they remain sitting ducks for highly infectious mutations.

This small-scale humanoid is designed to do parkour over challenging terrain.

For a long time, having a bipedal robot that could walk on a flat surface without falling over (and that could also maybe occasionally climb stairs or something) was a really big deal. But we’re more or less past that now. Thanks to the talented folks at companies like Agility Robotics and Boston Dynamics, we now expect bipedal robots to meet or exceed actual human performance for at least a small subset of dynamic tasks. The next step seems to be to find ways of pushing the limits of human performance, which it turns out means acrobatics. We know that IHMC has been developing their own child-size acrobatic humanoid named Nadia, and now it sounds like researchers from Sangbae Kim’s lab at MIT are working on a new acrobatic robot of their own.

We’ve seen a variety of legged robots from MIT’s Biomimetic Robotics Lab, including Cheetah and HERMES. Recently, they’ve been doing a bunch of work with their spunky little Mini Cheetahs (developed with funding and support from Naver Labs), which are designed for some dynamic stuff like gait exploration and some low-key four-legged acrobatics.

Army researchers have developed a pioneering framework that provides a baseline for the development of collaborative multi-agent systems.

The framework is detailed in the survey paper “Survey of recent multi-agent learning algorithms utilizing centralized training,” which is featured in the SPIE Digital Library. Researchers said the work will support research into reinforcement learning approaches for developing collaborative multi-agent systems, such as teams of robots that could work side by side with future soldiers.

“We propose that the underlying information sharing mechanism plays a critical role in centralized learning for multi-agent systems, but there is limited study of this phenomenon within the research community,” said Army researcher and computer scientist Dr. Piyush K. Sharma of the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “We conducted this survey of the state-of-the-art in reinforcement learning algorithms and their information sharing paradigms as a basis for asking fundamental questions on centralized learning for multi-agent systems that would improve their ability to work together.”
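As a rough illustration of the centralized-training idea the survey examines, here is a hedged, minimal sketch; the coordination game, the shared Q-table, and all names below are illustrative assumptions rather than anything taken from the paper. During training, a single Q-table over joint actions lets each agent's updates “see” its teammate's choices (the information-sharing mechanism Sharma refers to), while at execution time each agent simply acts out its own part of the learned joint policy:

```python
# A hedged, minimal sketch of centralized training for a multi-agent
# system; the coordination game and update rule are illustrative
# assumptions, not taken from the survey. Two agents share one Q-table
# over joint actions, so each agent's learning "sees" its teammate's
# choices: the information-sharing mechanism the quote refers to.
import random
import itertools

ACTIONS = [0, 1]  # each agent picks action 0 or 1

def joint_reward(a1, a2):
    # cooperative coordination game: the team scores only when agents match
    return 1.0 if a1 == a2 else 0.0

# Centralized training: a single Q-table indexed by the joint action
Q = {ja: 0.0 for ja in itertools.product(ACTIONS, repeat=2)}
ALPHA, EPS = 0.1, 0.2

for step in range(5000):
    if random.random() < EPS:
        ja = (random.choice(ACTIONS), random.choice(ACTIONS))  # explore
    else:
        ja = max(Q, key=Q.get)                                 # exploit
    r = joint_reward(*ja)
    Q[ja] += ALPHA * (r - Q[ja])  # stateless, bandit-style update

# Decentralized execution: each agent reads off its own component of the
# learned best joint action and can act without run-time communication.
best_joint = max(Q, key=Q.get)
print("learned joint values:", Q)
print("agent 1 acts:", best_joint[0], "| agent 2 acts:", best_joint[1])
```

The algorithms the survey covers share richer information during training (observations, gradients, or value estimates), but the structural split, learn with shared information and then execute independently, is the same.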