DeepMind says it has developed an AI model, called RoboCat, that can perform a range of tasks across different models of robotic arms. That alone isn’t especially novel. But DeepMind claims the model is the first capable of solving multiple tasks, and adapting to new ones, using different, real-world robots.
“We demonstrate that a single large model can solve a diverse set of tasks on multiple real robotic embodiments and can quickly adapt to new tasks and embodiments,” Alex Lee, a research scientist at DeepMind and a co-contributor on the team behind RoboCat, told TechCrunch in an email interview.
RoboCat — which was inspired by Gato, a DeepMind AI model that can analyze and act on text, images and actions — was trained on images and action data collected from robots both in simulation and in real life. The data, Lee says, came from a combination of other robot-controlling models operating inside virtual environments, humans controlling robots and previous iterations of RoboCat itself.