The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.
Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason and generalize.
In the paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences, suggests that a general, principled, and powerful framework for utilizing unlabelled data could be derived from RL, enabling ML systems that leverage large datasets to better understand the real world.
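To make the offline RL side of this proposal concrete, the following is a minimal, illustrative sketch of the core idea: learning a policy entirely from a fixed dataset of previously logged transitions, with no further environment interaction. The toy three-state environment, the dataset, and the hyperparameters here are assumptions chosen for illustration, not details from Levine's paper.

```python
import numpy as np

# Toy offline Q-learning: all learning happens from a static dataset of
# (state, action, reward, next_state) transitions -- the defining
# constraint of offline RL. The environment and data are hypothetical.

n_states, n_actions = 3, 2
gamma, lr = 0.9, 0.1

# A fixed, previously logged dataset of transitions.
dataset = [
    (0, 1, 0.0, 1),
    (1, 1, 0.0, 2),
    (2, 0, 1.0, 2),
    (1, 0, 0.0, 0),
    (0, 1, 0.0, 1),
    (1, 1, 0.0, 2),
]

Q = np.zeros((n_states, n_actions))
for _ in range(500):  # repeated passes over the static dataset
    for s, a, r, s_next in dataset:
        # Standard Q-learning target, computed from logged data only.
        target = r + gamma * Q[s_next].max()
        Q[s, a] += lr * (target - Q[s, a])

print("Greedy policy per state:", Q.argmax(axis=1))
```

In a full offline RL method the tabular Q-values would be replaced by a function approximator and regularized to stay close to the data distribution, but the sketch captures the key point: the agent extracts knowledge about the world purely from recorded actions and their consequences.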