Nov 13, 2020

Google Brain Paper Demystifies Learned Optimizers

Posted by in categories: information science, robotics/AI

Learned optimizers are algorithms that can be trained to solve optimization problems. Although learned optimizers can outperform baseline optimizers in restricted settings, the ML research community understands remarkably little about their inner workings or why they work as well as they do. In a paper currently under review for ICLR 2021, a Google Brain research team attempts to shed some light on the matter.

The researchers note that optimization algorithms are the basis of modern machine learning. A popular line of research in recent years has therefore focused on learning the optimization algorithms themselves, by directly parameterizing an optimizer and training it on a distribution of tasks.
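To make "parameterizing an optimizer" concrete, here is a minimal sketch in Python (the names and shapes are illustrative, not taken from the paper). Viewed abstractly, an optimizer is just a function mapping a gradient and some internal state to a parameter update; a hand-designed optimizer fixes that function as a simple formula.

```python
import numpy as np

# An optimizer, viewed abstractly, maps (gradient, state) to
# (parameter update, new state). A hand-designed example is SGD
# with momentum: a fixed formula with two scalar hyperparameters.
def sgd_momentum(grad, velocity, lr=0.01, beta=0.9):
    velocity = beta * velocity + grad   # accumulate momentum
    return -lr * velocity, velocity     # update and new state

# One step on a toy 2-parameter problem.
grad = np.array([0.5, -1.0])
update, velocity = sgd_momentum(grad, np.zeros(2))
```

A learned optimizer keeps this same interface but replaces the fixed formula with a parametric function whose own weights are meta-trained across many tasks.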

Research on learned optimizers aims to replace hand-designed baseline optimizers with a parametric optimizer that is trained on a set of tasks and can then be applied more broadly. In contrast to baseline optimizers, which use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations.
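As a rough illustration of such a parameterization, here is a deliberately simplified sketch (every name here is hypothetical; the paper's actual optimizers are more elaborate): the fixed update formula above is replaced by a small MLP that maps per-parameter gradient features to an update step.

```python
import numpy as np

def init_theta(n_features=2, n_hidden=8, seed=0):
    """Weights of the update rule itself; these, not the task's
    parameters, are what meta-training optimizes."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0.0, 0.1, (n_hidden, n_features)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.1, (1, n_hidden)),
        "b2": np.zeros(1),
    }

def learned_update(grad, velocity, theta):
    """Per-parameter update: a tiny MLP maps gradient features to a
    step. It is applied elementwise, so one small network serves
    every parameter of whatever task is being optimized."""
    feats = np.stack([grad.ravel(), velocity.ravel()])        # (2, n_params)
    h = np.tanh(theta["W1"] @ feats + theta["b1"][:, None])   # hidden layer
    step = (theta["W2"] @ h + theta["b2"][:, None]).ravel()   # (n_params,)
    return step.reshape(grad.shape)

# Example: one step on a 2-parameter task.
theta = init_theta()
update = learned_update(np.array([0.5, -1.0]), np.zeros(2), theta)
```

In practice, the optimizer's weights theta would be meta-trained by unrolling many steps of this update on sampled tasks and minimizing the resulting losses with respect to theta; that outer training loop is omitted here for brevity.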
