Amid other A.I.-focused announcements, Google today shared that its newer “multisearch” feature is now rolling out to users globally on mobile devices, anywhere that Google Lens is already available. The search feature, which lets users search with text and images at the same time, was first introduced last April as a way to modernize Google search to take better advantage of the smartphone’s capabilities. A variation, “multisearch near me,” which targets searches to local businesses, will also become globally available over the next few months, as will multisearch for the web and a new Lens feature for Android users.
As Google previously explained, multisearch is powered by an A.I. technology called Multitask Unified Model, or MUM, which can understand information across a variety of formats, including text, photos, and videos, and then draw insights and connections between topics, concepts, and ideas. Google put MUM to work within its Google Lens visual search features, allowing users to add text to a visual search query.
“We redefined what we mean to search by introducing Lens. We’ve since brought Lens directly to the search bar and we continue to bring new capabilities like shopping and step-by-step homework help,” Prabhakar Raghavan, Google’s SVP in charge of Search, Assistant, Geo, Ads, Commerce and Payments products, said at a press event in Paris.