Google today announced a pair of new artificial intelligence experiments from its research division that let web users dabble in semantics and natural language processing. For Google, a company whose primary product is a search engine that traffics mostly in text, these advances in AI are integral to its business and to its goal of making software that can understand and parse elements of human language.
The new website will house Google's interactive AI language tools, a collection the company is calling Semantic Experiences. The primary sub-field of AI it's showcasing is known as word vectors, a type of natural language understanding that maps "semantically similar phrases to nearby points based on equivalence, similarity or relatedness of ideas and language." It's a way to "enable algorithms to learn about the relationships between words, based on examples of actual language usage," say Ray Kurzweil, notable futurist and director of engineering at Google Research, and product manager Rachel Bernstein in a blog post. Google has published its work on the topic in a paper here, and it's also made a pre-trained module available on its TensorFlow platform for other researchers to experiment with.
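The core idea behind word vectors can be sketched in a few lines: words become lists of numbers, and semantically related words end up pointing in similar directions, which cosine similarity measures. The tiny 3-dimensional vectors below are invented for illustration; real systems like Google's learn vectors with hundreds of dimensions from large text corpora.

```python
import math

# Toy hand-picked "word vectors" (illustrative values only, not real
# learned embeddings).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 means nearly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words map to nearby points, so their similarity is higher.
print(cosine_similarity(vectors["king"], vectors["queen"]))  # high
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low
```

With learned vectors the same comparison works for whole phrases and sentences, which is what lets these tools match meaning rather than exact keywords.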
The first of the two publicly available experiments released today is called Talk to Books, and it quite literally lets you converse with a machine learning-trained algorithm that surfaces answers to questions with relevant passages from human-written text. As described by Kurzweil and Bernstein, Talk to Books lets you “make a statement or ask a question, and the tool finds sentences in books that respond, with no dependence on keyword matching.” The duo adds, “In a sense you are talking to the books, getting responses which can help you determine if you’re interested in reading them or not.”
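Retrieval "with no dependence on keyword matching" can be sketched as nearest-neighbor search over sentence embeddings: embed the query, then rank stored passages by similarity. The passages, embeddings, and `talk_to_books` function below are all hypothetical stand-ins; a real system would produce the embeddings with a learned encoder such as Google's pre-trained TensorFlow module.

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Hypothetical pre-computed sentence embeddings for book passages.
passages = {
    "The fox crept through the moonlit forest.":  [0.2, 0.9, 0.1],
    "Interest rates rose for the third quarter.": [0.9, 0.1, 0.2],
    "An owl watched silently from the old oak.":  [0.3, 0.8, 0.2],
}

def talk_to_books(query_vector, passages, top_k=2):
    """Return the passages whose embeddings lie nearest the query embedding."""
    ranked = sorted(passages,
                    key=lambda p: cosine_similarity(query_vector, passages[p]),
                    reverse=True)
    return ranked[:top_k]

# A query about nocturnal animals (toy embedding) surfaces the two nature
# passages even though it shares no keywords with them.
query = [0.25, 0.85, 0.15]
print(talk_to_books(query, passages))
```

The point of the sketch is the ranking step: responses are chosen because their meaning lies near the query in vector space, not because they repeat its words.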