Google showed off its next-generation AI by talking to Pluto and a paper airplane

Artificial intelligence is a huge part of Google’s business, and at this year’s I/O conference, the company highlighted its work on AI language understanding. The star of the show was an experimental model called LaMDA, which Google says could one day supercharge its conversational AI assistants and allow for more natural conversations.

“It’s really impressive to see how LaMDA can carry on a conversation about any topic,” said Google CEO Sundar Pichai during the presentation. “It’s amazing how sensible and interesting the conversation is. But it’s still early research, so it doesn’t get everything right.”

To demonstrate LaMDA’s abilities, the company showed videos of two short conversations conducted with the model. In the first, LaMDA answered questions while pretending to be Pluto; in the second, it stood in for a paper airplane. As Pichai noted, the model was able to refer to concrete facts and events throughout the conversation, like the New Horizons probe that visited Pluto in 2015. (For some strange reason, both Pichai and LaMDA-Pluto erroneously referred to Pluto as a planet! It is, of course, a dwarf planet.)

The demos are certainly impressive! With apparently no domain-specific training beforehand, LaMDA was able to speak from the point of view of two very disparate objects. This is Google’s bread and butter in the AI world, though: the company has been pushing the boundaries of language understanding and generation for years. Most notably, it spearheaded the use of a machine learning architecture known as the transformer, which is exceptionally good at handling language and underpins rival systems like OpenAI’s GPT-3.
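LaMDA itself isn’t publicly available, but as a rough illustration of what “conversing” with a transformer-based language model looks like in practice, here’s a minimal sketch using the open-source Hugging Face transformers library and the public GPT-2 model. Both are stand-ins chosen for this example, not anything Google demoed.

```python
# Minimal sketch: prompting an open transformer-based language model.
# GPT-2 via Hugging Face's `transformers` library is used as a stand-in,
# since LaMDA itself is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Frame the conversation the way Google's demo did: the model answers
# as Pluto. GPT-2 is far weaker than LaMDA, so expect rougher output.
prompt = (
    "The following is a conversation in which Pluto answers questions.\n"
    "Q: What would I see if I visited you?\n"
    "A:"
)

result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

At a very high level, this prompt-then-generate loop is what’s happening in demos like Google’s; the difference lies in model scale and dialogue-specific training.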

But what’s the point of holding a conversation with a machine? As Pichai noted onstage, much of Google’s AI work is about retrieving information, whether that’s translating between languages or understanding what users mean when they search the web. If Google can get AI to understand language better, it can improve its core products. It could turn searching the web, or even using your phone, into a conversation that feels natural and flowing.

But it’s definitely hard work. As Pichai noted: “Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. […] The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges.”

One notable point in the presentation, though, was Google’s emphasis on ensuring that LaMDA adheres to its AI principles: i.e., making sure it is socially beneficial, avoids bias, and so on. This is strange territory for Google to trumpet expertise on right now, after the company fired two of its most prominent AI ethics researchers, Timnit Gebru and Margaret Mitchell.

The story of their firing is complicated, but it hinges on the reception of a research paper, authored by the pair along with other collaborators, on the dangers posed by exactly these types of language models. None of this was even alluded to onstage. It seems that even though the fallout from this episode hasn’t yet subsided, Google is too fixated on its future with AI to address these concerns head-on.
