Meta’s AI research labs have created a new state-of-the-art chatbot and are letting members of the public talk to the system in order to collect feedback on its capabilities.
The bot is called BlenderBot 3, and it can be accessed via the web.

The bot is a prototype and built on Meta’s previous work with large language models, or LLMs. Like all LLMs, BlenderBot is trained on vast datasets of text, which it mines for statistical patterns in order to generate language. Such systems are flexible but come with well-known flaws: they regurgitate biases in their training data and often invent answers to users’ questions.
This latter issue is something Meta specifically wants to test with BlenderBot. A big feature of the chatbot is that it’s capable of searching the internet in order to talk about specific topics. Even more importantly, users can then click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources.
By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it’s worked hard to “minimize the bots’ use of vulgar language, slurs, and culturally insensitive comments.” Users will have to opt in to have their data collected, and if so, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community.
“We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI,” Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge.
Releasing prototype AI chatbots to the public has, historically, been a risky move for tech companies. In 2016, Microsoft released a chatbot named Tay on Twitter that learned from its interactions with the public. Somewhat predictably, Twitter’s users soon coached Tay into regurgitating a range of racist, antisemitic, and misogynistic statements, and Microsoft pulled the bot offline less than a day after launch.
Meta says the world of AI has changed a lot since Tay’s malfunction and that BlenderBot has all sorts of safety rails that should stop Meta from repeating Microsoft’s mistakes.
Crucially, says Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it’s capable of remembering what users say within a conversation (and will even retain this information via browser cookies if a user exits the program and returns later), but this data will only be used to improve the system further down the line.
“It’s just my personal opinion, but that [Tay] episode is relatively unfortunate, because it created this chatbot winter where every institution was afraid to put out public chatbots for research,” Williamson tells The Verge.
Williamson says that most chatbots in use today are narrow and task-oriented. Think of customer service bots, for example, which often just present users with a preprogrammed dialogue tree, narrowing down their query before handing them off to a human agent who can actually get the job done. The real prize is building a system that can conduct a conversation as free-ranging and natural as a human’s, and Meta says the only way to achieve this is to let bots have free-ranging and natural conversations.
“This lack of tolerance for bots saying unhelpful things, in the broad sense of it, is unfortunate,” says Williamson. “And what we’re trying to do is release this very responsibly and push the research forward.”
In addition to putting BlenderBot 3 on the web, Meta is also publishing the underlying code, training dataset, and smaller model variants, with researchers able to request access to the largest model.