DeepMind’s new chatbot uses Google searches plus humans to give better answers
The lab trained a chatbot to learn from human feedback and search the internet for information to support its claims.
The trick to making a good AI-powered chatbot might be to have humans tell it how to behave—and force the model to back up its claims using the internet, according to a new paper by Alphabet-owned AI lab DeepMind.
In the non-peer-reviewed paper, out today, the team unveils Sparrow, an AI chatbot built on top of DeepMind's large language model Chinchilla.
Sparrow is designed to talk with humans and answer questions, using a live Google search to inform those answers. Based on how useful people find those answers, it’s then trained using a reinforcement learning algorithm, which learns by trial and error to achieve a specific objective. This system is intended to be a step forward in developing AIs that can talk to humans without dangerous consequences, such as encouraging people to harm themselves or others.
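In rough outline, that setup looks something like the loop sketched below. This is only an illustrative Python sketch, not DeepMind's code; the function names (search_web, generate_candidates, human_preference, rl_update) are hypothetical placeholders for the real search, sampling, rating, and training components.

```python
# A minimal sketch of the training loop described above, NOT DeepMind's actual code.
# All functions here are hypothetical stand-ins for the real components.

import random

def search_web(question: str) -> list[str]:
    # Placeholder for the live search step; a real system would query a
    # search API and return text snippets to use as evidence.
    return [f"snippet about: {question}"]

def generate_candidates(question: str, evidence: list[str], n: int = 4) -> list[str]:
    # Placeholder for sampling n candidate answers from the language model,
    # conditioned on the question and the retrieved evidence.
    return [f"answer {i} to '{question}' citing {evidence[0]}" for i in range(n)]

def human_preference(candidates: list[str]) -> int:
    # Placeholder for a human rater choosing the answer they find most useful.
    return random.randrange(len(candidates))

def rl_update(question: str, chosen: str, rejected: list[str]) -> None:
    # Placeholder for the reinforcement-learning update that nudges the model
    # toward the kinds of answers people prefer.
    pass

def training_step(question: str) -> None:
    evidence = search_web(question)
    candidates = generate_candidates(question, evidence)
    best = human_preference(candidates)
    rl_update(question, candidates[best],
              [c for i, c in enumerate(candidates) if i != best])

training_step("Why is the sky blue?")
```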
Large language models generate text that sounds like something a human would write. They are an increasingly crucial part of the internet's infrastructure, being used to summarize texts, build more powerful online search tools, and serve as customer service chatbots.
But they are trained by scraping vast amounts of data and text from the internet, which inevitably reflects lots of harmful biases. It only takes a little prodding before they start spewing toxic or discriminatory content. In an AI that is built to have conversations with humans, the results could be disastrous. A conversational AI without appropriate safety measures in place could say offensive things about ethnic minorities or suggest that people drink bleach, for example.
AI companies hoping to develop conversational AI systems have tried several techniques to make their models safer.
OpenAI, creator of the famous large language model GPT-3, and AI startup Anthropic have used reinforcement learning to incorporate human preferences into their models. And Meta's AI chatbot BlenderBot uses an online search to inform its answers.
DeepMind’s Sparrow brings all these techniques together in one model.
DeepMind presented human participants with multiple answers the model gave to the same question and asked them which one they liked most. Raters were then asked whether they thought the answers were plausible, and whether Sparrow had supported them with appropriate evidence, such as links to sources. The model managed a plausible answer to a factual question, supported with evidence retrieved from the internet, 78% of the time.
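A common way to turn such comparisons into a training signal is to fit a reward model that scores the answer raters preferred higher than the one they rejected. The snippet below is a minimal, generic PyTorch sketch of that idea, not code from the Sparrow paper; the random "features" stand in for real model representations of answers.

```python
# A generic sketch of learning a reward model from pairwise human preferences.
# Illustrative only; the tiny feature vectors stand in for real model embeddings.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores an answer representation; higher means more preferred."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.score(features).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style objective: push the chosen answer's score
    # above the rejected answer's score.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Toy training step on random "features" standing in for encoded answers.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

chosen = torch.randn(8, 16)    # representations of answers raters preferred
rejected = torch.randn(8, 16)  # representations of the answers they passed over

loss = preference_loss(model(chosen), model(rejected))
loss.backward()
opt.step()
```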
In formulating those answers, it followed 23 rules determined by the researchers, such as not offering financial advice, not making threatening statements, and not claiming to be a person.
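In the paper, rule compliance is judged by human raters probing the model; purely as an illustration of the general idea, the sketch below folds rule violations into the reward as a penalty. The rules and the crude keyword checks here are hypothetical examples, not the paper's 23 rules or its method for detecting violations.

```python
# An illustrative sketch of combining a preference score with rule penalties.
# The rules and keyword checks are made up for demonstration purposes.

RULES = {
    "no_financial_advice": lambda text: "you should invest" not in text.lower(),
    "no_threats": lambda text: "or else" not in text.lower(),
    "no_impersonation": lambda text: "i am a human" not in text.lower(),
}

def rule_penalty(answer: str) -> float:
    """Count how many rules an answer violates."""
    return float(sum(not check(answer) for check in RULES.values()))

def total_reward(preference_score: float, answer: str, penalty_weight: float = 1.0) -> float:
    # Combine the human-preference score with a penalty for rule violations,
    # so training discourages unsafe answers as well as unhelpful ones.
    return preference_score - penalty_weight * rule_penalty(answer)

print(total_reward(0.8, "I am a human, and you should invest in this coin."))
```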
The difference between this approach and its predecessors is that DeepMind hopes to use “dialogue in the long term for safety,” says Geoffrey Irving, a safety researcher at DeepMind.
“That means we don’t expect that the problems that we face in these models—either misinformation or stereotypes or whatever—are obvious at first glance, and we want to talk through them in detail. And that means between machines and humans as well,” he says.
DeepMind’s idea of using human preferences to optimize how an AI model learns is not new, says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.
“But the improvements are convincing and show clear benefits to human-guided optimization of dialogue agents in a large-language-model setting,” says Hooker.
Douwe Kiela, a researcher at AI startup Hugging Face, says Sparrow is “a nice next step that follows a general trend in AI, where we are more seriously trying to improve the safety aspects of large-language-model deployments.”
But there is much work to be done before these conversational AI models can be deployed in the wild.
Sparrow still makes mistakes. The model sometimes goes off topic or makes up random answers. Determined participants were also able to make the model break rules 8% of the time. (This is still an improvement over older models: DeepMind’s previous models broke rules three times more often than Sparrow.)
“For areas where human harm can be high if an agent answers, such as providing medical and financial advice, this may still feel to many like an unacceptably high failure rate,” Hooker says. The work is also built around an English-language model, “whereas we live in a world where technology has to safely and responsibly serve many different languages,” she adds.
And Kiela points out another problem: “Relying on Google for information-seeking leads to unknown biases that are hard to uncover, given that everything is closed source.”