Imagine playing a game with a friend in which you pass secret messages through cryptic phrases: one person gives clues while the other tries to deduce the hidden meaning, often guided by yes-or-no questions. This concept underpins recent research from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), where scientists developed a “consensus game” to improve AI’s text generation and comprehension capabilities.
In this framework, two components of an AI system engage in a game-like interaction: one part generates sentences, like the clue-giver, while the other interprets and assesses them, like the guesser. The researchers found that by treating this interaction as a structured game, they could substantially improve the AI’s ability to deliver accurate and coherent responses.
Traditionally, large language models produce answers in one of two ways: by generating a response directly (generative querying) or by scoring a set of candidate answers (discriminative querying). Unfortunately, these two procedures can disagree. Asked “Who is the president of the United States?”, the generative approach might answer “Joe Biden,” while a discriminative query of the same model might incorrectly score an alternative such as “Barack Obama” higher.
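To make the two query modes concrete, here is a minimal sketch. The `log_prob(prompt, continuation)` helper is hypothetical, standing in for any function that returns a language model’s log-probability of a continuation given a prompt, and the correctness prompt template is illustrative rather than taken from the paper.

```python
# Sketch of the two query modes. `log_prob(prompt, continuation)` is a
# hypothetical helper returning a language model's log-probability of
# `continuation` given `prompt`.

def generative_query(question, candidates, log_prob):
    """Direct generation: pick the answer the model itself is most
    likely to produce, i.e. argmax_y P(y | question)."""
    return max(candidates, key=lambda y: log_prob(question, y))

def discriminative_query(question, candidates, log_prob):
    """Scoring: ask the model to judge each candidate answer,
    i.e. argmax_y P(correct | question, y)."""
    def correctness(y):
        # Illustrative prompt template, not the paper's.
        prompt = f"Q: {question}\nA: {y}\nIs this answer correct?"
        return log_prob(prompt, " Yes") - log_prob(prompt, " No")
    return max(candidates, key=correctness)

# Nothing forces these two calls to agree: the first ranks answers by
# generation probability, the second by judged correctness, and those
# distributions come from different "parts" of the same model.
```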
So, how can we reconcile these conflicting evaluation procedures to produce consistent, accurate predictions? “Imagine a new way for language models to decode and generate text,” explains Athul Jacob, an MIT PhD student in electrical engineering and computer science. “We devised a game-theoretic method that treats the entire process as a strategic game, where a generator aims to convey the correct message through natural language to a discriminator. Our strategy was to find the ‘approximate equilibria’ of this game, resulting in a novel decoding technique called ‘equilibrium ranking.’ It demonstrates how game-theoretic strategies can resolve significant challenges in making language models more reliable.”
The results have been promising. When tested on tasks including reading comprehension, commonsense reasoning, math problem-solving, and dialogue, the team’s equilibrium-ranking algorithm consistently outperformed much larger models. “It’s remarkable that our model outperformed models ten times its size,” says Jacob.
The Game Is Afoot
You might be familiar with “Diplomacy,” a board game of strategic negotiation and shifting alliances set in pre-World War I Europe. The game recently inspired the development of “Cicero,” an AI agent capable of successfully navigating complex human interactions in Diplomacy, work that laid the groundwork for the consensus game concept.
The consensus game works by driving the two components toward an equilibrium, ensuring the generated answers are both accurate and faithful to what the model actually knows. The method iteratively refines the interplay between the generative and discriminative elements until they reach a mutual agreement that reflects the model’s real knowledge.
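One way to picture that refinement loop, as a toy sketch rather than the paper’s exact algorithm (which uses no-regret learning dynamics): over a fixed set of candidate answers, repeatedly move each player toward a KL-regularized best response to the other, where the KL penalty keeps both players close to the base model’s initial beliefs, then rank candidates by how strongly the two equilibrium policies jointly endorse them. All parameter values below are illustrative.

```python
import numpy as np

def equilibrium_ranking(p_gen, p_disc, lam=1.0, steps=200, damp=0.5):
    """Toy consensus-game solver over a fixed candidate answer set.

    p_gen : initial generator distribution over candidates
            (normalized generative-query scores).
    p_disc: initial discriminator probability, in (0, 1), that each
            candidate is correct (from a discriminative query).
    lam   : weight of the KL penalty tying each player to its initial
            policy (larger = stay closer to the base model).
    """
    p_gen = np.asarray(p_gen, dtype=float)
    p_gen = p_gen / p_gen.sum()
    p_disc = np.asarray(p_disc, dtype=float)

    g1 = p_gen.copy()   # generator policy when told "be correct"
    g0 = p_gen.copy()   # generator policy when told "be incorrect"
    d1 = p_disc.copy()  # discriminator's P(correct | candidate)

    for _ in range(steps):
        # KL-regularized best response for the generator:
        # pi(y) ∝ p_gen(y) * exp(reward(y) / lam), where the reward is
        # the chance the discriminator agrees with its hidden label.
        new_g1 = p_gen * np.exp(d1 / lam)
        new_g0 = p_gen * np.exp((1.0 - d1) / lam)
        new_g1 /= new_g1.sum()
        new_g0 /= new_g0.sum()

        # Discriminator's evidence: posterior that the generator was
        # told "be correct", given the candidate it produced.
        post = g1 / (g1 + g0 + 1e-12)
        yes = p_disc * np.exp(post / lam)
        no = (1.0 - p_disc) * np.exp((1.0 - post) / lam)
        new_d1 = yes / (yes + no)

        # Damped updates for stability of the fixed-point iteration.
        g1 = (1 - damp) * g1 + damp * new_g1
        g0 = (1 - damp) * g0 + damp * new_g0
        d1 = (1 - damp) * d1 + damp * new_d1

    # Rank by joint endorsement of "correct".
    return g1 * d1

# The earlier disagreement in miniature: the generator favors one
# answer, the discriminator another, and the game settles the dispute.
candidates = ["Joe Biden", "Barack Obama"]
scores = equilibrium_ranking(p_gen=[0.8, 0.2], p_disc=[0.4, 0.6])
print(dict(zip(candidates, scores.round(3))))
```

The final score, the product of the generator’s and discriminator’s equilibrium probabilities, mirrors the consensus idea: a candidate wins only if both the generating and the judging components endorse it.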
Implementing the consensus game in practical applications, especially question answering, does bring computational challenges. For instance, benchmarks like MMLU, which spans thousands of multiple-choice questions across dozens of subjects, require the model to run the game to consensus for every individual query, significantly increasing the workload.
Interestingly, the system faced challenges with elementary math word problems: it struggled to generate incorrect answers, an essential counterpoint the game needs in order to recognize the right one.
“The progress in strategic decision-making and language generation by AI systems has been impressive in recent years, but integrating the two is just scratching the surface. Equilibrium ranking represents a significant step forward, and there’s ample opportunity to tackle more intricate problems,” Jacob adds.
The research team also sees potential in building on the approach by combining outputs from their current method, a direction that shows promise for more accurate and consistent results across tasks, from factual queries to open-ended generation. Such advances could help language models like ChatGPT deliver more reliable and factual responses in everyday applications.
Google Research Scientist Ahmad Beirami, who was not involved in the study, says: “While language models like ChatGPT and Gemini have turned many tasks into conversational interactions, the statistical decoding that produces their responses has remained static for years. The MIT researchers’ game-theoretic framework, which decodes by computing the equilibrium of a consensus game, is a groundbreaking concept; its significant performance gains pave the way for a new paradigm in language model applications.”
Jacob collaborated on the research with Yikang Shen of the MIT-IBM Watson AI Lab and assistant professors Gabriele Farina and Jacob Andreas of the MIT Department of Electrical Engineering and Computer Science, both CSAIL affiliates. They presented their findings at the International Conference on Learning Representations (ICLR), where the work was designated a “spotlight paper.” The research also won a “best paper award” at the NeurIPS R0-FoMo Workshop in December 2023.