MIT researchers have created an artificial intelligence-enabled bot capable of defeating human players in tricky online multiplayer games where player identities and motivations are concealed.
Numerous game bots have been developed to compete against human players. Earlier this year, a team from Carnegie Mellon University created the first bot capable of defeating professionals in multiplayer poker. AlphaGo, developed by DeepMind, made headlines in 2016 after defeating a professional Go player. Bots have also been developed to defeat professional chess players or to collaborate in cooperative games such as online capture the flag. In all of these games, however, the bot already knows who its opponents and teammates are.
The researchers will introduce DeepRole, the first gaming bot capable of winning competitive multiplayer games in which the players’ team allegiances are initially unknown, at the Conference on Neural Information Processing Systems next month. The bot’s architecture incorporates novel “deductive reasoning” into an AI algorithm widely used for playing poker. This enables it to reason about partially observable actions in order to determine whether a given player is a teammate or an opponent. The bot can thus work out which alliances to form and which moves to make to secure its team’s victory.
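As a rough illustration of the deductive component alone (a simplified sketch, not the researchers’ actual architecture, which combines this reasoning with a poker-style equilibrium algorithm): with five players and two hidden spies, there are only ten possible spy-pair assignments, and each publicly observed event can rule some of them out.

```python
from itertools import combinations

# In a five-player "Avalon" game, the two spies can be any of C(5,2) = 10 pairs.
players = range(5)
possible_spy_pairs = set(combinations(players, 2))

def prune_after_failed_mission(candidates, mission_team, num_fails):
    """Deductive step (simplified): a failed mission implies at least
    `num_fails` of its members are spies, so eliminate every spy-pair
    assignment that could not have produced that many fail cards."""
    team = set(mission_team)
    return {pair for pair in candidates
            if len(team & set(pair)) >= num_fails}

# Hypothetical observation: the two-player mission {0, 1} failed with one fail card.
remaining = prune_after_failed_mission(possible_spy_pairs, [0, 1], 1)
# Every surviving assignment now names player 0 or player 1 as a spy.
```

Exact deduction like this is what lets the bot treat a game of hidden roles as a shrinking set of candidate worlds rather than pure guesswork.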
DeepRole played with and against human competitors in more than 4,000 rounds of the online multiplayer game “The Resistance: Avalon.” Players attempt to deduce their peers’ true identities as the game progresses, while concealing their own. DeepRole consistently outperformed human players, both as a teammate and as an adversary.
“By substituting a bot for a human teammate, you should expect a higher win rate for your squad. Bots are superior partners,” asserts first author Jack Serrino ’18, who double-majored in electrical engineering and computer science at MIT and is an avid online “Avalon” player.
The work is part of a larger effort to improve our understanding of how people make socially informed decisions. This might aid in the development of robots that are more capable of understanding, learning from, and cooperating with humans.
“Humans learn from and collaborate with one another, which allows us to do tasks that neither of us could do alone,” says co-author Max Kleiman-Weiner, a postdoctoral researcher at MIT and Harvard University in the Center for Brains, Minds, and Machines and the Department of Brain and Cognitive Sciences. “Games like ‘Avalon’ better represent the complex social environments that humans encounter on a daily basis. You must figure out who is on your team and who will collaborate with you, whether it is your first day of school or another day at the office.”
Serrino and Kleiman-Weiner are co-authors of the paper with Harvard’s David C. Parkes and MIT’s Joshua B. Tenenbaum, a professor of computational cognitive science and a member of the Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds, and Machines.
Bot capable of deduction
In “Avalon,” three players are randomly assigned to a “resistance” team and two to a “spy” team. The two spy players know the roles of all other players. In each round, one player proposes a mission team of two or three players. All players then vote publicly and simultaneously to approve or reject the proposed team. If a majority approves, the mission team privately decides whether the mission succeeds or fails. The mission succeeds only if every member chooses “succeed”; a single “fail” card makes it fail. Resistance players must always choose “succeed,” while spy players may choose either outcome. The resistance team wins after three successful missions; the spy team wins after three failed missions.
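The voting and mission-resolution rules above can be sketched directly. This is a minimal illustration of the logic for the five-player game described here, not the researchers’ code:

```python
def vote_passes(votes):
    """A proposed mission team is approved if a majority of the
    five players publicly vote to approve it."""
    return sum(votes) > len(votes) // 2

def mission_succeeds(cards):
    """A mission succeeds only if every member plays a 'succeed' card
    (True); a single 'fail' card (False) sinks it. Resistance members
    must play 'succeed'; spies may play either card."""
    return all(cards)

# Example round: three of five players approve the team,
# then one spy on the two-player mission plays a fail card.
approved = vote_passes([True, True, True, False, False])  # 3-2 majority
outcome = mission_succeeds([True, False])                 # one fail card
```

Because a failed mission reveals that at least one fail card was played by a team member, each resolved round is itself evidence about who the spies are.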
Winning the game essentially comes down to determining who is resistance and who is a spy, and then voting with your collaborators. However, this is more computationally demanding than playing chess or poker. “It’s an incomplete-information game,” Kleiman-Weiner explains. “When you begin, you have no idea who you are up against, so there is an additional discovery process of determining whom to partner with.”
At the conclusion of each mission, the bot evaluates how each player acted relative to its model of the game tree. If a player consistently makes choices that contradict the behavior the bot would expect for a given role, that player is more likely playing the other role. Eventually, the bot assigns a high probability to each player’s true role. These probabilities are used to continuously update the bot’s strategy in order to maximize its probability of victory.
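One way to picture this update (a hypothetical Bayesian sketch, not DeepRole’s actual game-tree computation; the assignments and likelihood values below are invented for illustration) is a re-weighting of the remaining role assignments by how likely each observed action would be under each assignment:

```python
def update_beliefs(beliefs, likelihood):
    """beliefs: {role_assignment: probability}. likelihood(assignment)
    gives the probability of the observed action if that assignment
    were true. Assignments the action rules out (likelihood 0) are
    eliminated; the rest are renormalized to sum to 1."""
    posterior = {a: p * likelihood(a) for a, p in beliefs.items()}
    total = sum(posterior.values())
    return {a: p / total for a, p in posterior.items() if p > 0}

# Hypothetical: two candidate spy pairs, equally likely a priori.
beliefs = {("A", "B"): 0.5, ("A", "C"): 0.5}
# Observed: player B votes down a team B should approve as resistance.
# Suppose that vote is four times more likely if B is a spy.
posterior = update_beliefs(beliefs, lambda a: 0.8 if "B" in a else 0.2)
# The assignment naming B as a spy now carries most of the probability.
```

Repeating this after every vote and mission is what drives the role probabilities toward certainty as the game progresses.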