Facebook Takes on Poker

When people were playing poker on Facebook, nobody would have imagined that Facebook itself would sit in at the table a few years down the line. Facebook has just achieved a huge milestone for Artificial Intelligence – and certainly for AI and poker – by designing an AI that beat elite human professionals at six-player no-limit Texas hold’em in a series of experiments. Here’s more about the AI they called Pluribus, what you should know, and what this could mean for card games and AI research in the near future.

“In this paper, we present Pluribus, an AI that we show is stronger than top human professionals in six-player no-limit Texas hold’em poker, the most popular form of poker played by humans,” notes the study’s abstract.

If you’re looking for more about bridge and AI, we’ve covered the World Computer Bridge Championships here as well as AI and bridge over here.

The Study and Why It’s Useful

The original study is called Superhuman AI for Multiplayer Poker and is credited to researchers Noam Brown and Tuomas Sandholm.

The entire thing is available on ResearchGate.

In short, poker AI has been around for a while in various forms, but multiplayer poker presents far more of a challenge for a computer system than the two-player, heads-up games earlier AIs mastered. There are far more variables to consider, including bluffing and the element of hidden information during play – the cards, hands and on-the-spot player choices that you can’t see at face value, or face up.

Pluribus proves that an AI can overcome hidden-information games like poker – and probably bridge, too – with more than two players, something earlier milestones in perfect-information games like chess never had to contend with.

It proves a lot more than this, too.

Hidden information is everywhere. And an AI that can out-think professional poker players might also help out-think criminals, or sift through vast amounts of information when flagging reported posts.

The AI

Several videos of the AI in action were uploaded by Carnegie Mellon University.

We went straight to the source for more information about Pluribus, and found a post written by Noam Brown for the Facebook AI Blog. Ideally, if you’re into AI at all, read the whole thing.

Pluribus learns by playing against itself – instead of adapting to how other players act, the AI builds up its strategy through self-play, trying countless variations of strategies against copies of itself.
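
If you’re curious what “learning through self-play” looks like in code, here’s a toy Python sketch. To be clear, this isn’t Pluribus’s actual training code – the real system runs a far more sophisticated form of counterfactual regret minimization over poker itself – it just illustrates the core idea: two copies of the same regret-matching learner play each other over and over, with no human data involved, and gradually settle into a balanced strategy.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie
    wins = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    if a == b:
        return 0
    return 1 if (a, b) in wins else -1

class RegretMatcher:
    """Toy regret-matching learner – the basic building block behind CFR-style self-play."""

    def __init__(self):
        self.regret_sum = {a: 0.0 for a in ACTIONS}
        self.strategy_sum = {a: 0.0 for a in ACTIONS}

    def strategy(self):
        # Play each action in proportion to its accumulated positive regret
        positive = {a: max(r, 0.0) for a, r in self.regret_sum.items()}
        total = sum(positive.values())
        if total > 0:
            return {a: r / total for a, r in positive.items()}
        return {a: 1.0 / len(ACTIONS) for a in ACTIONS}

    def sample(self, strat):
        return random.choices(ACTIONS, weights=[strat[a] for a in ACTIONS])[0]

    def update(self, my_action, opp_action, strat):
        # Regret = how much better each alternative action would have done
        actual = payoff(my_action, opp_action)
        for a in ACTIONS:
            self.regret_sum[a] += payoff(a, opp_action) - actual
            self.strategy_sum[a] += strat[a]

    def average_strategy(self):
        total = sum(self.strategy_sum.values())
        return {a: s / total for a, s in self.strategy_sum.items()}

# Two copies of the learner play each other – pure self-play, no human opponents
p1, p2 = RegretMatcher(), RegretMatcher()
for _ in range(100_000):
    s1, s2 = p1.strategy(), p2.strategy()
    a1, a2 = p1.sample(s1), p2.sample(s2)
    p1.update(a1, a2, s1)
    p2.update(a2, a1, s2)

print(p1.average_strategy())  # converges toward the balanced 1/3, 1/3, 1/3 mix
```

Swap rock-paper-scissors for a card game with betting rounds and hidden cards, scale it up enormously, and you’re in the neighbourhood of how a blueprint strategy gets built.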

“Pluribus [instead] uses an approach in which the searcher explicitly considers that any or all players may shift to different strategies beyond the leaf nodes of a subgame”, notes the study.

Search methods that conventional game-playing AI systems use to choose moves don’t always account for the hidden information found in games like poker – or bridge, for that matter. An opponent’s unseen hand is exactly the kind of information a game-playing AI has to reason around.
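
To make that a little more concrete, here’s a small, purely hypothetical Python sketch (the class, card and action names are made up for illustration, not taken from Pluribus). The point is that a poker-playing AI can’t key its decisions on the full state of the table, because the opponent’s cards are hidden – so it stores one policy per “information set”, meaning everything the player is actually allowed to observe.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfoSet:
    # Hypothetical information set: only what this player can legally see
    my_card: str     # e.g. "K" – the player's own card
    history: str     # e.g. "check,bet" – the public betting actions so far
    # The opponent's card is deliberately absent: it's hidden information

# A toy strategy table keyed by information sets, not by full game states.
# Two different true states (opponent holds an ace vs. a deuce) that look
# identical to this player share the same entry.
strategy_table = {
    InfoSet("K", "check,bet"): {"fold": 0.1, "call": 0.9},
    InfoSet("J", "check,bet"): {"fold": 0.8, "call": 0.2},
}

def act(info_set: InfoSet) -> dict:
    # Look up a policy for whatever the player can observe, or fall back to 50/50
    return strategy_table.get(info_set, {"fold": 0.5, "call": 0.5})

print(act(InfoSet("K", "check,bet")))  # {'fold': 0.1, 'call': 0.9}
```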

According to the piece on the Facebook AI Blog, Pluribus was trained “in eight days on a 64-core server and required less than 512GB of RAM”, at an estimated cloud computing cost of less than $150.

Well done!

By Alex J. Coyne