
Computers jump final checkers hurdle

Robert Mitchum, Chicago Tribune

With uniform pieces and diagonal moves, checkers is simple enough for a child to learn. But to achieve absolute mastery of the game, scientists needed to run hundreds of computers for nearly 20 years, analyzing roughly 500 billion billion scenarios.

By completing the project, a team of Canadian researchers has officially “solved” checkers, creating an unbeatable program that will choose the best move in every possible situation.

This achievement represents a major benchmark in the field of artificial intelligence, which uses games to develop complex problem-solving strategies for computers.

In 1994, a program named Chinook beat the reigning human world checkers champion, a feat that preceded Deep Blue’s famous chess defeat of grandmaster Garry Kasparov by three years. But even after the program proved dominant over humans, finishing the calculations needed to solve the game took 13 more years of research.

“Had I known 18 years ago it was this big of a problem I probably would’ve done something else,” said Jonathan Schaeffer, who led the project at the University of Alberta, “but once I started, I had to finish.”

Despite the marathon scope of the effort, Schaeffer is pleased with the results and their implications for solving other gargantuan computing problems.

“It’s 1 million times bigger than the biggest computation previously solved optimally,” he said. “I’m hoping people will try to solve something big like that with our technology or similar technology. Maybe people will do bigger and better things.”

The methods Schaeffer’s team developed in the course of solving the game, detailed Thursday on the Web site of the journal Science, may be applicable to other areas, such as business and biology. Unlike earlier “brute force” programs such as Deep Blue, the program saves time by searching only the most relevant moves among the enormous number of possible board combinations.
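
As a rough illustration of why examining only the most relevant lines of play saves work, the sketch below applies generic alpha-beta pruning to a toy game tree. This is a standard textbook technique, not the team’s actual checkers algorithm, and the tree data is made up for the example.

```python
# A minimal sketch of selective search: generic alpha-beta pruning on a toy
# game tree. Not the team's actual algorithm; the tree below is made-up data.

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with cutoffs: once a branch is provably no better than an
    already-found alternative, its remaining moves are skipped entirely."""
    if not isinstance(node, list):       # leaf: the value of a final position
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:            # opponent would never allow this line
                break                    # prune the remaining siblings
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Inner lists are positions with moves still to make; numbers evaluate final positions.
tree = [[3, 5], [2, [9, 1]], [0, -4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))    # prints 3
```

In practice the savings grow with good move ordering: the earlier a strong move is examined, the more of its weaker alternatives can be skipped.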

“We built a huge database and had to compress it into something that was manageable, and could be accessed and searched fast by people,” said Schaeffer. “That core infrastructure that we developed is generic enough for other applications.”
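
The quote describes the general problem of compressing a huge table of results while keeping it searchable, not a specific format. The sketch below uses simple run-length encoding as an assumed stand-in, not the team’s actual database scheme: long stretches of identical win/loss/draw values collapse into a handful of runs, yet individual entries can still be recovered.

```python
# A generic sketch of compressing a large table of game-theoretic values.
# Run-length encoding is an assumption for illustration, not the team's
# actual database format; the toy table below is made-up data.

def rle_compress(values):
    """Collapse runs of identical values, e.g. D,D,D,W -> [('D', 3), ('W', 1)]."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_lookup(runs, index):
    """Recover the value at a given position without expanding the table."""
    for value, length in runs:
        if index < length:
            return value
        index -= length
    raise IndexError("position index out of range")

# Toy table of draw/win/loss values for consecutive positions.
table = ["D"] * 1000 + ["W"] * 10 + ["L"] * 5 + ["D"] * 500
runs = rle_compress(table)
print(len(table), "entries stored as", len(runs), "runs")   # 1515 entries, 4 runs
print(rle_lookup(runs, 1005))                               # prints W
```

Real systems typically index the runs as well, for example with cumulative offsets and binary search, so a lookup does not have to scan from the start of the table.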

The resulting program proves conclusively that checkers is a “draw” game; in other words, perfect play by both players always results in a draw.
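
To make concrete what “solved” means, here is a self-contained sketch for a far smaller game, tic-tac-toe, which is likewise a draw under perfect play. It simply enumerates every line of play and borrows nothing from the checkers team’s methods.

```python
# A self-contained illustration, far smaller than checkers, of proving a game
# is a draw: exhaustively evaluating every line of tic-tac-toe shows neither
# side can force a win against perfect play.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value for the side to move: +1 forced win, -1 forced loss, 0 draw."""
    if winner(board) is not None:
        return -1                    # the previous move completed a line
    if "." not in board:
        return 0                     # full board, no winner: a draw
    opponent = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            best = max(best, -value(child, opponent))
            if best == 1:            # a forced win needs no further search
                break
    return best

print(value("." * 9, "X"))           # prints 0: perfect play always ends in a draw
```

Checkers, with its roughly 500 billion billion positions, is far too large for this kind of naive enumeration, which is why the researchers relied on selective search and a huge compressed database instead.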

However, checkers experts say there is no fear that the solving effort will ruin the game for traditional players, amateur or professional.

“No human can possibly memorize the billions of combinations that Dr. Schaeffer has covered,” said Richard Beckwith, player representative for the American Checker Federation.

“You still have to play as you see it, based on your own expertise and knowledge.”

Solving board games has long been a pursuit of artificial intelligence researchers looking to develop computer strategies for complex, rule-based problems. Previously, simpler games such as Connect Four and Hex had been solved.

“In artificial intelligence, the chess and checkers groups have gone beyond the pale of what we thought we could do,” said Michael Genesereth, an associate professor of computer science at Stanford University, who was not involved in the project. “At one point, we thought we never could solve them in this way.”

The logical next step in game-playing research would be to solve the game of chess. But while chess programs are already capable of beating even the best human players, the effort of analyzing the game in the same fashion as checkers would take much longer.

“It’s probably impossible, (chess is) just too complex,” said Genesereth. “I don’t think it will ever be done.”