Sunday, November 18, 2007
Computer chess
The idea of creating a chess-playing machine dates back to the eighteenth century. Around 1769, the chess-playing automaton called The Turk became famous before being exposed as a hoax. Before the development of digital computing, serious attempts based on automatons, such as El Ajedrecista of 1912, were too complex and limited to be useful for playing full games of chess. The field of mechanical chess research languished until the advent of the digital computer in the 1950s. Since then, chess enthusiasts and computer engineers have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs.

Chess-playing computers are now available at very low cost. Programs such as Crafty, Fruit and GNU Chess can be downloaded from the Internet for free and, running on virtually any modern personal computer, can defeat most master players under tournament conditions. Top commercial programs like Shredder or Fritz have surpassed even world champion caliber players at blitz and short time controls. As of February 2007, Rybka is top-rated on many rating lists, such as the CCRL, CEGT, SSDF, SCCT and CSS lists [1], and has won many recent official computer chess tournaments, including CCT 8 and 9 [2], the 2006 Dutch Open Computer Championship [3], the 16th IPCCC [4], and the 15th World Computer Chess Championship.

Motivation

The prime motivations for computerized chess playing have been solo entertainment (allowing players to practice and to amuse themselves when no human opponents are available), aids to chess analysis, computer chess competitions, and research to provide insights into human cognition. For the first two purposes, computer chess has been a phenomenal success: going from the earliest real attempts to programs that challenge the best human players took less than fifty years. However, to the surprise and disappointment of many, chess has taught us little about building machines that offer human-like intelligence, or indeed do anything except play excellent chess. For this reason, computer chess (as with other games, like Scrabble) is no longer of great academic interest to researchers in artificial intelligence, and has largely been replaced by more intuitive games such as Go as a testing paradigm. Chess-playing programs essentially explore huge numbers of potential future moves by both players and apply a relatively simple evaluation function to the resulting positions, whereas computer Go challenges programmers to consider conceptual approaches to play.

Brute force versus selective search

The first paper on the subject, by Claude Shannon, published in 1950 before anyone had programmed a computer to play chess, successfully predicted the two main search strategies that would be used, which he labeled 'Type A' and 'Type B' (Shannon 1950). Type A programs would use a "brute force search" approach, examining every possible position to a fixed depth using the minimax algorithm. Shannon believed this would be impractical for two reasons. First, with approximately thirty moves possible in a typical real-life position, he expected that searching the roughly 30^6 (over 700,000,000) positions involved in looking three moves ahead for both sides (six plies) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.) Second, it ignored the problem of quiescence: a program should only evaluate a position at the end of an exchange of pieces or other important sequence of moves ('lines'). He expected that adapting type A to cope with this would greatly increase the number of positions needing to be looked at and slow the program down still further.
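To make the type A idea concrete, the following is a minimal sketch of a fixed-depth, full-width minimax search. It assumes the third-party python-chess library for board handling and move generation; the bare material-count evaluation and the search depth are illustrative choices, not anything prescribed by Shannon's paper.

```python
# Minimal sketch of a Shannon type A search: fixed-depth, full-width
# minimax over every legal move. Assumes the third-party python-chess
# library (pip install chess); piece values and depth are illustrative.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Toy evaluation: material count from White's point of view."""
    return sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
               for p in board.piece_map().values())

def minimax(board, depth):
    """Examine every legal move to a fixed depth -- no selectivity at all.
    Mate and stalemate scoring are omitted for brevity."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf") if board.turn == chess.WHITE else float("inf")
    pick = max if board.turn == chess.WHITE else min
    for move in board.legal_moves:
        board.push(move)
        best = pick(best, minimax(board, depth - 1))
        board.pop()
    return best

print(minimax(chess.Board(), 2))  # a few hundred positions even at depth 2
```

Each additional ply multiplies the work by the branching factor, which is Shannon's 30^6 arithmetic; note also that the hard depth cutoff can stop the search in the middle of an exchange, which is exactly the quiescence problem.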
Instead of wasting processing power examining bad or trivial moves, Shannon suggested that type B programs would use a "strategic" artificial intelligence approach, looking at only a few good moves for each position. This would enable them to look further ahead ('deeper') along the most significant lines in a reasonable time.

Adriaan de Groot interviewed a number of chess players of varying strengths, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the former much better players is that they use pattern recognition skills built from experience, which enable them to examine some lines in much greater depth than others by simply not considering moves they can assume to be poor. More evidence for this is the way good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, than completely random arrangements of the same pieces; poor players show the same level of recall for both. The problem with type B is that it relies on the program being able to decide which moves are good enough to be worthy of consideration ('plausible') in any given position (a toy sketch of this idea appears below), and this proved to be a much harder problem to solve than speeding up type A searches with superior hardware.

One of the few chess grandmasters to devote himself seriously to computer chess was former World Chess Champion Mikhail Botvinnik, who wrote several works on the subject and also held a doctorate in electrical engineering. Working with the relatively primitive hardware available in the Soviet Union in the early 1960s, Botvinnik had no choice but to investigate software move-selection techniques; at the time only the most powerful computers could achieve much beyond a three-ply full-width search, and Botvinnik had no such machines. In 1965 Botvinnik was a consultant to the ITEP team in a US-Soviet computer chess match (see Kotok-McCarthy).

One milestone was the abandonment of type B in 1973 by the team from Northwestern University responsible for the Chess series of programs, which had won the first three ACM Computer Chess Championships (1970-72). The resulting program, Chess 4.0, won that year's championship, and its successors went on to come second in both the 1974 ACM Championship and that year's inaugural World Computer Chess Championship, before winning the ACM Championship again in 1975, 1976 and 1977. One reason the team gave for the switch was that they found it less stressful during competition, because it was difficult to anticipate which moves their type B programs would play, and why. They also reported that type A was much easier to debug in the four months they had available, and turned out to be just as fast: in the time it used to take to decide which moves were worthy of being searched, it was possible simply to search all of them. Chess 4.0 set the paradigm that essentially all modern chess programs still follow, and type A programs won out for the simple reason that they played better chess.
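For contrast with the full-width sketch above, here is a toy version of the type B idea: a cheap 'plausibility' heuristic keeps only a handful of forcing moves at each node. The capture/promotion/check scoring and the cap of seven moves are invented for illustration; real type B programs encoded far more chess knowledge, and deciding plausibility well is precisely what proved so hard.

```python
# Toy sketch of Shannon's type B "plausible move" selection, again
# assuming python-chess. The heuristic and the cap are invented here.
import chess

MAX_PLAUSIBLE = 7  # "a few good moves for each position"

def plausibility(board, move):
    """Cheap heuristic: prefer captures, promotions and checks."""
    score = 0
    if board.is_capture(move):
        score += 2
    if move.promotion is not None:
        score += 2
    if board.gives_check(move):
        score += 1
    return score

def plausible_moves(board):
    """Keep only the MAX_PLAUSIBLE most forcing moves at this node."""
    ranked = sorted(board.legal_moves,
                    key=lambda m: plausibility(board, m),
                    reverse=True)
    return ranked[:MAX_PLAUSIBLE]

board = chess.Board()
for san in ["e4", "d5"]:
    board.push_san(san)
print([board.san(m) for m in plausible_moves(board)])  # exd5 ranks first
```

Substituting plausible_moves(board) for board.legal_moves in the minimax sketch above turns it into a type B searcher: the same node budget reaches further along forcing lines, at the price of never seeing a quiet but strong move the heuristic dislikes.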
Such programs did not try to mimic human thought processes, but relied on full-width alpha-beta and NegaScout searches. Most such programs (including all modern programs today) also include a fairly limited selective part of the search built around quiescence searches, and usually extensions and pruning (particularly null-move pruning from the 1990s onwards) that are triggered by certain conditions in an attempt to weed out or reduce obviously bad moves or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). Extension and pruning triggers have to be used very carefully, however: over-extend, and the program wastes too much time looking at uninteresting positions; prune too much, and there is a risk of cutting out interesting nodes. Chess programs differ in how and what types of pruning and extension rules are included, as well as in the evaluation function. Some programs are believed to be more selective than others (for example, Deep Blue was known to be less selective than most commercial programs, because its developers could afford to do more complete full-width searches), but all have a base full-width search as a foundation and all have some selective components (quiescence search, pruning and extensions). Though such additions mean that the program does not truly examine every node within its search depth (so it is not truly brute force in that sense), the rare mistakes caused by this selectivity were found to be worth the extra time saved, because the program could search deeper. In that way chess programs get the best of both worlds. Furthermore, technological advances by orders of magnitude in processing power have made the brute force approach far more incisive than it was in the early years. The result is that a very solid tactical player, aided by some limited positional knowledge built into the evaluation function and the pruning and extension rules, began to match the best players in the world. It turned out to produce excellent results, at least in the field of chess, to let computers do what they do best (calculate) rather than coax them into imitating human thought processes and knowledge. In 1997 Deep Blue defeated World Champion Garry Kasparov, marking the first time a computer had defeated a reigning world chess champion at standard time controls.
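The scheme described above (a full-width alpha-beta backbone with a selective quiescence tail) might look roughly like the negamax sketch below, once more assuming python-chess and a side-to-move variant of the toy material count from the first sketch. Extensions, null-move pruning, and mate and stalemate scoring are deliberately left out; real engines guard those with many more conditions than fit here.

```python
# Rough sketch of full-width alpha-beta with a quiescence tail, in
# negamax form, assuming python-chess. Extensions, null-move pruning
# and mate/stalemate scoring are omitted for brevity.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Material count from the side to move's point of view."""
    white = sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
                for p in board.piece_map().values())
    return white if board.turn == chess.WHITE else -white

def quiescence(board, alpha, beta):
    """At the nominal horizon, keep searching captures only, so we never
    evaluate a position in the middle of an exchange."""
    stand_pat = evaluate(board)
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in board.legal_moves:
        if not board.is_capture(move):
            continue  # selective: quiet moves are not searched here
        board.push(move)
        score = -quiescence(board, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def alphabeta(board, depth, alpha, beta):
    """Full-width over all legal moves down to `depth`, then quiescence."""
    if depth == 0 or board.is_game_over():
        return quiescence(board, alpha, beta)
    for move in board.legal_moves:
        board.push(move)
        score = -alphabeta(board, depth - 1, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta  # cutoff: the opponent already has a better option
        alpha = max(alpha, score)
    return alpha

print(alphabeta(chess.Board(), 3, -10000, 10000))
```

Null-move pruning would slot into alphabeta() as an early test: make a "pass" move (python-chess supports board.push(chess.Move.null())), search to a reduced depth, and cut off if even doing nothing fails high.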
Computers versus humans

For a long time in the 1970s and 1980s, however, it remained an open question whether any chess program would ever be able to defeat the expertise of top humans. In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest computer at the time), but acknowledged that it would not be long before he was surpassed. In 1989, Levy was crushed by the computer Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship level, as the then reigning world champion Garry Kasparov demonstrated in two sterling wins in 1989. It wasn't until a 1996 match with IBM's Deep Blue that Kasparov lost his first game to a computer at tournament time controls, in Deep Blue - Kasparov, 1996, Game 1. This game was, in fact, the first time a reigning world champion had lost to a computer at regular time controls. However, Kasparov regrouped to win three and draw two of the remaining five games of the match, for a convincing victory.

In May 1997, an updated version of Deep Blue defeated Kasparov 3½-2½ in a return match. Kasparov claimed that IBM had cheated by using a human player during the game to increase the strategic strength of the computer. A documentary mainly about the confrontation, Game Over: Kasparov and the Machine, was made in 2003. IBM keeps a web site of the event. While not an official world championship, the outcome of the match is often taken to mean that the strongest player in the world is a computer. Such a claim is open to strong debate, as a truly fair human-machine match is difficult to arrange: human players must win their titles in tournaments that pit them against a diverse set of opponents' styles, while computers are occasionally optimized for the current opponent; and, unlike the human opponent, computers have access to huge databases for the opening and the endgame. IBM dismantled Deep Blue after the match and it has not played since. However, other "Man vs. Machine" matches continue to be played.

With increasing processing power, chess programs running on regular workstations began to rival top-flight players. In 1998, Rebel 10 defeated Viswanathan Anand, who at the time was ranked second in the world, by a score of 5-3. However, most of those games were not played at normal time controls. Of the eight games, four were blitz games (five minutes plus a five-second Fischer delay for each move; see time control), which Rebel won 3-1; two were semi-blitz games (fifteen minutes for each side), which Rebel also won 1½-½; and the final two were regular tournament games (forty moves in two hours, then one hour sudden death), which Anand won 1½-½ [5]. In fast games, then, computers played better than humans, but at classical time controls - at which a player's rating is determined - the advantage was not so clear.

In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and "classical" world champion Vladimir Kramnik. In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics: playing conservatively for a long-term advantage the computer is not able to see in its game-tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular": Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers, which are at their strongest defending against such attacks. True to form, Fritz found a watertight defense, and Kramnik's attack petered out, leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that Fritz was unlikely to have been able to force a win, and Kramnik effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik the stronger player in the match.

In January 2003, Garry Kasparov played Junior, another chess program, in New York; the match ended 3-3. In November 2003, he played X3D Fritz; that match ended 2-2.
In 2005, Hydra, a dedicated chess computer with custom hardware and sixty-four processors, and winner of the 14th IPCCC in 2005, crushed seventh-ranked Michael Adams 5½-½ in a six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series). Some commentators [6] believe that Hydra, or if not Hydra then a direct successor, will ultimately prove clearly superior to the very best human players. In November-December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won, taking the match 4-2.