Chess Rating

From: larry.r.trout@
Date: September 22, 2010 6:50:39 

'The Elo rating system was invented half a century ago by Hungarian-born physicist and chess master Arpad Elo. It is the most famous technique for rating chess players and is used throughout the chess world. It has been applied to many other contests as well, including other board games, sports, and video games.  However, it has never really been demonstrated that the Elo approach to calculating chess ratings is superior.  Elo's formula was derived theoretically, in an era without large amounts of historical data or significant computing power.  With the benefit of powerful computers and large game databases, we can easily investigate approaches that might do better than Elo at predicting chess results.
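For reference, the core Elo calculation mentioned above fits in a few lines of Python. The logistic expected-score curve on a 400-point scale and the K-factor update are the standard textbook forms; the K value of 16 used here is just one common choice, not anything specific to this contest.

```python
def elo_expected(r_a, r_b):
    """Expected score for player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=16):
    """A's new rating after a game scored 1, 0.5, or 0.

    The rating moves toward the observed result, scaled by the
    K-factor (a tunable constant; 16 is one common choice).
    """
    return r_a + k * (score_a - elo_expected(r_a, r_b))
```

For example, a 2400-rated player is expected to score about 0.76 against a 2200-rated opponent, and gains roughly 3.8 points for a win at K=16.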

There are several alternatives to the Elo approach. Professor Mark Glickman developed the Glicko and Glicko-2 systems, which extend the Elo system by introducing additional parameters to represent the reliability and volatility of player ratings.  Ken Thompson uses a linearly weighted average of a player's last 100 results to calculate a weighted performance rating.  Jeff Sonas (who put together this competition) developed Chessmetrics ratings to maximize predictive power. More details are available on the hints page.
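Thompson's scheme, as described, is a linearly weighted average of a player's last 100 results. The exact weighting and the conversion to a performance rating are not spelled out above, so the sketch below is one plausible reading: linear weights rising from oldest to newest, combined with the common linear approximation performance ≈ average opponent rating + 800 × (score fraction − 0.5).

```python
def weighted_performance(results, max_games=100):
    """Weighted performance rating from a player's recent games.

    results: list of (opponent_rating, score) ordered oldest -> newest,
    with score in {1, 0.5, 0}. Assumes linear weights (newest game
    counts most) and the linear performance approximation
    perf = avg_opponent + 800 * (score_fraction - 0.5).
    """
    recent = results[-max_games:]
    weights = range(1, len(recent) + 1)  # 1 for oldest, n for newest
    total_w = sum(weights)
    avg_opp = sum(w * r for w, (r, _) in zip(weights, recent)) / total_w
    score_frac = sum(w * s for w, (_, s) in zip(weights, recent)) / total_w
    return avg_opp + 800 * (score_frac - 0.5)
```

A 50% score against equal opposition returns the opponents' average rating, as a performance rating should.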

We want to see if somebody out there can do even better.  Competitors train their rating systems using a training dataset of over 65,000 recent results for 8,631 top players. Participants then use their method to predict the outcomes of a further 7,809 games.'
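Since the contest is judged on predictive accuracy over those 7,809 games, any candidate system ultimately reduces to producing an expected score per game and measuring the error against actual results. The contest's exact scoring rule isn't given here, so this is a plain per-game RMSE sketch rather than the official metric.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted expected scores
    (each in [0, 1]) and actual game results (1, 0.5, or 0)."""
    assert len(predicted) == len(actual)
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
```

A lower RMSE means the rating system's expected scores track real outcomes more closely.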


Halfway point:

'We have just passed the halfway mark of the "Elo vs the Rest of the World" contest, scheduled to end on November 14th. The contest is based upon the premise that a primary purpose of any chess rating system is to accurately assess the current strength of players, and we can measure the accuracy of a rating system by seeing how well the ratings do at predicting players' results in upcoming events. The winner of the contest will be the one whose rating system does the best job at predicting the results of a set of 7,800 games played recently among players rated 2200+.

So far we have had an unprecedented level of participation, with 162 different teams submitting entries to the contest! There is also a very active discussion forum to promote the free flow of ideas, although many teams are still hesitant to share too many details about their approach (especially considering that the winner will receive a copy of Fritz signed by Garry Kasparov, Viswanathan Anand, Anatoly Karpov, and Viktor Korchnoi). Both Chessbase and Kaggle have donated generous prizes, to be awarded to top-performing participants who are willing to share their methodology publicly.

A wide range of approaches has been tried, including almost every known chess rating system as well as entries involving neural networks, machine learning, data mining, business intelligence tools, and artificial intelligence. In fact, over 1,600 entries have been submitted so far, and we anticipate far more as the competition heats up over the final seven weeks.

The #1 spot is currently held by Portuguese physicist Filipe Maia, who confesses to little knowledge about statistics or chess ratings, but is nevertheless managing to lead the competition! He is also the author of El Turco, the first-ever Portuguese chess engine. Out of the current top ten teams on the leaderboard, seven use variants of the Chessmetrics rating system, two are modified Elo systems, and one is a "home-grown variant of ensemble recursive binary partitioning". That last approach belongs to the #3 team on the public leaderboard, a team known as "Old Dogs With New Tricks". This team is a collaborative effort between Dave Slate and Peter Frey, both prominent leaders in computer chess for many years.

Although the "Old Dogs With New Tricks" team clearly has a lot of chess expertise, and the #2 spot is held by Israeli mathematician and chess player Uri Blass (FIDE rating 2051), the top ten or twenty teams consist primarily of mathematicians, data miners, and other scientists with minimal direct experience of chess or chess ratings. This suggests that experts on chess rating theory might still have a lot to learn from experts in other fields, which of course is one of the desired outcomes of this contest. We have attracted interest from around the globe, with the top twenty drawing participants from Portugal, Israel, the USA, Germany, Australia, the UK, Singapore, Denmark, and Ecuador.'


Do you think you have, or could come up with, a good rating system?
