More troubles with CFC ratings….
SPRAGGETT ON CHESS
You almost feel sorry for the CFC! After the restructuring of the CFC’s business model in 2007/8, the rating system became the principal source of revenue for the struggling organization. But the problem is that the CFC rating system is, to use the most precise technical term, ”fucked up”. In the previous 6 or 7 years, two disastrous attempts to counter the natural tendency of rating systems to deflate have taken a heavy toll on the Canadian rating system, turning it into one of the world’s most laughable and abused national rating systems. Today most CFC members realize that the rating system has practically zero intrinsic value, especially as an objective indicator of relative chess strength. Many are now calling for its replacement by the much more efficient FIDE rating system.
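For readers wondering what the “natural tendency to deflate” actually means: a standard Elo-style update is zero-sum, so when improving players (typically juniors) enter the pool underrated and extract points from established players, the total never gets replenished. The sketch below is illustrative only — it uses the generic Elo formula, not the CFC’s actual implementation:

```python
# Illustrative sketch of why rating pools deflate: a plain Elo update
# is zero-sum, so points taken by underrated improving players are
# points permanently lost by the rest of the pool. This is generic
# Elo, NOT the CFC's actual rating formula.

def elo_update(r_a, r_b, score_a, k=32):
    """Return the two updated ratings after one game.

    score_a is 1.0 for a win by player A, 0.5 for a draw, 0.0 for a loss.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# An improving junior rated 1400 beats an established 2000-rated player:
junior, veteran = elo_update(1400.0, 2000.0, 1.0)

# The junior gains exactly what the veteran loses -- the pool total
# is conserved, so nothing offsets the drain when underrated players
# keep winning:
assert abs((junior + veteran) - (1400.0 + 2000.0)) < 1e-9
```

Bonus point systems exist precisely to inject points back into this closed pool; the trouble described in this article is *how* the CFC chose to do so.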
Roger Patterson, a retired physicist and active organizer in British Columbia, is to be congratulated for his recent attempt, naive as it may be, to improve the national system and salvage it before we go to FIDE ratings only. This is a lot of work! In the latest GL (LINK) there is an article that takes a look at the rating system between 2006 and today and discusses some of its problems and defects.
Mr. Patterson, with the support of rating auditor Paul LeBlanc, entered all of the data available from the CFC rating database archives (over 12,000 individual entries!) into several mathematical models/programs that he himself wrote, in order to see what would happen. This is the first such undertaking in recent years that actually tries to anticipate and understand proposed changes to the rating system, and in particular to the bonus point system.
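The essence of this kind of modelling is simple to sketch: replay the same historical game results through a rating engine twice, once with plain Elo and once with a bonus point rule, and compare the trajectories. The bonus rule below (extra points awarded on gains only) is a hypothetical stand-in for illustration — the actual Stockhausen/Craver formula is not reproduced here:

```python
# A minimal sketch of the kind of replay Patterson's models perform:
# feed identical game results through a rating engine with and without
# a bonus point rule, then compare. The bonus rule used here (a 50%
# top-up on rating gains) is an ASSUMPTION for illustration; it is not
# the actual 2008 Stockhausen/Craver formula.

def expected(r_a, r_b):
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def replay(games, start_ratings, k=32, bonus=0.0):
    """Replay (player_a, player_b, score_a) results in order.

    With bonus > 0, a winner receives extra points on top of the
    normal Elo gain -- points created out of thin air.
    """
    r = dict(start_ratings)
    for a, b, score in games:
        delta = k * (score - expected(r[a], r[b]))
        r[a] += delta + bonus * max(delta, 0.0)  # bonus only on gains
        r[b] -= delta
    return r

# A top player on a long winning streak against the field:
games = [("top", "field", 1.0)] * 20
start = {"top": 2500.0, "field": 2300.0}

plain = replay(games, start)
boosted = replay(games, start, bonus=0.5)

# The bonus variant inflates the winner's rating beyond plain Elo...
assert boosted["top"] > plain["top"]
# ...and injects points into the pool, which is no longer zero-sum:
assert boosted["top"] + boosted["field"] > 2500.0 + 2300.0
```

Run over thousands of real results instead of a toy streak, the same comparison produces exactly the kind of divergence shown in the chart below: the more a strong player plays and wins, the further the bonus-inflated rating pulls away from the plain one.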
Previously, in or about 2005, the CFC made completely arbitrary changes to the rating system by adding some 10,000 rating points (a number apparently pulled from someone’s ass) to the system, distributing them in a completely random manner, with friends of Mr. Hamilton (one of the movers of this disaster) receiving a larger piece of the pie than others. THERE WAS NO mathematical or scientific basis for any of this. Just random and arbitrary gifting. At least, however, it was discussed among the governors of the CFC before being implemented.
GOING FROM BAD TO WORSE, in 2008 Stockhausen/Craver proposed another (disastrous) change to the rating bonus point system: this time, while there was at least a formula involved, there was neither any mathematical understanding of its effects nor ANY DISCUSSION whatsoever. There turned out to be zero scientific basis for the new formula. It was, typically, blindly approved by completely uninterested governors. (I stand corrected: one governor asked a sensible question about what this was meant to accomplish, but it went unanswered! Whoopy!)
The effect was to kill what little credibility the rating system had left: the top players who were active in Canadian tournaments saw their ratings jump hundreds of points ahead of their corresponding FIDE ratings. Last year the CFC stopped this farce (only after vocal criticism) and halted the use of the Stockhausen/Craver formula in its original form.
Thanks to Patterson’s modelling work, we can see the catastrophic effect of the Stockhausen/Craver bonus point system: the red line is where Canadian Champion Sambuev’s CFC rating should be today; the blue line is where his rating actually is today with the bonus points included.
______________________________________________________
Roger Patterson
While Patterson’s work has some value, representing the first time in recent years that anyone has tried to take a mathematical look at the rating system with and without the said bonus point changes, Roger undermines his work by then interpreting these results in a subjective and often self-serving manner while suggesting significant changes to the bonus point system. A number of his conclusions are simply arbitrary, or are little more than flimsily disguised personal points of view… none of which are supported by the data or the results of his modelling.
For instance, Patterson proposes a new bonus point system (which has already been approved by the governors!) that is argued to be ”superior” to the existing failed system. But ”superior” is then defined in Patterson’s paper as anything other than superior: the new system is, effectively, not intrinsically superior, but simply less catastrophic than the failed bonus point systems of 2005 and 2008!
Then, rather than try to correct the damage of the super-inflation of the ratings of the top players (such as Sambuev), Patterson simply lets it stand and remain in the system, and then tries to ignore it. How can one sincerely try to improve the rating system without first correcting glaring mistakes and blunders?
Finally, Patterson argues that his new bonus point system will have significant beneficial results for the ratings of junior players, but, again, this is a claim that neither the data nor the mathematical results support in any way. His 12,000 data entries simply do not allow for isolating a group that could be considered ”junior”.
Paul LeBlanc, the rating auditor, himself describes the difficulty in isolating this group for purposes of analysis:
From the latest GL
HOWEVER, Patterson’s work stands as a useful source of information, up to date and easily accessible to anyone with a computer. The problem is making sense of this information, and, especially, trying to use it objectively, in an unbiased and practical manner that benefits ALL Canadian chess players.