Alternate Rating Systems and USCF Scholastic Competition

by NTD Jeff Wiewel and LTD Fun Fong

The issue of how to regard players who primarily use alternate (non-USCF) ratings has been active within the national scholastic community for several years. It has been mulled over by various USCF sub-organizations, and the most recent USCF policy statement resulted in Rule 12 under the Scholastic rules:

“Scholastic Policy Change: When taking entries for all National Scholastic Chess tournaments with “under” sections, the USCF will require players to disclose whether they have one or more ratings in other over the board rating system(s). The USCF may use this rating information to determine section and prize eligibility in accordance with USCF rules 28D and 28E. This policy will take effect immediately and will be in effect for the 2013 Supernationals and all future USCF national scholastic tournaments that have “under” sections.”

The operative word in the Scholastic policy is “may.” In the case of USCF Nationals registration, there was nowhere on the registration page to disclose such ratings, and nothing alerted parents who don’t happen to know the scholastic guidelines (let alone Rule 12) that disclosure was expected.

Rule 12.5 had been in place to allow players to qualify for under-section competition if they had played a total of 8 USCF-rated games recently. In July 2012, WA was working on organizing the USCF-rated events needed to satisfy the 12.5 requirement, but the Scholastic Council then cancelled Rule 12.5 last school year, eliminating a major impetus for USCF-rating those events and slowing the effort to get those kids sound USCF ratings.

The issue flared up again at the Elementary Nationals this past May, when New York team parents asserted that players with NWSRS ratings of 1500+ were playing in the K-3 U800 section. The allegation caused tempers to flare and tensions to rise. At the scholastics meeting scheduled for Saturday at 3 p.m., Scholastic Council Co-Chair Sunil Weeramantry declined to participate as a council member and instead remained in the audience; it is unclear whether he has actually resigned his position. The Council elected not to take any further action for purposes of the tournament. A closer look at the facts may reveal the reasoning.

Analysis of the ratings data from NTD Jeff Wiewel reveals the following:

1.  “In the K-5 U900 section there were two 7-0 players. One was from WA, had a 459 USCF rating by the end of the 2012 US Open, a 655 rating by the summer of 2013, and got up to a 957 established rating between the cut-off for the rating list and the start of the tournament.

2.  The other player was from NY; he had an established 1044 last November, which dropped to 979 in January, 947 in February, and 880 in March before the cut-off, then rose to 113 in April. Somebody overly suspicious might claim sandbagging to get into an under section, but it is also quite likely just the normal volatility a kid’s rating can have.

3.  The K-3 U800 section had two 7-0 players from WA and one 6.5-0.5 player from NY. Of the WA players, one played 7 USCF-rated games in 2014 before the cut-off to rise to 646, and the other played 34 USCF-rated games since October (including 10 in CA at the junior chess congress, so he was not limited to a possibly underrated area) to end up with a 587 rating. The NY player took 20 months to go from 229 to the 787 rating he had before the event. Move him to WA and he would have been cited as an example of something that needs fixing rather than as somebody who simply played well.

4.  There was also a WA 6-1 who started with a 335 rating last May and rose to 415 in November and 511 in April, and a NY 6-1 who went from 562 in March 2014 to 805 in early April, 796 in mid-April, and 854 in early May. Both are further examples of rating volatility.

5.  In K-6 U1000 there was a CA player who was as high as 1091 in February 2014, dropped to 978 in March (before the cut-off), dropped further to 899 after the All-Girls (too late anyway to affect the eligible ratings for the National Elementary, so even an overly suspicious person cannot plausibly cite that as sandbagging), and then gained 361 points while winning the section.

6.  The only WA 6-1 finished each of his previous seven tournaments (five this school year) with an established (not provisional) rating in the 700s before gaining 403 points in the Elementary. He had a rating history similar to that of the NY 6.5-0.5 player and likewise simply played well.

7.  The K-6 Unrated had players who actually had ratings, but those ratings were earned after the [registration] cut-off. There were anecdotal stories of players in various states around the country who deliberately played only in non-USCF-rated sections so that they would still be officially unrated at the cut-off and only then played in rated events to enhance their experience. The highest two ratings were 1007P4 (WA) and 968P20 (NY).

8.  The K-3 Unrated had players with ratings of 1147P1 (WA – not enough games to even be considered an official rating), 1195P20 (NY), 1030P8 (NY), 904P12 (NY), 812P16 (NY) and 738P20 (NY).”

Thus, the allegations were not nearly as egregious as they were initially made out to be. Some variance in ratings is expected, particularly among the stronger scholastic players who enter the tournament. One should remember that many stronger players are somewhat underrated as they improve, since it frequently takes time for their ratings to catch up with their playing skills.

The prospect of vetting over 2,200 scholastic players for alternate rating systems is daunting from a practical standpoint unless there are well-defined, refined ways of doing so. For example, would a school’s club ladder count as a rating to be disclosed (it might show up on the school’s website)? How about a Yahoo.com rating (which all start at 1500)? Or a chess.com rating (which starts above 1000)? Would players be forced out of a section in mid-tournament if such an obscure rating were discovered? Would the alternate ratings be taken directly even though they aren’t actually scaled the same as USCF ratings? And is there anything to keep NWSRS from simply recalibrating all of its ratings by a factor of 10, so that they showed 135 instead of 1350 and were always under the Uxxx cut-offs? A simple across-the-board numeric reduction was mentioned by NWSRS as a possible response, though a factor-of-ten reduction (applied to both the ratings and the calculations) might be easier for them to implement, as the sketch below illustrates.
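To make the “both ratings and calculations” point concrete, here is a minimal sketch in Python. It assumes an Elo-style expected-score formula with a 400-point spread constant (an assumption on my part; the actual NWSRS formula may differ). Dividing only the displayed ratings by ten distorts expected results, while also dividing the spread constant by ten leaves the system mathematically unchanged:

    # Minimal sketch of the factor-of-ten recalibration idea, assuming an
    # Elo-style expected-score formula; the real NWSRS formula may differ.
    def expected_score(r_player, r_opponent, spread=400):
        """Expected score of the player against the opponent."""
        return 1 / (1 + 10 ** ((r_opponent - r_player) / spread))

    print(expected_score(1350, 1200))            # ~0.70 on the original scale
    print(expected_score(135, 120))              # ~0.52: rescaling ratings alone breaks the math
    print(expected_score(135, 120, spread=40))   # ~0.70: rescaling the spread constant too preserves it

In other words, dividing the displayed numbers by ten is cosmetic only if every constant in the update and expectancy calculations is divided by ten as well; otherwise the system’s predictions change.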

Some systems, such as CXR from Chess Express Ratings, rate events that players may not even realize are rated in that system (CXR seems to grab the occasional event and rate it even when the event was never actually submitted to it). Would failure to disclose such a rating, when a player may have no clue that it exists, be a punishable violation? And should such ratings be used by the scholastic National tournaments?

The WA kids who are actually active in the USCF rating system are quite possibly still underrated because of the semi-closed-pool phenomenon. Such underrating may not be that severe: a review of some of the players who participate in out-of-state nationals shows that their results generally stay pretty much in line with their previous USCF-rated results.

One of the WA parents, whose child played in a championship (not an under) section, plans to compare WA and OR performances at various nationals to see whether the NWSRS ratings are close to the USCF ratings or are significantly inflated. Essentially, he will test whether the semi-closed-pool phenomenon is depressing WA players’ USCF ratings, or whether the USCF ratings are accurate and the NWSRS ratings are over-stated. If WA/OR players show only a normal level of rating change when playing in events outside the northwest, then it is likely NWSRS that is over-stated; if they generally perform much better than expected outside the northwest, then it is likely the USCF ratings are deflated by the limited pool of players. The latter result makes a stronger case for using NWSRS for section placement; the former makes a stronger case for ignoring it. Unless and until that analysis is done, it is hard to say what course should be taken. Relying on a volunteer parent (even a very mathematically adept one) to do that analysis is not ideal, but it is better than nothing if it actually gets done; I know the intent is there, but I’m not sure the parent will be able to follow through on it.
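As a rough illustration of what that comparison might look like (the data below are hypothetical placeholders, not actual tournament records), one could compute each WA/OR player’s out-of-region performance rating using the common linear approximation, average opponent rating plus 400 × (wins − losses) / games, and compare it against the player’s pre-event USCF rating:

    from statistics import mean

    # Hypothetical records: (pre-event USCF rating, opponent ratings, score),
    # where score = wins + 0.5 * draws.
    players = [
        (900, [850, 1020, 760, 990, 880, 1100, 940], 5.0),
        (750, [700, 820, 905, 680, 770, 950, 810], 4.5),
    ]

    def performance_rating(opponents, score):
        # wins - losses equals 2 * score - games
        games = len(opponents)
        return mean(opponents) + 400 * (2 * score - games) / games

    overperformance = [performance_rating(opps, score) - rating
                       for rating, opps, score in players]
    print(mean(overperformance))

A consistently large positive average across many WA/OR players at out-of-state events would point toward USCF deflation from the semi-closed pool; an average near zero would point toward accurate USCF ratings and over-stated NWSRS numbers.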

Whatever policy is decided on should be consistent across all three spring nationals. This year the decision was made that NWSRS ratings would not be used for section assignments. Conversations with some WA parents about their pre-tournament actions revealed that after that decision had already been applied at the high school and junior high nationals, some of them contacted the USCF to ask whether their kids needed to be moved to different sections at the elementary. The WA contacts say they were informed that their kids would not be moved solely because of their NWSRS ratings, so, having performed their due diligence by providing that information to the USCF per Rule 12, the parents left them in the under sections. One WA school did move all of its kids to the K-5 Championship section, taking the fifth-place trophy.

Policies made as knee-jerk reactions are often not fully thought through and are likely to prove impractical or inadequate; once their ramifications are discovered, such proposals may end up being cancelled. At this time the issue is still more a theoretical problem than a real one, but it is good to see thoughtful discussion in the USCF forums.


2 Responses to Alternate Rating Systems and USCF Scholastic Competition

  1. Mike Mulford says:

    There are no fewer than 3 Advanced Delegates Motions on this topic on the agenda for the annual meeting in Orlando Aug 2-3. This promises to be one of the more intensely discussed issues!
