I’ve not been kind to the Rolex Rankings in previous posts. From listing Michelle Wie at #2 early last year, to ranking JLPGA players in the Top 10, to jumping Morgan Pressel to #4 when she won the Nabisco, they have been begging for my criticism. In the spirit of fairness, I decided to be more specific about where I think their system goes wrong and what I believe they should change to make it more accurate. You say they didn’t ask me to critique it? As if that would stop me...
Like my own statistical rating system, the Rolex system has undergone a couple of changes since its launch last year. They have increased the minimum tournament limit to fix that “Michelle Wie bias,” and just last week they changed their weighting structure to a more gradual scale to help prevent sudden spikes in a player’s ranking. Both adjustments made good sense, but I must say that after the most recent one, the rankings didn’t change a whole lot.
Here is the page at lpga.com which explains the Rolex system. I expect most of you will start getting glassy-eyed about a quarter of the way down the page, so I’ll ask that you concentrate on the section titled “How are points awarded” and the couple of sections just past it. Note that except for the major tournaments, every official event (that is, every other event on the six international tours) awards points based on the “strength of field”. The “strength of field” is based on the Rolex rankings of the players entered and on the home tour’s ranking points, which coincidentally are also based on the Rolex rankings. I am not a statistics expert (although I play one on the Internet), but such intermingling of related data isn’t a very good idea. Think of it as Numerical Incest, unless those kinds of thoughts turn your stomach. You may also have noticed that the page doesn’t reveal how many points are awarded for each finishing position, nor how the “strength of field” is factored into those points. I only know Pressel got 120 points for winning the KNC because an lpga.com article mentioned it. I haven’t found the Rolex point breakdown anywhere on the web.
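To see why that circularity is a problem, here’s a toy sketch in Python. The real Rolex formulas and point tables aren’t published, so every rule and number below is invented; the only thing the sketch is meant to show is the feedback loop itself: field strength is computed from the players’ current ranking averages, and the points that field strength awards feed straight back into next week’s averages.

```python
# Toy sketch of the "strength of field" feedback loop.
# ALL formulas and numbers are invented for illustration; the real
# Rolex point tables are not public.

def field_strength(entrants, averages):
    """Assumed rule: field strength is the sum of entrants' current averages."""
    return sum(averages[p] for p in entrants)

def winner_points(entrants, averages, scale=0.5):
    """Assumed rule: the winner's points scale with field strength."""
    return scale * field_strength(entrants, averages)

# Hypothetical players: A and B are already highly rated, C and D are not.
averages = {"A": 10.0, "B": 8.0, "C": 2.0, "D": 1.5}

strong_field_win = winner_points(["A", "B"], averages)
weak_field_win = winner_points(["C", "D"], averages)

# The already-strong field pays out more, which raises its players'
# averages, which raises its field strength next time around.
print(strong_field_win > weak_field_win)  # True
```

Under these made-up rules, a tour whose players are already rated highly keeps awarding more points, reinforcing its own rating whether or not its players are actually better. That is the “Numerical Incest” at work.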
Of all the events played worldwide, only the four major championships bring together a reasonable number of players from multiple tours so that their relative strengths can be compared. Since Futures Tour players are excluded from the majors, you don’t EVER have all six tours represented at the same event. The Evian Masters throws the LPGA and LET together, the three LPGA Asian events compare the Japanese and Korean tours with the LPGA and with each other, and the annual wanderings of players like Annika and Laura Davies can somewhat reveal the strengths of the various tours. But the sample of data that compares players from around the world is very small, and “smaller” translates to “unreliable”. It is entirely possible that too many points are being awarded for play on the Japanese tour while the LET and KLPGA are getting short-changed. It is also possible that the LPGA’s strength is being overrated relative to the other tours. My point is: there is so little data to work with that any method you use to tie all these tours together is probably doomed to fail.
However many points are awarded, they are then multiplied by a factor determined by how many weeks ago the event was played. A win last month carries more weight today than a win 18 months ago. No problem there. The adjusted points are recalculated each week, added up, and divided by the total number of events played by that player. This is where Rolex makes its second major error. Raising the minimum event limit mitigates the problem somewhat, but this fault alone enabled Morgan Pressel to jump into fourth place two weeks ago. You probably didn’t realize that if Michelle Wie had played the Nabisco and won, the 120 Rolex points she would have been awarded would have made her the #1 player in the world, even though she would have played only 14 events in the last two years. I reckon both Annika and Lorena would have been a little pissed at that. To fix this problem, they should either determine an average number of events played across all players to divide by, adjusting for a player being over or under that average, or they should just divide everyone’s points by the maximum possible number of events or by 104 (the number of weeks). Either way, they would need to award a different number of points to a player who doesn’t play that week than to one who misses the cut or plays poorly.
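The arithmetic of the divisor problem is easy to demonstrate. The point totals below are invented (only the 120-point figure for a major win comes from the lpga.com article), and the floor of 35 events is an assumption, but the shape of the bias, and of the proposed fix, doesn’t depend on the exact numbers:

```python
# Toy illustration of the small-divisor bias in an average-points ranking.
# Point totals are invented; the 35-event divisor floor is an assumption
# standing in for "maximum possible events" in the proposed fix.

def rolex_average(total_points, events_played):
    """Current (assumed) rule: total points divided by events played."""
    return total_points / events_played

def floor_average(total_points, events_played, divisor_floor=35):
    """Proposed fix: divide by at least a fixed minimum number of events."""
    return total_points / max(events_played, divisor_floor)

# A part-timer with one huge win vs. a full-timer with steady results.
part_timer = rolex_average(120 + 160, 14)  # 280 points over 14 events = 20.0
full_timer = rolex_average(600, 50)        # 600 points over 50 events = 12.0
print(part_timer > full_timer)             # True: the part-timer ranks higher

# With a floor on the divisor, the ordering reverses.
print(floor_average(280, 14) < floor_average(600, 50))  # True: 8.0 < 12.0
```

Dividing by a floor (or by 104 weeks, as suggested above) means a thin schedule can no longer inflate a player’s average; a part-timer has to accumulate points, not just protect a small denominator.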
So to sum up, the way points get awarded across these multiple tours, combined with the bias towards players with fewer but more effective performances, results in questionable rankings. Which brings me to the heart of this issue – Rolex is attempting to measure something which may not even be measurable (the ability of women golfers worldwide), and having tried to measure it, expects everyone to objectively accept the results. Due to circumstances beyond my control, like the lack of time or access to international satellite TV, I am unable to see all of the world’s women golfers perform on a regular basis (lord knows, I’ve tried). By closely following the results on the LPGA Tour, I feel I’ve got a good handle on the abilities of its players and can legitimately cry foul when I see Pressel or Wie ranked ahead of Creamer, Jang or Han. But unless we get more cross-pollination of the tours, I don’t think we’re ever going to know for sure that Shiho Oyama has played better golf than Brittany Lincicome.