Given that playing “Who’s better” is one of my favorite activities (it’s the root cause of my Top 30 list), I naturally jumped at The Constructivist’s request of comparing Ai Miyazato and Moira Dunn.
He specifically asked how the HD system rated the two of them. It puts Miyazato around 60th place and Dunn around 110th. I generalize because my system was designed to rate players down to about 40th place and I doubt its accuracy beyond that. I could break out my (untested) advanced rating system, but that would require a lot more work than I have time for right now. Have patience – you'll see that advanced system someday.
When I line up Ai and Moira’s combined numbers from 2007 and 2008 (through Sybase) and compare them, it’s pretty obvious which player has been better:
AM – 35 starts, 72.96 average, $887,762, 7 Top 10s, 11 Top 20s, 9 missed cuts
MD – 32 starts, 73.40 average, $181,952, 0 Top 10s, 3 Top 20s, 12 missed cuts
But the inspiration for TC's request came when Golfweek's Sagarin ratings put Dunn one spot ahead of Miyazato, and that system rates players over the last 52 weeks. So to fairly assess why the rating might come out that way, I need to drop the numbers prior to May 20, 2007:
AM – 27 starts, 73.40 average, $600,240, 4 Top 10s, 7 Top 20s, 8 missed cuts
MD – 22 starts, 73.24 average, $126,967, 0 Top 10s, 1 Top 20, 7 missed cuts
You and I can eyeball those last two lines and see that Miyazato has clearly played better than Dunn over the last twelve months. The small disadvantage in scoring and one extra missed cut are far outweighed by nearly five times the earnings, as you'd expect from a player with four extra Top 10s and six extra Top 20s. From my reading of its methodology, GW/Sagarin does not take money winnings into account at all, and only indirectly reflects Top 10/20 finishes as it totals up a player's W-L-T record. Given that Dunn has a slightly better scoring average over the last 52 weeks, I presume that is why she ranks slightly ahead of Miyazato in that system.
I think we’ve stumbled onto the huge philosophical difference between the GW/Sagarin and HD systems. GW/Sagarin only measures a player’s scoring ability and weighs it against the scores of her competition on each particular day. I believe that scoring average is a very important part of the equation but you have to include situational numbers as well (money, Top 10, missed cuts, victories) to get an accurate measure of a player’s ability. I gave a thumbs-up to GW/Sagarin a couple of months ago and still do, but it’s important to understand what it’s really measuring.
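To make that philosophical difference concrete, here's a minimal sketch of the two approaches using the 52-week lines above. The weights and point values are placeholders I invented for illustration; they are not the actual GW/Sagarin or HD formulas.

```python
# Two hypothetical rating philosophies applied to the 52-week stat lines.
# All weights below are made up for the example, not the real systems' math.

players = {
    "Miyazato": dict(avg=73.40, money=600_240, top10=4, top20=7, mc=8),
    "Dunn":     dict(avg=73.24, money=126_967, top10=0, top20=1, mc=7),
}

def scoring_only(p):
    # A Sagarin-style view: only scoring average matters; lower is better.
    return -p["avg"]

def composite(p):
    # An HD-style view: scoring average plus situational results
    # (money, Top 10s, Top 20s, missed cuts), with invented weights.
    return (-p["avg"] * 10
            + p["money"] / 100_000
            + p["top10"] * 2
            + p["top20"]
            - p["mc"] * 0.5)

for name, p in players.items():
    print(f"{name}: scoring-only {scoring_only(p):.2f}, composite {composite(p):.2f}")
```

Even with these toy weights, the scoring-only view ranks Dunn ahead (73.24 beats 73.40) while the composite view ranks Miyazato ahead, which is exactly the disagreement between the two real systems.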
Tuesday, May 20, 2008
3 comments:
Very interesting! Thanks. I'll have much more on this after I finish grading (a different kind of ranking system....)!
Ah, one quick comment. Your system presumes people have top 10s and top 20s, which is not true of the majority of players on tour, so perhaps the GSPI is better at ranking the lower-ranked players. Plus, they're looking for a way to compare performances across tours, and since money and finishes would be hard to factor in, it makes sense for them to look at the most basic unit--how did you finish relative to the field in a particular round?--and go from there.
My system has problems with lower-ranked players mainly because I stop giving points for scoring average at 72.7 and there is no distinction between missing six cuts and missing 10, 12 or more.
The advanced system I mentioned uses the same model as the current system (with a few other stats included) but carries the scales down to a level that would differentiate between the 60th and 110th ranked players better.
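A quick sketch of what carrying the scale down might look like. The cutoff at 72.7 comes from my comment above; the point values and the reduced rate below the cutoff are invented for the example, not the real HD scales.

```python
# Hypothetical illustration of extending the scoring-average scale.
# 72.7 is the current system's cutoff; the point rates are made up.

def current_scale(avg):
    # Current system: points stop at 72.7, so a 73.24 and a 73.40
    # average both score zero and can't be told apart.
    return max(0.0, round((72.7 - avg) * 10, 1))

def extended_scale(avg):
    # Advanced-system idea: keep awarding (negative) points past the
    # cutoff, at a reduced rate, so mid-pack players still separate.
    rate = 10 if avg <= 72.7 else 5
    return round((72.7 - avg) * rate, 1)

for avg in (71.9, 72.7, 73.24, 73.40):
    print(avg, current_scale(avg), extended_scale(avg))
```

Under the current scale both 73.24 and 73.40 score zero; the extended scale still distinguishes them, which is the kind of separation needed between the 60th and 110th ranked players.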
I'm sure GSPI is more accurate than my current system at those lower levels but I'm equally sure my advanced system would knock the socks off GSPI.