So it’s pretty obvious that I have a negative opinion of the Rolex Rankings, but what about Jeff Sagarin’s Performance Index that appears in Golfweek magazine? I recently took the time to scope it out (the explanation of how the Index is compiled is here), and here’s my take.
Because I generally like the system, I’ll start by pointing out the biggest negative. Leaving out the KLPGA is a huge mistake, and omitting the Asian and Australian tours makes the PI even less reliable. I reiterate my opinion that lumping multiple tours together is a difficult task to perfect because of the small amount of interlocking data (better to rate the players on each tour in separate lists, which I am considering doing here at HDLPGA), but Sagarin takes a hass-alfed approach, adding only the LET, JLPGA and Futures Tours to the mix. Just adding the Korean tour would be a big improvement.
I like the method of viewing a player’s posted score as a win, loss or tie against every other player in the event. For example, Paula Creamer’s record for the final round at Fields was 67-2-3: she shot a 66 that day, which beat 67 players, was beaten by two and tied by three others. It could be argued that the records should only be compiled at the end of the event rather than round by round, but that’s a minor detail. The PI also factors in stroke differential in an undescribed fashion, collects the data for the last 52 weeks, and produces a rating that professes to represent that player’s typical score relative to all the other players. All of that sounds great, and the resulting rankings look fairly similar to mine, so I’m obviously not going to complain too loudly.
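To make the mechanics concrete, here’s a minimal sketch of that win/loss/tie tally. The function is mine, not Sagarin’s, and the field scores are invented to reproduce Creamer’s 67-2-3 line:

```python
def round_record(my_score, field_scores):
    """Tally one round as wins, losses and ties versus the rest of the field.

    In stroke play a lower score wins, so every opponent who shot higher
    than my_score counts as a win for me.
    """
    wins = sum(1 for s in field_scores if s > my_score)
    losses = sum(1 for s in field_scores if s < my_score)
    ties = sum(1 for s in field_scores if s == my_score)
    return wins, losses, ties

# Illustrative 72-opponent field: two players beat a 66, three tied it,
# and the other 67 shot higher (the exact scores here are made up).
field = [64, 65] + [66] * 3 + [67] * 67
print(round_record(66, field))  # -> (67, 2, 3), i.e. Creamer's 67-2-3
```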
The explainer doesn’t say whether recent performances are weighted more heavily, but week-to-week comparisons suggest that they are. Those comparisons would be a lot easier if Golfweek linked to previous rankings or included a “last week’s rank” column. Take one example: Sarah Lee was ranked #8 last week and #10 this week. Dropping her two spots after a T25 at the HSBC seems a little harsh, even if you don’t question her being slotted in the Top 10 in the first place. Counting ties as half a win and half a loss, her overall winning percentage is 74.4; versus the Top 10 it’s 36.3, and versus the Top 50 it’s 53.6. #11 Angela Stanford beats Sarah in all of these numbers by a substantial margin. The only advantage Lee has over Stanford is in the Strength-of-Schedule column: Sarah has the fifth-toughest schedule while Angela’s is 15th. Perhaps the unexplained “stroke differential” factor is at work here; Sarah posted four rounds of 66 or better last year while Angela posted only two. The SOS advantage and the “ability to go low”(?) hardly seem to outweigh Stanford’s edge in winning percentages. In fact, a few other players behind Stanford also have better percentages than Lee along with comparable SOS ratings.
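For the record, “ties as half a win and half a loss” works out to the usual winning-percentage formula. A quick sketch (the record below is hypothetical, since Golfweek doesn’t publish the raw win/loss/tie counts):

```python
def winning_pct(wins, losses, ties):
    """Winning percentage with each tie counted as half a win, half a loss."""
    games = wins + losses + ties
    return 100.0 * (wins + 0.5 * ties) / games

# Hypothetical record, chosen only to show the arithmetic:
print(round(winning_pct(720, 230, 50), 1))  # -> 74.5, near Lee's 74.4 overall
```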
Aside from Sarah, the players in the Top 30 who seem grossly out of place to me are Stacy Prammanasudh, Morgan Pressel and Se Ri Pak (all too low), Stanford (slightly high), Juli Inkster and Karrie Webb (both too high), and Yuri Fudoh (a dead horse I am no longer going to beat). I’m sure Webb gets a boost from her Australian Open win (an LET event) plus her good showing at the HSBC. Pak’s relatively low rating despite her #1 SOS leads me to believe that the stroke-differential piece is what boosts Sarah Lee above those other players. Some detail on how that is factored in would be very interesting (to geeks like me), especially the reasoning behind its apparently heavy weight.
All things considered, I like the PI as a measuring stick. If it added the second-best women’s tour in the world and clarified its stroke-differential factor, its value would certainly exceed Rolex’s, and it could then be used to fill the fields of limited-field events like the Evian and the HSBC. Even if those things don’t happen (Rolex’s sponsorship will probably override any other reason to change the status quo), I’ll still regard the PI as a useful alternative to my own system.