Dissecting college hoops' most maligned metric, more thoughts
The RPI is misunderstood and isn't the best metric, but it does have some value
William Buford's shooting slump (36.4 percent from three-point range) is hurting Ohio St.
Karl Hess should be suspended for his ejection of two former N.C. State players
Looks like we're getting started early this year.
Last week, the NCAA conducted its annual mock selection seminar in Indianapolis for members of the media. The goal was to educate participants on the process, but what ensued was yet another round of criticism aimed at the committee's methods. Not surprisingly, since there is no bracket to slice and dice as of yet, the bulk of the ire is being directed at those three polarizing letters: R-P-I.
I don't know of another metric in sports that is more maligned or more misunderstood. Put simply, a team's RPI is calculated from three pieces of information: its own winning percentage (25 percent), its opponents' winning percentage (50 percent), and its opponents' opponents' winning percentage (25 percent). Since home-court advantage is so pronounced in college basketball, road games count a little more, win or lose.
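The formula is simple enough to sketch in a few lines of code. The 25/50/25 weighting is as described above; the 0.6 home / 1.4 road site weights are the commonly cited values for the NCAA's road-game adjustment and apply only to a team's own winning percentage. The teams and results below are invented for illustration:

```python
# Sketch of the RPI: 25% own winning percentage (site-weighted),
# 50% opponents' winning percentage, 25% opponents' opponents'.
# Site weights (0.6 home win / 1.4 road win, reversed for losses) are the
# commonly cited values for the NCAA's road-game adjustment. The schedule
# below is invented.

# Each game: (winner, loser, winner's site), site in {"H", "A", "N"}
games = [
    ("Ohio State", "Indiana", "H"),
    ("Indiana", "Purdue", "A"),
    ("Purdue", "Ohio State", "N"),
    ("Ohio State", "Purdue", "A"),
]

WIN_WEIGHT = {"H": 0.6, "N": 1.0, "A": 1.4}
LOSS_WEIGHT = {"H": 1.4, "N": 1.0, "A": 0.6}
OPPOSITE = {"H": "A", "A": "H", "N": "N"}

def weighted_wp(team):
    """Team's own winning percentage, weighted by game site."""
    w = l = 0.0
    for winner, loser, site in games:
        if winner == team:
            w += WIN_WEIGHT[site]
        elif loser == team:
            l += LOSS_WEIGHT[OPPOSITE[site]]  # loser played at the opposite site
    return w / (w + l) if w + l else 0.0

def raw_wp(team, exclude=None):
    """Unweighted winning percentage, ignoring games against `exclude`."""
    w = l = 0
    for winner, loser, _ in games:
        if exclude in (winner, loser):
            continue
        if winner == team:
            w += 1
        elif loser == team:
            l += 1
    return w / (w + l) if w + l else 0.0

def opponents(team):
    return [loser if winner == team else winner
            for winner, loser, _ in games if team in (winner, loser)]

def owp(team):
    """Opponents' winning percentage, excluding their games against `team`."""
    opps = opponents(team)
    return sum(raw_wp(o, exclude=team) for o in opps) / len(opps)

def oowp(team):
    opps = opponents(team)
    return sum(owp(o) for o in opps) / len(opps)

def rpi(team):
    return 0.25 * weighted_wp(team) + 0.50 * owp(team) + 0.25 * oowp(team)
```

Note the exclusion in `owp`: an opponent's record is computed without its games against the team being rated, which is one reason the overall number can look counterintuitive at first glance.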
Most of the discussion leading up to selection weekend centers on the overall RPI rankings. If you want to assess a team's position, that is a quick and easy way to do it. It's also the laziest. A team's overall RPI rank is a virtual non-factor to the people who actually have to make these decisions.
The RPI was never meant to be a hard-and-fast listing of how good teams are, though it essentially accomplishes that. Rather, its primary purpose is to serve as an organizing tool that allows the committee to compare teams with different schedules. We all know that all 25-4 records are not alike. When the committee looks at results -- i.e. the "team sheets," which are being made public this year for the first time -- the games are arranged so people can see how a team did against teams ranked in the top 25 of the RPI, the top 50, the top 100, etc.
The RPI is also used to calculate a team's strength of schedule, which is especially relevant in the nonconference. The committee favors teams that played other good teams outside the league, even if they lost to some of them. The NCAA wants to incentivize coaches to play good teams away from home in the nonconference. That's good for the game.
If people can tweak or improve upon this formula, I encourage them to have at it. But if we're going to crunch numbers, we need to make sure we're doing it for the right reasons. I would argue that the basic metric for evaluating teams for the tournament should incorporate three factors and three factors only:
1. How often did they win?
2. Whom did they beat?
3. Where did they play?
For many RPI critics, this does not go far enough. They want to bring anything and everything into the formula, for all the wrong reasons. I'm not saying the RPI is without flaws, but I am saying it's the best metric that anyone has come up with so far. Here's why:
It does not include scoring margin.
This seems to be the chief complaint of RPI critics. It is also the number one reason people cite for replacing the RPI with the ratings put together by Jeff Sagarin of USA Today.
Reasonable people can disagree, but I'm going to be honest. I find nothing reasonable about the idea to integrate scoring margin into the formula that the NCAA uses to bracket the tournament. The reason is simple: It would give coaches the incentive to run up scores. It's hard for me to believe that serious people would make this argument, yet I'm seeing it everywhere.
But wait: Isn't beating a good team by 20 points more impressive than beating a bad team by one? Of course. That's why the members of the committee are free to take that into account when they burrow into the details. In fact, the committee members are free to take any factor into account. And they do. But to replace the RPI with a formula that explicitly incorporates scoring margin? There are some pretty bad ideas about how to improve this process, but this one is by far the worst.
It doesn't try to quantify missing or injured players.
This is the sexy wrinkle in ESPN's newly unveiled College Basketball Power Index (BPI). If a team is missing key players (as determined by their average minutes per game), the BPI de-weights those games' impact relative to games in which both teams are at full strength.
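ESPN has not published the BPI's exact math, so the following is only a toy illustration of the de-weighting idea, not ESPN's formula. It discounts each game by the share of a team's usual minutes that were actually available; the minutes figures are invented:

```python
# Toy illustration of de-weighting games by roster availability
# (NOT ESPN's actual BPI math, which is unpublished; numbers invented).

games = [
    # (result, available_minutes, full_strength_minutes)
    (1, 200, 200),  # win at full strength
    (0, 160, 200),  # loss while missing 40 minutes' worth of players
    (1, 160, 200),  # win while shorthanded
]

def deweighted_record(games):
    """Winning percentage where shorthanded games count for less."""
    num = den = 0.0
    for result, avail, full in games:
        w = min(avail / full, 1.0)  # discount games played shorthanded
        num += w * result
        den += w
    return num / den

print(round(deweighted_record(games), 3))
```

In this toy example the shorthanded loss drags the record down less than a full-strength loss would, which is exactly the behavior the column goes on to question.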
This is a novel idea, but it's also a misguided one. In the first place, this dynamic is already an important part of the process. Members of the committee are assigned specific conferences that they monitor throughout the season. In addition, the committee and the NCAA's staff are bombarded with information provided by the teams and conferences themselves. Not only will everyone be fully aware that Pittsburgh played 11 games without Tray Woodall, but that fact will also be brought specifically into the discussion. This will be true for every team that comes under consideration.
Besides, who is to say that a team automatically gets worse because a starter is hurt? Two years ago, Notre Dame was floundering in mid-February when its leading scorer and rebounder, Luke Harangody, went out with a bone bruise in his knee. In his absence, the Irish came together and won six straight games. Those wins wouldn't count as much in the BPI as they do in the RPI. Is that fair?
It doesn't account for efficiency.
If you're a true, hardcore college basketball fan, you know all about efficiency rankings, also known as tempo-free stats. They sprang from the brilliant mind of Ken Pomeroy. His site, KenPom.com, is a bountiful source of useful information. I check it almost every day, and I use Ken's nuggets frequently in my columns and television analysis.
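For readers new to the tempo-free idea: efficiency is points per 100 possessions, with possessions estimated from the box score. The 0.475 free-throw coefficient below is the value commonly cited for the college game (other sources use slightly different figures), and the box-score line is invented:

```python
# Common back-of-the-envelope tempo-free calculation. The 0.475
# free-throw coefficient is the value often cited for college ball;
# the box-score numbers below are invented.

def possessions(fga, oreb, tov, fta):
    # A possession ends with a shot, a turnover, or a trip to the line;
    # offensive rebounds extend possessions rather than starting new ones.
    return fga - oreb + tov + 0.475 * fta

def offensive_efficiency(points, fga, oreb, tov, fta):
    """Points scored per 100 possessions."""
    return 100.0 * points / possessions(fga, oreb, tov, fta)

print(round(offensive_efficiency(points=75, fga=58, oreb=10, tov=12, fta=20), 1))
```

Dividing by possessions rather than games is the whole point: it lets a grinding, 60-possession team be compared fairly with one that runs at 75 possessions a night.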
However, these numbers have no place in the bracketing process. None. They tell us a great deal about why a team wins, but that is far less important than knowing whether it wins. Believe it or not, sometimes the less efficient team still ends up with more points.
When I gave voice to this opinion on Twitter last week, it prompted an industrious RPI critic, Jason Lisk of The Big Lead, to write a blog post countering my position. Lisk wrote that "Davis' statement would imply that the Pomeroy rankings feature a bunch of teams that lose a lot more games but look really good when they do, while the RPI rewards winners."
My statement implied no such thing. We all understand that the more efficient a team is, the more likely it is to win. By setting up this standard, Lisk undercuts his own position. If the efficiency ratings basically tell who is winning, then using them in the selection process is redundant. If they don't tell us who is winning, then they're irrelevant. The bottom line is, I would rather be inefficient and win than efficient and lose.
It doesn't attempt to predict anything.
People who offer up alternatives to the RPI often rely on the argument that their method is a better indicator of success in the tournament. Pomeroy himself made this argument on Slate.com when he wrote that "a ranking system that doesn't account for margin of victory isn't particularly useful as a predictor of future results." Similarly, last year an engineering professor at Georgia Tech named Joel Sokol argued in The New York Times that the ranking system he devised was better at predicting NCAA tournament outcomes than any other, and was thus far superior to the RPI.
First of all, I am highly skeptical of these claims. If Sokol's formula is that good, he should head out to Vegas for those three weeks. But even if that were true, it would be irrelevant. Selecting and seeding should be about one thing: Rewarding or punishing teams for what they did during the regular season.
It does not rely on the eye test.
We hear all the time that a team needs to pass the "eye test" to get a bid. Well, whose eye are we talking about? And what exactly is the test? A team with lots of speed and athleticism may be pleasing to the eye, but that doesn't mean its players know how to win. Again, I get back to basics: Would you rather pass the eye test and lose, or fail the eye test and win?
I'm not saying members of the committee shouldn't watch games. They actually watch a ton of them during the season. To the extent that it leavens their understanding of what's going on, that's a good thing. But I hope they watch these games with a good dose of humility. Just because they think a team "looks" better than another one under consideration doesn't mean it really is. I want to know whether a team won, whom it beat, and where it played. I'm a lot less interested in knowing how it looked.
It doesn't require "basketball people" to tell us what happened.
This is the standard fallback position for critics of this process: if there were real "basketball people" on the committee -- former coaches, for example -- they would make much better decisions.
This argument ignores the possibility that "basketball people" can be just as wrong as the rest of us. Ex-coaches are also just as prone to have personal agendas, if not more so. What happens if a coach on the committee is assessing a team coached by the son of his former assistant? What if that coach played nothing but man-to-man and believes a zone is a weak strategy? Or what if the coach believes he knows the game so well that all he has to do is watch the teams play, and therefore needn't waste time burrowing into all the other information? Anyone seen a coaches' poll lately?
I'm not saying the RPI can't be improved, tweaked or perfected, but can someone tell me exactly what is so broken about the NCAA tournament? The intensity of all this criticism reminds me of the reaction to the committee's decision last year to give an at-large bid to VCU. I was one of those critics. I said on the CBS Selection Show that I thought the committee made a mistake, and I still don't believe VCU's remarkable run to the Final Four validated its inclusion. Still, VCU's success was a stark reminder to all of us professional bloviators that just because we disagree with the committee doesn't mean we're right. We would all do well to remember how that humble pie tasted these next few weeks.
So let me conclude by issuing a challenge to all those folks who voice such righteous indignation toward the RPI, all those computer geeks and tempo-free adherents and "basketball people" who believe they know better: I want to see your brackets. Take your Sagarin ratings, your BPI, your KenPom numbers, your eye tests, and any other information your heart desires, and show us exactly how you would select and seed the 2012 NCAA tournament. Let's see how superior your brackets are to the official version that comes out of that conference room in Indianapolis.
My guess is, yours won't be that much different from theirs. Or mine. Maybe yours will be a little better, maybe it will be a little worse, but really, how will we know? Because in the final analysis, our opinions don't matter. Two days after the bracket gets revealed, the games will tip off, and all the mystery will be washed away. There is only one place in basketball where numbers reveal the ultimate truth, and the one thing we know for certain is that the scoreboard never lies.