Posted: Monday March 15, 2010 2:42PM; Updated: Tuesday March 16, 2010 11:10AM
Andy Staples | Inside College Basketball

In NCAA tournament selection, subconscious human bias exists (cont.)


So DuMond tweaked the model to test several hypotheses, including whether committee representation aided teams. The trio fed the previous tournaments' data back into the model, and the number of misses dropped. Accuracy rose from 94 percent to 97 percent.

How accurate is the model now? On Sunday, it correctly predicted 33 of the 34 at-large teams. The lone miss fell just on the other side of the trio's cut line. The model predicted Mississippi State (with an 81.4 percent chance of getting a bid) would be the last team in, while Florida (with an 80.2 percent chance) would be the first team out. The Gators got a bid. The Bulldogs didn't.

"Once we started controlling for things like the membership representation on the committee as well as some other factors, our predictions got more accurate," DuMond said. "In some sense, that's the best proof that these things matter."

It's not proof enough for the NCAA's Paskus, who had major issues with the model that found bias in at-large selections. His chief complaint was that the authors tilted the study toward their findings by using too lenient a P-value threshold. A P-value describes the probability that a given result arose purely by chance; a threshold of .01 means a finding is accepted only if there is at most a one percent chance it is a fluke. In some instances, the authors used a threshold of .1 in the at-large model, tolerating a 10 percent chance of a random result. "The standard practice in the social sciences -- and I assume economics is still holding to this -- when you're looking into that many tests of significance, you're supposed to take that into account in the P-value you use," Paskus said. "So rather than stacking the odds in your favor by using that .1 and trying to find a bunch of significant results, you're supposed to go in the other direction and look at a .01 or .001 to be sure the effects you're seeing aren't just due to chance."

The authors contend that the most important findings -- the committee representation bias and favoritism toward the Pac-10 and Big East in at-large selections -- were statistically significant to the .01 level. They also pointed out that in the model that found bias in seeding, the results were statistically significant at the .0001 level, meaning there is a one-hundredth of one percent chance of a random result.
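Paskus' objection is the standard multiple-comparisons problem: run enough significance tests at a loose threshold and some will come up "significant" purely by chance. The simplest correction for this, shown below as a generic illustration (the Bonferroni correction; the article does not say either side applied this particular method), divides the threshold by the number of tests:

```python
def bonferroni_threshold(alpha, num_tests):
    """Per-test P-value cutoff that keeps the overall chance of at
    least one false positive across num_tests tests at roughly alpha."""
    return alpha / num_tests

# A single test at the .1 level tolerates a 10 percent chance of a fluke;
# spread across, say, 10 tests, each one must clear a far stricter bar.
print(round(bonferroni_threshold(0.1, 10), 4))   # per-test cutoff: 0.01
print(round(bonferroni_threshold(0.01, 10), 4))  # per-test cutoff: 0.001
```

This is the direction Paskus describes: more tests should mean a smaller per-test P-value, not a larger one.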

Paskus also took issue with some of the odds ratios used by the authors. An odds ratio compares the odds of one outcome to the odds of another, with all other factors held equal. In the case of Pac-10 teams receiving an at-large selection, the authors found an odds ratio of 9,999. "Those numbers, statistically, are nonsensical," Paskus said. "The odds ratios just can't be that large, especially given the data they have where they're only looking at a couple hundred teams over a 10-year period."

The authors argue that while the number seems high, when translated using standard statistical measures, it isn't. DuMond wrote in an e-mail that if, for example, UCLA and Siena each have a 90 percent chance -- based on their on-court resumes -- of receiving an at-large bid, the bias factor would instead give UCLA a 99.99 percent chance of receiving a bid. Siena's chance would remain at 90 percent. Such a difference, DuMond wrote, probably wouldn't be visible to the naked eye.

In his independent evaluation, Fort also mentioned the eye-popping odds ratio. "One shouldn't make too much of the idea that, say, a Pac-10 [team] has a 10,000 times higher chance of being selected relative to some other 'minor' team of similar performance-only variables," Fort wrote in an e-mail. He then explained the same translation as DuMond. Fort's main concern was the sample size. He wrote that because the results affected by bias were so few in number, the results may have statistical significance, but they may lack "impact significance."
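The translation DuMond and Fort describe -- converting an odds ratio back into probabilities -- can be sketched in a few lines. This is a generic odds-ratio calculation for illustration, not the authors' actual model:

```python
def shift_probability(p, odds_ratio):
    """Apply an odds ratio to a baseline probability p: multiply the
    baseline odds p/(1-p) by odds_ratio, then convert back."""
    odds = p / (1.0 - p)
    boosted = odds * odds_ratio
    return boosted / (1.0 + boosted)

# DuMond's example: a team with a 90 percent baseline chance of a bid,
# boosted by the 9,999 odds ratio found for Pac-10 teams.
print(round(shift_probability(0.90, 9999) * 100, 3))  # about 99.999 percent
```

A 90 percent chance becomes roughly a 99.99 percent chance -- a gap, as DuMond notes, that wouldn't be visible to the naked eye, even though the odds ratio itself looks enormous.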

"They call these observations 'the bias,' but the real question is what makes this bias happen?" Fort wrote. "At the heart of it then is why the NCAA structures this decision process so that it produces these outcomes (the authors must eventually admit, in a very small percentage of the actual cases)? If we observe that committee membership influences outcome, then why allow that committee membership in the first place? ... It is a choice by the NCAA that generates the outcomes that the authors observe. And that is the interesting issue, rather than the fact (that everybody knows anyway) that it occurs."

The NCAA has deterrents in place to discourage bias. For example, a conference commissioner must leave the room whenever a team from the commissioner's conference is discussed. An athletic director must leave the room during discussion of his own team, and he is allowed to offer only facts -- no opinions -- about fellow conference teams. Shaheen said some members go above and beyond. When Bob Bowlsby, then athletic director at Iowa, was the committee chair in 2004 and 2005, he also recused himself from the room when his former employer, Northern Iowa, was discussed. Shaheen also said committee members don't engage in backroom dealings. During their brief respites from the committee room, they may exercise or sleep, but the last thing they want to talk about is seeding or at-large choices. "In 10 years, I have never witnessed an exchange that goes beyond the boundaries," Shaheen said.

DuMond said that while this seems an effective measure on its surface, it doesn't take into account discussions of other teams on the bubble. "They leave the room," DuMond said. "But they also come back in the room. ... A [member] knows [his school is] a bubble team, so he can vote against other bubble teams strategically and save a spot more or less. So while there are some rules in place to at least get rid of the perception of bias, that doesn't necessarily mean that they're working. The statistical evidence suggests that they're not."

Shaheen argued that because a team needs a majority to be seeded or selected as an at-large, one vote isn't likely to swing the decision. He also argued that some at-large decisions involve more than two teams. In some cases, committee members may be discussing the relative merits of five teams from five different conferences. If everyone -- including other schools' athletic directors -- with even tangential involvement left the room, there might not be enough members left to decide. "At some point," Shaheen said, "you have to vote."

The committee also utilizes blind resumes -- team profiles with the team names stripped away -- to help eliminate bias. The problem, according to the researchers, is that committee members are so well prepared going into the selection process that they can't help identifying which team is which using a blind resume. "It's almost impossible," Lynch said, "to make this a blind decision."

So what can the NCAA do to eliminate the perception of bias? DuMond suggests more transparency. "I don't understand why the NCAA doesn't let media people in the room when they're having this debate," DuMond said. "They act like it's some secretive process and it would be somehow unfair for the media to report on it. But if the U.S. Congress lets reporters in as they're making laws that affect millions of people, I don't see why they can't have reporters in there watching 10 guys pick 64 basketball teams."

Every year, the NCAA does hold a mock selection exercise for select media members to help reporters better understand the herculean task of filling the bracket while following the tournament's principles and procedures, but reporters are not allowed to view the actual selection.

DuMond's idea has some merit, but allowing reporters into the room wouldn't eliminate the perception of bias because the accounts of the deliberations would be filtered through the reporters, who also are subject to their own subconscious biases. Here's another idea. John List, an economics professor at the University of Chicago, has studied altruism for years. Through his research --some done on the cutthroat trading floor of baseball card shows -- he discovered that people tend to be altruistic when they know they're being watched.

So why not station cameras in the committee room and turn it into a weekend-long reality show? Fans would certainly watch, and the NCAA could generate some more revenue that it could in turn distribute to member schools. Meanwhile, committee members would know that any bias in their decision would be immediately sniffed out by the fans at home, so they might be a little more conscious of their subconscious biases.

That wouldn't work, Shaheen said, because eventually the committee members must leave the room. Once outside, they would get skewered by fans and by colleagues for the opinions they expressed in the selection room. "You have to be able to know that you can say something honest and critical," Shaheen said.

That's a very human concern for a process that will forever be criticized and scrutinized because of its very humanity. "I don't think they're trying to actively screw other schools to benefit themselves," DuMond said. "But people act in certain ways where biases will show up."
