Random Walker Rankings for NCAA Football





How the old BCS formula put Oklahoma #1 in 2003

Note: Most of the ideas in the paragraphs below (originally posted online in January 2004) were incorporated into an article submitted to the Notices of the American Mathematical Society in March 2004 (appearing in September). We also forwarded copies to BCS decision makers, but we have no evidence that we influenced the decision to scrap the old BCS formula. Indeed, had we been asked (we were not), we would probably have preferred that the new, simplified averaging system not weight the polls quite as heavily as it does, even though the original web-posted essay listed the now-used 2/3-1/3 weighting as an alternative to 1/2-1/2.

Each of the past four seasons, the two polls agreed on the top two teams prior to the bowl games. But in three of those four years, the top two spots in the BCS Standings included only one of those teams. In 2000 and 2001, the #2 team in the polls ended up on the short end of the BCS stick; this time it is USC, the #1 team in both polls, that is on the outside looking in. Various media sources blamed this on the computer rankings, but, as we have remarked in this space earlier this season, the true problem lies in the BCS formula's combination of polls, computers, schedule strength, losses, and quality wins. The polls and the computers obviously already account for schedule strength and "quality wins"; otherwise, Miami of Ohio, TCU, and Boise State would be in the top six. Adding these factors again on top of the polls and computer rankings disastrously double-counts their effects.

We discussed this issue in the middle of November 2003, when Ohio State briefly leapfrogged USC in the BCS Standings. The computer systems then gave Ohio State an average #2 ranking while placing USC at an average #3.33. Meanwhile, USC was #2 and Ohio State was #4 in both polls. But Ohio State gained additional ground from the direct inclusion of strength of schedule (even though this factor was already accounted for in the polls and computers) and briefly became #2 in the BCS. In contrast, a simple 50/50 average of the polls on one hand and the computers on the other would have kept USC at #2 (2+3.33=5.33) and Ohio State at #3 (4+2=6).

The same double-counting problem came back with a vengeance at the end of 2003, with USC on the losing end. USC was #1 in both polls and averaged #2.67 in the computers. LSU was #2 in both polls, averaging #1.93 in the computers. Oklahoma was #3 in both polls, averaging #1.17 in the computers. Even though the computers still ranked Oklahoma ahead of the other teams on average, it was Oklahoma's #11 schedule strength and its quality win over Texas that combined to give it a full 1.55 points of double-counted BCS advantage over USC. Without those points, a straight-up averaging of the polls on one hand and the computers on the other would put USC first (1+2.67=3.67), LSU second (2+1.93=3.93), and leave Oklahoma third (3+1.17=4.17).
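
To make the arithmetic above concrete, here is a minimal Python sketch of the simple 50/50 averaging (our illustration only, not any official BCS computation), using the rank values quoted in the two preceding paragraphs. Note that the parenthetical numbers above are raw sums; dividing by two to form the average changes no ordering.

    # Minimal sketch of the simple poll/computer averaging described above.
    # Lower combined values are better, as in the BCS Standings themselves.

    def simple_average(poll_rank, computer_rank):
        """50/50 blend of a team's poll rank and average computer rank."""
        return (poll_rank + computer_rank) / 2

    # (poll rank, average computer rank), as quoted in the text:
    mid_november_2003 = {"USC": (2, 3.33), "Ohio State": (4, 2.0)}
    final_2003 = {"USC": (1, 2.67), "LSU": (2, 1.93), "Oklahoma": (3, 1.17)}

    for label, teams in [("Mid-November 2003", mid_november_2003),
                         ("Final 2003", final_2003)]:
        print(label)
        for team, (poll, comp) in sorted(teams.items(),
                                         key=lambda kv: simple_average(*kv[1])):
            print(f"  {team}: {simple_average(poll, comp):.3f}")

Run as-is, this prints USC ahead of Ohio State in mid-November and USC, LSU, Oklahoma in that order at season's end, matching the orderings claimed above.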

Simple averages of the polls and computers would also have clarified the BCS problems at the end of the 2000 and 2001 seasons. Above, we have for simplicity suggested weighting the polls and the computers 1/2 each. On the results of those seasons, such averaging still selects Oklahoma versus Florida State in 2000 and Miami versus Nebraska in 2001. Clearly the computers aren't too popular right now, so perhaps 2/3 to the polls and 1/3 to the computers would be more palatable to fans? In that event, the 2000 standings would have selected Oklahoma and Miami, while the 2001 standings would have picked Miami and Oregon. Obviously, how much weight the polls should carry relative to the computers is a choice that would need to be fixed in advance for official standings.
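
For completeness, here is the same idea with an explicit weight w on the polls and (1 - w) on the computers. The 2000 and 2001 poll and computer values are not quoted above, so we illustrate with the end-of-2003 numbers, where the ordering happens to come out the same at either weighting; the weight itself is the free parameter that would need to be fixed.

    # Weighted variant: weight w on the polls, (1 - w) on the computers.

    def weighted_average(poll_rank, computer_rank, w):
        return w * poll_rank + (1 - w) * computer_rank

    final_2003 = {"USC": (1, 2.67), "LSU": (2, 1.93), "Oklahoma": (3, 1.17)}

    for w in (1 / 2, 2 / 3):
        order = sorted(final_2003,
                       key=lambda team: weighted_average(*final_2003[team], w))
        print(f"w = {w:.3f}: " + " > ".join(order))  # best team listed first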

That this double-counting problem isn't widely appreciated further supports our opinion that the BCS system needs to be made more transparent. College football fans shouldn't have to accept computer rankings without a reasonable explanation of the ingredients that go into those algorithms, both so that fans have more confidence in the resulting rankings and so that they have the opportunity to suggest modifications. For instance, there is certainly a need for a discussion of how much more a loss late in the season, or in a conference championship game, should matter compared to an earlier loss. Additionally, we strongly believe that the BCS formula and its double counting should be scrapped in favor of a simple average of the polls and the computer rankings.


Copyright © 2004 Peter J. Mucha (mucha@unc.edu), Thomas Callaghan, Mason A. Porter