Why I Think Pre-Season Polls are Destructive

As Purple Reign pointed out, LSU was ranked #6 in the first important pre-season ranking of the season: the ESPN/USA Today coaches' poll.  Purple Reign goes on to say, "Okay, it's the preseason, so it really doesn't mean anything (unless it's 2004 and you're Auburn)."

But there's the problem.  It sometimes really does mean something.  In fact, sometimes the pre-season polls are actually determinative of something important that happens later in the season, and the 2004 Auburn squad provides the perfect example of a team that lost a significant opportunity for the sole reason that no one realized before the season that they would actually be really, really good.

How did it hurt Auburn?  Well, in the pre-season rankings, Southern Cal and Oklahoma were #1 and #2.  Auburn was down in the teens.  All three of those teams went undefeated, but teams rarely move down in the polls unless they lose.  Because Southern Cal and Oklahoma never lost, Auburn never had a shot at breaking into the top 2, no matter what they did.  And all they did was run the table in the SEC, a feat that was not respected as much as it should have been because they "had a few close games" rather than blowing everybody out.

Now, what's so bad about that?  Well, I have a theoretical problem with pre-season rankings (and early-season rankings for that matter).  I think I said it best in a post back at GeauxTuscaloosa:

First, let's go into (again) why I don't like preseason rankings in general. I've discussed it before, and the criticisms are mostly obvious, but I want to go into one that isn't obvious.

Late-season polls and rankings are (or should be at least) measurements of how good your season is, compared to other teams. Late season polls are based on performance, then. All the talent in the world won't get you in the top 10 if you finish the season 5-8. 

By contrast, preseason and early-season polls and rankings cannot be based on performance, because there is no performance to gauge. They have to be based on something else. But what exactly? 

In my opinion, preseason polls and rankings, if you're going to have them at all, should be based on perceived ability. This means that you look at the talent on the team and the coaching and try to decide who is better than whom. Frankly, I think it's an impossible task to get right, and it's plainly laughable to make a serious effort at it when you don't know which key players on the teams will be leaving early for the NFL.

The problem is that I don't think many polls or rankings in the preseason or early in the season are based on perceived ability. I think they're based more on expected performance. These are two different things, and I'll illustrate what makes them different. If you are evaluating perceived ability, I think the easiest way to visualize this is to imagine two teams playing on a neutral site, and imagining who would likely win. If you are evaluating expected performance, you look at the team's schedule and decide how they will perform against that schedule, compared to how other teams will perform against their own schedules.

If you're using expected performance, you aren't ranking ability before the season. You're predicting the end result after the season. I believe it is illegitimate to do so, since the biggest and most commonly referenced rankings are actually used as a starting point for future rankings. In other words, they predict performance and then the predictions actually have an impact on how performance is gauged. Schlabach is doing exactly this kind of prediction, as are most people. How do I know? Every one of his brief team profiles contains at least one sentence evaluating the difficulty of the team's schedule. If you were simply using perceived ability to evaluate the team, there would be no reason to even consider the team's schedule.

I was reacting to an absurdly early pre-season ranking put together by ESPN's Mark Schlabach.  His ranking, like so many others, isn't really attempting to rank the quality of a team, but is incorporating at least some element of predicted performance.  The cockles of my heart find this deeply troubling.

I think it's part of the Culture of Prediction that I rebel against.  Various members of the media and the public think you aren't doing your job as an analyst if you don't at least try to predict who will win the game, the conference, the national championship, or certain post-season awards.  It is as if many people believe that if you understand the game of football, it means you should be able to predict its outcome.

I disagree very strongly with that notion, and in fact I think just the opposite.  I think probably the single biggest football-related epiphany I ever had was when I realized that the game of football was eminently unpredictable.  When I didn't understand football all that well, I thought I could handicap.  The more I learned, the more closely I studied, the more I realized that the elements of randomization inherent in the game of football made prediction very unreliable.

Sure, when a really good team plays a really bad team, you can predict the really good team will win 100% of the time, and you'll be right approximately 99% of the time.  But when two evenly matched teams play, the game is almost always decided by little things.  A penalty here.  A missed tackle there.  A fumble that could have been recovered by one team but was instead recovered by the other.  Most games are decided by a couple of key plays.  You would be a fool to think you could reliably predict the outcome when two similarly skilled teams play one another.

Even the LSU-Ohio State game, which was not particularly close, would have been very different if just a couple of plays had gone differently.  If Ricky Jean-Francois doesn't block that kick (and assuming the kick was on target), and if the Ohio State player doesn't rough our punter early in the 3rd, the game would have been very different.  If Harry Coleman doesn't recover Chad Jones' fumble, the game would have been very different.  

You can't possibly predict those things.  You can't predict which of those events will happen, or how they will impact the game, and only the most lopsided matchups aren't profoundly impacted by those sorts of plays.

This is why I think rankings should only come out starting about halfway through the season, and they should only measure actual accomplishments on the field.  Nothing else.  Even if you use "perceived ability" rather than "expected performance" as your gauge, you eventually have to transition between that and measuring past performance, and that is a tricky thing to do, in my opinion.