ESPN's College Football Playoff Predictor is pure nonsense
This season, ESPN released a new tool in its analytic arsenal: its College Football Playoff Predictor. The tool, according to ESPN, is a model that "is derived from the committee's past behavior in its rankings (both in-season and on selection day) throughout the first four years of the playoff system."
ESPN has used this Playoff Predictor throughout the season. It posts regular updates on the college football section of its website. The Predictor is quoted in most articles by Senior Writer (and main CFP analyst) Heather Dinich. It is referenced in on-air broadcasts. In short, it's a tool that ESPN is relying on heavily to monitor the College Football Playoff and predict how the committee will decide.
Unfortunately for ESPN, the Playoff Predictor is also pure nonsense.
Logically, we should realize that any predictive model of what the selection committee will do is rubbish. The committee is not a monolithic body; members rotate out every year. In addition, members have dropped out and not been replaced, so for at least half of the playoff system's existence, the full 13-person committee hasn't even been the one deciding.
We can certainly note trends (I do that myself with the committee rankings every week), but that doesn't mean we can predict what the committee will do before the season starts. I try to learn from the committee each week, so that its weekly rankings give us hints as to what the final rankings will be. I don't pretend to be able to read minds; I just go off what the committee has shown us so far.
That logical issue is far, far from the only problem with the Playoff Predictor, though. Let's look in depth at what ESPN claims the Predictor does, and why it's flawed.
ESPN's factors
ESPN leads its introduction to the Playoff Predictor with this explanation: "And through study of the committee, ESPN Analytics identified five key factors that determine each team's chance to reach the playoff." Again, leaving aside the issue that the committee is different every year, let's evaluate these claims on their own.
1. STRENGTH OF RECORD (HOW MUCH TEAMS HAVE ACCOMPLISHED)
2. FPI (HOW GOOD TEAMS ARE)
I'll deal more with Strength of Record and FPI later, because those are the biggest issues. I will point out, however, that according to the official CFP protocols, the selection committee is not allowed to consider ESPN's FPI, for two reasons. Here are the relevant protocols:
"While it is understood that committee members will take into consideration all kinds of data including polls, committee members will be required to discredit polls wherein initial rankings are established before competition has occurred;"
"Any polls that are taken into consideration by the selection committee must be completely open and transparent to the public;"
Yes, FPI is not technically a poll. Maybe this technicality matters, but I doubt it. Additionally, FPI is far from "completely open and transparent to the public." FPI is proprietary, and no one (other than those at ESPN who program it) knows what is in it. The only description we have comes from ESPN:
"The Football Power Index (FPI) is a measure of team strength that is meant to be the best predictor of a team's performance going forward for the rest of the season. FPI represents how many points above or below average a team is. Projected results are based on 10,000 simulations of the rest of the season using FPI, results to date, and the remaining schedule. Ratings and projections update daily."
This is not a very descriptive or meaningful explanation of the ratings. We don't know how FPI simulates the games or what goes into those simulations. How are incoming freshmen treated? How are injuries accounted for? How much do things like turnover luck and individual matchups matter to the ratings? No one has ever done a full study (that I am aware of) of how successful FPI is as a predictive model. How often does it correctly pick games, both straight up and against the spread? These are very important questions, and ESPN doesn't give us the answers.
SB Nation's Bill Connelly also has a metrics-based rating of teams. His S&P+ ratings are far more open. He explains, weekly, what factors go into the ratings. He releases full box scores to show how a team really did and why the score alone might not be truly indicative of a team's efficiency. Connelly tells the fans as much as he can while still keeping the rankings proprietary. In short, S&P+ is everything that FPI isn't.
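For what it's worth, ESPN's one-paragraph description does at least imply a standard Monte Carlo approach: rate every team in points relative to average, convert rating gaps into win probabilities, and simulate the remaining schedule thousands of times. Here is a minimal sketch of that idea. To be clear, the logistic mapping and the `scale` constant are my own assumptions for illustration, not anything ESPN has published:

```python
import random

def win_probability(fpi_a, fpi_b, scale=14.0):
    # Hypothetical mapping: since FPI is "points above or below average,"
    # a logistic curve on the rating gap is one common way to turn a
    # point-spread-like number into a win probability. The scale is a guess.
    return 1.0 / (1.0 + 10.0 ** (-(fpi_a - fpi_b) / scale))

def simulate_rest_of_season(ratings, schedule, n_sims=10_000, seed=0):
    """Monte Carlo over the remaining schedule.

    ratings  -- dict: team name -> FPI-style rating (points vs. average)
    schedule -- list of (team_a, team_b) games left to play
    Returns a dict: team name -> expected additional wins.
    """
    rng = random.Random(seed)
    wins = {team: 0 for team in ratings}
    for _ in range(n_sims):
        for a, b in schedule:
            # Flip a weighted coin for each remaining game.
            if rng.random() < win_probability(ratings[a], ratings[b]):
                wins[a] += 1
            else:
                wins[b] += 1
    return {team: w / n_sims for team, w in wins.items()}
```

The mechanics are trivial; everything interesting is hidden inside the ratings and the rating-to-probability mapping, which is exactly the part ESPN keeps private.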
3. NUMBER OF LOSSES (INCORPORATED INTO SOR BUT THE COMMITTEE PLACES EVEN MORE EMPHASIS ON LOSSES)
This is one thing that ESPN is clearly correct on. I have noted that the number of losses still generally determines the rankings, especially at the top. Sure, undefeated Florida State was behind two one-loss teams in the final 2014 rankings. That didn't keep the Seminoles out of the Playoff, though.
In fact, the only time we've ever seen a two-loss team ahead of a one-loss team towards the top of the final rankings was 2015. Stanford finished the year at No. 6, ahead of No. 7 Ohio State. The Cardinal had a clearly superior resume, including one of the best strengths of schedule in the country. Ohio State looked impressive and had a ton of NFL talent (FPI really liked Ohio State that year), but could only pick up one ranked win. The Cardinal still finished below one-loss Iowa that year, though.
4. CONFERENCE CHAMPIONSHIPS
This is an easy claim to make, but a much harder one to substantiate. Everyone assumes that what got Ohio State into the Playoff in 2014 was its conference championship game. Jeff Long did say back then that the way Ohio State won that game pushed the Buckeyes over the edge. It's also worth noting that Ohio State had a better strength of schedule and more quality wins than Baylor. TCU's resume was close to Ohio State's (each had three wins over committee-ranked teams), but the Horned Frogs lost to Baylor head-to-head. Also, Baylor and TCU were each conference champions in 2014. We can point out the Big 12's cynicism in claiming them as such, but the committee never gave any indication that it treated them as anything less than co-champions.
Leaving Ohio State in 2014 aside, there are plenty of other examples of the committee not quite respecting a conference champion. Iowa was ahead of Stanford in 2015, even though the Cardinal had a better SOS and resume and was a conference champion. 2016 saw Ohio State, with a clearly superior resume, ahead of Pac-12 champion Washington (and Big Ten champion Penn State). The committee clearly gives some weight to conference championships, but it's very clear that they matter far less than the resume.
5. INDEPENDENT STATUS (NOTRE DAME CAN'T BE A CONFERENCE CHAMPION, BUT ALL ELSE BEING EQUAL IT MIGHT GET MORE CREDIT THAN A TEAM THAT DIDN'T WIN ITS CONFERENCE CHAMPIONSHIP)
It's really hard to know if this is meaningful at all. Does Notre Dame get some special treatment from the selection committee? Logic dictates it would; everyone involved in college football has some bias, toward or against, when it comes to Notre Dame. That's just a fact about the Irish. But Notre Dame has only really been ranked when it has had a strong resume, and in those cases the rankings have seemed fair.
Also, as a total aside, it's pretty cynical of ESPN to refer to this category as "independent status." BYU isn't getting any special treatment for being an independent. Army and UMass certainly never have. This is a nice way of ESPN claiming that the committee is biased towards Notre Dame. Notre Dame has consistently received the benefit of the doubt from the committee (more than any other team not named Alabama), but it's tough to claim that has been undeserved. The Irish have consistently had quality wins and a strong SOS as well.
How can you predict the committee?
This passage shows the biggest problem with the Playoff Predictor:
Strength of Record is the most important factor. Fifteen of the 16 playoff teams in the past four years have ranked in the top four of Strength of Record on selection day.
How is ESPN claiming that its Strength of Record metric is an accurate predictor for CFP selection? The committee has changed its talking points about what matters over the years. For most of 2014, it was "game control," an entirely subjective factor that was just a euphemism for "eye test." That factor seems to have gone away; at least, the words "game control" are no longer used.
The committee has also shifted from talking about wins over ranked teams to "wins over teams with .500 or better records." Of course, the committee has ignored this at times, like with LSU in 2016, but the committee has never been consistent in doing what it says it does. That's part of the absurdity of ESPN's Playoff Predictor.
The committee also doesn't seem to use any SOS metric. Back in 2014, Jerry Palm explained that the committee seems to just address SOS by eyeballing it. Members look at whom teams have played and those teams' records, and that's enough. How can a fancy "Strength of Record" metric account for that? It just can't.
The problem with FPI
For the conspiracy-minded, the above quote might indicate that ESPN is giving numbers to the selection committee, which the committee then uses. As a fun fact for those anti-ESPN conspiracists out there: the Wikipedia page about the CFP used to say that "advanced statistics and metrics from ESPN are expected to be submitted to the committee," but that line no longer appears.
Of course, the selection committee is made up of football experts and analysts, who are well aware of ESPN's conflict of interest. Taking proprietary metrics from the one entity most invested in how many people actually watch the games would be foolish in the extreme. ESPN doesn't get to sit in (or have a reporter sit in) on the committee's meetings. There is no reason to believe that the committee uses FPI at all. Given that, as noted above, considering FPI is explicitly against the protocols, I highly doubt the committee uses it.
Of course, what the committee does do is watch television. Members probably read articles on ESPN.com as well. ESPN is by far the biggest carrier of college football and airs the CFP. ESPN also has the most vested interest in maximizing ratings for the College Football Playoff. Back in 2015, when there was concern over CFP viewership on New Year's Eve, Disney (the parent company of both ESPN and ABC) threw lines about watching the CFP into the ABC daytime soap General Hospital. Committee members absolutely must recognize these conflicts of interest before giving weight to any proprietary FPI numbers they see on ESPN.
I am not particularly conspiracy-minded. The committee has a lot of members, and they rotate; it's all but impossible to keep a conspiracy that so many people know about under wraps. What ESPN is doing, though, is priming and influencing public opinion.
Final thoughts
As long as what goes into FPI remains private, we can never know if ESPN is influencing public opinion to suit ESPN. The conflicts of interest are obvious and I doubt anyone at ESPN would deny them. When CFP viewership is higher, ESPN does better. And when certain teams are in the CFP, viewership is higher.
At its absolute best, ESPN is using this Playoff Predictor to claim an expertise that doesn't exist. It can't exist. We can only do our best to understand the committee each year (I certainly try to).
Of course, most of the selection committee's decisions have been obvious anyway. There should be consensus on at least three top teams every year. Of the CFP's 16 selections, 13 should have been essentially unanimous. Ohio State in 2016 was obvious to anyone who doesn't believe that conference championships should trump all. The only selections genuinely up for debate were Ohio State in 2014 and Alabama last year. It's not hard to make a program that would accurately look back and "predict" at least 14 of the committee's 16 selections.
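To make that last point concrete, here is a toy version of such a look-back "predictor." The selection rule (fewest losses first, with opponents' combined win percentage as a crude stand-in for the eyeballed SOS discussed above) and the team records below are hypothetical illustrations, not the committee's actual method or data:

```python
def pick_playoff_field(teams):
    """Toy selection rule: sort by fewest losses, break ties with a
    crude schedule-strength proxy (opponents' combined win pct),
    and take the top four. Returns the four selected team names.

    teams -- list of dicts with keys: name, losses, opp_win_pct
    """
    ranked = sorted(teams, key=lambda t: (t["losses"], -t["opp_win_pct"]))
    return [t["name"] for t in ranked[:4]]

# Hypothetical final standings for one season.
season = [
    {"name": "Tide",     "losses": 0, "opp_win_pct": 0.55},
    {"name": "Tigers",   "losses": 1, "opp_win_pct": 0.62},
    {"name": "Buckeyes", "losses": 1, "opp_win_pct": 0.58},
    {"name": "Sooners",  "losses": 1, "opp_win_pct": 0.50},
    {"name": "Cardinal", "losses": 2, "opp_win_pct": 0.70},
]

field = pick_playoff_field(season)
# The undefeated team and the three one-loss teams get in; the
# two-loss team misses despite the toughest schedule.
```

A rule this crude, fitted after the fact, would match most of the committee's actual picks, which is precisely why matching past selections proves nothing about predicting future ones.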
The problem is what ESPN is using this for. It's pretending it has cracked the code to the selection committee. It hasn't, because there is no such code. The committee is a group of human beings. The group changes, and the minds of people can change. Members also discuss the rankings with each other, so different conversations can yield different results. One year, offensive prowess might be deemed more significant; in another, maybe the committee will like defense better. With SOS being eyeballed, there is no consistent feature linking SOS assessments from year to year.
ESPN wants us to believe that it's the end-all, be-all destination for college football information. That includes both watching games and expert analysis. ESPN is trying to convince us that it also includes looking ahead to who will make the Playoff. That final part, at least, is utter nonsense. It's a marketing tool, not an actual Playoff Predictor.