Nate Silver and His Equations Don't Understand the World Cup

The dubiousness of Big Data's soccer predictions

I have no idea what a diagonal inflated bivariate Poisson regression is, and I don't suppose you do either. It turns out it's part of the fancy math tricked out by Nate Silver and his colleagues at FiveThirtyEight to predict the results of all the matches at this World Cup. If you think it seems fishy, you would not be alone.
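For what it's worth, the plain (un-"inflated") bivariate Poisson idea is simple enough to sketch. The toy version below is mine, with made-up scoring rates; it is not FiveThirtyEight's model, only an illustration of what a goals model of this family does.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_odds(lam_home, lam_away, lam_shared, n=200_000):
    """Monte Carlo win/draw/loss split from a plain bivariate Poisson.

    Home goals = A + C and away goals = B + C, where A, B and C are
    independent Poisson draws; the shared component C is what makes
    the two scorelines correlated.
    """
    a = rng.poisson(lam_home, n)
    b = rng.poisson(lam_away, n)
    c = rng.poisson(lam_shared, n)
    home, away = a + c, b + c
    return {
        "home win": (home > away).mean(),
        "draw": (home == away).mean(),
        "away win": (home < away).mean(),
    }

# Entirely made-up scoring rates, purely for illustration.
print(match_odds(lam_home=1.6, lam_away=1.1, lam_shared=0.2))
```

The "diagonal inflation" part, as I understand it, simply adds extra weight to drawn scorelines. The machinery, in other words, is a goals model, not a crystal ball.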

I am agnostic in the Big Data journalism wars. I think there's a place for it. There are times when number-crunching really can offer valuable insights. But there are other times when it, er, doesn't. The World Cup is one of those occasions. 

I mean, Silver chewed through all his data and came up with the blinding observation that Brazil are the team most likely to win this tournament. Gee, thanks. Moreover, before the tournament had even begun, his next three best teams were Argentina, Germany and Spain. Double thanks.

Sometimes you don't need data; you only need your eyes. Football is not baseball (not least because it hasn't been corrupted by junk stats such as RBIs or "pitching wins"). The tiresome argument between your lying eyes and dispassionate data just doesn't apply to soccer. Or, to put it another way, when the data delivers eyebrow-raising results, are you going to trust the data or your eyes?

So sure, Silver predicts a Brazil victory. But his numbers, largely based on ESPN's "Soccer Power Index," would also have you believe that Ecuador had a better chance of winning the World Cup than Italy. Indeed, Italy were given a 0.5% chance of winning it all, barely any better than the USA's 0.4% likelihood of success.

You might argue this reflected the fact that Italy were drawn in a tougher group than Ecuador, but this is still an example of how the numbers lead you to some very strange conclusions. Nor is it the only one. Silver reckoned Chile were fifth favorites and more than four times as likely to win the tournament as the Netherlands. Really? Your eyes tell you Chile are a decent side not to be taken lightly; your eyes won't believe they're that much better than Holland. Because they're not. Especially in a one-off game.

Which, of course, is the problem. There are very few games of international football. Silver admits that small sample sizes complicate his predictions, and a more modest enterprise would concede that all this data-crunching is really little more than fancy guesswork. Instead, however, we endure absurdly precise predictions of manifestly uncertain events.

England, for instance, is now reckoned to have a 37% chance of beating Uruguay, a 34% chance of defeat and a 29% probability of drawing. In other words, it's a game that could produce any result. Who knew? Similarly, the USA's win-loss-draw chances against Ghana are presumed to be 36-34-29. This seems about right! So right, in fact, that it would never have occurred to anyone absent bold Mr. Silver. 
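You don't need a supercomputer to land on numbers like those. A back-of-the-envelope check of my own (two independent Poisson scorers with made-up goal rates, nothing to do with SPI) puts almost any evenly matched pairing in the same neighbourhood: mid-to-high 30s each way, with roughly a quarter of draws.

```python
from math import exp, factorial

def pois(lam, k):
    """Poisson probability of exactly k goals at rate lam."""
    return exp(-lam) * lam ** k / factorial(k)

def win_draw_loss(lam_a, lam_b, max_goals=10):
    """Exact win/draw/loss split for two independent Poisson scorers."""
    w = d = l = 0.0
    for i in range(max_goals + 1):
        for j in range(max_goals + 1):
            p = pois(lam_a, i) * pois(lam_b, j)
            if i > j:
                w += p
            elif i == j:
                d += p
            else:
                l += p
    return w, d, l

# Made-up, roughly even scoring rates for both sides.
for lam_a, lam_b in [(1.3, 1.3), (1.4, 1.2), (1.2, 1.4)]:
    w, d, l = win_draw_loss(lam_a, lam_b)
    print(f"{lam_a} vs {lam_b}: win {w:.0%}, draw {d:.0%}, loss {l:.0%}")
```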

Even so, oddities abound. For instance, Mr. Silver's computers say Costa Rica have a slightly better chance of beating Italy (30%) than they do of defeating England (26%). Perhaps they do, though for the life of me I can't see what basis there is for such a belief beyond the fact that it's what the computer says. And we must always trust the computer!

As an entertainment, there's nothing wrong with this. The problem is that it's presented as some kind of magic capable of unearthing truths that those of us relying only upon our eyes might have missed. I suppose it might work if the World Cup were played over 162 games rather than seven, but a stats-based approach that might be useful for a league campaign is not nearly so reliable in a cup competition.

So, sure, Brazil don't often lose at home. Then again, they don't often play at home either: nine games every four years (usually), and of those nine, fewer than a handful are against top-class sides. So it's a small sample size in the first instance, and one further queered by the context of the competition, especially since "not losing to Brazil in Brazil" would generally be useful but not necessarily vital to another country's prospects of qualifying for the World Cup. A one-off, do-or-die game is a very different proposition.

Meanwhile, Silver's algorithm is refreshed (i.e., recalculated) after every game. Obviously this makes sense, since it is supposed to be a predictive tool, but it is also, I can't help thinking, a means by which the algorithm will take credit for everything it gets "right" while laughing off everything it gets "wrong" on the grounds that we didn't have enough information previously.
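For illustration only (SPI's actual bookkeeping is more elaborate, and this is not it), the family of trick involved resembles an Elo-style rating update: after each match, nudge each team's number toward whatever the result implied, then forecast the next game from the nudged numbers.

```python
def elo_update(rating_a, rating_b, score_a, k=30):
    """One Elo-style refresh after a match.

    score_a is 1 for a win by team A, 0.5 for a draw, 0 for a loss.
    The expected score comes from the logistic of the rating gap, and
    each rating moves by k times the surprise (actual minus expected).
    """
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Made-up ratings: an upset shifts both numbers by quite a lot...
print(elo_update(1900, 1700, score_a=0))   # favourite loses
# ...while the expected result shifts them far less.
print(elo_update(1900, 1700, score_a=1))   # favourite wins
```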

It still does some strange things, however. For instance, if Ghana beat the USA this will "slightly improve SPI's estimates of how strong Africa is compared to other continents and could thereby also improve the odds for teams like Nigeria and Ivory Coast." Again, this seems superficially attractive. But only superficially, because it means we should take seriously a proposition that says "Nigeria is more likely to beat Argentina in a one-off game because Ghana beat the USA in another one-off game," even though the Ghana-USA match cannot possibly affect the way the Argentina-Nigeria game is played.

And this is the problem with Silver's approach. Given the choice between being platitudinous or bizarre, he boldly squares a circle and decides to be both.