
Elections Whiz Nate Silver Fails to Predict Oscars; What Went Wrong?


Christoph Waltz at the 2010 Oscars. Waltz, for his role in Django Unchained, is the unexpected winner of Best Supporting Actor at this year’s Oscars. (Wikimedia Commons)

The statistician Nate Silver predicted four out of the six major-category winners at last night’s Oscars. That’s a 66.7 per cent hit rate. Not bad, but it pales in comparison with what Silver did last year: predict the results of all 50 states in the American presidential election, a hit rate of 100 per cent.

Silver failed to predict the winners of Best Director and Best Supporting Actor. He picked Steven Spielberg (Lincoln) and Tommy Lee Jones (Lincoln), respectively, but the winners turned out to be Ang Lee (Life of Pi) and Christoph Waltz (Django Unchained).

So how did the predictor of 130 million American voters fumble when it came to the 6,000-odd members of the Academy of Motion Picture Arts and Sciences?

His methodology might offer some answers. Two days before the Oscars, Silver posted his predictions on his blog, along with a detailed explanation of his methods.

There are “plenty of parallels” between the presidential voting process and that of the Oscars, writes Silver. Just as the presidential election had numerous pre-election polls, the Oscars have their own equivalent: “The other awards that were given out in the run-up to the Oscars.”

Those awards are numerous, and include the Directors Guild of America Awards (DGA), the Producers Guild of America Awards (PGA) and the Golden Globe Awards. And just like the pre-election polls, the results of some awards tend to coincide with the Oscars, some more than others. The DGA and PGA results coincided with the Oscars 80 and 70 per cent of the time, respectively, while the Golden Globes coincided only 16 per cent of the time.

“These patterns aren’t random,” writes Silver. “Instead, the main reason that some awards perform better is because some of them are voted on by people who will also vote for the Oscars.”

Silver then compiled the results of this year’s pre-Oscar awards. Results from awards that have coincided heavily with the Oscars, like the DGA, were given more weight; results from awards like the Golden Globes were given less.
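As a rough illustration of that kind of weighting, here is a minimal Python sketch. The award weights and nominee names below are hypothetical placeholders, not Silver’s actual model or data; his real calculation drew on many more precursor awards.

# Hypothetical sketch of a weighted "pre-Oscar poll" aggregation.
# Weights and nominees are illustrative placeholders, not Silver's figures.

# How often each precursor award has historically matched the eventual Oscar winner.
historical_agreement = {
    "DGA": 0.80,            # Directors Guild of America
    "PGA": 0.70,            # Producers Guild of America
    "Golden Globes": 0.16,
}

# Which nominee each precursor award picked this season (made-up example).
this_years_picks = {
    "DGA": "Nominee A",
    "PGA": "Nominee A",
    "Golden Globes": "Nominee B",
}

def weighted_scores(agreement, picks):
    """Add up each nominee's support, weighting every award by its track record."""
    scores = {}
    for award, nominee in picks.items():
        scores[nominee] = scores.get(nominee, 0.0) + agreement[award]
    return scores

scores = weighted_scores(historical_agreement, this_years_picks)
prediction = max(scores, key=scores.get)
# Nominee A leads 1.50 (0.80 + 0.70) to 0.16, so it becomes the prediction.
print(scores, prediction)

The point is simply that an award with a strong track record, like the DGA, can outweigh one with a weak record, like the Golden Globes, even when they disagree.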

With the analyses from those compilations, the Oscars’ version of pre-election polls, Silver came up with his predictions, which came true for Best Picture (Argo), Best Actor (Daniel Day-Lewis, Lincoln), Best Actress (Jennifer Lawrence, Silver Linings Playbook) and Best Supporting Actress (Anne Hathaway, Les Misérables).

And those were the easy ones; the numbers were overwhelmingly in their favour. Best Director, though, a category Silver got wrong, was a far harder call: Spielberg and Lee stood toe-to-toe, separated by a mere 0.02 points. It was nearly impossible to predict.

But the Best Supporting Actor prediction was something else altogether. The winner, Waltz, was actually Silver’s third choice — and he lagged a whole 0.36 points behind the top choice, Jones. Waltz’s win — nobody saw that coming.

Maybe that’s the true winner of the Oscars right there: Christoph Waltz, the underdog.

Written by: Ani Hajderaj, Staff Writer
