The Oscars have come and gone again, and I nearly matched my score from last year. I scored either 70% or 68.75% on my Oscar predictions this year; the two figures depend on how you choose to score the rare tie in the Sound Editing category. Regardless, I am rather proud of my score. I learned a great deal about the different awards ceremonies that occur during the season, and I feel that this information will be helpful to me in the future. Furthermore, since I have already collected a great deal of data, I feel very optimistic about next year.
The majority of the awards this year were obvious, but a few were not. There were, of course, moments where I knew what the data suggested, but I had a strong sense that something different would happen. I now recognize that this bias muddled my decision making. The only category that completely threw me off was Production Design, which I will explain below.
For my predictions for the 85th Academy Awards, I attempted to build a model from some statistical data, a model I admitted was obviously flawed. Still, much of my data was good enough for me to correctly predict the vast majority of the winners, even where I strayed from some of the signals. Then again, there were unexpected inconsistencies, including my Production Design loss.
Nate Silver, as always, does a much better job of looking at data that is actually useful for making predictions. He looked at each of the other award ceremonies' voting bases and determined how closely each one matches up with the Oscars' voting base. For example, the Producers Guild Awards and the Screen Actors Guild Awards are voted on by members who may also vote in the Academy Awards, whereas the Satellite and critics' awards are voted on by critics and press. Essentially, he splits them into Insiders and Outsiders, with more weight on Insiders.
He also looked at the critics' association awards, which are Outsiders, and found that they rarely match up with the Oscar winners or the other award winners. He focuses solely on the six main categories, which typically have more data to work with; he did not predict the other eighteen categories, probably because the data there is too thin.
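To make the Insider/Outsider idea concrete, here is a rough sketch of a weighted precursor tally in Python. This is not Silver's actual model; the weights and the winning films ("Film A", "Film B") are placeholders I made up, and only the ceremony names come from the discussion above.

```python
# A rough sketch of the Insider/Outsider weighting idea -- not Silver's model.
# The weights and the "Film A"/"Film B" winners are hypothetical placeholders.
INSIDER_WEIGHT = 3.0   # guild voters overlap with the Academy's voting base
OUTSIDER_WEIGHT = 1.0  # critics and press have little overlap

precursors = [
    # (ceremony, weight, film it picked)
    ("Producers Guild Awards",     INSIDER_WEIGHT,  "Film A"),
    ("Screen Actors Guild Awards", INSIDER_WEIGHT,  "Film A"),
    ("Satellite Awards",           OUTSIDER_WEIGHT, "Film B"),
    ("Critics' association award", OUTSIDER_WEIGHT, "Film B"),
]

def weighted_tally(results):
    """Sum each film's weighted precursor wins."""
    totals = {}
    for _ceremony, weight, film in results:
        totals[film] = totals.get(film, 0.0) + weight
    return totals

totals = weighted_tally(precursors)
print(totals)                       # {'Film A': 6.0, 'Film B': 2.0}
print(max(totals, key=totals.get))  # Film A -- the Insider consensus wins out
```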
Once again, in my attempt to predict the Oscars, I used historical information on the Oscar categories and other awards, along with expert opinions and forecasts. Rather than focus on what I got right, I thought it would be more beneficial to look at what I got wrong and why.
What I Got Wrong
Best Director: Silver’s analysis of this category mirrored my own, though how he assembled the data differed drastically; in the end, our predictions were the same, and both wrong. Silver’s data had Steven Spielberg and Ang Lee neck and neck, whereas mine had Lincoln well ahead, which is why I did not seriously consider Lee.
Production Design: My analysis did not even have Lincoln on the radar for this award. In fact, it had this category as a near dead heat between Les Misérables and Anna Karenina. Although I saw a slight lean towards Les Misérables, which also won the BAFTA, I chose Anna Karenina because I truly felt the category would reward the eclectic adaptation of the Russian novel as an achievement. My bias got in the way.
Supporting Actor: Christoph Waltz won the BAFTA and the Golden Globe, but Tommy Lee Jones won the Screen Actors Guild Award, so this category could have gone either way. I put more emphasis on the SAG award, since there is more voting overlap with the Academy. I do not fault my methodology here.
Documentary Short: Categories like this are the most difficult to predict, since they have the least data. I had seen the winner, Inocente, which was very good, but the data I looked at put Open Heart in front. Then again, my data for this category was not extensive, though it came from the same methodology I used to correctly predict Best Live-Action Short.
Makeup and Hairstyling: This was another category with little data to compare against. My Hobbit prediction was just as arbitrary.
Original Screenplay: As with the Supporting Actor category, Django Unchained had won the BAFTA and the Golden Globe, but the Writers Guild Award went to Zero Dark Thirty, and the WGA has more voting overlap with the Academy. I also considered the Golden Globe and BAFTA to have a more European influence, which favors Tarantino’s style, whereas the WGA is a strictly American guild.
The Tie
Although unusual, the Sound Editing category ended in a tie: both Zero Dark Thirty and Skyfall won. Most Oscar pools do not account for ties, and neither do the participants. In my final tally, I calculated my score both ways, once counting the tie as a half point and once as a full point. I did correctly predict Zero Dark Thirty, but Skyfall was not on my radar at all.
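For what it is worth, the arithmetic behind my two scores works out roughly as below. The 24-category total and the 16 clean correct picks are reconstructions from my figures rather than exact bookkeeping, so treat them as assumptions: counting the tie as a half point gives exactly 68.75%, and counting it as a full point lands at roughly 70%.

```python
# A minimal sketch of the two tally methods, assuming 24 total categories
# and 16 clean correct picks outside the tied Sound Editing category.
TOTAL_CATEGORIES = 24   # assumption: number of categories that year
CLEAN_CORRECT = 16      # assumption: correct picks outside the tie

def score(tie_credit):
    """Return the percentage score, giving the tied category tie_credit points."""
    return (CLEAN_CORRECT + tie_credit) / TOTAL_CATEGORIES * 100

print(f"Tie as half point: {score(0.5):.2f}%")  # 68.75%
print(f"Tie as full point: {score(1.0):.2f}%")  # 70.83%
```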
Conclusions
One of my biggest conclusions is that I should have put more faith in my model rather than in my gut. I recognize that, even though I attempted to remain unbiased, I had a bias towards the films I felt deserved to win rather than the films that would actually win. Predicting events like the Oscars, elections, and sporting contests requires discipline, a discipline I recognized but involuntarily dismissed. I mistook heart for signal, when it was actually noise. Even though the data pointed to Brave and Django Unchained to win Best Animated Feature and Best Original Screenplay, respectively, I chose to go with my gut.
Furthermore, although my data showed Lincoln winning Best Adapted Screenplay, I correctly predicted that Argo would win because I fell back on the historical pattern that the Best Picture winner typically also wins Best Adapted Screenplay. I only felt sure about this because the data weighed heavily towards Argo for Best Picture. Had I applied the same method to the Animated Feature category, I would have predicted it correctly as well. In other words, remain consistent when applying weights, and apply them properly.
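To sketch what applying a weight consistently could look like, the snippet below applies one "Best Picture favorite" boost in every category that film appears in, instead of invoking the idea only when it suits my gut. The boost factor and the probabilities are invented for illustration and are not numbers from my actual data.

```python
# A hypothetical sketch of applying a single "Best Picture favorite" boost
# consistently; the boost factor and the probabilities are made up.
BEST_PICTURE_BOOST = 1.5  # assumed multiplier for the Best Picture frontrunner

def boosted_pick(category_probs, best_picture_favorite, boost=BEST_PICTURE_BOOST):
    """Boost the Best Picture favorite wherever it is nominated, then pick the leader."""
    adjusted = {
        nominee: prob * (boost if nominee == best_picture_favorite else 1.0)
        for nominee, prob in category_probs.items()
    }
    return max(adjusted, key=adjusted.get)

# Invented numbers: Lincoln leads the raw data, but the boost flips the pick to Argo.
adapted_screenplay = {"Lincoln": 0.40, "Argo": 0.35, "Life of Pi": 0.25}
print(boosted_pick(adapted_screenplay, "Argo"))  # Argo
```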
The Production Design category had me choosing between a near dead heat of Anna Karenina and Les Misérables, when Lincoln ended up winning. Lincoln was far down in my data, with a meager 2% chance. I am not sure how to fix this category for next year; I may look for a correlation with the Makeup and Hairstyling category.
Momentum. I need to stop factoring in buzz and momentum. Reading rumors created false insights and muddled my decisions. That is herd mentality; I should have stuck to the empirical sources. Again, this is a bias issue.
Next year, for the Short categories, I might look at the festivals these short films played at and how they performed there: how many and which festivals each film played, and how many times it won, if at all. This type of data collection is tough and exhausting, but since these categories have the least data, the majority of a pool tends to get them wrong, and even a little extra data could help.
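If I do go down that road, the tallying itself could stay simple: count weighted appearances and wins per short. The festivals, point values, and films below are placeholders, not real results.

```python
# A sketch of tallying festival exposure for the Short categories.
# Festival names, point values, and films are hypothetical placeholders.
from collections import defaultdict

# (short film, festival, won?)
festival_records = [
    ("Short A", "Festival X", True),
    ("Short A", "Festival Y", False),
    ("Short B", "Festival Y", True),
    ("Short B", "Festival Z", True),
]

APPEARANCE_POINTS = 1.0  # credit for simply playing a festival
WIN_POINTS = 3.0         # extra credit for winning there

def festival_scores(records):
    """Tally appearance and win points for each short film."""
    scores = defaultdict(float)
    for film, _festival, won in records:
        scores[film] += APPEARANCE_POINTS + (WIN_POINTS if won else 0.0)
    return dict(scores)

print(festival_scores(festival_records))
# {'Short A': 5.0, 'Short B': 8.0}
```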
Overall, I had fun trying to develop my own prediction model, which I recognize is flawed, but it still got me a minimum of 68%. Had I really stuck to my method, though, I think I could have reached about 85%, which is far more helpful when trying to win a pool.