Synthesizing Big, Wide and Dirty Data to Predict Election Outcomes
DECISION SCIENCES


AN APPLIED SOCIAL SCIENCE PROBLEM INSPIRED BOCCONI PROFESSOR OMIROS PAPASPILIOPOULOS TO INVESTIGATE NEW SCALABLE BAYESIAN COMPUTATION STRATEGIES

After the 2008 financial crisis, 45 new parties were created all over Europe to capitalize on voters' discontent with austerity policies. These new parties – ranging from the extreme left to the extreme right – won a significant share of seats (18.3% of the total in 2016). In such a scenario, predicting election outcomes is even more important than usual, but it is also more difficult because of the absence of historical data on these emerging parties.
 
In most European countries, three different data sources can be informative about elections. The first comprises the opinion polls published by media outlets and institutions during the election campaign, up to a couple of days before the election. These data are abundant but subject to a number of biases, and they are only informative about average national sentiment, whereas – in most European countries – seats are allocated according to votes in local districts. A second source of information is activity on social media, which is harder to collect and potentially even more biased. These two sources are examples of "big" data, at least in comparison with the third, which consists of more carefully designed surveys with geographically and demographically stratified samples. These surveys are carried out by national institutes and report a number of voter characteristics in addition to voting intention; in this sense, they can be called "wide" data. However, among other issues, such surveys are conducted months before the election, hence they do not capture shifts that may occur near election day. Considering all three sources, data for predicting election outcomes are abundant but "dirty" and heterogeneous. None of these sources is fully informative in isolation, but their synthesis is. The problem of synthesising them, however, is far from trivial.
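The core idea of such a synthesis can be illustrated with a deliberately simplified sketch (not the authors' actual model): if each source yields a noisy estimate of a party's vote share, with a variance reflecting how trustworthy it is, a conjugate Gaussian treatment combines them by precision weighting. All numbers below are made up for illustration.

```python
import numpy as np

def pool_estimates(means, variances):
    """Precision-weighted Bayesian pooling of independent Gaussian estimates.

    Each source i reports an estimate m_i with variance v_i; under a flat
    prior, the posterior mean is the precision-weighted average and the
    posterior variance is the inverse of the summed precisions.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * np.asarray(means, dtype=float)).sum()
    return post_mean, post_var

# Hypothetical vote-share estimates for one party from three sources:
# polls (abundant but biased), social media (noisier), stratified survey
# (precise but conducted months earlier).
means = [0.21, 0.25, 0.18]
variances = [0.0009, 0.0025, 0.0004]
mean, var = pool_estimates(means, variances)
```

The pooled estimate always lies between the individual ones, and its variance is smaller than that of any single source, which is the sense in which the synthesis is more informative than any source in isolation. The real difficulty, glossed over here, is that the sources are biased and measure different things (national sentiment versus district-level votes), so they cannot simply be treated as independent unbiased estimates.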
 
Omiros Papaspiliopoulos, a new Full Professor at Bocconi (Department of Decision Sciences), started working on this problem in 2015 together with José Garcia-Montalvo (UPF), a prominent applied economist, and Timothée Stumpf-Fétizon (Warwick), who had just graduated from the Master in Data Science at the Barcelona Graduate School of Economics, which Professor Papaspiliopoulos founded in 2013 and directed until 2020. Besides data-warehousing and modelling challenges, the project posed serious computational problems, which turned out to be of much broader relevance, applying more generally to high-dimensional sparse data and models. Professor Papaspiliopoulos, whose primary expertise is in statistical and computational methodology, recognised the common structures and – in joint work with Gareth Roberts (Warwick) and Giacomo Zanella (Assistant Professor at Bocconi) – developed new computational approaches that are provably scalable, that is, whose running time increases only linearly with the amount of data and the size of the model. This makes them practical for large-scale applications.
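To give a flavour of why linear cost is plausible in this setting, here is a minimal sketch (not the authors' implementation) of one Gibbs sampling sweep for a two-factor crossed random effects model, the model class studied in the Biometrika paper listed below. Each observation y_k has a row effect a[i_k] and a column effect b[j_k]; one sweep touches every observation a constant number of times, so its cost is O(n). The provable scalability referred to in the text is a stronger statement about the full algorithm, including how many sweeps are needed to mix; all hyperparameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(y, i_idx, j_idx, a, b, ta=1.0, tb=1.0, s2=1.0):
    """One Gibbs sweep for y_k = a[i_k] + b[j_k] + eps_k,
    with priors a_i ~ N(0, ta), b_j ~ N(0, tb) and noise eps_k ~ N(0, s2).
    Each factor update costs O(n) via per-level sufficient statistics."""
    I, J = a.size, b.size
    # Update all a_i given b: conjugate Gaussian conditionals.
    resid = y - b[j_idx]
    counts = np.bincount(i_idx, minlength=I)
    sums = np.bincount(i_idx, weights=resid, minlength=I)
    var = 1.0 / (counts / s2 + 1.0 / ta)
    a = rng.normal(var * sums / s2, np.sqrt(var))
    # Update all b_j given a, symmetrically.
    resid = y - a[i_idx]
    counts = np.bincount(j_idx, minlength=J)
    sums = np.bincount(j_idx, weights=resid, minlength=J)
    var = 1.0 / (counts / s2 + 1.0 / tb)
    b = rng.normal(var * sums / s2, np.sqrt(var))
    return a, b

# Toy example: 5 levels per factor, 200 sparse crossed observations.
I, J, n = 5, 5, 200
i_idx = rng.integers(0, I, n)
j_idx = rng.integers(0, J, n)
y = rng.normal(0.1 * i_idx + 0.05 * j_idx, 1.0)
a, b = gibbs_sweep(y, i_idx, j_idx, np.zeros(I), np.zeros(J))
```

The design point is that the conditional updates depend on the data only through per-level counts and sums, computed in a single pass; naive matrix-based samplers for the same model would instead scale superlinearly in the number of levels.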
 
Such an interaction between applied problems and methodological innovation is far from unusual in Papaspiliopoulos's work. "I think that real-data projects like this are very stimulating for us as statisticians: they make us think about new problems and – hopefully – design new solutions. The same holds for consulting projects, which push us out of our academic comfort zone and can help us be more pragmatic. Of course, this does not mean being less rigorous. On the contrary, scientific rigour is our identity and potential contribution, both to the private and the public sector. Especially when predictions are intended for policy makers, it is key that the methodology used is transparent, interpretable, and scientifically justified."
 
Find out more
 
Montalvo, J. G., Papaspiliopoulos, O., & Stumpf-Fétizon, T. (2019). "Bayesian Forecasting of Electoral Outcomes with New Parties' Competition." European Journal of Political Economy, 59, 52-70. https://doi.org/10.1016/j.ejpoleco.2019.01.006
 
Papaspiliopoulos, O., Roberts, G. O., & Zanella, G. (2020). “Scalable Inference for Crossed Random Effects Models.” Biometrika, 107(1), 25-40. https://doi.org/10.1093/biomet/asz058
 
Papaspiliopoulos, O., Stumpf-Fétizon, T., & Zanella, G. (2021). "Scalable Computation for Bayesian Hierarchical Models." arXiv preprint, 1-48. https://arxiv.org/abs/2103.10875
 

by Sirio Legramanti
