Updates from Chicago Booth

How opinion polls are presented affects how we understand them
By Alice G. Walton

Oleg Urminsky and Luxi Shen used forecast data from FiveThirtyEight, the prominent data-driven forecasting organization, in the period leading up to the 2016 US presidential election.

The researchers presented the then-current forecasts to two groups of study participants, but in different formats. One group saw probability projections that, on average, said Democratic candidate Hillary Clinton had a 74 percent chance of winning. The other group saw margin forecasts that said, on average, that she would get 53 percent of the vote.

On a given day, both forecasts represented the same snapshot in time, two essentially identical takes on Clinton's expected victory. But participants interpreted the forecasts differently. When people saw the probability forecast and were then asked to estimate the margin by which Clinton would win, they overestimated, predicting on average that she would get 60 percent of the vote, well above the 53 percent in the margin forecast. Meanwhile, people shown the margin forecast estimated her probability of winning at 60 percent on average, well below the actual 74 percent average.
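The two formats are two views of a single underlying forecast. A minimal sketch of how they can relate, assuming the error on the vote-share forecast is normally distributed (the roughly 4.7-point standard deviation below is a hypothetical value chosen so the article's numbers line up, not FiveThirtyEight's actual model):

```python
from statistics import NormalDist

def win_probability(expected_share: float, sigma: float) -> float:
    """P(actual vote share > 50%), assuming a normal error model
    centered on the forecast share with standard deviation sigma."""
    return 1 - NormalDist(expected_share, sigma).cdf(50.0)

# A 53% expected share with ~4.7 points of uncertainty implies
# about a 74% chance of winning -- the two numbers in the study.
print(round(win_probability(53.0, 4.7), 2))  # roughly 0.74
```

Under this kind of model, the same forecast can honestly be stated either way; the study's point is that readers translate between the two formats inaccurately.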

Both predictions turned out to be incorrect, as Clinton won 48 percent of the vote and lost the election to Republican candidate Donald Trump, who received 46 percent. But they illustrated bias in people’s perceptions.

The difference in interpretations is unlikely to be explained by forecasters having the wrong assumptions in their models, the researchers say.
