The week of the Georgia senatorial elections, I’ve been following the forecasts of Northwestern University data scientist Tom Miller, who bested most of the polls in the presidential race, missing the margin of Joe Biden’s electoral college victory by just 12 votes. In fact, Miller’s only major wrong call was forecasting a Trump win in Georgia. This time, Miller learned from his misreading of the Peach State and nailed the outcomes of both senatorial races. Just after midnight on January 5, he predicted Democratic challengers Raphael Warnock and Jon Ossoff would sweep both contests, defeating Senators Kelly Loeffler and David Perdue by 2% and 1%, respectively.
In the final count, Warnock prevailed 50.9% to 49.1%, a margin of 1.8 points, while Ossoff won 50.5% to 49.5%, or by 1 point. Miller was just 0.2 points off on Warnock vs. Loeffler, and hit the precise mark on Ossoff vs. Perdue. Miller’s coup was detecting a sudden shift toward the two Democrats in the final days of the election that lifted both from underdogs to moderate favorites. Polling was light in Georgia, and the few surveys didn’t capture the strength of that shift as accurately as Miller did. The last Fox5/InsiderAdvantage reading had both races tied, and the final Trafalgar Group poll called Ossoff-Perdue a dead heat and showed Warnock trailing by two points.
Miller says it was original analytics he developed specifically for the Georgia runoffs that yielded 100% accuracy in the Ossoff race, and 90% in the Warnock win. “I was attempting to fix what’s broken in election forecasting,” Miller told Fortune. “The predictions as reported in the media are broken.” The problem, he says, is that forecasting comes in two main categories, and both suffer from substantial drawbacks. The first is the polls that canvass carefully chosen samples of probable voters. “Those surveys have two problems,” says Miller. “First, one pollster’s sample is different from another’s, even though both are trying to choose the most representative populations. Second, polls tend to lean Democratic. Pollsters then need to adjust the results for that political bias, and also tweak the numbers if, for example, they don’t get enough Hispanic voters in their latest survey.” For Miller, those differing adjustments cause lots of variability both across different polls and even within the same surveys over time, rendering them highly unreliable guides to which candidates lead or trail, and by what margin.
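The demographic adjustment Miller alludes to can be sketched as a simple post-stratification reweighting. This is a generic illustration with invented numbers, not Miller’s or any particular pollster’s actual procedure:

```python
# Hypothetical sketch of demographic reweighting: when a survey
# under-samples a group, each respondent in that group is up-weighted
# so the weighted sample matches the group's population share.
# All shares and support figures below are invented for illustration.

def reweight(sample_share, population_share):
    """Weight applied to each respondent in a demographic group."""
    return population_share / sample_share

# Suppose Hispanic voters are 10% of the electorate but only 6% of the sample.
w_hispanic = reweight(0.06, 0.10)   # each such respondent counts for more
w_other = reweight(0.94, 0.90)      # everyone else counts for slightly less

# Weighted support for a candidate polling 40% among Hispanic respondents
# and 52% among all other respondents:
support = 0.06 * w_hispanic * 0.40 + 0.94 * w_other * 0.52
print(round(support, 3))  # weighted topline support
```

Because each pollster picks its own weighting targets, two surveys of the same electorate can report different toplines from similar raw responses, which is the variability Miller objects to.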
The second category encompasses the modelers. The two most influential practitioners are the Economist and Nate Silver’s FiveThirtyEight. “The modelers start with everything they know about previous elections, and then incorporate a single number from each of the polls showing where the candidates stood on the day of the survey,” says Miller. The rub, he says, is that the modeling is largely based on polling done by somebody else. “They use one summary number from each poll––one candidate leading 51% to 49%, say––but that’s the only data they get from the polls. That limits what they can do with their models.”
Miller’s goal was to combine polling and modeling under one roof, into a single system incorporating far more data than either deploys individually. In the past, he’d relied on betting odds instead of either polls or models, specifically the prices posted on the single major venue available in the U.S., Predictit.org. “The betting odds are much more accurate predictors than surveys or models, but they’re dominated by higher-income folks who are mainly male, and often bet on sports. So they have a Republican bias.” The gambling sites were largely right in predicting a much narrower Biden win than the polls, but got Georgia wrong because of that GOP tilt. It was an over-reliance on those odds that caused Miller’s misfire on the Peach State in the presidential race.
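The translation from betting prices to win probabilities works roughly as follows. This is a generic sketch with invented prices, not a description of Miller’s system: a PredictIt-style contract pays $1 if the candidate wins, so its price loosely tracks probability, but the prices across both candidates typically sum to more than $1, and normalizing removes that overround.

```python
# Hypothetical sketch: converting prediction-market contract prices
# into implied win probabilities. Prices are invented for illustration.

def implied_probabilities(prices):
    """Normalize contract prices so implied probabilities sum to 1."""
    total = sum(prices.values())
    return {name: p / total for name, p in prices.items()}

# Two contracts whose prices sum to $1.05 (the market's overround):
prices = {"Candidate A": 0.58, "Candidate B": 0.47}
probs = implied_probabilities(prices)
print({k: round(v, 3) for k, v in probs.items()})
```

Even after this normalization, the probabilities inherit whatever demographic tilt the bettor pool has, which is the Republican bias Miller describes.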
Miller created a new methodology in which he first conducted a conventional poll asking whom voters planned to back, and what variables, notably race, sex, and where they lived, influenced their preference. That’s the polling part. Then he united that survey with a model that incorporated an element of political gaming, but at the same time eliminated the Republican bias. Here’s how his “magic beans” operated. To organize the survey, Miller partnered with polling specialists Isometric Solutions, Panel Consulting Group, and Prodege LLC. In the six days from December 30 to January 4, he canvassed 1,200 potential voters. His questions fell into two categories: first, the regular poll, called the Preference Survey, and second, the model designed to generate a more accurate final result, tagged as the Prediction Survey.