Why a Republican Senate takeover looks so shaky: Is it the GOP or the models?

Top election forecast models put Democratic chances of keeping their Senate majority at anywhere from 33 percent to 56 percent. But don't place too much stock in such variations. Take a look under the hood of the models.

[Photo: The Capitol reflected in the windows of the Newseum on Pennsylvania Avenue in Washington in this Jan. 8, 2013, file photo. The race for control of the US Senate is going to be close, despite what some election models are forecasting. Jacquelyn Martin/AP/File]

Longtime readers will know that I was a big fan of Drew Linzer’s 2012 presidential forecast model at his Votamatic website, and not just because he was kind enough to fly out here to give a talk to Middlebury College students in the middle of the presidential campaign. Linzer’s model, you will recall, correctly predicted the electoral vote outcome in every state in the 2012 presidential election, making his final Electoral College prediction of Obama 332, Romney 206 the most accurate of the transparent forecast models of which I’m aware (along with political scientist Simon Jackman’s) for that cycle. Now Linzer is back with another forecast model (two, actually!) looking at both Senate and gubernatorial races in the current election cycle and, as was the case in 2012, he is once again opening up the model so we can see the moving parts.

As I’ve said repeatedly, for political scientists, getting a prediction right is not the ultimate objective – it’s knowing why the prediction was right that matters. For that reason it is imperative that we understand the assumptions built into a model, which is why I generally only discuss forecast models that are transparent to outside inspection. Of course, for most of you, the bottom line – who is going to win – is what really matters! Fortunately, I’m pretty sure Linzer’s model will satisfy both our needs.

What do we find when we look at Linzer’s model? Interestingly (and here I’m subject to Drew’s correction), his Senate forecast model appears to differ from his presidential model. In 2012, Linzer combined a fundamentals-based forecast with state-level polling data to generate his presidential prediction. Essentially, he began by establishing a baseline forecast using Emory University political scientist Alan Abramowitz’s original Time-for-Change forecast model, which estimates the presidential popular vote using three variables: the incumbent president’s net approval rating at the end of June, the change in real GDP in the second quarter of the election year, and a first-term incumbency advantage. Drew then updated that baseline with state-level polling data, which factored more heavily into his prediction as the campaign progressed, so that by the time of the presidential election his forecast was based almost entirely on polling data.
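To give a flavor of how that kind of blending works, here is a bare-bones sketch in Python. It is my own toy illustration, not Drew's actual code (his model is a fully Bayesian affair), and the coefficients and inputs are invented placeholders rather than Abramowitz's published estimates:

```python
# Toy illustration of a fundamentals-plus-polls blend (not Linzer's actual model).
# The fundamentals prior supplies the early-season forecast; the polls take over
# as Election Day nears.

def fundamentals_prior(net_approval, q2_gdp_growth, first_term_incumbent):
    """Time-for-Change-style baseline for the incumbent party's two-party vote share.
    Coefficients here are made up for illustration, not Abramowitz's estimates."""
    return 47.0 + 0.1 * net_approval + 0.6 * q2_gdp_growth + (2.5 if first_term_incumbent else 0.0)

def blended_forecast(prior_share, poll_share, days_to_election, window=150):
    """Shift weight from the fundamentals prior to the poll average as the election nears."""
    poll_weight = max(0.0, min(1.0, 1.0 - days_to_election / window))
    return poll_weight * poll_share + (1.0 - poll_weight) * prior_share

prior = fundamentals_prior(net_approval=-1.0, q2_gdp_growth=1.3, first_term_incumbent=True)
print(round(blended_forecast(prior, poll_share=51.2, days_to_election=60), 1))
```

Sixty days out, the toy version tilts toward the polls; by the final week it is essentially all polls, which is roughly the trajectory the 2012 presidential model followed.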

However, for his 2014 Senate forecast at the Daily Kos website, Linzer is no longer incorporating any “fundamentals” into his model. Instead, he appears to rely entirely on state-level polling data. Why the change in methodology from his phenomenally successful presidential forecast model? My guess is that Linzer is less confident that there exists a Senate-oriented fundamentals model equivalent in accuracy to the Abramowitz Time-for-Change model. That is, he does not believe his forecast will be improved, even at this early date, by incorporating structural elements, such as measures of the economy, presidential approval ratings, or generic ballot results, beyond what the state-based Senate polls tell him. If so, I can see the logic to this: in contrast to a presidential election, there are a lot more moving parts in trying to estimate which party will gain the majority in the Senate. To begin with, 36 Senate races mean many more candidates than just two, and they are operating in different local political contexts rather than in the more nationalized electoral environment that prevails during a presidential election year. Linzer’s approach assumes state-based polling data will do a good enough job picking up these state-level variations without the added noise of incorporating fundamentals into his prediction model. In contrast, in a presidential election, with only two major candidates and greater correlation in the vote across states due to the more nationalized environment, incorporating fundamentals as a starting point probably makes sense.

To put this another way, to justify including fundamentals in his Senate forecast model, Linzer would have to assume that some structural variable influences the election results but is not being picked up in the polling data. Since he has no baseline fundamentals tempering his Senate forecast even this early in the campaign, his prediction is based entirely on polls right from the start. (This may be one reason – a paucity of good Senate polls prior to Labor Day – that he has gotten a relatively late start on the prediction game compared to some of the other forecasters.)
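For the curious, a polls-only state estimate can be as simple as a weighted average of recent surveys. The snippet below is a deliberately crude sketch of the idea (Linzer's actual aggregation is far more sophisticated), and the poll numbers are invented for illustration:

```python
# Crude polls-only estimate for a single Senate race: average recent polls,
# giving more weight to larger samples and fresher field dates.
# (A deliberate simplification of what a real poll-aggregation model does.)
import math

def state_poll_average(polls, recency_halflife_days=14.0):
    """polls: list of (dem_two_party_share, sample_size, days_since_poll) tuples."""
    weighted_sum = total_weight = 0.0
    for dem_share, sample_size, days_old in polls:
        weight = math.sqrt(sample_size) * 0.5 ** (days_old / recency_halflife_days)
        weighted_sum += weight * dem_share
        total_weight += weight
    return weighted_sum / total_weight

# Hypothetical polls: (Democratic share of two-party vote, sample size, age in days)
iowa_polls = [(49.0, 600, 3), (47.5, 850, 9), (48.2, 500, 16)]
print(round(state_poll_average(iowa_polls), 1))
```

Everything in that estimate comes from the polls themselves; there is no economic or approval-based prior nudging it one way or the other, which is precisely Linzer's bet.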

Note that Linzer’s approach differs from the “mixed” forecast models presented at the New York Times Upshot or the Monkey Cage Election Lab sites, both of which incorporate structural factors in addition to polling data. As a result, you should not be surprised to see Linzer’s initial Senate forecast differ from what these other models are predicting. And that is the case – as of today, the Linzer-based Daily Kos model gives Democrats about a 56% chance of holding onto the Senate. That’s a bit more optimistic for Democrats than most of the models that include a structural component. For example, at last look the Monkey Cage’s Election Lab forecast model gives Democrats a 47% chance of retaining their majority, while the New York Times Upshot model gives them only a 33% chance.
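If you are wondering how a pile of state-level estimates becomes a single "chance of holding the Senate," the basic mechanic is simulation: draw an outcome for each contested seat from its estimated win probability, count seats, and repeat many times. Here is a stylized version with made-up seat counts and win probabilities; real models, including Linzer's, also allow for correlated polling errors across states, which this sketch ignores:

```python
# Stylized chamber-control simulation: with Vice President Biden breaking ties,
# Democrats keep the majority if their caucus reaches 50 seats.
# Seat counts and win probabilities below are invented for illustration.
import random

def chance_of_senate_control(safe_dem_seats, contested_win_probs, n_sims=100_000, seed=2014):
    rng = random.Random(seed)
    holds = 0
    for _ in range(n_sims):
        seats = safe_dem_seats + sum(rng.random() < p for p in contested_win_probs)
        if seats >= 50:
            holds += 1
    return holds / n_sims

# Hypothetical map: 45 seats not seriously contested, plus 10 competitive races.
competitive = [0.80, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.30]
print(round(chance_of_senate_control(45, competitive), 2))
```

Small shifts in a handful of those state-level probabilities, or different assumptions about how errors correlate across states, are enough to move the headline number by double digits, which is a big part of why the published forecasts disagree.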

Before you e-mail me with the inevitable “What about this forecast model?” question, let me conclude with two reminders. First, given the uncertainty in the models at this early date, you – unlike the pundits who are even now ready to “unskew” the models – should not place too much stock in the difference between a 56% and a 33% chance of retaining a majority. As we get closer to Election Day, most of the structurally based forecast models will likely rely increasingly on polling data, and I expect their forecasts to move closer to Drew’s. But it is also the case that Drew’s current estimate is going to change, as his model incorporates more and better Senate polling data. (One variable to watch is the impact of pollsters switching from registered-voter to likely-voter samples as Election Day draws nigh.) Barring some dramatic poll-changing event, such as an escalation of the US military presence in Iraq (or Ukraine?), all signs point to an extremely tight race for control of the Senate in 2014. At this point, I would be skeptical of any forecast model that suggests otherwise.

By the way, since I will likely be blogging more about these various election forecasting models, you might be interested in hearing Drew assess how reliable presidential forecast models are (hint: he’s not a huge fan of fundamentals-only models). Some of his criticisms apply to the Senate fundamentals-based models as well. As you can see in this video, however, Lynn Vavreck and I are both a bit more optimistic about political scientists’ understanding of elections. The panel took place at the Dirksen Senate Office Building this past spring.

Matthew Dickinson publishes his Presidential Power blog at http://sites.middlebury.edu/presidentialpower/.
