Better but not stellar: Pollsters faced familiar complaints, difficulties in assessing Trump-Harris race

CNN’s Magic Wall map with U.S. presidential election results, seen on a mobile phone. Beata Zawrzel/NurPhoto via Getty Images

An oracle erred badly. The most impressive results were turned in by a little-known company in Brazil. A nagging problem reemerged, and some media critics turned profane in their assessments.

So it went for pollsters in the 2024 presidential election. Their collective performance, while not stellar, was improved from that of four years earlier. Overall, polls signaled a close outcome in the race between former President Donald Trump and Vice President Kamala Harris.


That is what the election produced: a modest win for Trump.

With votes still being counted in California and a few other states more than a week after Election Day, Trump had received 50.1% of the popular vote to Harris’ 48.1%, a difference of 2 points. That margin was closer than Joe Biden’s win by 4.5 points over Trump in 2020. It was closer than Hillary Clinton’s popular vote victory in 2016, closer than Barack Obama’s wins in 2008 and 2012.

There were, moreover, no errors among national pollsters quite as dramatic as CNN’s estimate in 2020 that Biden led Trump by 12 points.

This time, CNN’s final national poll said the race was deadlocked – an outcome anticipated by six other pollsters, according to data compiled by RealClearPolitics.

The most striking discrepancy this year was the Marist College poll, conducted for NPR and PBS. It estimated Harris held a 4-point lead nationally at campaign’s end.

‘Oracle’ of Iowa’s big miss

In any event, a sense lingered among critics that the Trump-Harris election had resulted in yet another polling embarrassment, another entry in the catalog of survey failures in presidential elections, which is the topic of my latest book, “Lost in a Gallup.”

Comedian Jon Stewart gave harsh voice to such sentiments, saying of pollsters on his late-night program on election night, “I don’t ever want to fucking hear from you again. Ever. … You don’t know shit about shit, and I don’t care for you.”

Megyn Kelly, a former Fox News host, also denounced pollsters, declaring on her podcast the day after the election: “Polling is a lie. They don’t know anything.”

Two factors seemed to encourage such derision – a widely discussed survey of Iowa voters released the weekend before the election and Trump’s sweep of the seven states where the outcome turned.

The Iowa poll injected shock and surprise into the campaign’s endgame, reporting that Harris had taken a 3-point lead in the state over Trump. The result was likened to a “bombshell” and its implications seemed clear: If Harris had opened a lead in a state with Iowa’s partisan profile, her prospects of winning elsewhere seemed strong, especially in the Great Lakes swing states of Wisconsin, Michigan and Pennsylvania.

The survey was conducted for the Des Moines Register by J. Ann Selzer, a veteran Iowa-based pollster with an outstanding reputation in opinion research. In a commentary in The New York Times in mid-September, Republican pollster Kristen Soltis Anderson declared Selzer “the oracle of Iowa.” Rachel Maddow of MSNBC praised Selzer’s polls before the election for their “uncanny predictive accuracy.” Ratings released in June by data guru Nate Silver gave Selzer’s polls an A-plus grade.

But this time, Selzer’s poll missed dramatically.

Trump carried Iowa by 13 points, meaning the poll was off by 16 points – a stunning divergence for an accomplished pollster.
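The 16-point figure reflects the conventional way poll accuracy is scored: compare the margin a poll predicted with the margin the vote actually produced. The short Python sketch below reproduces that arithmetic with the numbers cited in this article; the margin_error helper is purely illustrative and not drawn from any pollster’s methodology.

```python
# Minimal sketch: scoring a poll by comparing its predicted margin
# (candidate A minus candidate B, in percentage points) with the
# actual margin of the vote. Figures are those cited in the article.

def margin_error(predicted_margin: float, actual_margin: float) -> float:
    """Absolute gap, in points, between predicted and actual margins."""
    return abs(predicted_margin - actual_margin)

# Selzer's final Iowa poll: Harris up 3, i.e. a Trump margin of -3.
# Final result: Trump carried Iowa by 13, i.e. a Trump margin of +13.
iowa_error = margin_error(predicted_margin=-3.0, actual_margin=13.0)
print(f"Iowa poll error: {iowa_error:.0f} points")  # prints 16 points
```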

“Even the mighty have been humbled” by Trump’s victory, the Times of London said of Selzer’s polling failure.

Selzer said afterward she will “be reviewing data from multiple sources with hopes of learning why that (discrepancy) happened.”

It is possible, other pollsters suggested, that Selzer’s reliance on telephone-based surveying contributed to the polling failure. “Phone polling alone … isn’t going to reach low-propensity voters or politically disengaged nonwhite men,” Tom Lubbock and James Johnson wrote in a commentary for The Wall Street Journal.

These days, few pollsters rely exclusively on the phone to conduct election surveys; many of them have opted for hybrid approaches that combine, for example, phone, text and online sampling techniques.

Surprise sweep of swing states

Trump’s sweep of the seven vigorously contested swing states surely contributed to perceptions that polls had misfired again.

According to RealClearPolitics, Harris held slender, end-of-campaign polling leads in Michigan and Wisconsin, while Trump was narrowly ahead in Arizona, Georgia, Pennsylvania, North Carolina and Nevada.

Trump won them all, an outcome no pollster anticipated – except for AtlasIntel of Sao Paulo, Brazil, a firm “about which little is known,” as The New Republic noted.

AtlasIntel estimated Trump was ahead in all seven swing states by margins that hewed closely to the voting outcomes. In none of the swing states did AtlasIntel’s polling deviate from the final vote tally by more than 1.3 points, an impressive performance.

AtlasIntel did not respond to my email requests for information about its background and polling technique. The company describes itself as “a leading innovator in online polling” and says it uses “a proprietary methodology,” without revealing much about it.

Its founder and chief executive is Andrei Roman, who earned a doctorate in government at Harvard University. Roman took to X, formerly Twitter, in the election’s aftermath to post a chart that touted AtlasIntel as “the most accurate pollster of the US Presidential Election.”

It was a burst of pollster braggadocio of a kind that has emerged periodically since the 1940s. That was when polling pioneer George Gallup placed two-page advertising spreads in the journalism trade publication “Editor & Publisher” to assert the accuracy of his polls in presidential elections.

Underestimating Trump’s support again

A significant question facing pollsters this year – their great known unknown – was whether modifications made to sampling techniques would allow them to avoid underestimating Trump’s support, as they had in 2016 and 2020.

Misjudging Trump’s backing is a nagging problem for pollsters. The results of the 2024 election indicate that the shortcoming persists. By margins ranging from 0.9 points to 2.7 points, polls overall understated Trump’s support in the seven swing states, for example.

Some polls misjudged Trump’s backing by even greater margins. CNN, for example, underestimated Trump’s vote by 4.3 points in North Carolina and by more than 6 points in Michigan, Wisconsin and Arizona.

Results that misfire in the same direction suggest that pollsters’ adjustments to sampling methodologies were inadequate or ineffective in reaching Trump backers of all stripes.

W. Joseph Campbell is professor emeritus of communication at the American University School of Communication.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
