Big-name polling analysts failed to predict last week’s presidential election. There’s a lesson there in how leaders should use and think about data.
We live in the era when knowing the numbers matters, an era of data-driven decision making. And then the numbers didn’t do what they were supposed to do.
Yes, this is an election-related post. I sympathize if you have little interest in revisiting last week, regardless of your political persuasion. It was a wearying campaign, not just in terms of chatter about the candidates but in terms of how they were covered. But I wanted to discuss polling in this space because I think there’s a lesson for leaders in how much to trust data—and how knowing your blind spots is an important piece of data itself.
To recap: On election day, just about everybody in the poll-analysis business had called Democratic nominee Hillary Clinton somewhere from a pretty-sure bet to a Smooth Jimmy’s Lock of the Week to win the White House. FiveThirtyEight, the site most dedicated to algorithmic slicing and dicing, put that probability at 71 percent. The New York Times’ analysis had that probability north of 90 percent. The Princeton Election Consortium, perceived by many this cycle to be making a play at being the new king of the poll-aggregation hill, had Clinton as a 99 percent sure thing.
So what went wrong? That argument will be going on for a while, but one reasonable answer for now is that those aggregators are only as good as the polls that feed them, and the polls weren’t providing the portraits of voters that they were supposed to. “State polls were off in a way that has not been seen in previous presidential election years,” Sam Wang of the Princeton Election Consortium told the New York Times, in a story that cast a healthy dose of skepticism on the shiny new age of big data and algorithms.
Statistics are best treated as a guide to thinking, not a replacement for it.
Professional pollsters aren’t dumb. They know that sometimes, even if you do all the appropriate outreach in terms of demographics, access to landlines, and all the rest, imprecision still prevails. In the case of polls that reported a loss for President-elect Donald Trump in Rust Belt states, non-response and social desirability bias may have played a role. (In plain English, Trump supporters weren’t picking up the phone or, knowing his strong unfavorability toward the tail end of the cycle, wouldn’t disclose their support to a stranger on the phone.) But this miscue underscores an important point that has perhaps been neglected of late: When the questions are volatile, the numbers can be harder to trust.
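To see why non-response bias is so stubborn, consider a toy simulation. (All the numbers here are invented for illustration: an evenly split electorate in which one candidate’s supporters are simply less likely to complete the poll.)

```python
import random

random.seed(42)

# Hypothetical electorate: an even 50/50 split between candidates A and B.
# But suppose B's supporters are less likely to answer the phone or to
# disclose their preference to a stranger.
TRUE_SUPPORT_B = 0.50
RESPONSE_RATE_A = 0.50   # chance an A supporter completes the poll
RESPONSE_RATE_B = 0.35   # chance a B supporter completes the poll

def run_poll(n_calls):
    """Call n_calls random voters; return B's share among respondents."""
    a_responses = b_responses = 0
    for _ in range(n_calls):
        supports_b = random.random() < TRUE_SUPPORT_B
        rate = RESPONSE_RATE_B if supports_b else RESPONSE_RATE_A
        if random.random() < rate:  # did they answer and disclose?
            if supports_b:
                b_responses += 1
            else:
                a_responses += 1
    return b_responses / (a_responses + b_responses)

# The bias is systematic, so a bigger sample converges to the wrong
# answer rather than the right one: B polls around 41%, not 50%.
print(f"B's measured share: {run_poll(100_000):.1%}")
```

The point of the sketch is that more data doesn’t cure this kind of error; a larger sample just makes the poll more confidently wrong, which is one way a 99 percent sure thing fails to happen.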
None of this is an indictment of data-driven decision making, of course. Every association depends on it, as does the association community as a whole. The ASAE Foundation delivers plenty of research; so do plenty of other association-related firms, and Associations Now covers a lot of those reports. Some of that research is better than others—you should check the methodology of every research paper you receive, sure as you brush your teeth when you wake up in the morning. But statistics are best treated as a guide to thinking, not a replacement for it.
After all, a data point can tell you how people feel, but not necessarily why they feel it. Luckily, you can gather some of that information passively. Whenever possible, I try to take a look at the open-ended responses to a study or poll, where people have written in additional comments—it’s a treasure trove of supplementary information, and a place where people air the grievances they have about a particular issue. The other option, naturally, is to get that information actively: Conduct follow-up interviews or focus groups to talk to the people who are invested in a particular subject, not necessarily with a mind to contradict the data points, but to better understand them.
That may be especially urgent for associations that do advocacy in what is almost certain to be an unusual legislative landscape in the near future: How will you know what your members will support you on if you haven’t taken the temperature of their enthusiasm?
Last fall, when a Trump presidency seemed like a remote possibility if not a punch line, I spoke with a few associations about how they were addressing the new political environment. The general consensus was that associations needed to be quicker on their feet in response to changes, and that meant not only better understanding their stakeholders but being better equipped to mobilize them.
One healthcare association, for instance, launched a database of health statistics sortable by legislative district, the easier for local champions to make their cases to lawmakers. It was a smart, always-on tactic that merged useful data and the passions of people behind it. At the time, I wrote that it made sense because “if recent years have suggested anything, it’s that Trumpishness is likely to be a factor in American politics for some time to come.”
It’s a solid prediction. Probably.
How do you approach—and question—research data and build it into your decision-making process? Share your experiences in the comments.