Fact-Checking the President in Real Time

Even robots are trying to hold Donald Trump accountable.

It’s February 2019, and I’m waiting to see whether a robot will call the president of the United States a liar.

I have tuned in to the State of the Union address, a speech that I haven’t missed more than a couple of times in my four decades of adulthood. Some addresses were soaring, some were slogs, and one, a magisterial performance by Ronald Reagan, was thrilling, because I watched it in the House chamber, from a press-gallery perch right behind the president. But I have never had the sense of suspense I feel now, as I sit staring not at a TV, but at a password-protected website called FactStream. I log in and find myself facing a plain screen with a video player. It looks rudimentary, but it might be revolutionary.

At the appointed time, President Donald Trump comes into view. Actually, not at precisely the appointed time; my feed is delayed by about 30 seconds. In that interval, a complicated transaction takes place. First, a piece of software—a bot, in effect—translates the president’s spoken words into text. A second bot then searches the text for factual claims, and sends each to a third bot, which looks at an online database to determine whether the claim (or a related one) has previously been verified or debunked by an independent fact-checking organization. If it has, the software generates a chyron (a caption) displaying the previously fact-checked statement and a verdict: true, false, not the whole story, or whatever else the fact-checkers concluded. If Squash, as the system is code-named, works, I will see the president’s truthfulness assessed as he speaks—no waiting for post-speech reportage, no mental note to Google it later. All in seconds, without human intervention. If it works.
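For readers who want to picture the plumbing, here is a minimal sketch, in Python, of how a three-bot pipeline along these lines might be wired together. Everything in it is hypothetical: the function names, the toy database of fact-checks, and the crude keyword matching are mine, not Duke's, and the real Squash software is more sophisticated at every step.

```python
# Hypothetical sketch of a Squash-style pipeline. All names and data here are
# invented for illustration; the real system is more sophisticated at each step.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FactCheck:
    claim: str      # the previously checked statement
    verdict: str    # e.g., "false" or "inflates the numbers"
    url: str        # link to the fact-checker's article

# Stand-in for a database of previously published fact-checks.
FACT_CHECKS = [
    FactCheck("ICE officers made 266,000 arrests of criminal aliens",
              "Inflates the numbers", "https://example.org/ice-arrests"),
    FactCheck("El Paso was one of the most dangerous cities before the barrier",
              "False", "https://example.org/el-paso"),
]

def transcribe(audio_chunk: bytes) -> str:
    """Bot 1: speech-to-text. A real system would call a speech-recognition
    service here; this placeholder just pretends the audio is already text."""
    return audio_chunk.decode("utf-8")

def extract_claims(text: str) -> list:
    """Bot 2: pull out sentences that look like checkable factual claims.
    Crude heuristic: any sentence containing a digit."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [s for s in sentences if any(ch.isdigit() for ch in s)]

def match_fact_check(claim: str, threshold: float = 0.3) -> Optional[FactCheck]:
    """Bot 3: look for a previously published fact-check of a similar claim,
    using naive word overlap (a real matcher would need semantic matching)."""
    claim_words = set(claim.lower().split())
    best, best_score = None, 0.0
    for fc in FACT_CHECKS:
        fc_words = set(fc.claim.lower().split())
        score = len(claim_words & fc_words) / max(len(fc_words), 1)
        if score > best_score:
            best, best_score = fc, score
    return best if best_score >= threshold else None

def run_pipeline(audio_chunk: bytes) -> None:
    for claim in extract_claims(transcribe(audio_chunk)):
        fc = match_fact_check(claim)
        if fc:
            # In Squash, this is where the on-screen chyron would be rendered.
            print(f'CHYRON: "{fc.claim}" -- Verdict: {fc.verdict} ({fc.url})')

run_pipeline(b"In the last two years our brave ICE officers made 266,000 "
             b"arrests of criminal aliens. Thank you very much.")
```

Even in this toy version, the essential idea survives: the pipeline never decides for itself what is true; it only looks for a previously published fact-check that resembles what was just said.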

Also watching the experiment, from Duke University, is a journalism professor named Bill Adair, along with researchers at the university’s Reporters’ Lab. A doyen of fact-checking since 2007, when he established PolitiFact.com, Adair has for years dreamt of fact-checking politicians in real time. Squash is his first attempt to teach computers to do just that. In the run-up to the State of the Union, he and a team of journalists, computer scientists, web developers, and students scrambled to prepare what tonight is still jury-rigged software. “Guys,” he told the group ahead of time, defensively talking down expectations, “I’ll be happy if we just have stuff popping up on the screen.” (Fact-check: Stuff was not the word he used.) But if the system works, for an hour or two we might glimpse a new digital future, one that guides us toward truth instead of away from it.

The web and its digital infrastructure are sometimes referred to as information technology, but a better term might be misinformation technology. Though many websites favor information that is true over information that is false, the web itself—a series of paths and pipes for raw data—was designed to be indifferent to veracity. In the early days, that seemed fine. Everyone assumed that users could and would gravitate toward true content, which after all seems a lot more useful than bogus content.

Instead, perverse incentives took hold. Because most online business models depend on advertising, and because most advertising depends on clicks, profits flow from content that users notice and, ideally, share—and we users, being cognitively flawed human beings, prefer content that stimulates our emotions and confirms our biases. Dutifully chasing ever more eyeballs, algorithms and aggregators propagate and serve ever more clickbait. In 2016, an analysis by BuzzFeed News found that fake election news outperformed real election news on Facebook. Donald Trump, a self-proclaimed troll, quickly caught on. So did propagandists in Russia, and conspiracy websites, and troll farms, and … well, you already know the rest. By now, multiple studies have confirmed that, as Robinson Meyer reported last year for The Atlantic, “fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.”

The resulting situation is odd. Normally, if an information-technology system delivered false results at least as often as true results, we would say it was broken. But when the internet delivers a wrong answer, we say the user is to blame. That cannot be right. Our information systems need to help us avoid error, not promulgate it. As philosophers and politicians have known since at least the time of Plato, human cognition is inherently untrustworthy.

The word inherently, though, implies a question. Must things be this way? What if the information superhighway could be regraded to tilt more toward veracity? What if social media’s implicit preference for clickbait and trolling could be mitigated, or even reversed? No one, mind you, is talking about censorship, which is repugnant and ineffective—just about providing signage and guardrails that alert consumers to epistemic hazards and point them toward reality.

Google, Facebook, and various digital-media organizations are working on such safeguards, with mixed but promising results. Facebook, for example, uses algorithms and user flags to identify questionable news items, which it then routes to outside fact-checking organizations; if an item is found to be false, Facebook’s algorithms demote it in your News Feed and display links to corrective information. Also, if you try to post or share an item that has been identified as bogus, Facebook displays a pop-up advising you that the item is dicey, asking whether you want to share it anyway, and offering links to fact-checks and real news. According to Facebook, demoting and contextualizing misinformation slows its spread on the platform by 80 percent. Google, too, has launched a suite of measures that it says make its service more truth-friendly. For example, it has created what it calls a “knowledge panel,” which displays information about publishers alongside search results, in order to help users weigh sources. (Google The Atlantic, and you can see one of these publisher-information boxes on the right side of the page.) Google News also labels and promotes articles written by fact-checking organizations, making them easier to spot and boosting their traffic.

Meanwhile, smaller players are building other epistemic road signs. An example is NewsGuard, launched last year by the entrepreneurs and journalists Steven Brill and Gordon Crovitz. NewsGuard is a browser extension that displays credibility ratings—along with detailed explanatory pop-ups (“nutrition labels”)—when a listed website’s links appear in search results or in social-media feeds. Having tried it for a while, I find that it works smoothly, and that the ratings are sensible and well explained. The Heritage Foundation’s Daily Signal, for example, gets a green check mark (“This website adheres to all nine of NewsGuard’s standards of credibility and transparency,” says the pop-up). Meanwhile, Breitbart News is rated “Proceed with caution” and gets a red exclamation point (“This website fails to meet several basic standards of credibility and transparency,” which are enumerated).

All of these approaches are embryonic, many have kinks to be worked out, and some have been controversial. For example, critics charge that Facebook’s effort needs more funding and transparency, and some conservatives have complained about bias in Google’s implementation of fact-checking. Still, such measures may collectively be having some effect; a recent study by five political scientists found that from 2016 to 2018, web users’ exposure to fake news leveled off and may have even declined.

Nevertheless, a vexing asymmetry remains. Inventing and disseminating falsehoods is quick and cheap and fun, whereas identifying and debunking falsehoods is slow and expensive and boring. There is no way to fact-check all the bogus claims that circulate online.

It turns out, though, that not all misinformation is created equal. Although alarm about Russian disinformation and trollbots and the like is justified, politicians are by far the biggest shapers of opinion and spreaders of bullshit. “The most worrisome misinformation in U.S. politics remains the old-fashioned kind: false and misleading statements made by elected officials,” Brendan Nyhan, a University of Michigan political scientist who studies political disinformation, wrote recently for Medium. And politicians, unlike random trolls, can be monitored, pressured, and hopefully influenced. Research by Nyhan and others finds that politicians who are reminded that they might be fact-checked stick significantly closer to the truth.

Where, then, might you start if you wanted to nip a lot of disinformation in the bud? Perhaps with a prominent politician who makes false or misleading claims at a rate of 17 or so a day. Perhaps with a politician who repeats those falsehoods over and over (for instance, saying 134 times, as of early April, that his tax cut was the biggest ever, according to a count by The Washington Post). Perhaps with Donald J. Trump.

When I first met Bill Adair, he was a freshman at Camelback High School, in Phoenix, and I was a junior. We didn’t socialize much, because I was a debate nerd and he was a journalism nerd, but I knew him to be easygoing and friendly. Even then, though, he was fiercely ambitious to make a mark in journalism. As a boy, he had a newspaper route; by high school, he was writing for both the school paper and The Phoenix Gazette. At Arizona State University, his senior thesis on political ads concluded that journalists needed to do more fact-checking, instead of simply transmitting what politicians say.

In his 20s, he went to work for the St. Petersburg Times, but as he rose through the reportorial ranks, a dissatisfaction with he-said, she-said journalism nagged at him. We need to tell people what’s true and what’s not, he thought. In 2003, FactCheck.org launched, debuting the concept of a journalistic organization that specializes in reporting on the veracity of what politicians say. One fact-check organization could not begin to cover the territory, Adair believed, and so in 2007 he persuaded the paper to join the fray with PolitiFact, which he led until 2013. “What I loved about [FactCheck.org] was that it reached a conclusion,” Adair told me. “That was my pitch for PolitiFact—that we need to call the balls and strikes.” Today PolitiFact is operated by the Poynter Institute, and it has lots of company. Organizations devoted to fact-checking have sprung up around the world; as of 2018, more than 150 were operating, in more than 50 countries. Most are staffed by journalists who follow protocols developed by the International Fact-Checking Network (which Adair co-founded). Unsurprisingly, activists, especially on the right, complain that the fact-checkers are biased. But reputable organizations show their work, cite their sources, pledge nonpartisanship, and disclose their funding, ensuring a degree of transparency that distinguishes them from propagandists and fakers.

By 2013, when Adair accepted his journalism professorship at Duke, fact-checking had come into its own, but a frustrating bottleneck remained. The problem, as Adair saw it, is that people have to go looking for fact-checks, typically after a false claim has already entered general circulation. Also, although a lot of people read articles from fact-checking websites (about one in four Americans during the run-up to the 2016 election, according to research by Nyhan, Andrew Guess, and Jason Reifler), they tend not to be the same people who visit sketchy websites and repeat misinformation. “What we need to do is close the gap in time and space so people get the fact-check at the moment they get the political claim,” Adair said. That should be possible, because politicians repeat their lines. “So the fact-check published a month ago is just as valuable today. The key is finding the match.”

As recently as 2015, Adair and six colleagues published a paper describing real-time, automated fact-checking as a “holy grail” that “may remain far beyond our reach for many, many years to come.” Since then, however, software’s ability to translate speech to text has improved, as has its ability to parse written text and to tell when different words refer to the same idea. Ditto scouring content for checkable claims. Exploiting those developments, the Duke Reporters’ Lab—which Adair leads—created, among other automated gizmos, a bot that monitors politicians’ statements on Twitter and CNN and in the Congressional Record; when it finds new claims, it emails them to fact-checkers.

The hardest step, though, has been to develop an algorithm that can match a claim, new or old, with an existing fact-check. But when Adair put some Duke undergraduates to work on the task last summer, they made quick progress. Which is how they came to think that they might have a shot at robotically checking the biggest presidential speech of all, in real time.
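What that matching step might look like, in the simplest terms, is a similarity search over the text of previously published fact-checks. The sketch below, which ranks stored claims by TF-IDF cosine similarity using off-the-shelf scikit-learn tools, is my own guess at one plausible approach, not a description of the Duke team's algorithm; the claims and the threshold are invented for illustration.

```python
# A hypothetical illustration of claim matching: rank stored fact-checks by
# TF-IDF cosine similarity to a newly heard claim. This is not the Duke
# Reporters' Lab algorithm, just one plausible off-the-shelf approach.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of previously fact-checked claims (paraphrased for illustration).
checked_claims = [
    "ICE officers made 266,000 arrests of criminal aliens in the last two years",
    "The tax cut was the biggest in American history",
    "El Paso was one of the nation's most dangerous cities before the barrier",
]

# A new claim, as it might arrive from the speech-to-text and claim-spotting bots.
new_claim = ("El Paso, Texas, used to have extremely high rates of violent crime "
             "and was considered one of our nation's most dangerous cities")

# Put the stored claims and the incoming claim in the same TF-IDF vector space.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(checked_claims + [new_claim])

# Compare the new claim (last row) against every stored claim.
scores = cosine_similarity(matrix[-1:], matrix[:-1]).ravel()
best = scores.argmax()

# The threshold is arbitrary here; tuning it is where false matches get filtered.
if scores[best] >= 0.4:
    print(f"Likely match ({scores[best]:.2f}): {checked_claims[best]}")
else:
    print("No sufficiently similar fact-check found.")
```

Even this crude version captures why Adair's insight matters: because politicians repeat their lines, a fact-check written a month ago can be reused the instant the claim resurfaces, provided the match can be found.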

On February 5, when Adair and his team gathered in his office for the State of the Union address, they had little more idea of what to expect than I did. Their software was crude and barely tested, the last pieces of it having been slapped together with only hours to spare. Monitoring from my kitchen table in Washington, D.C., I grew uneasy as the first minutes of Trump’s speech ticked by and no fact-checks appeared on the screen. Finally, about five minutes in, the first one popped up. It was laughably off-target, bearing no relationship to what the president was saying. Several more misses followed. But then came several that were in the ballpark. And then, after about half an hour: bull’s-eye.

The president said, “In the last two years, our brave ICE officers made 266,000 arrests of criminal aliens, including those charged or convicted of nearly 100,000 assaults, 30,000 sex crimes, and 4,000 killings or murders.” It was a claim he had made before, in similar words, and the software recognized it. As Trump spoke, a chyron appeared quoting a prior version of the claim. Alongside, this verdict: “Inflates the numbers.”

A few minutes later, a second bull’s-eye. Trump: “The border city of El Paso, Texas, used to have extremely high rates of violent crime, one of the highest in the entire country, and [was] considered one of our nation’s most dangerous cities. Now, immediately upon its building, with a powerful barrier in place, El Paso is one of the safest cities in our country.” Beneath his image appeared, again, a prior version of the claim, plus: “Conclusion: false.”

Squash’s performance exceeded Adair’s expectations and impressed me, but it is still nowhere near ready for prime time. Before the public sees robo-checking, the software needs to become more sophisticated, the database of fact-checks needs to grow larger, and information providers need to adopt and refine the concept. Still, live, automated fact-checking is now demonstrably possible. In principle, it could be applied by web browsers, YouTube, cable TV, and even old-fashioned broadcast TV. Checker bots could also prowl the places where trollbots go and stay just a few seconds behind them. Imagine setting your browser to enable pop-ups that provide evaluations, context, additional information—all at the moment when your brain first encounters a new factual or pseudo-factual morsel.

Of course, outrage addicts and trolls and hyper-partisans will continue to seek out fake news and conspiracy theories, and some of them will dismiss the whole idea of fact-checking as spurious. The disinformation industry will try to trick and evade the checkers. Charlatans will continue to say whatever they please, foreign meddlers will continue trying to flood the information space with junk, and hackers of our brains will continue to innovate. The age-old race between disinformation and truth will continue. But disinfotech will never again have the field to itself. Little by little, yet faster than you might expect, digital technology is learning to tell the truth.


This article appears in the June 2019 print edition with the headline “Autocorrect.”