MIT CSAIL fights fake news with AI

Fake news — loosely defined as a type of propaganda consisting of deliberate disinformation spread via traditional outlets or social media — is a menace. A December 2016 survey by the Pew Research Center suggests that 23% of U.S. adults have shared fake news, knowingly or unknowingly, with friends and others. It’s begun to erode trust in major television and newspaper outlets, studies show — 77% of respondents to a Monmouth University survey said that they believed media reported fake news. And in a particularly egregious example, an untrue (but viral) story about a Washington, D.C. pizzeria led 9% of U.S. voters in a poll of 1,224 to report that they believed former secretary of state Hillary Clinton was “connected to a child sex ring.”

As part of an effort to bring attention to the problem’s scale, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently investigated ways so-called fake news detectors could be fooled with factually true articles. Coinciding with this work, the same team used one of the world’s largest fact-checking data sets to develop automated systems that could detect false statements.

The work builds on a study MIT CSAIL conducted last year, which produced an AI system that could determine whether a source is accurate or politically prejudiced.

The first of the researchers’ two preprint papers describes a framework based on OpenAI’s GPT-2, a language model they co-opted to corrupt the meaning of human-written text before feeding it to a fake news detector. In one experiment, they used the model’s auto-completion to extend legitimate news stories in the style of reliable sources. Fed a story about how NASA was collecting data on coronal mass ejections, the generator spit out an informative (and correct) explanation of how that data could help scientists study Earth’s magnetic fields. The detector nevertheless flagged the text as “fake news,” demonstrating that it couldn’t differentiate fake from real text when both were machine-generated.
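To make the setup concrete, here is a minimal sketch, not the researchers’ code, of the kind of GPT-2 auto-completion the experiment relied on, using the publicly released small GPT-2 weights through Hugging Face’s transformers library (the prompt is a hypothetical paraphrase):

```python
# A minimal sketch (not the researchers' code) of GPT-2 auto-completion,
# using Hugging Face's transformers library and the public "gpt2" weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Opening of a legitimate news story (hypothetical, paraphrased prompt).
prompt = (
    "NASA is collecting data on coronal mass ejections, eruptions of "
    "plasma from the sun's corona that"
)

# Auto-complete the story. Even if the continuation happens to be factually
# sound, a detector keyed to machine-written style will still flag it.
result = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```

A detector trained to separate human writing from machine writing judges style, not truth, which is why a correct machine-written continuation like this one still trips it.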

“This finding of ours calls into question the credibility of current classifiers in being used to help detect misinformation in other news sources,” said study contributor and MIT professor Regina Barzilay.

In the second paper, the team drew on Fact Extraction and VERification (FEVER), a repository of statements cross-checked against evidence from Wikipedia articles, to develop a best-in-class fact-checking algorithm. This was easier said than done; they note that FEVER contains biases that can cause errors in machine learning models if left unaddressed.

Problematically, systems trained on FEVER tend to focus on the language of statements without taking external evidence into account, learning, for instance, that negated phrasing correlates with the “refuted” label. As a result, a statement like “Adam Lambert does not publicly hide his homosexuality” would likely be declared false by a fact-checking AI even though it’s true and can be inferred from the corpus. The effect is exacerbated when the target statement contains information that’s true today but might be considered false in the future.
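Here is a toy sketch of that claim-only failure mode, offered as an illustration rather than the papers’ actual model:

```python
# A toy illustration (not the papers' model) of claim-only bias: a
# classifier that reads only the claim's wording can latch onto negation
# cues, which correlate with the "refuted" label in FEVER, and never
# consult the Wikipedia evidence at all.
def claim_only_verdict(claim: str) -> str:
    tokens = claim.lower().split()
    has_negation = any(t in {"not", "never", "no"} or t.endswith("n't")
                       for t in tokens)
    return "REFUTED" if has_negation else "SUPPORTED"

# A true statement gets the wrong label purely because of its phrasing.
print(claim_only_verdict("Adam Lambert does not publicly hide his homosexuality"))
# -> REFUTED (wrong: the claim is supported by the evidence)
```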

The coauthors created a data set that de-biases FEVER to address this, but that didn’t solve the dilemma entirely. Models performed poorly on the unbiased evaluation sets, a result the researchers chalk up to those models’ overreliance on the bias to which they were initially exposed. The final fix involved engineering an entirely new algorithm, one that, when trained on the debiased data set, outperformed previous fact-checking AI across all metrics.
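For a sense of what debiasing can look like in practice, here is a rough sketch of one common recipe, down-weighting training examples whose claims contain give-away wording; this is an assumption for illustration, not the authors’ published algorithm:

```python
# A rough sketch of one common debiasing recipe (an assumption for
# illustration, not the authors' published algorithm): down-weight training
# examples whose claims contain words that give away the label on their own.
from collections import Counter, defaultdict

def word_bias(claims, labels):
    """For each word, the fraction of its occurrences under its majority label."""
    counts = defaultdict(Counter)
    for claim, label in zip(claims, labels):
        for word in set(claim.lower().split()):
            counts[word][label] += 1
    return {w: max(c.values()) / sum(c.values()) for w, c in counts.items()}

def example_weight(claim, bias):
    """Scale an example's training loss inversely to its most biased word."""
    strongest = max((bias.get(w, 0.0) for w in claim.lower().split()), default=0.0)
    return 1.0 - strongest  # give-away wording contributes little to the loss

# Tiny illustration: "not" perfectly predicts REFUTED in this toy data, so
# negated claims are zeroed out while neutral claims keep partial weight.
claims = ["X did not win the award", "Y did not star in the film",
          "Z won the award", "Q starred in the film"]
labels = ["REFUTED", "REFUTED", "SUPPORTED", "SUPPORTED"]
bias = word_bias(claims, labels)
print(example_weight("W did not write the book", bias))  # 0.0
print(example_weight("W received the award", bias))      # 0.5
```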

The team hopes that integrating fact-checking into existing defenses will make models more robust against such attacks. In the future, they plan to further improve existing models by developing new algorithms and constructing data sets that cover more types of misinformation.

They’re not the only ones attempting to combat the spread of fake news with AI. Delhi-based startup MetaFact taps natural language processing algorithms to flag misinformation and bias in news stories and social media posts. AdVerif.ai, a software-as-a-service platform that launched in beta last year, parses articles for misinformation, nudity, malware, and other problematic content and cross-references a regularly updated database of thousands of fake and legitimate news items. And for its part, Facebook has experimented with deploying AI tools that “identify accounts and false news.”

Whatever the ultimate solution — whether AI, human curation, or a mix of both — it can’t come fast enough. Gartner predicts that by 2022, if current trends hold, a majority of people in the developed world will see more false than true information.
