"Why Iteration is not Innovation"

Watch our recorded WEBINAR!

Two teams propose AI solutions to fake news problem

Fake news is one of the defining problems of this decade. And no, I don’t mean ‘fake news’ in the way President Trump uses the term, which for him simply means any news he doesn’t agree with or like. I mean fake news in the ‘factually inaccurate but widely shared news story’ kind of way. I’m talking about Pizzagate, Pope Francis ‘endorsing’ candidate Trump, or Donald Trump sending his personal plane to save stranded marines (all of which were demonstrably false but enjoyed viral distribution). Regardless of your political persuasion, many experts agree that a huge influx of actual fake news notably impacted the 2016 election, with some even suggesting it was the deciding factor. I certainly don’t know whether it was the deciding factor, but I do know from my own experience in a digital world dominated by social media that fake news is a large and growing problem. And the sheer volume of content, multiplied by a population with subpar 21st-century media literacy, means this problem isn’t going anywhere, and it just might be an existential threat to democracy.

Can AI save the day?

How big is the fake news problem?

Some academic research for you:

  • “More than one-quarter of voting-age adults visited a fake news website supporting either Clinton or Trump in the final weeks of the 2016 campaign, according to estimates from [Princeton’s Andrew] Guess and his co-authors. That information was gleaned from a sample of more than 2,500 Americans’ web traffic data collected (with consent) from October and November of 2016. Some posts, in particular, spread especially far: In the months leading up to the election, the top 20 fake news stories had more shares, reactions, and comments on Facebook (8.7 million engagements) than the 20 top hard news stories (7.3 million engagements), according to a Buzzfeed analysis.”
  • According to a Stanford study, “There was a proliferation of fake news during the 2016 election cycle. Grinberg et al. analyzed Twitter data by matching Twitter accounts to specific voters to determine who was exposed to fake news, who spread fake news, and how fake news interacted with factual news (see the Perspective by Ruths). Fake news accounted for nearly 6% of all news consumption, but it was heavily concentrated—only 1% of users were exposed to 80% of fake news, and 0.1% of users were responsible for sharing 80% of fake news. Interestingly, fake news was most concentrated among conservative voters.”
    • Basically, the folks inclined toward fake news made up a disproportionate percentage of total fake news viewership. So it might not have reached that many people, but it had a huge influence on the people it did reach.
  • Another study, this one from Ohio State, concluded that fake news depressed turnout for Hillary Clinton, perhaps enough to swing the election. According to the Washington Post, the ‘study suggests that about 4 percent of President Barack Obama’s 2012 supporters were dissuaded from voting for Clinton in 2016 by belief in fake news stories.’

So, in the 2016 election alone, it was a pretty big problem. But what about the larger implications? A democracy can’t function without an agreed-upon framework for facts. If two sides of any issue cannot agree on what’s real and what’s not, there’s no infrastructure for discourse. Truth and factual accuracy undergird American democracy, so getting this right is paramount.

How can AI help?

“The sheer amount of content generated today makes human filtering impossible. The only hope is automating the detection and neutralizing of fake content,” according to ZDNet, a sentiment with which I agree. That’s why AI might be the key to this problem.

But what would that look like?

Two teams working on this issue are showing promise.

In Hierarchical Propagation Networks for Fake News Detection, researchers looked at how fake news moves through our consumption networks to see if it’s distinguishable from how real news is distributed.

From ZDNet: “The researchers, from Arizona State and Penn State, used the FakeNewsNet data repository and modeled the links between real and fake news, including tweets. They analyzed the resulting network graphs, and discovered that there are, indeed, significant differences between how real news and fake news spread through our social networks. Metrics cover such macro structural issues as tree depth and number of nodes, as well as temporal issues such as the time difference between the first tweet and the last retweets. The micro level of user conversations includes metrics such as how long a conversation tree lasts, as well as the sentiment expressed in retweets.”

So if the way fake information moves through our networks is quantifiably different, perhaps AI can identify it and stop it in its tracks.
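
To make that concrete, here is a minimal Python sketch of what a propagation-based detector could look like: compute a few structural and temporal features from each story’s share cascade (tree depth, number of nodes, time from the first post to the last share), then train an off-the-shelf classifier on them. The cascade format, feature set, and toy data below are my own simplified assumptions for illustration; this is not the authors’ actual pipeline or the FakeNewsNet schema.

```python
from dataclasses import dataclass, field
from typing import List

from sklearn.ensemble import RandomForestClassifier


@dataclass
class Share:
    """One node in a share/retweet cascade."""
    timestamp: float                               # seconds after the original post
    children: List["Share"] = field(default_factory=list)


def depth(node: Share) -> int:
    """Macro-structural feature: longest root-to-leaf path in the cascade tree."""
    return 1 + max((depth(c) for c in node.children), default=0)


def size(node: Share) -> int:
    """Macro-structural feature: total number of shares in the cascade."""
    return 1 + sum(size(c) for c in node.children)


def last_share(node: Share) -> float:
    """Temporal helper: timestamp of the final share anywhere in the cascade."""
    return max([node.timestamp] + [last_share(c) for c in node.children])


def features(root: Share) -> List[float]:
    """Turn one cascade into the feature vector [depth, size, lifetime]."""
    return [depth(root), size(root), last_share(root) - root.timestamp]


# Two hand-made toy cascades: a shallow, slow "real" one and a deep, fast "fake" one.
real_cascade = Share(0, [Share(3600), Share(7200)])
fake_cascade = Share(0, [Share(60, [Share(120, [Share(180)]), Share(300)]), Share(90)])

X = [features(real_cascade), features(fake_cascade)]
y = [0, 1]  # 0 = real, 1 = fake

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([features(fake_cascade)]))  # should print [1] on this toy data
```

In a real system the labels would come from a fact-checked corpus like FakeNewsNet, and the feature list would include the conversation-level and sentiment metrics the researchers describe, but the overall shape (cascade in, features out, classifier on top) stays the same.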

Another team is working on a more structural change to how news is created and shared, one that relies on blockchain.

Researchers at Pakistan’s Information Technology University want to use Blockchain to Rein in The New Post-Truth World and Check The Spread of Fake News. Again from ZDNet: “Their proposed architecture has three blockchain based components: a publisher management protocol (PMC); a smart contract for the news; and, a news blockchain. The PMC uses three smart contracts to enroll, update, and revoke the identities of news organizations. The smart news contract is used to publish news, and ensures that the content is as originally published, and includes publisher and verification data. Finally, the news blockchain guards against malicious alterations and includes a proof-of-truthfulness method that makes it easy to confirm validity.”
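
To illustrate the shape of that architecture, here is a toy Python sketch of a tamper-evident news ledger: a registry of enrolled publishers (standing in for the publisher management protocol) and a hash-chained log of articles whose content can be re-verified later (standing in for the news blockchain and its proof-of-truthfulness check). This is an illustrative simplification of the idea, not the researchers’ smart-contract implementation.

```python
import hashlib
import json
import time


def sha256(data: str) -> str:
    return hashlib.sha256(data.encode("utf-8")).hexdigest()


class NewsChain:
    def __init__(self):
        self.publishers = set()   # stand-in for the publisher management protocol
        self.blocks = []          # stand-in for the news blockchain itself

    def enroll_publisher(self, publisher_id: str) -> None:
        self.publishers.add(publisher_id)

    def publish(self, publisher_id: str, headline: str, body: str) -> dict:
        """Append a new article record, chained to the previous block's hash."""
        if publisher_id not in self.publishers:
            raise ValueError("unknown publisher; enroll it first")
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "publisher": publisher_id,
            "headline": headline,
            "content_hash": sha256(body),   # proves the text hasn't been altered
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = sha256(json.dumps(record, sort_keys=True))
        self.blocks.append(record)
        return record

    def verify_article(self, index: int, body: str) -> bool:
        """Check that a circulating copy matches what was originally published."""
        return self.blocks[index]["content_hash"] == sha256(body)

    def verify_chain(self) -> bool:
        """Detect any after-the-fact tampering with the ledger itself."""
        prev = "0" * 64
        for block in self.blocks:
            expected = dict(block)
            stored_hash = expected.pop("hash")
            if block["prev_hash"] != prev:
                return False
            if sha256(json.dumps(expected, sort_keys=True)) != stored_hash:
                return False
            prev = stored_hash
        return True


# Example: enroll a publisher, publish a story, then confirm a shared copy is unaltered.
chain = NewsChain()
chain.enroll_publisher("example-news-org")
chain.publish("example-news-org", "Headline", "Full article text...")
assert chain.verify_article(0, "Full article text...")
assert chain.verify_chain()
```

The point of the content hash is that anyone handed a copy of the article can confirm it matches what the enrolled publisher originally committed to the chain, which is the kernel of the proposed proof-of-truthfulness idea.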

This would require a complete rewrite of how news is created, how it’s distributed and how it’s consumed. I’m a little less convinced by this approach, but bold ideas are required to conquer existential threats.

All in all, fake news is a large and growing problem. But hopefully, advances in AI and commitment from necessary stakeholders will allow us to get a handle on things sooner rather than later.




