Fake news is a scourge on the global community. Despite our best efforts to combat it, the problem lies deeper than just fact-checking or squelching publications that specialize in misinformation. The current thinking still tends to support an AI-powered solution, but what does that really mean?
According to recent research – including a recent paper from scientists at the University of Tennessee and the Rensselaer Polytechnic Institute – we’re going to need more than just clever algorithms to fix our broken discourse.
The problem is simple: AI can’t do anything a person can’t do. Sure, it can do plenty of things faster and more efficiently than people – like counting to a million – but, at its core, artificial intelligence only scales things people can already do. And people really suck at identifying fake news.
According to the aforementioned researchers, the problem lies in what’s called “confirmation bias.” Basically, when a person thinks they already know something, they’re less likely to be swayed by a “fake news” tag or a “dubious source” description.
Per the team’s paper:
In two sequential studies, using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI, and further, under this condition tailored advice is more effective than generic one.
This makes it incredibly difficult to design, develop, and train an AI system to spot fake news.
While most of us may think we can spot fake news when we see it, the truth is that the bad actors creating misinformation aren’t operating in a vacuum: they’re better at lying than we are at telling the truth – at least when they’re saying something we already believe.
The scientists found people – including independent Amazon Mechanical Turk workers – were more likely to incorrectly view an article as fake if it contained information contrary to what they believed to be true.
On the flip side, people were less likely to make the same mistake when the news being presented was part of a novel news situation. In other words: when we think we already know what’s going on, we’re more likely to accept fake news that lines up with our preconceived notions.
While the researchers do go on to identify several methods by which we can use this information to shore up our ability to inform people when they’re presented with fake news, the gist of it is that accuracy isn’t the issue. Even when the AI gets it right we’re still less likely to believe a real news article when the facts don’t line up with our personal bias.
This isn’t surprising. Why should someone trust a machine built by big tech in place of the word of a human journalist? If you’re thinking: because machines don’t lie, you’re absolutely wrong.
When an AI system is built to identify fake news, it typically has to be trained on pre-existing data. In order to teach a machine to recognize and flag fake news in the wild, we have to feed it a mixture of real and fake articles so it can learn how to spot which is which. And the datasets used to train AI are usually labeled by hand, by humans.
Often this means crowd-sourcing labeling duties to a third-party cheap labor outfit such as Amazon’s Mechanical Turk or any number of data shops that specialize in datasets, not news. The humans deciding whether a given article is fake may or may not have any actual experience or expertise with journalism and the tricks bad actors can use to create compelling, hard-to-detect, fake news.
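The pipeline described above can be sketched in a few lines. The example below is a minimal, hypothetical naive Bayes text classifier trained on a handful of made-up, hand-labeled headlines – the point being that the model only ever learns to reproduce its human annotators’ judgments, so any labeler bias is baked in. The headlines, labels, and function names are all illustrative assumptions, not from the paper.

```python
from collections import Counter
import math

# Hypothetical hand-labeled training set: each label reflects a human
# annotator's judgment, not ground truth -- labeler bias is baked in.
LABELED = [
    ("miracle cure doctors hate this secret trick", "fake"),
    ("celebrity secretly replaced by body double", "fake"),
    ("anonymous source claims shocking election fraud", "fake"),
    ("senate passes budget bill after lengthy debate", "real"),
    ("local council approves new public transit funding", "real"),
    ("study published in peer reviewed journal finds modest effect", "real"),
]

def train(examples):
    """Count words per label -- the whole 'model' is just these tallies."""
    word_counts = {"fake": Counter(), "real": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label with add-one smoothing; return the likelier one."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train(LABELED)
print(classify("doctors hate this shocking secret", *model))   # -> fake
print(classify("council passes transit budget bill", *model))  # -> real
```

Swap in differently biased annotators and the exact same code happily flags different articles as “fake” – which is the article’s point: the algorithm scales human judgment, it doesn’t replace it.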
And, as long as humans are biased, we’ll continue to see fake news thrive. Not only does confirmation bias make it difficult for us to differentiate facts we don’t agree with from lies we do, but the perpetuation and acceptance of outright lies and misinformation from celebrities, our family members, peers, bosses, and the highest political offices makes it difficult to convince people otherwise.
While AI systems can certainly help identify egregiously false claims, especially when made by news outlets that regularly engage in fake news, the fact remains that whether a news article is true doesn’t really matter to most people.
Take, for instance, the most watched cable network on television: Fox News. Its own lawyers have repeatedly argued in court that numerous programs – including the second highest-viewed program on its network, hosted by Tucker Carlson – shouldn’t be taken as factual.
Per a ruling in a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil — a Trump appointee — ruled in favor of Carlson and Fox after finding that reasonable viewers wouldn’t take the host’s everyday rhetoric as statements of fact:
The ‘general tenor’ of the show should then inform a viewer that [Carlson] is not ‘stating actual facts’ about the topics he discusses and is instead engaging in ‘exaggeration’ and ‘non-literal commentary.’ … Fox persuasively argues, that given Mr. Carlson’s reputation, any reasonable viewer ‘arrive[s] with an appropriate amount of skepticism.’
And that’s why, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether any given news statement is true or false.
If the news outlets themselves, the general public, elected officials, big tech, and the so-called experts can’t decide whether a given news article is true or false without bias, there’s no way we can trust an AI system to do so. As long as the truth remains as subjective as a given reader’s politics, we’ll be inundated with fake news.
Published March 16, 2021 — 21:57 UTC
Source: thenextweb.com