Twitter joins Facebook and YouTube in banning Covid-19 vaccine misinformation

On Wednesday, Twitter announced that it will begin to take down Covid-19 vaccine misinformation starting next week. The company plans to remove false vaccine content that it considers “the most harmful,” and later on it will start labeling other posts that could be misleading.

“In the context of a global pandemic, vaccine misinformation presents a significant and growing public health challenge — and we all have a role to play,” the company said in a blog post. “We are focused on mitigating misleading information that presents the biggest potential harm to people’s health and wellbeing.”

Twitter’s announcement follows similar pledges from both Facebook and YouTube, which recently said they’ll remove false information related to Covid-19 vaccines. The announcement also comes after the US Food and Drug Administration’s authorization of the vaccine developed by Pfizer and BioNTech.

As the Pfizer/BioNTech vaccine begins to be administered to health care workers and people in long-term care homes, misinformation related to vaccines has flourished online. For instance, unproven narratives that the Covid-19 vaccine has links to the Chinese Communist Party, or that the vaccine has a proven connection to a condition called Bell’s palsy, have gained tens of thousands of mentions in the past week, according to data collected by Zignal Labs.

This surge of misinformation has exacerbated concerns that part of the US population may be unwilling to get the vaccine, or will delay doing so. Recent polling suggests that while most Americans say they’ll probably or definitely get vaccinated against Covid-19, many may not do so immediately.

In a blog post, Twitter explained that it will take a two-pronged approach to vaccine content: taking down misinformation that poses the most harm while labeling content that’s misleading or out of context. Posts that could be removed, the company says, include anything that suggests that a Covid-19 vaccine is part of a “deliberate conspiracy” or that falsely claims Covid-19 is a hoax and so vaccines aren’t necessary. The company also said it would tackle misinformation related to vaccines more generally, including claims that have been “widely debunked about the adverse impacts or effects of receiving vaccinations.”

A Twitter spokesperson told Recode that when someone posts this type of misinformation, the platform will hide that content from public view. The person who posted it can then appeal the decision to Twitter or log on and remove that content themselves before they’ll be allowed to post again from their account.

Beginning next year, Twitter will also start adding labels to posts that the platform decides need further context, like rumors, contested claims, or claims about the Covid-19 vaccine that are “incomplete.”

Back in October, YouTube announced that it planned to remove Covid-19 vaccine misinformation, banning claims that contradict guidance from health experts and the World Health Organization.

Earlier this month, Facebook said that under its policy requiring the removal of content that could lead to “imminent physical harm,” it too would remove false information related to Covid-19 vaccines. For instance, the company said it would take down content that said vaccines include microchips — a common and false conspiracy theory related to the Covid-19 vaccine. It’s also removing posts that claim that “specific populations are being used without their consent to test the vaccine’s safety.”

Just a month ago, Twitter had told Recode that while it recognized the importance of its platform to public health, it was still working out how it would approach moderating content surrounding a Covid-19 vaccine. Twitter would not comment on whether it worked with the other social media companies in developing the policies announced today.

Throughout the pandemic, Twitter has used a sliding scale between posts that deserve a label and posts that require removal, depending on how harmful that content could be. The company has also frequently used labels on posts sharing election misinformation.

Twitter’s new rules on vaccine misinformation suggest the fight against this problem is likely to continue. In fact, many of the same accounts that have pushed other types of false claims, like election misinformation, are now turning their attention toward the Covid-19 vaccine, indicating a significant challenge ahead for social media platforms.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
