WASHINGTON — In its latest effort to combat Covid-19 misinformation, social media giant Twitter announced that it will begin testing a reporting feature that lets users flag tweets containing possible misinformation.

Through the new feature, users will report misinformation using the same process they already use to report harassment or other harmful content.

Users will also be given an option to select whether the misinformation is political, health-related, or falls into another category.

Twitter rolled out the new feature on Aug. 17 for most users in the U.S., Australia, and South Korea, adding that it expects to run the experiment for a few months before deciding whether to make it available in additional markets.

Testing the new feature could help identify tweets containing misinformation that have the potential to go viral.

Twitter said in a statement that not every report will be reviewed while the platform continues to test the feature, but the data gathered during the test will help the company determine how to expand the feature over the next few weeks.

“Content that is demonstrably false or misleading and may lead to a significant risk of harm (such as increased exposure to the virus, or adverse effects on public health systems) may not be shared on Twitter,” the social media platform said.

“This includes sharing content that may mislead people about the nature of the Covid-19 virus; the efficacy and safety of preventative measures, treatments, or other precautions to mitigate or treat the disease; official regulations, restrictions, or exemptions about health advisories; or the prevalence of the virus or risk of infection or death associated with Covid-19.”

The new measure comes after the Biden administration took a stronger stance against misinformation as new variants of Covid-19 have continued to spread in the U.S.

Last month, U.S. President Biden slammed Facebook and other online platforms for spreading misinformation about the Covid-19 vaccine.

The White House asked Facebook and other social media companies to be more aggressive in removing “harmful” posts that spread disinformation and in flagging posts that disseminate misinformation, according to White House Press Secretary Jen Psaki.

The U.S. Surgeon General’s office published a report outlining new ways platforms could counter health misinformation.

“I am urging all Americans to help slow the spread of health misinformation during the Covid-19 pandemic and beyond,” U.S. Surgeon General Vivek H. Murthy, a vice admiral in the U.S. Public Health Service, said in the report.

“Health misinformation is a serious threat to public health. It can cause confusion, sow mistrust, harm people’s health, and undermine public health efforts. Limiting the spread of health misinformation is a moral and civic imperative that will require a whole-of-society effort.”

The report called for “clear consequences for accounts that repeatedly violate” a platform’s rules and for companies like Facebook and Twitter to redesign their algorithms to “avoid amplifying” false information.

(With inputs from ANI)

Edited by Saptak Datta and Praveen Pramod Tewari



