Automated technology that Twitter began using this month to label tweets containing coronavirus misinformation is making mistakes, raising concerns about the company's reliance on artificial intelligence to review content.
On May 11, Twitter started labeling tweets that spread a conspiracy theory about 5G causing the coronavirus. Authorities believe the false theory prompted some people to set fire to cell towers.
Twitter will remove misleading tweets that encourage people to engage in behavior such as damaging cell towers. Other tweets that don't incite the same level of harm but include false or disputed claims should get a label that directs users to trusted information. The label reads "Get the facts about COVID-19" and takes users to a page with curated tweets that debunk the 5G coronavirus conspiracy theory.
Twitter's technology, though, has made scores of mistakes, applying labels to tweets that refute the conspiracy theory and provide accurate information. Tweets that include links to news stories from Reuters, BBC, Wired and Voice of America about the 5G coronavirus conspiracy theory have been labeled. In one case, Twitter applied the label to tweets that shared a page the company itself had published titled "No, 5G isn't causing coronavirus." Tweets containing words such as 5G, coronavirus and COVID-19, or the hashtag #5Gcoronavirus, have also been mistakenly labeled.
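Twitter hasn't said how its system decides which tweets to flag, but the pattern of errors is consistent with simple keyword matching. The sketch below is illustrative only; the keyword list and function are hypothetical, not Twitter's actual code. It shows why a tweet debunking the conspiracy theory can trip the same trigger as one promoting it.

```python
# Illustrative sketch of a naive keyword-based labeler -- an assumption about
# the kind of rule that produces these errors, not Twitter's actual system.
KEYWORDS = {"5g", "coronavirus", "covid-19", "#5gcoronavirus"}

def should_label(tweet_text: str) -> bool:
    """Flag any tweet that merely mentions one of the trigger terms."""
    words = tweet_text.lower().split()
    return any(word.strip(".,!?") in KEYWORDS for word in words)

print(should_label("5G is causing coronavirus"))         # True
print(should_label("No, 5G isn't causing coronavirus"))  # True -- the debunking tweet gets flagged too
print(should_label("Lovely weather for a walk today"))   # False
```

Telling the first two tweets apart requires understanding negation and context rather than spotting keywords, which is the gap Farid describes below.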
Experts say the mislabeled tweets could confuse users, especially if they don't click on the label. Since Twitter doesn't notify users when their tweets get labeled, they likely won't know their tweets have been flagged. Twitter also doesn't give users a way to appeal its evaluation of their posts.
"Arguably, labeling incorrectly does more harm than not labeling because then people come to rely on that and they come to trust it," said Hany Farid, a computer science professor at University of California, Berkeley. "Once you get it wrong, a couple hours go by and it's over."
Making mistakes
Twitter declined to say how many 5G-coronavirus tweets have been labeled or provide an estimated error rate. The company said its Trust and Safety team is keeping track of labeled coronavirus-related tweets. The mislabeled tweets identified by CNET haven't been fixed. The company said its automated systems are new and will improve over time.
"We are building and testing new tools so we can scale our application of these labels appropriately. There will be mistakes along the way," a Twitter spokesperson said in a statement. "We appreciate your patience as we work to get this right, but this is why we are taking an iterative approach, so that we can learn and make adjustments along the way."
The company is labeling tweets about the 5G coronavirus conspiracy theory first, but plans to tackle other hoaxes.
With 166 million monetizable daily active users, Twitter faces a huge moderation challenge because of the flood of tweets flowing through the site. The company said its automated tools help workers review reports more efficiently by surfacing the content most likely to cause harm, helping them prioritize which tweets to review first.
Twitter's approach to coronavirus misinformation is similar to Facebook's efforts to combat inaccurate content, though the world's largest social network relies more on human reviewers. Facebook works with more than 60 third-party fact-checkers globally who review the accuracy of posts. If a fact-checker rates a post as false, Facebook displays a warning notice and shows the content lower in a person's News Feed to reduce its spread. Twitter, by contrast, labels content automatically, without a human review first.
UC Berkeley's Farid said he isn't surprised that Twitter's automated system is making errors.
"The difference between a headline with a conspiracy theory and one debunking it is very subtle," he said. "It's literally the word 'not' and you need full blown language understanding, which we don't have today."
Instead, he said, Twitter could take action against users who are spreading coronavirus misinformation and have a large number of followers. Researchers at Oxford University released a study in April that showed high-profile social media users such as politicians, celebrities or other public figures shared about 20 percent of false claims but generated 69 percent of the total social media engagement.
Fooling Twitter's automated system
Some Twitter users are also testing the system by tweeting the words 5G and coronavirus, flooding the site with more incorrectly labeled tweets.
Ian Alexander, a 33-year-old YouTuber who posts videos about tech, said he spotted the new label on a tweet on May 11 that had nothing to do with the coronavirus 5G conspiracy theory. He decided to test Twitter's system by tweeting "If you type in 5G, COVID-19 or Coronavirus in a tweet.. this will show up underneath it…" The label automatically popped up on the tweet.
Labeling tweets, Alexander said, "may be more harmful than good" because somebody might just see the notice on their timeline without clicking through.
Other tweets with misleading coronavirus information are slipping through the cracks. Actress Fran Drescher, who has more than 260,000 followers, tweeted on May 12: "I can't believe all the commercials for 5G . Gr8 4cancer, harming birds, bees &mor viruses like Corona. Dial it bac." A tweet from another user quoted Judy Mikovits, who is featured in "Plandemic," a viral video containing coronavirus conspiracy theories, saying she believes 5G plays a part in the coronavirus pandemic. Neither tweet had a label. (CNET isn't linking to these tweets because they contain false information.)
Other social networks say they've had success with labeling false content. In March, Facebook displayed warning labels on about 40 million posts about COVID-19. When people saw those warning labels, they didn't go on to view the inaccurate content about 95% of the time, according to Facebook.
Still, a study by MIT found that labeling false news could lead users to believe unlabeled stories even when they contained misinformation. The MIT researchers call this phenomenon the "implied truth effect."
David Rand, a professor at the MIT Sloan School of Management who co-authored the study, said one potential solution is for companies to ask social media users to rate content as trustworthy or untrustworthy.
"Not only would it help inform the algorithms," Rand said, "but also it makes people more discerning in their own sharing because it just kind of nudges them to think about accuracy."