Humans Can Help Clean Up Facebook and Twitter
Computers are good at spreading lies. People can help stop them and mitigate the damage they cause.
By Greg Bensinger
Mr. Bensinger is a member of the editorial board.
Social media companies this election cycle did more to try to prevent the spread of misinformation than in any previous American election. It wasn’t nearly enough.
Half-truths and lies spread widely and quickly. On Facebook and Twitter, the most inflammatory, unreliable and divisive posts are shared and too often believed more readily than those with verifiable facts. Now that we’ve had time to survey the fallout from the election, it’s apparent that much more needs to be done to stop the proliferation of misinformation more rapidly and more consistently, year-round and around the globe.
Leading up to last month’s election, Twitter and Facebook appended warning labels to numerous tweets and posts from Donald Trump and his supporters, and the sites have sporadically continued to do so as the president broadcasts unsubstantiated claims of voting fraud and ballot counting inconsistencies. It’s a start, but the evidence suggests the labels themselves didn’t stop the spread of the posts. Facebook, which allows politicians to post lies on its website, indicated in an internal discussion that the labels lowered the spread of the president’s objectionable posts by only about 8 percent. Twitter said its labels helped to decrease the spread of offending tweets by 29 percent, by one measure.
Worse, the labels contained squishy language, like calling the president’s assertions that he won the election or that it was stolen “disputed,” rather than simply false. Because the companies have not revealed how often users clicked through the labels to more reliable information, it seems safe to assume those click-throughs were minimal.
Cleaning up social media won’t be easy, particularly since banning or significantly throttling more prominent accounts even after repeated violations of policy or common decency would be bad for business. Top accounts appear to be treated more leniently than the general public, forcing Facebook, in one recent episode, to explain why it wasn’t giving Steve Bannon the boot after he suggested that Dr. Anthony Fauci and Christopher Wray, the director of the F.B.I., should be beheaded. Facebook said Mr. Bannon hadn’t committed enough violations.
It’s really about money. Divisiveness brings more engagement, which brings in more advertising revenue.
Users should worry that Facebook and Twitter won’t maintain the same level of vigilance now that the election has passed. (Facebook’s chief executive, Mark Zuckerberg, said as much, according to BuzzFeed.) And the incentives for posting misleading content didn’t disappear after Nov. 3.
If the companies truly care about the integrity of their platforms, they’ll form teams of people to monitor the accounts of users with the most followers, retweets and engagement. That includes those of Mr. Trump, both today and later as a private citizen, but also of President-elect Joe Biden and President Jair Bolsonaro of Brazil, and other influential accounts, like those of Elon Musk, Bill Gates and Taylor Swift. Facebook says it has software tools to identify when high-reach accounts may violate rules, but those tools are clearly not catching enough violations quickly enough.
Think of these frontline moderators as hall monitors whose job is to ensure that students have a pass, but not necessarily to issue penalties if they don’t. The monotony of refreshing Justin Trudeau’s social media feed is worth it for the preservation of democracy and promotion of basic facts.
“For the platforms to treat all the bad info as having the same weight is disingenuous,” said Sarah Roberts, an information studies professor at the University of California, Los Angeles. “The more prominent the profile, the higher the accountability should be.”
With such a system, the companies could ensure the swiftest possible response so that posts are vetted by actual people, including outside fact checkers, familiar with company policy, nuance and local customs. When the companies rely too much on software to decide what to examine, that vetting can happen slowly or not at all. Particularly in the heat of an election, minutes count and dangerously false information can be seen by millions immediately. If enough people believe an unfettered lie, it gains legitimacy, particularly if our leaders and cultural icons are the ones endorsing it.
Posts, tweets and screenshots that lack a warning label are more likely to be believed because users assume they have passed Facebook’s and Twitter’s smell tests, said Sinan Aral, a Massachusetts Institute of Technology professor who studies social media.
Staffing shouldn’t be an issue: Between Facebook and Twitter, tens of thousands of people already work on monitoring commonplace violative content. Such a system could bolster Facebook’s specialized teams that work in conjunction with artificial intelligence software that the company says can detect when a post has gone viral or is likely to.
“Human-in-the-loop moderation is the right solution,” said Mr. Aral, whose book, “The Hype Machine,” addresses some of social media’s foibles. “It’s not a simple silver bullet, but it would give accountability where these companies have in the past blamed software.”
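To make that idea concrete, here is a minimal sketch, in Python, of what human-in-the-loop triage could look like: software flags posts and estimates their likely reach, and the highest-reach items are routed to human reviewers first. The follower threshold, the virality score and the scoring formula are all illustrative assumptions for this sketch, not a description of how Facebook’s or Twitter’s systems actually work.

```python
import heapq
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class QueuedPost:
    # heapq pops the smallest item first, so priority is the negated reach score.
    priority: float
    post_id: str = field(compare=False)
    author_followers: int = field(compare=False)
    predicted_virality: float = field(compare=False)

class ReviewQueue:
    """Orders software-flagged posts so humans see the highest-reach ones first."""

    def __init__(self, follower_threshold: int = 100_000):
        # Hypothetical cutoff: accounts above this size always get human review.
        self.follower_threshold = follower_threshold
        self._heap: List[QueuedPost] = []

    def enqueue(self, post_id: str, author_followers: int, predicted_virality: float) -> None:
        # predicted_virality is an assumed 0-to-1 score from a flagging model.
        if author_followers < self.follower_threshold and predicted_virality < 0.5:
            return  # low-reach, low-risk posts stay in the automated pipeline
        # Larger audience times higher predicted spread means earlier review.
        priority = -(author_followers * predicted_virality)
        heapq.heappush(
            self._heap,
            QueuedPost(priority, post_id, author_followers, predicted_virality),
        )

    def next_for_review(self) -> Optional[QueuedPost]:
        # Hand the single most urgent post to a human moderator.
        return heapq.heappop(self._heap) if self._heap else None

# A head-of-state account outranks a mid-size account in the queue.
queue = ReviewQueue()
queue.enqueue("post-1", author_followers=88_000_000, predicted_virality=0.9)
queue.enqueue("post-2", author_followers=250_000, predicted_virality=0.7)
assert queue.next_for_review().post_id == "post-1"
```

The point of such a design is that software only sets the order of review; the judgment about any given post stays with a person, which is the accountability Mr. Aral describes.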
Ideally, social media companies would ban public officials, media personalities and celebrities who consistently lie or violate policies. But that’s sure to upset the finance folks, and as a result the companies have demonstrated little willingness to do so. Simple modifications, like stronger language in warning labels, moving the labels above the content rather than below it and preventing users from spreading patently false information from prominent accounts, could go a long way toward reforming the sites.
In two recent hearings, Republican lawmakers raked tech executives over the coals for supposedly impinging on free speech by removing or suppressing content or adding warning labels. But it’s worth noting that Facebook and Twitter are actually exercising their own free-speech rights by policing their sites. Private companies monitoring specific accounts and acting against objectionable content may seem unpalatable to some, but it is not against any law. The alternatives are worse.
Facts take time to verify. Until social media companies care more about fact than fiction, their sites will be nothing more than an accelerant for the lies our leaders can and do tell every day.