Over the past few weeks, Twitter has flagged multiple tweets from President Donald Trump and his administration, stating that the tweets violated the platform's rules against glorifying violence and spreading potentially misleading voting information.
“Twitter is doing nothing about all of the lies & propaganda being put out by China or the Radical Left Democrat Party,” the president responded in a tweet. “They have targeted Republicans, Conservatives & the President of the United States. Section 230 should be revoked by Congress. Until then, it will be regulated!”
Following his response on Twitter, the president signed an executive order encouraging the Federal Communications Commission to reevaluate the scope and rules of Section 230 of the Communications Decency Act, which protects tech companies from being held liable for anything that users say on their platforms.
In an interview with Fox News last month, Facebook CEO Mark Zuckerberg took a stance on the issue, asserting that private companies should not act as the “arbiter of truth” online. Twitter CEO Jack Dorsey later responded to Zuckerberg’s statement in a thread of tweets, explaining that the platform’s intention is to “connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.”
The debate between the two Silicon Valley giants sparked a much larger conversation: do social media platforms have an obligation to rebut fake news and educate users, especially when it comes to public figures like the president?
Earlier this year, Twitter implemented new labels and warning messages that provide additional context and information to dispute false and/or misleading content. The platform broke such content down into three categories — misleading information, disputed claims and unverified claims — each of which may receive a warning message, a label or nothing at all, depending on the claim’s severity.
To identify problematic Tweets, the social media platform explained that it is using internal systems and trusted partners to monitor content and identify Tweets that could incite offline harm. Twitter has promised that the process is ongoing and that it will “continue to introduce new labels to provide context around different types of unverified claims and rumors as needed.”
Facebook, on the other hand, takes a very different stance on the spread of fake news and misinformation. In its Community Standards, the platform states that it believes there is a line between false news and satire or opinion. To further emphasize this point, Facebook explains that it “don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.”
In addition, Zuckerberg has further elaborated on the platform’s decision to not remove false news, especially when it comes to the president, because “[Facebook] think[s] people need to know if the government is planning to deploy force.”
While both Facebook and Twitter claim they disapprove of fake news and do not support its spread, the two platforms hold starkly different opinions on its removal from social media. Does the public deserve access to potentially false and/or misleading information from the president for their own protection, or will leaving false or misleading information on the platform put them in even more danger?
Though there may not be a cookie-cutter answer to this ongoing debate, if the president’s plan to hold tech companies liable for user content moves forward, the big tech industry as we know it could change entirely (and possibly not for the better).
Dylan Manderbach is a public relations associate at Flackable, a national, full-service public relations agency headquartered in Philadelphia. To learn more about Flackable, please visit www.flackable.com.