Perspectives: Should social media platforms be regulated to stop hate speech?

A 3D illustration depicting social media censorship and restricted or canceled speech. (Getty Images)

YES: Lawmakers and regulators should implement policies to mitigate harmful, false content

By Yosef Getachew

On Jan. 6, a mob of insurrectionists stormed the U.S. Capitol in an attempt to overturn our country's 2020 presidential election. The attack, which resulted in the deaths of five people, was fueled by a stream of disinformation and hate speech that flooded social media platforms before, during and after the election. Despite their civic integrity and content moderation policies, platforms have been slow or unwilling to take action to limit the spread of content designed to disrupt our democracy.

This failure is inherently tied to platforms' business models and practices that incentivize the proliferation of harmful speech. Content that generates the most engagement on social media tends to be disinformation, hate speech and conspiracy theories. Platforms have implemented business models designed to maximize user engagement and prioritize profits over combating harmful content.

While the First Amendment limits the government's ability to regulate speech, there are legislative and regulatory tools at its disposal that can rein in the social media business practices that bad actors exploit to spread and amplify speech that interferes with our democracy.

The core component of every major social media platform's business model is to collect as much user data as possible, including characteristics such as age, gender, location, income and political beliefs. Platforms then share relevant data points with advertisers for targeted advertising. It should come as no surprise that disinformation agents exploit social media platforms' data-collection practices and targeted advertising capabilities to micro-target harmful content, particularly to marginalized communities.

Comprehensive privacy legislation, if passed, can require data minimization standards, which limit the collection and sharing of personal data to what is necessary to provide service to the user. Legislation can also restrict the use of personal data to engage in discriminatory practices that spread harmful content such as online voter suppression. Without the vast troves of data platforms collect on their users, bad actors would face greater obstacles in targeting users with disinformation.

In addition to data-collection practices, platforms use algorithms that determine what content users see. Algorithms track user preferences through clicks, likes and other forms of engagement. Platforms optimize their algorithms to maximize user engagement, which can mean leading users down a rabbit hole of hate speech, disinformation and conspiracy theories.

Unfortunately, platform algorithms are a "black box," with little known about their inner workings. Congress should pass legislation that holds platform algorithms accountable. Platforms should be required to disclose how their algorithms process personal data. Algorithms should also be subject to third-party audits to mitigate the dangers of algorithmic decision-making that spreads and amplifies harmful content.

Federal agencies with enforcement and rule-making capabilities can apply their authority to limit the spread of harmful online speech that results from platform business practices. For example, the Federal Trade Commission can use its enforcement power against unfair and deceptive practices to investigate platforms for running ads with election disinformation despite having policies that prohibit such content. The Federal Election Commission can complete its longstanding rule-making to require greater disclosure of online political advertisements, providing more transparency about which entities are trying to influence our elections.

Outside of legislative and regulatory processes, the Biden administration should create a task force for the internet, consisting of representatives from federal, state and local governments, business, labor, public interest organizations, academia and journalism. The task force would identify tools to combat harmful speech online and make long-term recommendations for an internet that would better serve the public interest.

There is no silver-bullet solution for eliminating disinformation, hate speech and other harmful online content. In addition to these policy ideas, federal lawmakers must also provide greater support for local journalism to meet the information needs of communities.

But social media companies have proven that profits are more important to them than the safety and security of our democracy. Federal lawmakers and regulators must enact policies as part of a holistic approach to hold social media platforms accountable for the proliferation of harmful and false content.

Yosef Getachew is director of the Media & Democracy Program for Common Cause. He wrote this for InsideSources.com.

Tribune Content Agency

NO: Control over online speech should be in the hands of users, not the government

By Jillian C. York and Karen Gullo

The U.S. election and its dramatic aftermath have elevated the debate about how to deal with online misinformation and disinformation, lies and extremism. We saw social media companies permanently kick the president, some of his allies and conspiracy groups off their platforms for election misinformation, raising eyebrows around the world and prompting accusations that those banned were being robbed of their First Amendment rights. At the same time, people used social media to communicate plans to commit violence at the Capitol, drawing complaints that platforms don't do enough to censor extremism.

This has intensified calls by politicians and others to regulate online speech by imposing rules on Facebook, Twitter and other social media platforms. Lawmakers are backing various wrongheaded proposals to do so. One would amend Section 230 of the Communications Decency Act to hold tech companies legally liable for the speech they host, the thought being that platforms will remove harmful speech to avoid multiple lawsuits. Another would give state legislatures power to regulate internet speech. Last but not least, now-former President Donald Trump issued an executive order in May that would essentially insert the federal government into private internet speech, letting government agencies adjudicate platforms' decisions to remove a post or kick someone off. The Biden administration can rescind the order, but so far it has not.

It is important to note that the law as it currently exists both gives platforms the right to curate their content as they see fit (thanks to the First Amendment) and protects them from liability for the choices they make about what to remove or leave up. Without these protections, it is unlikely that we would have seen the growth of these platforms in the first place, nor are we likely to see further flourishing of competition in the space.

The purported remedies under consideration by lawmakers are deeply and dangerously flawed, and they flout First Amendment speech protections. They would foster state censorship antithetical to democracy. Big tech companies would gain even more control over online speech than they already have, because they can afford the legal fights that will scare off new entrants to the market. What's more, these remedies would push legal, protected speech offline and silence the voices of marginalized and less powerful people who rely on the internet to speak out, a diverse group that includes activists, journalists, LGBTQ individuals and many more.

Instead, users should have more power to control what they see on their feeds. They should be able to move freely with their data from one platform to another when they don't like what they see. There should be more competition and more choice of platforms so users can seek out the one that works for them. Mergers and acquisitions among social media companies should be more closely scrutinized, and our antitrust laws better enforced to foster competition. Instead of having one giant platform gobbling up its competitors, as Facebook did with Instagram and WhatsApp, we need multiple, diverse platforms for people to choose from.

Facebook, Twitter and Google have far too much control over public discourse and do a mostly horrendous job of moderating speech on their platforms. The decisions they make to take down posts or close accounts are inconsistent, vague and lacking in transparency. That needs to change. Platforms should adopt standards like the Santa Clara Principles on Transparency and Accountability in Content Moderation (developed by civil society and endorsed by numerous companies), which frame content moderation practices around human rights considerations, including the right to appeal takedown decisions and to have humans, not algorithms, review removals.

Tech companies have a First Amendment right to edit and curate the content on their platforms, free of government interference. The government cannot force sites to display or promote speech they don't want to display or remove speech they don't want to remove. We support this right. The government shouldn't have the power to dictate what people can or cannot say online.

But until platforms embrace fairness, consistency and transparency in their editing practices, give users more power over their social media accounts, and adopt interoperability so users won't lose data if they decide to switch platforms, and until policymakers find ways to foster competition, we will continue to see misguided calls for the government to step in and regulate online speech.

Jillian C. York, director of International Freedom of Expression at the Electronic Frontier Foundation, is the author of "Silicon Values: The Future of Free Speech Under Surveillance Capitalism." Karen Gullo is an analyst and senior media relations specialist at EFF. They wrote this for InsideSources.com.

Tribune Content Agency
