Trust and Safety Compliance Program at the Heart of Twitter Rift

Musk says he wants Twitter to foster a “public platform that is maximally trusted and broadly inclusive.” In other words, the trust and safety program shouldn’t be used to censor speech.


A disinformation war has broken out in the United States, and corporate compliance is in the middle of it. In the wake of Elon Musk’s deal to buy Twitter, Homeland Security Secretary Alejandro Mayorkas testified this week that the Biden administration had created a “Disinformation Governance Board.” It’s not difficult to find someone upset about one or the other of those developments.

Musk is buying Twitter because, he says, he wants to make the platform “politically neutral.” That, in his view, is what makes users trust the platform, and investors expect Twitter to be a platform for all users, not one group or another. Musk said, “It’s important to the function of the United States as a free country, and many other countries. And actually to help freedom in the world, more broadly than the U.S. The civilizational risk is decreased the more we can increase the trust of Twitter as a public platform.”

At the heart of the debate is Twitter’s trust and safety compliance program.

Trust and safety programs focus on removing harmful or illegal content from platforms. However, evaluating slippery concepts like “harmful” and “offensive” poses challenges to program effectiveness. Compliance programs should, at their core, focus on building trust with customers. Programs that lose sight of this cannot really accomplish their objective.

A compliance program is more than just a set of policies and training. Programs should map to applicable regulations, and they need a strong organization, including leadership and tone at the top; an effective risk-assessment process that identifies gaps and opportunities to develop controls; governance, including policies, procedures, and controls; training that teaches people about that governance; and processes to monitor, audit, and investigate compliance failures. When these elements work together, they drive compliance culture. The end game is complying with applicable laws and regulations, as well as building customer trust.

Compliance programs cover different subject matter areas based on risks to the company. A lot of ink is spilled on anti-corruption, privacy, and trade controls (especially recently, given global political instability), but the areas a program addresses depend on the company’s individual risk profile. For e-commerce companies, trust and safety is one of those risk areas. For companies that provide a social media platform, like Twitter, the trust and safety program is core to the business.

Like all compliance programs, trust and safety programs are tailored to risk. An e-commerce trust and safety program, for example, may focus heavily on anti-counterfeit controls. The program works with people who monitor regulatory, policy, and other trends that affect the business and customers, and it provides information to engineers who write rules setting parameters on the types of content allowed on the site. Other trust and safety areas include vetting the third parties who post content on the platform and monitoring the types of content posted.

Some of what trust and safety programs do addresses legal risk; other areas fall more into policy or reputational risk to the platform and its users. In the e-commerce context, this may mean one rule that prevents a third party from selling an illegal item and another that blocks an offensive item, or an item the business does not want to sell even though it may be lawful (e.g., adult items). A platform may also have rules to protect users against fraudulent solicitations or platform manipulation. Protection of intellectual property (IP) is typically another key risk addressed by the rules.
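To make that concrete, here is a minimal sketch in Python of how a trust and safety team might encode such rules as parameters that engineers maintain. Everything in it (the rule names, keywords, and actions) is a hypothetical illustration, not Twitter’s or any platform’s actual rule set, and a production system would lean on machine-learning classifiers and human review rather than simple keyword matching.

```python
# Hypothetical sketch (Python 3.10+) of trust and safety rules encoded as
# machine-checkable parameters. Rule names, keywords, and actions are
# illustrative assumptions only, not any real platform's rules.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"        # no rule triggered
    BLOCK = "block"        # illegal content: always removed (legal risk)
    RESTRICT = "restrict"  # lawful but against policy, e.g., adult items
    REVIEW = "review"      # ambiguous: route to a human moderator


@dataclass(frozen=True)
class Rule:
    name: str
    keywords: frozenset[str]  # naive keyword match stands in for a classifier
    action: Action


# The rule set mirrors the risk assessment: legal risk (illegal goods, IP
# infringement) and policy or reputational risk (fraud, restricted items).
RULES = [
    Rule("illegal-goods", frozenset({"counterfeit", "stolen"}), Action.BLOCK),
    Rule("ip-infringement", frozenset({"replica", "knockoff"}), Action.BLOCK),
    Rule("adult-items", frozenset({"adult"}), Action.RESTRICT),
    Rule("possible-fraud", frozenset({"guaranteed returns"}), Action.REVIEW),
]


def evaluate(listing_text: str) -> tuple[Action, str | None]:
    """Return the first triggered rule's action, or ALLOW if none match."""
    text = listing_text.lower()
    for rule in RULES:
        if any(keyword in text for keyword in rule.keywords):
            return rule.action, rule.name
    return Action.ALLOW, None


if __name__ == "__main__":
    print(evaluate("Counterfeit designer watch, great price"))  # BLOCK, illegal-goods
    print(evaluate("Handmade ceramic mug"))                     # ALLOW, None
```

The design point worth noting is that the rules are data, not code: when the risk assessment changes, the team updates the rule set rather than rewriting the enforcement logic.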

Each platform’s trust and safety program is unique, based on the particular business. Across organizations, trust and safety of the users or customers remains the focus of the rules. Absent discrimination or a violation of federal regulations (such as the FTC rules), though, enforcement is largely up to the company.

Twitter’s business allows people to communicate with one another about different issues in short bursts of information. Part news organization, part social media hub, part advertising platform, it’s supposed to be a town hall of sorts. Twitter’s trust and safety program would be the team that sets parameters based on risk to the platform and other users. Its goal should be to analyze risk and develop rules that protect the platform, investors, and, importantly, the trust of its users.

For instance, Twitter has a rule prohibiting “hateful conduct.” The rule provides, “[y]ou may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.” The rule goes on to explain how it applies and the consequences of violating it. Other social media platforms have similar rules to protect users and the platform.

The rule makes perfect sense and focuses on protecting the platform and its users. But Twitter’s enforcement of these rules has created some controversy. Perhaps most notable was the company’s decision to ban former President Donald Trump’s personal Twitter account over the concern that he might use the platform to incite further violence after the January 2021 Capitol riot. At the time, Trump was tweeting his views about the election results and that he wouldn’t attend Biden’s inauguration. Some articles have been critical of Twitter’s enforcement of the rules because it has continued to allow the account of the Office of the President of Russia in the midst of Russia’s war in Ukraine and permits accounts linked to the Taliban, while Trump remains banned. Other commentators have been supportive and see the trust and safety program as the only thing keeping wayward Twitter users from performing the equivalent of shouting “fire” in a crowded theater.

Musk says he wants Twitter to foster a “public platform that is maximally trusted and broadly inclusive.” What Musk is saying is that the trust and safety program shouldn’t be used to censor speech. He believes it’s broken. When the program is used that way, it harms user trust, which is the opposite of its purpose. His bid to buy Twitter, he believes, will unlock value and build trust that does not exist today within the platform. For Musk, fixing this basic ingredient comes before helping the company better monetize and increase revenue for shareholders through advertising, subscriptions, and the like.

Trust and safety programs involve difficult choices. In hindsight, people focus on individual enforcement actions, which are the result of applying decisions about risk. It’s hard to strike the balance Musk wants because trust and safety rules are outputs of the risk-identification process, and each of us brings biases and viewpoints to how we identify and prioritize risks. Trust and safety enforcement reflects that risk process. It’s unfair to blame Twitter or any individual for the outcome of a risk process and a trust and safety program.

Musk is buying the company, and if he believes the program is not building trust with platform users and is perceived as unfair, he should change it. How he does so will be interesting to watch.


Ryan McConnell, Meagan Baker Thompson, and Matthew Boyden are lawyers at R. McConnell Group, a boutique law firm based in Houston and Austin that focuses on litigation and governance issues. Follow the firm on Twitter at @rmcconnellgroup.


From: Corporate Counsel