Policy Models

Robyn Caplan, a researcher at Data & Society, has proposed an insightful and useful framework that separates approaches to Trust and Safety into three buckets: 

  • Artisanal, where a small number of staff perform case-by-case governance
  • Industrial, where a large number of workers enforce rules made by a separate policy team
  • Community Reliant, where formal company-level policy is combined with volunteer moderators

Because Community Reliance and Industrialization are not mutually exclusive in practice, it is helpful to build on Caplan’s approach by reframing these three categories as four. Seen this way, the buckets fall along two binaries, Artisanal vs. Industrial and Centralized vs. Community Reliant, which can be used to describe approaches to Trust and Safety generally and policy specifically, as outlined below:

  • Centralized Artisanal: Define policy and enforcement guidelines to a limited degree, intentionally relying on internal teams to expand and adapt rules as circumstances dictate.
  • Centralized Industrial: Extensively define both policy and enforcement guidelines, often including market specific nuances, attempting to account for as many foreseeable issues as possible.
  • Community Reliant Artisanal: Define only the high-level policy and rely on trusted community members with a high level of information and context to make decisions.
  • Community Reliant Industrial: Extensively define red-line policies around fundamental safety and legal issues, and rely on the community to set localized policies and their own enforcement guidelines.

Centralized Artisanal Policy

Examples: Most Startups, Patreon, Vimeo

Centralized Artisanal policy approaches are the default for new platforms before they have developed a more explicit strategy. The nature of a new company almost always means that decision-making power lies with a small number of employees and that an elaborate set of rules does not yet exist.

The Centralized Artisanal model is generally the least expensive model to adopt and maintain because companies don’t have to spend energy (a) making a lot of rules and supporting an operations team to enforce them or (b) building a functioning community with the tools needed to self-regulate. This is an attractive model for platforms that don’t have a lot of issues or have a small user base.

This model also provides trust and safety teams with a lot of flexibility in how they structure their operations, allowing them to adapt their approach to the unique experience and skills of the moderation team. Some Centralized Artisanal teams invest a lot of time in defining policies in great detail, while others may stick with a more basic set of policies and rely on experienced team members to make appropriate moderation decisions based on the knowledge they’ve accrued about how users behave on the platform. 

Platforms using Centralized Artisanal models usually operate at a smaller scale than platforms that use Industrial models. Frequently, the teams writing and enforcing policy are employees of the platform, all working within the same team or organization. This provides lots of opportunity for internal discussion and team-based decisions on individual cases. Behind the scenes, this might look like a weekly “edge cases” meeting where moderators and policy teams convene to discuss content that was difficult to enforce on with the existing set of rules, with the goal of clarifying the policy.

This small scale also means that the people who write and enforce the policy are often generalists rather than specialists. They must be fluent in all policy areas, from pornography and hate speech to impersonation and regulated goods, because their small team is expected to cover all abuse areas. 

Community Reliant Artisanal Policy

Examples: Wikipedia, Goodreads, Substack

Community Reliant Artisanal policy models rely on community-written rules, feedback, and volunteer moderators. Platforms using this model largely depend on individual moderators or community action to enforce good conduct, and usually do not have detailed centrally-defined guidelines. Rather, these platforms have a general set of top-level policies that govern the entire platform (e.g. “Be respectful”), and community moderators then work together to interpret those policies into specific moderation decisions (e.g., “Deleting someone’s work without permission is disrespectful, and therefore not allowed”). This approach to policy-making enables platform providers to directly involve the public in conversations about what is and isn’t acceptable content or behavior in their community. 

Enforcement in a Community Reliant Artisanal model often uses crowdsourced action systems such as talk-pages, downvoting, blocking, or flagging. Since mechanisms like these allow the community to take direct action on problematic content, moderation is often faster than in centralized models, where a dedicated reviewer must evaluate individual pieces of content. At the same time, since it relies on the wisdom of crowds, crowdsourced enforcement can suffer from popularity-contest dynamics: content is promoted or rendered invisible based on how much a community likes or dislikes it, rather than whether it’s untrue or harmful.  This becomes especially problematic if a particular kind of abuse is socially acceptable in a sub-community, leading to non-enforcement or even harassment. This means that, depending on the platform’s purpose and architecture, it may be harder for Community Reliant models to evaluate the performance and enforcement of existing policies or to prevent organized behavior and scaled abuse.
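
To make the mechanics concrete, here is a minimal Python sketch of threshold-based crowdsourced enforcement; the thresholds and field names are invented assumptions rather than any real platform’s implementation, and the comments note where the popularity-contest dynamic can creep in.

```python
# Minimal sketch of crowdsourced visibility: content is hidden once enough
# community members flag or downvote it. All thresholds and field names are
# illustrative assumptions, not any real platform's values.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    flags: int = 0       # "this breaks the rules" reports
    upvotes: int = 0
    downvotes: int = 0   # "I don't like this" signals

FLAG_THRESHOLD = 5     # hide after this many rule-violation flags
SCORE_THRESHOLD = -10  # hide when the net vote score drops this low

def is_visible(post: Post) -> bool:
    """Return True if the post should remain visible to the community."""
    if post.flags >= FLAG_THRESHOLD:
        return False  # the crowd has judged it a rule violation
    net_score = post.upvotes - post.downvotes
    # Popularity-contest risk: an unpopular-but-accurate post can sink below
    # SCORE_THRESHOLD, while popular-but-harmful content stays visible.
    return net_score > SCORE_THRESHOLD
```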

Centralized Industrial Policy

Examples: Facebook, YouTube

The Centralized Industrial approach involves large numbers of reviewers, often distributed across different locations, all working from centrally defined policy. With these large review teams, platforms using this model are often better positioned to provide around-the-clock support across many languages and abuse types than platforms using an Artisanal approach. However, it can be hard to keep large review teams on the same page, especially if policies need to change frequently to adapt to evolving current events or user behaviors. Across teams of this size there is also inevitably a wide range of personal and cultural perspectives on how abuse should be handled. As a result, platforms often write much more detailed policies for these large review teams; extensive enforcement guidelines are regularly used to ensure consistency and quality of reviews at scale.

Because of the sheer volume of information contained in these policies and enforcement guidelines, it can be difficult for reviewers to absorb and retain every detail across every policy. It is therefore common for both policy writers and reviewers to specialize in specific sets of policies. Policy teams and review teams are also often structured to specialize in language- or country-specific policies. 

Platforms using the Centralized Industrial approach are often grappling with a very large amount of user-generated content; with so much content to get through, these platforms usually explore automated policy enforcement systems alongside their human review teams. Automated systems are, in general, faster than human review and can detect questionable content before it gets reported, but typically cannot perform nuanced or contextual review. Policy written for machines therefore tends to be much simpler than that written for humans. 
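
As a rough illustration of why policy written for machines tends to be simpler, the sketch below applies a blunt automated rule, acting on its own only in near-certain cases and routing anything ambiguous to a human review queue. The scoring function, thresholds, and queue are assumptions made for illustration, not a description of any platform’s actual system.

```python
# Sketch of automated pre-screening alongside human review. The scoring
# function, thresholds, and queue are illustrative assumptions.

from typing import Callable, List

AUTO_REMOVE_THRESHOLD = 0.97   # the machine acts alone only on near-certain cases
HUMAN_REVIEW_THRESHOLD = 0.60  # anything ambiguous goes to a person

def route_content(text: str,
                  score_violation: Callable[[str], float],
                  human_queue: List[str]) -> str:
    """Apply a deliberately simple machine policy; defer nuance to humans."""
    score = score_violation(text)  # e.g., estimated probability of a violation
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"
    if score >= HUMAN_REVIEW_THRESHOLD:
        human_queue.append(text)  # a reviewer applies the fuller, contextual policy
        return "queued_for_human_review"
    return "no_action"
```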

Creating scalable policy for an Industrial model involves finding a workable compromise between ease of enforcement and the need for more extensive, nuanced review. The Artisanal approach often reappears within the Centralized Industrial model; for example, many T&S teams use an Industrial process with a relatively simple policy to evaluate whether someone is making a threat of violence, and then a more detailed Artisanal process to evaluate whether the threat is credible or imminent. High-profile incidents are also often evaluated using the Artisanal approach because of the potential costs of an error.
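
One hypothetical way to structure that two-pass workflow is sketched below: a cheap Industrial check runs on every report, and only reports that trip it are escalated to an Artisanal queue where a specialist judges credibility and imminence. The keyword heuristic, labels, and queue are invented for illustration.

```python
# Hypothetical two-pass triage: a fast Industrial check on every report, with
# flagged cases escalated to a slower Artisanal queue for specialist review.
# The keyword heuristic and labels are invented for illustration.

from collections import deque

artisanal_queue: deque = deque()  # cases awaiting detailed specialist assessment

def industrial_check(text: str) -> bool:
    """Cheap, scalable rule: does this look like a threat of violence at all?"""
    phrases = ("kill you", "hurt you", "shoot up")  # toy heuristic only
    return any(phrase in text.lower() for phrase in phrases)

def triage(report_id: str, text: str) -> str:
    if not industrial_check(text):
        return "closed_no_violation"
    # Escalate: a specialist weighs context the simple rule cannot capture
    # and decides whether the threat is credible or imminent.
    artisanal_queue.append(report_id)
    return "escalated_to_specialist"
```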

Community Reliant Industrial Policy

Examples: Reddit, Airbnb, Twitch

Community Reliant Industrial policy models often operate as a community-of-communities. The platform itself will create a general set of relatively permissive policies that govern the entire platform and will enforce those in ways that often mirror a Centralized Industrial approach, including moderation by trust and safety professionals using enforcement guidelines, strike systems, etc. Then, within those permissive policy boundaries set by the platform’s moderation team, people are free to construct spaces with further, and often much stricter, rules as a way to create particular kinds of environments. 

In this model, the platform often ends up devoting significant trust and safety resources to “moderating the moderators” to ensure community members are doing their part in the spaces they control. Often, this means meting out consequences to community moderators if the platform’s top-level policies are consistently violated within their spaces (e.g., banning a subreddit or removing a rental listing).
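
A minimal sketch of how such a community-level strike system might be tracked is shown below; the strike thresholds and consequence names are assumptions, not any platform’s actual rules.

```python
# Sketch of a community-level strike system: violations of the platform's
# top-level policy inside a community accumulate strikes, with escalating
# consequences for that community. Thresholds and labels are illustrative.

from collections import defaultdict
from typing import Dict

strikes: Dict[str, int] = defaultdict(int)

WARN_AT = 1        # notify the community's moderators
QUARANTINE_AT = 3  # e.g., reduced visibility or interstitial warnings
REMOVE_AT = 5      # e.g., banning the subreddit or delisting the rental

def record_violation(community_id: str) -> str:
    """Record one top-level policy violation and return the consequence."""
    strikes[community_id] += 1
    count = strikes[community_id]
    if count >= REMOVE_AT:
        return "remove_community"
    if count >= QUARANTINE_AT:
        return "quarantine_community"
    if count >= WARN_AT:
        return "warn_moderators"
    return "no_action"
```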

A version of this model is also common among auction, sharing-economy, and event companies, since the real-world interaction inherent to these platforms is often outside of the companies’ direct knowledge or ability to control.  Within this subset of platforms, user ratings and reviews are commonly used to supplement user complaints as a basis for policy enforcement, since they are often the only substantial information the company has.
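
As a rough illustration of how ratings can supplement complaints as an enforcement signal, the sketch below flags a listing for human review when either signal looks bad; the thresholds and rating scale are invented for illustration.

```python
# Sketch of combining complaints and ratings into a review trigger for
# listings the platform cannot directly observe. Thresholds are illustrative.

from typing import List

def needs_review(complaint_count: int, ratings: List[int]) -> bool:
    """Flag a listing for human review when either signal looks bad."""
    if complaint_count >= 3:
        return True
    if len(ratings) >= 5:  # require enough ratings to be meaningful
        average = sum(ratings) / len(ratings)
        if average <= 2.5:  # persistently low on a 1-5 scale
            return True
    return False
```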