Policy Models

Robyn Caplan, a researcher at Data & Society, has proposed a useful framework that separates approaches to policy development for content moderation into three buckets. Although Caplan’s framework focuses largely on user-generated content (UGC), these approaches are also relevant to moderating bad actors and their behavior:

  • Artisanal, where a small number of staff perform case-by-case governance;
  • Industrial, where a large number of workers enforce rules made by a separate policy team;
  • Community Reliant, where formal company-level policy is combined with volunteer moderators.

Because Community Reliance and Industrialization are not mutually exclusive in practice, it is helpful to build on Caplan’s approach by reframing these three separate categories into four. In this way the buckets can be seen as part of two binaries—Artisanal vs. Industrial and Centralized vs. Community Reliant—which can be used to describe approaches to Trust and Safety generally and policy specifically, as illustrated below.

  • Centralized Artisanal: Define policy and enforcement guidelines to a limited degree, intentionally relying on internal teams to expand and adapt rules as circumstances dictate.
  • Centralized Industrial: Extensively define both policy and enforcement guidelines, often including market-specific nuances, attempting to account for as many foreseeable issues as possible.
  • Community Reliant Artisanal: Define only the high-level policy and rely on trusted community members with a high level of information and context to make decisions.
  • Community Reliant Industrial: Extensively define red-line policies around fundamental safety and legal issues, and rely on the community to set localized policies and their own enforcement guidelines.

Centralized Artisanal Policy

Examples: Most Early Stage Companies, Patreon, Vimeo

Centralized Artisanal policy approaches are the default for new platforms before they have developed a more explicit strategy. The nature of a new company almost always means that decision making power lies with a small number of employees and that an extensive set of rules does not yet exist.

The Centralized Artisanal model is generally the least expensive model to adopt and maintain because companies don’t have to spend energy (a) making a lot of rules and supporting an operations team to enforce them or (b) building a functioning community with the tools needed to self-regulate. This is an attractive model for platforms that don’t have a lot of issues or have a small user base.

This model also provides trust and safety teams with a lot of flexibility in how they structure their operations, allowing them to adapt their approach to the unique experience and skills of the moderation team. Some Centralized Artisanal teams invest a lot of time in defining policies in great detail, while others may stick with a more basic set of policies and rely on experienced team members to make appropriate moderation decisions based on the knowledge they’ve accrued about how users behave on the platform. 

Platforms using Centralized Artisanal models usually operate at a smaller scale than platforms that use Industrial models. Frequently, the teams writing and enforcing policy are employees of the platform, all working within the same team or organization. This provides lots of opportunity for internal discussion and team-based decisions on individual cases. Behind the scenes, this might look like a weekly “edge cases” meeting where moderators and policy teams convene to discuss content that was difficult to evaluate under the existing set of rules, with the goal of clarifying the policy.

This small scale also means that the people who write and enforce the policy are often generalists rather than specialists. They must be fluent in all policy areas, from pornography and hate speech to impersonation and regulated goods, because their small team is expected to cover all abuse areas. 

Community Reliant Artisanal Policy

Examples: Wikipedia, Goodreads, Substack

Community Reliant Artisanal policy models rely on community-written rules, feedback, and volunteer moderators. Platforms using this model largely depend on individual moderators or community action to enforce good conduct, and usually do not have detailed centrally-defined guidelines. Rather, these platforms have a general set of top-level policies that govern the entire platform (e.g., “Be respectful”), and community moderators then work together to interpret those policies into specific moderation decisions (e.g., “Deleting someone’s work without permission is disrespectful, and therefore not allowed”). This approach to policy-making enables platform providers to directly involve the public in conversations about what is and isn’t acceptable content or behavior in their community. 

Enforcement in a Community Reliant Artisanal model often uses crowdsourced action systems such as talk-pages, downvoting, blocking, or flagging. Since mechanisms like these allow the community to take direct action on problematic content, moderation is often faster than in centralized models, where a dedicated reviewer must evaluate individual pieces of content. At the same time, since it relies on the wisdom of crowds, crowdsourced enforcement can suffer from popularity-contest dynamics: content is promoted or rendered invisible based on how much a community likes or dislikes it, rather than whether it’s untrue or harmful. This becomes especially problematic if a particular kind of abuse is socially acceptable in a sub-community, leading to non-enforcement or even harassment. This means that, depending on the platform’s purpose and architecture, it may be harder for Community Reliant models to evaluate the performance and enforcement of existing policies or to prevent organized behavior and scaled abuse.
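
To make these crowdsourced mechanisms concrete, below is a minimal sketch (in Python) of a hypothetical flag-and-vote rule that hides content once community signals cross configured thresholds. The threshold values and field names are illustrative assumptions, not any particular platform’s implementation, and the sketch also shows where the popularity-contest dynamic creeps in.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds -- real platforms tune values like these per community.
FLAG_THRESHOLD = 5     # distinct users flagging the content
SCORE_THRESHOLD = -10  # net upvotes minus downvotes

@dataclass
class Post:
    post_id: str
    upvotes: int = 0
    downvotes: int = 0
    flagged_by: set = field(default_factory=set)

def community_hides_post(post: Post) -> bool:
    """Return True if crowdsourced signals alone would hide the post.

    Note the popularity-contest risk described above: an unpopular but
    accurate post can be hidden, and a popular but harmful one kept up.
    """
    heavily_flagged = len(post.flagged_by) >= FLAG_THRESHOLD
    heavily_downvoted = (post.upvotes - post.downvotes) <= SCORE_THRESHOLD
    return heavily_flagged or heavily_downvoted

if __name__ == "__main__":
    post = Post(post_id="abc123", upvotes=2, downvotes=15)
    print(community_hides_post(post))  # True: net score of -13 crosses the threshold
```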

Centralized Industrial Policy

Examples: Facebook, YouTube

The Centralized Industrial approach involves large numbers of reviewers, often distributed across different locations, guided by a centralized approach. With these large review teams, platforms using this model are often better-positioned to provide around-the-clock support across many languages and abuse types than platforms using an Artisanal approach. However, it can be hard to keep large review teams on the same page, especially if policies need to change frequently to adapt to evolving current events or user behaviors. There is inevitably a wide range of personal and cultural perspectives on how enforcement guidelines should be interpreted across teams of this size as well. As a result, platforms often write much more detailed policies for these large review teams; extensive enforcement guidelines are regularly used to ensure consistency and quality of reviews at scale.

Because of the sheer volume of information contained in these policies and enforcement guidelines, it can be difficult for reviewers to absorb and retain every detail across every policy. It is therefore common for both policy writers and reviewers to specialize in specific sets of policies. Policy teams and review teams are also often structured to specialize in language- or country-specific policies. 

Platforms using the Centralized Industrial approach are often grappling with a very large amount of user-generated content; with so much content to review, these platforms usually explore automated policy enforcement systems alongside their human review teams. Automated systems are, in general, faster than human review and can detect questionable content before it gets reported, but typically cannot perform nuanced or contextual review. Policy written for machines therefore tends to be much simpler than that written for humans. 
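
As an illustration of why policy written for machines tends to be simpler, the hedged sketch below acts automatically only on very high-confidence signals and routes everything ambiguous to a human review queue. The score source, thresholds, and queue names are assumptions made for the example.

```python
# Sketch only: `classifier_score` stands in for whatever automated signal a
# platform has (hash match, keyword hit, ML model score). Thresholds are
# illustrative assumptions, not a real platform's values.

AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.60  # anything ambiguous goes to a person

def route_content(classifier_score: float) -> str:
    """Machine policy: simple, unambiguous thresholds with no context."""
    if classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if classifier_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review_queue"  # nuanced, contextual review happens here
    return "no_action"

print(route_content(0.75))  # "human_review_queue"
```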

Creating scalable policy for an Industrial model involves finding a good compromise between ease of enforcement and more extensive and nuanced reviews. The Artisanal approach often makes a re-appearance within the Centralized Industrial model; for example, many T&S teams use an Industrial process with a relatively simple policy to evaluate if someone is making a threat of violence, and then a more detailed Artisanal process to evaluate if the threat is credible or imminent. High-profile (and, comparatively, lower volume) incidents are also often evaluated using the Artisanal approach due to the potential costs of an error.
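
The two-tier pattern described above might be sketched as follows: a deliberately simple Industrial-style check decides whether something looks like a threat at all, and only positive hits are escalated to a smaller specialist (Artisanal) review of credibility and imminence. The keyword check and routing labels are hypothetical stand-ins.

```python
def industrial_check_is_threat(text: str) -> bool:
    """Tier 1 (Industrial): a deliberately simple, high-volume rule.

    A hypothetical phrase check standing in for a mechanical enforcement
    guideline applied by a large review team or an automated system.
    """
    threat_phrases = ("i will kill", "i'm going to hurt", "you will pay for this")
    lowered = text.lower()
    return any(phrase in lowered for phrase in threat_phrases)

def route_report(text: str) -> str:
    """Escalate only flagged items to the slower, contextual Artisanal tier."""
    if not industrial_check_is_threat(text):
        return "close_report"
    # Tier 2 (Artisanal): a small specialist team weighs credibility, imminence,
    # and off-platform context before deciding what action to take.
    return "specialist_credibility_review"

print(route_report("See you at the game, you will pay for this"))  # escalated to tier 2
```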

Community Reliant Industrial Policy

Examples: Reddit, Airbnb, Twitch

Community Reliant Industrial policy models often operate as a community-of-communities. The platform itself will create a general set of relatively permissive policies that govern the entire platform and will enforce those in ways that often mirror a Centralized Industrial approach, including moderation by trust and safety professionals using enforcement guidelines, strike systems, etc. Then, within those permissive policy boundaries set by the platform’s moderation team, people are free to construct spaces with further, and often much stricter, rules as a way to create particular kinds of environments. 
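
One way to picture this community-of-communities structure is as two layers of rules evaluated in order: the platform’s red-line policies first, then whatever stricter rules a given space has added. The sketch below is a simplified assumption about how that layering could be modeled, not any specific platform’s data model.

```python
# Layered policy sketch. Rule sets and community names are illustrative assumptions.

PLATFORM_POLICY = {"csam", "credible_violent_threat", "spam"}  # red lines, centrally enforced

COMMUNITY_RULES = {
    "r/knitting_example": {"off_topic", "self_promotion"},  # stricter local rules
    "r/anything_goes_example": set(),                       # adds nothing beyond platform policy
}

def evaluate(community: str, violation_labels: set[str]) -> str:
    """Platform red lines win; otherwise defer to the community's own rules."""
    if violation_labels & PLATFORM_POLICY:
        return "platform_enforcement"        # T&S professionals, strike systems, etc.
    if violation_labels & COMMUNITY_RULES.get(community, set()):
        return "community_moderator_action"  # volunteer moderators enforce local rules
    return "allowed"

print(evaluate("r/knitting_example", {"self_promotion"}))  # community_moderator_action
```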

In this model, the platform often ends up devoting significant trust and safety resources to “moderating the moderators” to ensure community members are doing their part in the spaces they control. Often, this means meting out consequences to community moderators if the platform’s top-level policies are consistently violated within their spaces (e.g., banning a subreddit or removing a rental listing).

A version of this model is also common among auction, sharing-economy, and event companies, since the real-world interaction inherent to these platforms is often outside of the companies’ direct knowledge or ability to control. Within this subset of platforms, user ratings and reviews are commonly used to supplement user complaints as a basis for policy enforcement, since they are often the only substantial information the company has.
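
For this subset of platforms, here is a hedged sketch of how such indirect signals might be combined: a listing is queued for human review when complaints or persistently low ratings cross assumed thresholds. All thresholds and field names are illustrative.

```python
def needs_review(avg_rating: float, rating_count: int, complaint_count: int) -> bool:
    """Queue a listing or host for human review based on indirect signals.

    Ratings stand in for real-world interactions the platform cannot observe
    directly; the thresholds here are assumptions, not real policy values.
    """
    enough_data = rating_count >= 5
    persistently_low = enough_data and avg_rating <= 3.0
    complained_about = complaint_count >= 2
    return persistently_low or complained_about

print(needs_review(avg_rating=2.6, rating_count=12, complaint_count=0))  # True
```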

For more examples of how different companies organize and present their policies, below are links to different companies’ community and policy guidelines:

Community Reliant Industrial

  • Airbnb
  • Eventbrite
  • Meetup
  • Reddit
  • Twitch
  • Discord
  • VRBO

Community Reliant Artisanal

  • Clubhouse
  • Quora
  • Telegram
  • Wikipedia

Centralized Artisanal

  • Patreon
  • Vimeo
  • VSCO
  • Every new startup

In-Depth Look: Product Counseling

Policy teams are also often responsible for providing policy counseling to product teams, both when improving existing products and when launching new ones. They support the respective product team’s strategic vision and enable successful and responsible launches. They also drive or contribute to product review processes to ensure that privacy, safety, integrity, and regional policy risks are prevented and mitigated. To preempt and mitigate such risks effectively, they ideally start engaging early in the product development lifecycle.

A significant part of their engagement in this context involves identifying policy factors that contribute to the overall risk of a product. It also includes assigning a risk level and weighing different risks against each other based on factors such as probability, severity, and reach/scale. Product Policy teams drive prevention and mitigation solutions based on a product’s risk assessment. While Product Policy teams are typically responsible for such counseling and risk management, this role could also be played by legal or other trust and safety teams (such as a new product evaluation team), depending on the scale and organizational structure of a company.
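
As a rough illustration of weighing probability, severity, and reach/scale against one another, the sketch below scores each factor and maps the combined score to a risk level. The scales, formula, and cutoffs are entirely illustrative assumptions; real product risk assessments are usually more qualitative and case-specific.

```python
# Illustrative-only risk scoring. Scales (1-5), formula, and cutoffs are assumptions.

def risk_score(probability: int, severity: int, reach: int) -> int:
    """Each factor is rated 1 (low) to 5 (high); a higher product means higher risk."""
    return probability * severity * reach

def risk_level(score: int) -> str:
    if score >= 60:
        return "high"    # e.g., likely, severe, and wide-reaching
    if score >= 20:
        return "medium"
    return "low"

# A severe but unlikely, narrow-reach harm vs. a moderate, very widespread one.
print(risk_level(risk_score(probability=1, severity=5, reach=2)))  # low
print(risk_level(risk_score(probability=3, severity=3, reach=5)))  # medium
```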

Key Principles
Below are the principles commonly used by policy teams to counsel product teams and assess product risks. Please note that this is not an exhaustive list.

Safety, Equity, and Privacy

  • Situational Harms: There should be enough guardrails in place to prevent situational harms that may arise if the product launches during a high-risk period or high-priority event (e.g., elections or times of civil unrest).
  • Fair Outcomes: Product outcomes should not lead to unintended biases.
  • Eligibility: High-risk products may not be made immediately available to high-risk users. For example, if a messaging product for children is being rolled out, there should be prevention mechanisms to limit access by potentially high-risk users in the context of child safety (see the sketch after this list).
  • Privacy and Regulation: Although most product teams will receive counsel from dedicated legal and privacy teams, Product Policy teams may also engage in this context, depending on the organizational structure.
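
Building on the Eligibility principle above, here is a minimal sketch of how access to a hypothetical high-risk feature (messaging on a children’s product) might be gated on user attributes. The attributes and rules are assumptions made for illustration, not a recommended policy.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    age: int
    account_age_days: int
    prior_safety_strikes: int

def can_use_messaging(user: User) -> bool:
    """Hypothetical eligibility gate for a messaging feature on a children's product."""
    if user.prior_safety_strikes > 0:
        return False  # block users with prior safety violations
    if user.age >= 18 and user.account_age_days < 30:
        return False  # hold back new adult accounts as a precaution
    return True

print(can_use_messaging(User("u1", age=35, account_age_days=3, prior_safety_strikes=0)))  # False
```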

Agency for Users

  • There should be appropriate and easily accessible reporting mechanisms for users to flag inappropriate content or behavior.
  • Users should be empowered with clear education and self-remediation options to manage unexpected content or behavior.
  • Users should have the ability to easily turn off or adjust a feature if they do not like it, or have the ability to block/mute/silence certain features or users.

Transparency and Fairness in Enforcement

  • Policies and guidelines should be transparent and easily accessible externally, to make compliance easy for users.
  • Users should receive clear messaging or notifications about whether and how they violated community standards while using the product, and what the implications are.
  • Users should have the option to disagree with and appeal any enforcement decisions impacting them.

Note: “Product” teams in this section refers to the teams that build or update core user-facing products, not integrity product teams.