Policy Development

What Is Policy and Why Does It Matter?

Policy

Policy is the set of rules and guidelines that a platform uses to govern the conduct of its users. It often, but not always, exists in two forms: an external document providing an overview of the company’s expectations of user behavior, and an internal document detailing exactly how and when to apply the policies in making specific decisions. How much the internal and external documents differ depends on the product, service, or platform.

Community Standards

Community standards (also called community guidelines, acceptable use policy, and other names) are the external or user-facing expressions of policy. Since their primary focus is on user education and setting expectations, they are frequently found in or linked to a platform’s Help Center. They are typically written in more accessible (but less precise) language than their internal counterparts. Due to the public nature of the document and the often upsetting nature of the subjects they discuss, they typically do not include specific examples, particularly visual examples.

Enforcement Guidelines

Enforcement guidelines (sometimes called operational guidelines) are extensions of the basic policies and include substantially more details so that they can be used to facilitate reasonably consistent, practical decision-making for reviewers (sometimes also referred to as moderators, agents, or administrators depending on the context). The intent of enforcement guidelines is to provide reviewers clear and specific instructions on the correct enforcement action to take against a piece of content. They often include detailed directions and flow charts called decision trees, and try to capture common issues as well as unusual situations. For example, if a set of Community Standards bans nudity, then the associated Enforcement Guidelines might contain details on exactly how much of which body parts are permitted or not permitted in uploaded content, and in which specific contexts. For additional information on enforcement guidelines, see the Content Moderation and Operations chapter.
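
To make the idea of a decision tree concrete, the sketch below shows how a small fragment of an enforcement guideline for a hypothetical “no nudity” rule might be expressed in code. The signal names, branch order, and enforcement actions are illustrative assumptions for this chapter, not any platform’s actual guidelines.

```python
from dataclasses import dataclass

# Hypothetical attributes a reviewer (or classifier) might record about a
# piece of content. The field names are illustrative, not a real schema.
@dataclass
class ContentSignals:
    contains_nudity: bool
    is_medical_or_educational: bool
    depicts_minor: bool

def enforcement_decision(signals: ContentSignals) -> str:
    """Walk a simplified decision tree for a hypothetical 'no nudity' rule."""
    if not signals.contains_nudity:
        return "no_action"
    if signals.depicts_minor:
        # Most severe branch: escalate rather than simply remove.
        return "remove_and_escalate"
    if signals.is_medical_or_educational:
        # Carve-out branch: allowed, but shown behind a warning screen.
        return "allow_with_warning_screen"
    return "remove"

print(enforcement_decision(ContentSignals(True, True, False)))  # allow_with_warning_screen
```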

Historically, internal enforcement guidelines have not been made public, although Facebook’s decision to publish their internal documentation in an attempt to be more transparent may herald a change in this norm. There are several reasons companies have typically chosen not to publish their enforcement guidelines:

  • Malicious actors can use detailed enforcement guidelines to work around policy restrictions. 
  • Enforcement guidelines can change frequently in response to new trends or high profile incidents, especially at companies operating at significant scale. Keeping a public-facing document updated through frequent changes is significantly more time-intensive than maintaining internal-only material.
  • Policies may be provisionally launched in an unfinished state to collect data about how they perform; teams may not want to draw public attention to such provisional policies before they’ve decided to formally adopt them. 
  • Public policy and government relations teams may be concerned about public reaction to enforcement guidelines because these documents inherently deal with sensitive topics, usually in extensive detail.
  • Legal departments may be concerned about a possible increase in litigation as a result of the publication of the much more specific language that enforcement guidelines contain. 

Even under the current norm of keeping enforcement guidelines confidential, it is not unusual for companies to release relevant details publicly in response to incidents where the reason for an action (or lack of action) is unclear, particularly when there is a high level of public interest.

Supporting Information

Supporting information is any information outside of the platform’s main policy that might affect how, when, or to whom a policy is applied. Supporting information most commonly takes the form of lists of groups (e.g., terrorist groups), terms (e.g., slurs), or prohibited goods (e.g., dangerous substances) that fit a particular definition used within a policy. For example, a platform’s policy may read “We prohibit groups and individuals promoting terrorism or hate groups”; that policy then uses supporting information like a government-issued designated terrorist organization list to determine which specific groups and individuals that policy affects. As groups are added to or removed from that government list, a group’s status on the platform will change even though the platform’s main policy has not changed. While most such supporting information is created based on government or official lists, or legal requirements, companies may also create their own lists to support policy enforcement.
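
As a rough illustration, the sketch below shows how such supporting information might be consulted at enforcement time: the written policy stays fixed while the underlying list is updated. The list entries, data structure, and function name are hypothetical.

```python
# Hypothetical supporting information: a set of designated organizations.
# In practice this might be derived from a government list and refreshed
# periodically; the entries below are placeholders, not real designations.
DESIGNATED_ORGANIZATIONS = {
    "example organization a",
    "example organization b",
}

def violates_dangerous_orgs_policy(mentioned_groups: list[str]) -> bool:
    """The written policy ('we prohibit groups and individuals promoting
    terrorism or hate groups') stays constant; enforcement outcomes shift
    as the supporting list is updated."""
    return any(group.lower() in DESIGNATED_ORGANIZATIONS for group in mentioned_groups)

print(violates_dangerous_orgs_policy(["Example Organization A"]))  # True
```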

Who Creates Policy?

Early versions of a platform’s policies are usually written by the same people who build the product: founders, early development teams, or early operations teams. These first efforts are typically quite simple, and may rely on precedents set by more mature companies about what is and is not allowed, as well as unwritten norms specific to the platform. As a platform grows, responsibility for policy usually transitions to the company’s legal team or a dedicated policy team. Platforms that are developed by the user community rather than a centralized corporate development team, such as Wikipedia, often craft their own rules through a consensus process with key stakeholders.

In-depth Look: Policies Beyond Community Standards

While this chapter is focused on creating and enforcing UGC policies, usually expressed as community standards, it is important to emphasize that products, services, and features may have their own unique acceptable use policies, service-specific policies, or product policies. Like community standards, these policies lay out a set of rules of what is and is not allowed (often building on the community standards). Platforms that serve as marketplaces for buyers and sellers may prohibit the sale of certain types of goods or services based on quality standards or the platform’s vision, for instance. Product policies may also include specific limits on who can use the product and how content is seen. For example: 

  • Some products have policies that only permit particular users, such as those with a minimum number of followers or demonstrated track record of good behavior, to use the product. This is often the case for products that are highly susceptible to abuse, such as live streaming, as it limits usage to a more trusted set of users. For similar reasons, some products (e.g., fundraising, shopping, or marketplace) may have policies requiring verification, external security assessments, or that users meet certain safety standards before being allowed to post, share, or publish. 
  • Some products have policies that govern what type of content is eligible to be promoted, recommended, or featured (e.g., policies for algorithmic ranking on feed). Likewise, some products may have policies governing what type of content should receive reduced visibility or be demoted. For example, Facebook has guidelines about the kinds of content that may receive reduced visibility. 
  • Some products have policies that govern how content may be targeted. In addition to having more stringent policies for what type of content is allowed in advertising, it is common for companies to also develop policies that restrict ads targeting certain demographics, or ads relating to sensitive information about a user, such as their health status. 

In larger companies, these policies may be developed and maintained by dedicated product policy teams or sub-teams that focus on a particular product, service, or feature, but in smaller and medium-sized companies, these types of specific policies may be created separately by legal or product managers.

Considerations When Creating Policies

Regardless of the exact approach taken, policy creators need to consider a significant number of influences; these are detailed in the sections below.

Platform Purpose, Architecture, Organizational Structure, and Business Goals

The purpose, architecture, organizational structure, and business goals of a platform fundamentally shape its policies as they create different sets of incentives, constraints, and vulnerabilities. As a result, different platforms may take very different stances on identical content due to differences in their purpose and goals.

These differences are often fairly intuitive, at least in their broad outlines, when comparing platforms that have very different purposes and designs. As concrete examples:

  • A child-focused multiplayer gaming company compared with an adult-oriented dating site;
  • A community created fan wiki compared with a centralized social network;
  • A sharing economy marketplace compared with a cloud file hosting service.

In the first case, the difference in audience and purpose of the two sites would nudge the companies towards different levels of restrictiveness in their content policy. In the second, the likely difference in both scale of operation and decision-making processes will tend to result in different levels of nuance and flexibility. In the third, the difference in business goals between a public marketplace and a private repository of information will naturally incentivize different stances on many types of content.

Platform Core Principles

Core principles of a platform play a foundational role in the policies that are adopted. Products and services that have adopted a principle of free expression are likely to have fewer restrictions on what type of content is disallowed than products and services that believe in a finely curated aesthetic or that are aimed at a specific audience. Reddit encourages expression and debate, while requiring its communities to respect privacy and safety. Meta lays out its values in its Community Standards as: authenticity, safety, privacy, dignity. Wikipedia’s content is governed by three core content policies: neutral point of view, verifiability, and no original research.

Legal Requirements

Legal requirements often shape policy significantly. Authorities at the local, federal, and international levels require online platforms to conduct their businesses in specific ways, under threat of a variety of possible regulatory penalties. In addition, platforms may put certain policies in place to avoid civil liability and the cost of potential lawsuits.

These laws may instruct a platform to take an action that is already built into their policy—for example, many countries have laws that require platforms to remove child sexual abuse material, which virtually all mainstream platforms already prohibit. Other times, a law may attempt to require a platform to take action against content or behavior that does not otherwise violate its policies—for example, a law may require platforms to remove pro-LGBTQ content that is otherwise allowed within the platform’s policies.

News Article: Google loses landmark ‘right to be forgotten’ case

Example: YouTube’s Ad Policies for Alcohol

As a result, platforms that operate in multiple jurisdictions must decide if their policies are applied globally or if (and how) they are adjusted based on local laws. In certain circumstances (often those with significant human rights or ethical implications) platforms may choose to risk violating a law by not making any adjustments to their policies. Laws and regulations vary widely between jurisdictions around the world; companies should always research them thoroughly in collaboration with their legal teams to identify which apply to them and how they will choose to interpret them within their company.

News Article: Iranian government blocks Facebook access

User Opinion

User opinion can also have an impact on how policy is developed. Certain behavior and content can create an unfriendly environment for users, which is likely to limit the size and growth of a user base. On the other hand, users may also reject a platform that is too restrictive for them, either because the content restrictions impede their use of the platform, or out of principle. Note that the user base isn’t the same as all of society, and user opinions will differ depending on the demographics of the platform’s user base. For example, a platform designed to support a particular religious or political group may have policies that align with the values of that group.

Some consumer-facing platforms have a purpose distinctly different from primarily encouraging social interaction among peers, and the personal opinions of users may not have a strong effect on their policies. For example, platforms that host only general-purpose, freely licensed, community-curated educational content for the benefit of the public often factor in user opinions only insofar as they help make progress towards the platform’s educational and charitable mission. In addition, the community-driven moderation processes of these platforms tend to confine the exchange of user opinions to those self-moderation processes.

For platforms that host community-moderated subspaces, user opinions may shape rules at the community level. Because user-moderators are empowered to set the standards for their own spaces, a varied landscape of rules develops organically across communities, effectively creating different policy regimes within the broader structure of the platform.

Business Partners’ Opinions

Business partners’ opinions can also significantly affect policy. Most platforms rely on other businesses to some extent—for revenue, services, or both. As a result, those third parties can exert influence over policy by threatening to reduce, pause, or end the relationship. A classic example of this is advertiser-friendly policies that ensure an ad isn’t placed next to controversial or brand-unsafe content.

News Article: YouTube takes ads off ‘anti-vax’ video channels (2019)

Third-party services like website hosting and cloud providers may also have their own rules for partners, including requirements to moderate content.

News Article: Cloudflare pulls support for The Daily Stormer, a white supremacist site

Practical Constraints

Practical constraints also exist when designing policies. If a policy cannot be feasibly implemented, it will have no real impact and so does not exist in any meaningful sense. Some complex or overly subjective policies can be extremely difficult to enforce fairly and consistently. Others can be prohibitively expensive to enforce or take too long to apply in time-sensitive situations. Policy designers may adjust policies to ensure that they can be enforced effectively.

Why Does Policy Change?

Policy development is a never-ending process of continuous revision and refinement. No matter what policies a platform starts out with, new considerations emerge constantly. What happens when a user behaves in a way that the policy didn’t anticipate? What happens when a new product is launched or an existing one is updated or redesigned? What happens when relevant laws or regulations change? What happens when users, advertisers, or investors react negatively to a policy outcome? These are a few of the most common factors that prompt policy to evolve over time.

As explained above in the “Who creates policy?” section, modern online service providers usually establish at least some conduct and content rules fairly early in the life of the platform. These typically start as a set of basic, high-level policies, frequently using boilerplate terms of service templates or mirroring the basic policies of an already established provider. Policies are then organically elaborated on in response to new circumstances, new products and features, growing volumes of complaints, the ways in which a service is used, or the company growing larger or more complex. Eventually, this leads most T&S teams to introduce enforcement guidelines.

Policy creators often find it necessary to sharpen a definition or refine the scope of a policy in the face of unanticipated results, user confusion, moderator inconsistency, or—frequently—all three. As a result, these enforcement guidelines often become extremely specific, even when the policies they illustrate seem self-explanatory at first glance, in an attempt to more precisely guide decisions and outcomes.

For example, consider a simple policy of “No nudity.” Below are just a handful of the questions that might surface when enforcing such a policy. (Warning: Links to examples include depictions of nudity.)

  • Does the designation of “nudity” require a body to be entirely unclothed?
  • What if the body is partially covered?
  • What if the body is fully covered but by a transparent material or body paint?
  • Does the “no nudity” policy extend to art?
  • How do you recognize that a piece of content is “art” in the first place?
  • Does it matter if the “art” is a photograph or in some other medium? (Example: Farnese Hercules, a nude sculpture)
  • What if the medium is non-photographic but hyperrealistic? (Example: L’Origine du monde, a painting by Courbet which shows nude genitals in detail)
  • What if content involving nudity also covers a major newsworthy or historical event?
  • Does the intention of a content creator matter in answering any or all of these questions? If it does, how can we reliably know the creator’s intentions?

Without specific answers to these sorts of questions, it is challenging for even small moderation teams to make consistent decisions, and for very large moderation teams it is practically impossible.

In answering these questions, every policy seeks to create a balance between “false positives” and “false negatives.” False positives, in this context, are cases where content violates the policy as currently written, but does not match the intuitive boundaries of the abuse the policy is meant to address. False negatives are cases where content seems like it should violate a particular policy, but in fact does not trigger the policy as currently written.

For example, a platform may prohibit nude female nipples in an attempt to discourage sexualized imagery, but that prohibition would also ensnare some photographs of breastfeeding—these non-sexual photographs would be considered “false positives.” Perhaps the platform would attempt to correct for those false positives by prohibiting nude female nipples except when breastfeeding. With this adjustment, the platform now allows images of people breastfeeding infants, but also allows images of people breastfeeding adults or animals, which are typically sexualized fetish images—these would be considered “false negatives.” Too many false positives may remove content that provides substantial social, artistic, or educational value. Too many false negatives may result in missing significant abuse. There is no universally correct balance between the two, simply a tradeoff between two different pitfalls.
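
The sketch below works through this tradeoff on a handful of invented, hand-labeled examples: rule v1 (“no nude female nipples”) produces a false positive on the breastfeeding photo, while rule v2 (adding the breastfeeding exception) trades it for a false negative on the fetish content. All data, labels, and rule logic are illustrative assumptions, not real policy language.

```python
# Invented, hand-labeled examples: "abusive" marks whether the content matches
# the abuse the policy is actually meant to address.
EXAMPLES = [
    {"nipple": True,  "breastfeeding": False, "recipient": None,     "abusive": True},   # sexualized image
    {"nipple": True,  "breastfeeding": True,  "recipient": "infant", "abusive": False},  # breastfeeding photo
    {"nipple": True,  "breastfeeding": True,  "recipient": "adult",  "abusive": True},   # fetish content
    {"nipple": False, "breastfeeding": False, "recipient": None,     "abusive": False},  # no nudity at all
]

def rule_v1(item):  # "nude female nipples are prohibited"
    return item["nipple"]

def rule_v2(item):  # "...except when breastfeeding"
    return item["nipple"] and not item["breastfeeding"]

def evaluate(rule):
    fp = sum(1 for e in EXAMPLES if rule(e) and not e["abusive"])  # removed, but not the targeted abuse
    fn = sum(1 for e in EXAMPLES if not rule(e) and e["abusive"])  # targeted abuse, but not removed
    return fp, fn

print("v1 (FP, FN):", evaluate(rule_v1))  # (1, 0): removes the breastfeeding photo
print("v2 (FP, FN):", evaluate(rule_v2))  # (0, 1): misses the fetish content
```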

Similarly, there is a tradeoff between the brevity and the specificity of a policy. Policies that are too short on detail will increase the number of both false positives and false negatives, since they will provide less guidance. At the same time, increasing specificity usually increases the length and complexity of the guidelines. This makes them harder to maintain and more challenging to learn and remember. It also constrains reviewers’ discretion when moderating new and unusual content, reducing the team’s collective ability to adapt to new issues.

Returning to the example above regarding the female nipple rule, the policy team might change the rule to state “nude female nipples are prohibited except when breastfeeding a human child.” The policy would then include information on how to assess the age of a child, the point at which a person is no longer considered a child, whether the breastfeeding clause requires the child to be actively latched to the breast, and so forth. Adding more details to the policy will likely help the policy and the moderation teams avoid the unintended consequences they’ve encountered so far, but will also make the policy much longer, more complex, and thus more challenging to enforce consistently.

These interlocking sets of inherent tensions play a significant role in nearly all policy changes. Tradeoffs that were once tolerable become unacceptable due to changes in company leadership, product design, scale, business conditions, law, and cultural expectations, requiring a constant renegotiation and rewriting of the rules.

Viewed through this lens, every request for a “policy exception” can be more clearly understood as either a request to reconsider the false positive and false negative balance in a policy or a request to add more specificity to that policy at the cost of reducing brevity. Once an exception is documented, it is no longer an exception: it’s simply an additional rule.

Case Study: Facebook attracts international attention when it removes a historic Vietnam War photo posted by the editor-in-chief of Norway’s biggest newspaper (2016)

Historical examples of content categories for which many platforms have adjusted their policies in this way include breastfeeding and medical guidance, both affected by nudity policies, and documentation of war crimes, affected by violence policies.

Case Study: Documenting police brutality (2007)

Policies are often updated over time to reflect shifting social attitudes and societal perspectives. These changes tend to be gradual and heavily discussed and debated both inside and outside of the trust and safety community. A recent example of this is the development or updating of “unwanted sexualization” policies by various online service providers following the rise of the “Me Too” movement and the increased public awareness and engagement on the issue.

Case Study: Reclaiming a hashtag (2020)

For many abuse types, users will actively attempt to push against, find loopholes in, or otherwise circumvent specific policies, thus requiring further iteration. For example, buying and selling links to boost a website’s ranking in Google Search was forbidden, so some users began offering guest posts to reputable websites, filling them with keywords and spammy links; since 2012, Google has issued warnings about the use of such link-heavy guest posts. In situations like this, policies may be forced to change repeatedly as bad actors find new strategies to circumvent policies.

Just as users adapt how they operate on platforms over time, platforms also change their capabilities and features. Product teams constantly launch new features, expand capabilities, and tweak existing user functionality to improve the user experience and make the platform more engaging. These changes often necessitate the revision of existing policies or the creation of new policies. 

Another company-initiated change that may prompt a revision of policies is a change in a platform’s mission or vision. A platform originally aimed at all users may narrow its mission to focus only on a specific subset of creators. This may necessitate a change in policies around what types of posts are allowed, with a stricter topical policy that disallows irrelevant posts on astronomy, cooking, or coffee-making, for example. Another example may be a general purpose education platform that evolves into an online tutoring platform. Such a platform may need new policies on who can provide tutoring services, what education level or certifications they need, and what they may charge. 

Finally, legal or regulatory regimes and changes may also compel policy teams to adjust policy or, in some instances, create locally or regionally specific policies. For example, Holocaust denial is illegal in certain circumstances in some countries, including Germany and France, but legal in many other areas, including the country where a service may be legally based. Therefore, a platform may choose not to allow this type of content in the countries where it is illegal by geo-blocking it (i.e., removing it from visibility within that particular country or region) under an illegal activities policy, or choose to adopt a global policy to prohibit denial or minimization of the Holocaust everywhere.
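
As a rough sketch of the geo-blocking option, the example below represents a jurisdiction-specific restriction as data keyed by country code, so the same content is hidden in some regions and visible in others. The structure, names, and country list are assumptions for illustration; a platform could instead adopt a single global prohibition.

```python
# Hypothetical mapping from a policy label to the jurisdictions where matching
# content is geo-blocked rather than removed globally. Structure and names are
# illustrative only.
GEO_BLOCKED_POLICIES = {
    "holocaust_denial": {"DE", "FR"},  # countries from the example above where it is illegal
}

def visible_in(policy_label: str, viewer_country: str) -> bool:
    """Return whether content matching a geo-blocked policy label should be
    shown to a viewer in the given country."""
    blocked_countries = GEO_BLOCKED_POLICIES.get(policy_label, set())
    return viewer_country not in blocked_countries

print(visible_in("holocaust_denial", "DE"))  # False: hidden in Germany
print(visible_in("holocaust_denial", "US"))  # True: visible elsewhere
```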

This chart summarizes the most common reasons policies change:

To improve effectiveness or enforceability:
  • Boost human reviewer accuracy
  • Respond to unanticipated product usage and user behavior
  • Clarify ambiguity
  • Improve detection capabilities
  • Streamline enforcement operations

To adapt to fundamental changes or reasoning:
  • Shift in societal expectations
  • Shift in legal/regulatory environment
  • Change in product mission/vision
  • Change in product capabilities

Why does policy change? A brief overview.

For a walkthrough on how policy can evolve in content moderation, check out this podcast episode from Radiolab on policy at Facebook: Post No Evil Redux | Radiolab. YouTube’s blog post, “On Policy Development at YouTube” is another useful source for understanding how the company thinks through policy evolution over time.

A Checklist for Making Policy Changes

In most cases, it is fairly obvious when community standards need to be adjusted, but in some cases, reviewing a checklist of questions is a useful exercise to determine if adjustments need to be made. For large, dedicated policy teams, it is increasingly common to conduct a regular policy audit, often on an annual basis, to determine whether policy changes are necessary. Below is a non-exhaustive set of questions that can be used to determine the need to adjust or evolve policies:

  • Has there been an increase in user complaints regarding the readability, comprehension, or interpretation of community standards?
  • Has there been an increase in moderator escalations of edge cases, or an uptick in questions related to whether content violates policy? Or has there been a drop in enforcement accuracy due to lack of clarity on the intent of the policy?
  • Has there been an emerging trend of abuse on the platform which is not fully covered in the current policy?
  • Have there been any new laws or regulations that necessitate changes to the community standards?
  • Have new features or capabilities been introduced that have led to new forms of content or unanticipated product usage?
  • Has there been a change to the product’s ethos, core mission, values, or principles?
  • Has there been a shift in social attitudes or expectations since the policy was created or last adjusted?

How Does Policy Change?

Given how frequently policy and enforcement guidelines need to be adjusted, creating a consistent and repeatable process for doing so is essential. Because this often requires a dedicated policy team, such a process is frequently not fully developed until the product or service reaches a high level of maturity. It is common to use a policy development template when devising new or revising existing policies and enforcement guidelines. Adelin Cai, Pinterest’s former Head of Policy and a co-founder of TSPA, has developed a Policy Development and Launch Checklist that provides a generalized guide for this process broken down into four phases: foundation and alignment, task management, training and communications, and launch day.