What Is Policy?
Policy is the set of rules and principles that a platform uses to govern the conduct of its users. It often, but not always, exists in two forms: an external document providing an overview of the company’s expectations of user behavior and an internal document detailing exactly how and when to apply the policies in making specific decisions. The degree of difference between internal and external documents can vary substantially on a product-by-product basis.
Community standards (also called community guidelines and any number of other names) are the external expression of policy. They are frequently found in, or interlinked with, a platform’s Help Center, since their primary focus is on user education and setting expectations. They are typically written in more accessible, but less precise, language than their internal counterparts. Due to the public nature of the document and the often upsetting nature of the subjects discussed, they typically do not include specific examples, particularly visual examples.
Enforcement guidelines (sometimes called operational guidelines) are the extension of the basic policies and include substantially more detail in order to facilitate reasonably consistent, practical decision-making. The intent of enforcement guidelines is to provide reviewers clear and specific instructions on the correct enforcement action to take against a piece of content. They often include detailed directions and flow charts called decision trees, and also try to capture unusual situations. For example, if a set of Community Standards ban nudity, then the associated Enforcement Guidelines might contain details on exactly how much of which body parts are permitted or not permitted in uploaded content, and in which specific contexts.
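A decision tree like the ones described above can be thought of as a series of ordered checks, each narrowing toward a final enforcement action. The sketch below is a hypothetical illustration only; the categories, contexts, and actions are invented for this example and do not reflect any real platform’s guidelines.

```python
# Hypothetical sketch of an enforcement decision tree for a "no nudity"
# policy. All fields and outcomes here are invented illustrations.
from dataclasses import dataclass

@dataclass
class Content:
    contains_nudity: bool
    is_medical_context: bool    # e.g., health education imagery
    is_artistic_context: bool   # e.g., classical sculpture

def enforcement_action(item: Content) -> str:
    """Walk the decision tree top to bottom and return an action."""
    if not item.contains_nudity:
        return "allow"
    if item.is_medical_context:
        return "allow"
    if item.is_artistic_context:
        return "age-restrict"
    return "remove"

print(enforcement_action(Content(True, False, True)))  # age-restrict
```

In practice, real decision trees are far larger and are paired with written definitions of each branch (what counts as "medical context," and so on), since the checks themselves are only as consistent as the definitions behind them.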
Historically, internal enforcement guidelines have not been made public, although Facebook’s recent decision to publish their internal documentation in an attempt to be more transparent may herald a change in this norm. There are a few reasons companies have typically chosen not to publish their enforcement guidelines:
- Malicious actors can use detailed enforcement guidelines to work around policy restrictions.
- Enforcement guidelines can change frequently in response to new trends or high profile incidents, especially at companies operating at significant scale. Keeping a public-facing document updated through frequent changes is significantly more time-intensive than maintaining internal-only material.
- Policies may be provisionally launched in an unfinished state to collect data about how they perform; teams may not want to draw public attention to such provisional policies before they’ve decided to formally adopt them.
- Public policy and government relations teams may be concerned about public reaction to enforcement guidelines because these documents inherently deal with sensitive topics, usually in extensive and dispassionate detail.
- Legal departments may be concerned about a possible increase in litigation as a result of the publication of the much more specific language that enforcement guidelines contain.
Even under the current norm of keeping enforcement guidelines confidential, it is not unusual for companies to release relevant details publicly in response to incidents where the reason for an action (or lack of action) is unclear, particularly when there is a high level of public interest.
Supporting information is any information outside of the platform’s main policy that might affect how, when, or to whom a policy is applied. Supporting information most commonly takes the form of lists of groups, terms, substances, etc. that fit a particular definition used within a policy. For example, a platform’s policy may read “We prohibit groups and individuals promoting terrorism or hate groups”; that policy then uses supporting information like a government-issued designated terrorist organization list to determine which specific groups and individuals that policy affects. As groups are added to or removed from that government list, the group’s status on the platform will change even though the platform’s main policy has not changed.
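The separation between a stable policy and changeable supporting information can be sketched as follows. The list contents and function names here are hypothetical, standing in for something like an externally maintained designated-organizations list.

```python
# Hypothetical sketch: the policy logic is fixed, while the supporting
# information (a designated-organizations list) changes independently.
designated_organizations = {"Example Group A", "Example Group B"}

def violates_dangerous_orgs_policy(group_name: str) -> bool:
    # The policy itself ("we prohibit designated groups") never changes;
    # only the supporting list it references does.
    return group_name in designated_organizations

assert violates_dangerous_orgs_policy("Example Group A")

# When the external list is updated, enforcement outcomes change with
# no edit to the policy logic itself.
designated_organizations.discard("Example Group A")
assert not violates_dangerous_orgs_policy("Example Group A")
```

This is why a group’s status on a platform can change overnight even though the platform’s published policy text has not moved at all.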
Who Creates Policy?
Early versions of a platform’s policies are usually written by the same people who built the platform: founders, early development teams, or early operations teams. These first efforts are typically quite simple, and may rely on the precedents set by more mature companies on what is and is not allowed, as well as unwritten norms specific to the platform. As a platform grows, responsibility for policy usually transitions to the company’s legal team or a dedicated policy team. Platforms that are developed by the user community rather than a centralized corporate development team often craft their own rules through a consensus process with key stakeholders.
Considerations While Creating Policies
Regardless of the exact approach taken, policy creators need to consider a significant number of influences:
Purpose, Architecture, Organization, and Business Goals
The purpose, architecture, organizational structure, and business goals of a platform fundamentally shape its policies as they create different sets of incentives, constraints, and vulnerabilities. As a result, different platforms may take very different stances on identical content due to differences in their purpose and goals.
These differences are often fairly intuitive, at least in their broad outlines, when comparing platforms that have very different purposes and designs. As concrete examples:
- A child-focused multiplayer gaming company compared with an adult-oriented dating site
- A community created fan wiki compared with a centralized social network
- A sharing economy marketplace compared with a cloud file hosting service
In the first case, the difference in audience and purpose of the two sites would nudge the companies towards different levels of restrictiveness in their content policy. In the second, the likely difference in both scale-of-operation and decision-making processes will tend to result in different levels of nuance and flexibility. In the third, the difference in business goals between a public marketplace and a private repository of information will naturally incentivize different stances on many types of content.
Legal Requirements
Legal requirements often shape policy significantly. Legal authorities at the local, federal, and international level require online platforms to conduct their business in specific ways, under threat of a variety of possible regulatory penalties. In addition, platforms may put policies in place to avoid civil liability and the cost of potential lawsuits.
These laws may instruct a platform to take an action that is already built into their policy—for example, many countries have laws that require platforms to remove child sexual abuse material, which virtually all mainstream platforms already prohibit. Other times, a law may attempt to require a platform to take action against content or behavior that does not otherwise violate its policies—for example, a law may require platforms to remove pro-LGBTQ content that is otherwise allowed within the platform’s policies.
News Article: Google loses landmark ‘right to be forgotten’ case
As a result, platforms that operate in multiple jurisdictions must decide if their policies are applied globally or if (and how) they are adjusted based on local laws. In certain circumstances (often those with significant human rights or ethical implications) platforms may choose to risk violating a law by not making any adjustments to their policies. Laws and regulations vary widely between jurisdictions around the world; companies should always research them thoroughly in collaboration with their legal team.
News Article: Iranian government blocks Facebook access
User Opinion
User opinion can also have an impact on how policy is developed. Certain behavior and content can create an unfriendly environment for users and are likely to limit the size and growth of a user base. Similarly, users may also reject a platform that is too restrictive for them, either because the content restrictions impede their use of the platform, or out of principle. Note that the user base isn’t the same as all of society, and user opinions will differ depending on the demographics of the platform’s user base; for example, a platform designed to support a particular religious or political group may have policies that align with the values of that group.
Some consumer-facing platforms have a purpose other than primarily encouraging social interaction among peers, and the personal opinions of users may not strongly affect their policies. For example, platforms that host freely licensed, community-curated educational content for the benefit of the public may weigh user opinions only insofar as they advance the platform’s educational and charitable mission. In addition, the community-led moderation processes of these platforms channel the exchange of opinions into the community’s own self-governance.
For platforms that host community-moderated sub-spaces, user opinions may shape policy differently in each space. Because user-moderators are empowered to set the standards for their own spaces, each subspace community will organically develop its own rules. This creates, effectively, different policy regimes within the broader structure of the platform.
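One way to picture layered policy regimes is as a platform-wide baseline with per-community overlays. The rule names and community names below are invented for illustration.

```python
# Hypothetical sketch: platform-wide baseline rules plus
# community-specific overlays set by user-moderators.
platform_rules = {"no_spam", "no_illegal_content"}

community_rules = {
    "example_knitting_forum": {"on_topic_posts_only"},
    "example_debate_forum": {"civility_required", "sources_required"},
}

def effective_rules(community: str) -> set[str]:
    # A community's effective policy is the platform baseline plus
    # whatever its own moderators have layered on top.
    return platform_rules | community_rules.get(community, set())

print(sorted(effective_rules("example_knitting_forum")))
```

Under this structure, the same post can be acceptable in one community and removable in another, while still being subject to the platform-wide baseline everywhere.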
Business Partners’ Opinions
Business partners’ opinions can also significantly affect policy. Most platforms rely on other businesses to some extent—for revenue, services, or both. As a result, those third parties can exert influence over policy by threatening to reduce, pause, or end the relationship. A classic example of this is advertiser-friendly policies that ensure an ad isn’t placed next to controversial or brand-unsafe content.
News Article: YouTube takes ads off ‘anti-vax’ video channels (2019)
Third-party services like website hosting and cloud services may also have their own rules for partners, including the requirement for moderating content.
Practical Constraints
Practical constraints also exist when designing policies. If a policy cannot be feasibly implemented, it will have no real impact and so does not exist in any meaningful sense. Some complex or overly subjective policies are extremely difficult to enforce fairly and consistently. Others are prohibitively expensive to enforce, or too slow to apply in time-sensitive situations. Policy designers may adjust policies to ensure that they can be enforced effectively.
How Does Policy Change?
Policy development is a never-ending process of constant revision and refinement. No matter what policies a platform starts out with, new considerations emerge constantly – What happens when a user behaves in a way that the policy didn’t anticipate? What happens when a new product is launched or an existing one is updated or redesigned? What happens when relevant laws or regulations change? What happens when users, advertisers, or investors react negatively to a policy outcome? These are a few of the most common factors that prompt policy to evolve over time and, as a result, the policies at any large company are in a state of more or less constant change.
As explained above in “Who Creates Policy?”, modern online service providers usually establish at least some conduct and content rules fairly early in the life of the platform. These typically start as a set of basic, high-level policies, frequently using boilerplate terms of service templates or mirroring the basic policies of an already established provider. Policies then grow organically in response to new circumstances and rising volumes of complaints, becoming more extensive and complex. Eventually, this leads most T&S teams to introduce enforcement guidelines.
Policy creators often find it necessary to sharpen a definition or refine the scope of a policy in the face of unanticipated results, user confusion, moderator inconsistency, or—frequently—all three. As a result, these enforcement guidelines often become extremely specific, even when the policies they illustrate seem self-explanatory at first glance, in an attempt to more precisely guide decisions and outcomes.
For example, consider a simple policy of “no nudity.” Below are just a handful of the questions that might surface when enforcing such a policy (warning: links to examples include depictions of nudity):
- Does the designation of “nudity” require a body to be entirely unclothed?
- What if the body is partially covered?
- What if the body is fully covered but by a transparent material or body paint?
- Does the “no nudity” policy extend to art?
- How do you recognize that a piece of content is “art” in the first place?
- Does it matter if the “art” is a photograph or in some other medium? (Example: Farnese Hercules, a nude sculpture)
- What if the medium is non-photographic but hyperrealistic? (Example: L’Origine du monde, a painting by Courbet which shows nude genitals in detail)
- What if content involving nudity also covers a major newsworthy or historical event?
- Does the intention of a content creator matter in answering any or all of these questions? If it does, how can we reliably know the creator’s intentions?
Without specific answers to these sorts of questions, it is challenging for even small moderation teams to make consistent decisions, and for very large moderation teams it is practically impossible.
In answering these questions, every policy seeks to create a balance between “false positives” and “false negatives.” False positives, in this context, are cases where content violates the policy as currently written, but does not match the intuitive boundaries of the abuse the policy is meant to address. False negatives are cases where content seems like it should violate a particular policy, but in fact does not trigger the policy as currently written.
For example, a platform may prohibit nude female nipples in an attempt to discourage sexualized imagery, but that prohibition would also ensnare some photographs of breastfeeding – these non-sexual photographs would be considered “false positives.” Perhaps the platform would attempt to correct for those false positives by prohibiting nude female nipples except when breastfeeding. With this adjustment, the platform now allows images of people breastfeeding infants, but also allows images of people breastfeeding adults or animals, which are typically sexualized fetish images – these would be considered “false negatives.” Too many false positives may remove content that provides substantial social, artistic, or educational value. Too many false negatives may result in missing significant abuse. There is no universally correct balance between the two, simply a tradeoff between two different pitfalls.
Similarly, there is a tradeoff between the brevity and the specificity of a policy. Policies that are too short on detail will increase the number of both false positives and false negatives, since they will provide less guidance. At the same time, increasing specificity usually increases the length and complexity of the guidelines. This makes them harder to maintain and more challenging to learn and remember. It also constrains reviewers’ discretion when moderating new and unusual content, reducing the team’s collective ability to adapt to new issues.
Let’s return to the team above struggling with their female nipple rule. They might change the rule to “nude female nipples are prohibited except when breastfeeding a human child” and include information on how to assess the age of a child, the point at which a person is no longer considered a child, whether the breastfeeding clause requires the child to be actively latched to the breast, and so forth. Adding more details to the policy will likely help them avoid the unintended consequences they’ve encountered so far, but will also make the policy much longer, more complex, and thus more challenging to enforce consistently.
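The progression described above can be sketched as successive revisions of the same rule. The field names below are invented for illustration; each version trades one kind of error for another, and the final version is more precise but visibly longer and harder to apply.

```python
# Hypothetical sketch of how documented exceptions accrete into rules.
def v1_violates(image: dict) -> bool:
    # Original rule: ensnares breastfeeding photos (false positives).
    return image["nude_female_nipple"]

def v2_violates(image: dict) -> bool:
    # Breastfeeding exception added: now misses sexualized fetish
    # content involving adults or animals (false negatives).
    return image["nude_female_nipple"] and not image["breastfeeding"]

def v3_violates(image: dict) -> bool:
    # Narrower exception: more precise, but longer and harder to
    # enforce consistently (reviewers must now assess the recipient).
    return (image["nude_female_nipple"]
            and not (image["breastfeeding"]
                     and image["recipient_is_human_child"]))

photo = {"nude_female_nipple": True, "breastfeeding": True,
         "recipient_is_human_child": True}
print(v1_violates(photo), v2_violates(photo), v3_violates(photo))
# True False False
```

Each revision is exactly the dynamic described in the text: a documented exception becomes an additional condition in the rule, and the rule’s complexity grows with every correction.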
These interlocking sets of inherent tensions play a significant role in nearly all policy changes. Tradeoffs that were once tolerable become unacceptable due to changes in company leadership, product design, scale, business conditions, law, and cultural expectations, requiring a constant renegotiation and rewriting of the rules.
Viewed through this lens, every request for a “policy exception” can be more clearly understood as either a request to reconsider the false positive and false negative balance in a policy or a request to add more specificity to that policy at the cost of reducing brevity. Once you document an exception, it is no longer an exception – it’s simply an additional rule.
Historical examples of policies that have been adjusted in this way by many platforms include breastfeeding and medical guidance, both affected by nudity policies, and documentation of war crimes affected by violence policies.
Case Study: Documenting police brutality (2007)
Policies are often updated over time to reflect shifting social attitudes. These changes tend to be gradual and heavily discussed and debated both inside and outside of the trust & safety community. A recent example of this is the development or updating of “unwanted sexualization” policies by various online service providers following the rise of the Me Too movement and the increased public awareness and engagement on the issue.
Case Study: Reclaiming a hashtag (2020)
For many abuse types, users will actively attempt to push against, find loopholes in, or otherwise circumvent specific policies. For example, since 2012 Google has issued warnings about guest posts stuffed with links: buying and selling links to boost a website’s ranking in Google Search was forbidden, so some users began offering guest posts to reputable websites, filling them with keywords and spammy links. In situations like this, policies may be forced to change repeatedly as bad actors find new strategies to circumvent them.
For a walkthrough on how policy can evolve in content moderation, check out this podcast episode from Radiolab on policy at Facebook: Post No Evil Redux | Radiolab