Glossary: Common Terms and Definitions

A

Accuracy: A measure of the quality of enforcement decisions that treats all mistakes (false positives and false negatives) the same: out of all enforcement decisions made, the fraction that were correct (Number of correct decisions / Total number of decisions).
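
As an illustration, a minimal Python sketch of this calculation (the decision counts below are hypothetical):

    def accuracy(correct_decisions: int, total_decisions: int) -> float:
        # Accuracy = correct decisions / total decisions.
        return correct_decisions / total_decisions

    # Hypothetical example: 940 of 1,000 enforcement decisions were correct.
    print(accuracy(940, 1000))  # 0.94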

Action notice: A message displayed in place of removed content or user pages, stating that the content or user was removed for violating content policy. Common synonyms: tombstone notice

Adversarial behavior: Deliberate actions of an actor or a network of actors intended to circumvent detection or disrupt moderation rules and systems.

Allowlisting: A system security control that identifies known files, applications, or processes and allows them to execute. For instance, allowlisting is used to control which applications, websites, IP addresses, and email addresses can be used in an organization’s domain. Conversely, unknown activity is blocked or restricted, preventing it from executing and spreading within a system or environment during an attack.

Artificial intelligence (AI): Computer systems that can solve tasks that would traditionally require human-level intelligence, or otherwise simulate intelligent behavior.

At scale: Enforcement at scale is the application of policies to large volumes of potentially policy-violating activity. At-scale enforcement can maximize efficiency and speed, but at the cost of nuance and contextual decision-making. Common synonyms: scaled enforcement

Automation: Technologies that proactively flag activity that violates the policies of a service, before users are exposed to the violating activity, assisting human review without requiring it. These technologies operate at a speed and scale impossible to replicate with manual operations.

B

Banning: Permanent removal of a user, entity or account from a platform. Common synonyms: disable

Blocking (content-level): Preventing certain content from being posted.

Blocking (user-level): (a) A service can block or restrict the access of users (e.g. block from posting content); (b) users within a service can block each other from view/communication.

Blocklisting: Automatic blocking of users or content based on matching against a predefined list, such as a list of banned keywords or a list of previously violating users. Common synonyms: denylisting, disallow list
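
A minimal Python sketch of keyword blocklisting; the keyword list and function name are illustrative, not drawn from any real system:

    # Hypothetical banned-keyword list; real systems maintain far larger lists.
    BANNED_KEYWORDS = {"buy-followers-now", "free-crypto-giveaway"}

    def is_blocklisted(post_text: str) -> bool:
        # Block the post if it contains any banned keyword (case-insensitive).
        text = post_text.lower()
        return any(keyword in text for keyword in BANNED_KEYWORDS)

    print(is_blocklisted("FREE-CRYPTO-GIVEAWAY, click now!"))  # True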

C

Checkpointing: Presenting the user with a “roadblock” to continuing to use a service, wherein they must clear some challenges in order to get back into their account. The most common examples of checkpoints include ID verification, selfie verification, and CAPTCHA challenges.

Child sexual abuse material (CSAM): Any visual depiction of the sexual abuse and exploitation of children, such as media depicting sexually explicit conduct with a minor (a person who has not reached the age of consent). Common synonyms: child pornography (discouraged synonym)

Community standards (or Community guidelines): A public version of a company’s policies, meant to educate and set expectations on content and behavior that is or is not acceptable on the platform.

Community moderation: When community members of the platform assume the responsibility of moderating content.

Consistency: A measure of how often reviewers or systems faced with the same question will agree on the same decision, regardless of whether that decision is correct.

Content (see also User-Generated Content): Refers to links, text, images, or videos shared by a user.

Cross-check: Automatically enqueueing potentially violating content or behavior from known and trusted sources for closer manual review, designed to prevent accidental penalties; an additional layer of review and scrutiny.

D

Deep learning: A subset of machine learning that uses complex brain-like models to identify important features in a data set and use them to make decisions and predictions.

De-index: Removal of a site or other entity from the listings of a search engine or other search-like functionality so it cannot be easily found, without completely deleting it.

Decision tree: A tree-shaped model for describing a set of possible decisions and their consequences.
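
For example, a tiny content-moderation decision tree could be sketched in Python as follows; the thresholds and outcomes are hypothetical:

    def moderation_decision(violation_score: float, prior_violations: int) -> str:
        # Each if/else branch is one path from the root of the tree to a leaf.
        if violation_score >= 0.95:
            return "remove"  # high-confidence violation
        if violation_score >= 0.70:
            if prior_violations > 0:
                return "remove"  # borderline score, but the user has history
            return "send to human review"
        return "no action"

    print(moderation_decision(0.80, prior_violations=0))  # send to human review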

Demonetization: Blocking a user or entity’s ability to earn money via a service, while allowing them to stay on the service, e.g. by preventing them from running ads or selling goods.

Disinformation: Distinct from misinformation, disinformation refers to deceptive and misleading information, deliberately disseminated as part of a larger influence operation.

Downranking: Where content or a user is ranked lower in searches and recommendations, to reduce the visibility of that user or content to other users of a service.

Downvoting: Where content receives “down” votes, limiting the visibility of that content.

Doxxing: Publishing private information (such as a phone number or physical address) with the intent of harassing or harming an individual.

E

Enforcement: Actions companies take on content that violates policy, which can include hiding, limiting exposure, or removing content.

Explainability: A property of predictive automated models that can provide clear information about the reasons they make specific predictions and decisions. Explainability can be traded off against performance and nuance in the case of more complex structures with a larger number of variables.

F

Flagging: Users can flag or note a piece of content as potentially violating policy or as problematic. Common synonyms: reporting

G

Geo-blocking: Making an individual piece of content inaccessible in a specific geolocation (country/region), meaning users accessing the service from that country can no longer view the content. Geo-blocking is a tool used by companies to comply with local legislation regarding content restrictions, instead of applying global censorship by taking down the content.

Greylisting: Applying weak or temporary restrictions and delays to users or content that are suspicious or unknown but not confirmed as violating or harmful.

H

Hash: A numerical representation of original content, acting as a digital signature for an online image.

Hash-matching: Leveraging image hashes to rapidly identify visually similar content that has already been removed elsewhere, enabling re-review or automated removal of violating content.
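
A minimal Python sketch of hash-matching, assuming an exact-match approach with the standard hashlib library. Production systems typically rely on perceptual hashes (such as PhotoDNA or PDQ) that tolerate resizing and re-encoding, whereas a cryptographic hash like SHA-256 only matches byte-identical copies; the stored hash below is hypothetical:

    import hashlib

    # Hypothetical store of hashes of previously removed content.
    KNOWN_VIOLATING_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def content_hash(content_bytes: bytes) -> str:
        # A digital signature for the content: identical bytes yield an identical hash.
        return hashlib.sha256(content_bytes).hexdigest()

    def matches_known_content(content_bytes: bytes) -> bool:
        # A hit can trigger re-review or automated removal.
        return content_hash(content_bytes) in KNOWN_VIOLATING_HASHES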

Hash-sharing: Sharing of hash data as part of cross-industry collaboration initiatives, such as the hash-sharing database of the Global Internet Forum to Counter Terrorism (GIFCT).

Human rights impact assessment: A process in which companies can assess the impacts of their services or products on human rights principles and conventions.

I

Intimate media: Images and videos of people who are naked, showing their genitals, engaging in sexual activity or poses, or wearing underwear in compromising positions.

M

Machine learning (ML): The use of computer systems that can learn, adapt, and draw inferences from data without following specific and explicit instructions.

Misinformation: Content that contains incorrect or misleading information, typically unwittingly posted or re-shared by people.

N

Non-consensual sharing of intimate images (NCII): Intimate images or videos of someone that are shared without their consent. Generally excludes commercial or artistic material when consensually produced. Common synonyms: “revenge porn” (discouraged synonym)

O

Object detection: Automation techniques used to identify and locate objects in an image or video.

P

Personally identifiable information (PII): Data that could potentially identify an individual.

Phishing: Sending a deceptive message pretending to be from a legitimate company or individual to trick a target into sharing personal or confidential information, such as credit card information or bank account information.

Platform: Refers to a product or service. Sometimes used interchangeably to refer to a company, although a company can have more than one platform.

Policy (as in content policy): The set of rules and principles that a platform uses to govern the conduct of its users.

Precision: What fraction of enforcement actions were correctly applied to harmful/policy-violating content or activity. This is a measure of how much or how little collateral damage is done when content or activity is enforced upon (True positives / (True positives + False positives)).

Product: Refers to the different service(s), platform(s), or system(s) offered by a company.

Protected characteristics: A set of traits that are used to discriminate against a person or a group of people. Protected characteristics include, but are not limited to, race, ethnicity, national origin, disability, religious affiliation, sexual orientation, gender identity, and age. This term can invoke legal obligations in jurisdictions that provide legal protection from discrimination based on these traits. Common synonyms: protected class, protected group

R

Recall: Out of all policy-violating content or activity, what fraction of it was successfully identified / actioned. Used as a measure of how much of the violating content in a population is being successfully dealt with (True positives / (True positives + False negatives)).
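
A minimal Python sketch of the recall formula, alongside precision from the entry above; the counts in the example are hypothetical:

    def precision(true_positives: int, false_positives: int) -> float:
        # Of everything actioned, what fraction was genuinely violating?
        return true_positives / (true_positives + false_positives)

    def recall(true_positives: int, false_negatives: int) -> float:
        # Of all violating content, what fraction was actioned?
        return true_positives / (true_positives + false_negatives)

    # Hypothetical counts: 90 violating items actioned, 10 benign items
    # actioned by mistake, 30 violating items missed.
    print(precision(90, 10))  # 0.9
    print(recall(90, 30))     # 0.75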

Recidivism: The evasion of temporary suspensions or permanent bans, such as by creating a new account under a different name to replace a previously disabled account.

S

Sextortion: The act of seeking financial gain, favors, or private content by threatening to share sexually intimate information about a target. It is an inherently coercive act, typically carried out through threats or blackmail.

Spam (or spamming): Low-quality or malicious unwanted content distributed in large volumes, typically with a financial motive.

Slur: Derogatory or insulting terms, particularly those referencing a particular group or targeting people based on their protected characteristic(s).

T

Terms of Service/Terms of Use: A set of rules that a user agrees to when accessing a service or platform.

Torrent websites: Websites that allow peer-to-peer (P2P) sharing of files, which often include copyrighted material.

Transparency report: A public report released by a company that includes key metrics and information about their digital governance and enforcement measures.

Trolling: Making statements or taking actions that are deliberately offensive, harmful, or annoying for the purpose of provoking a negative reaction, attracting attention, or causing disruption.

Trust principles: Trust principles refer to concepts such as transparency, data ethics, and freedom from harm that tie directly back to what is needed for users or customers to be confident in the product or service they are using.

U

User data: Includes user-generated content as well as information about the user.

User-generated content (UGC): Refers to links, text, images, or videos created and/or shared by a user.

V

Virality: Content, such as an image or text, that achieves high, rapid, and wide reach amongst the users of a service. Viral content typically spreads in a matter of seconds, minutes, or hours, especially on social networks and other connected platforms.

A note about discouraged synonyms

The usage of certain terms related to T&S issues is actively discouraged within the industry. These terms may have been used historically, and may still be used by the wider public or in legal contexts, but have mostly been replaced within the industry for a variety of reasons. These terms are included in the Glossary as discouraged synonyms and should generally be avoided in a professional context.

For example, the term “child porn” is discouraged as it creates an inappropriate equivalency with sexually explicit material involving consenting adults and does not reflect severity as strongly as Child Sexual Abuse Material (CSAM). The term fails to describe the true nature of the material and undermines the seriousness of the abuse from the child’s perspective. Using this term in the context of children risks normalizing, trivializing, and even legitimizing the sexual abuse and exploitation of children.