Advancing the trust and safety profession through a shared community of practice
The Trust & Safety Professional Association supports the global community of professionals who develop and enforce principles and policies that define acceptable behavior and content online.
Explore what we do
- Our 2021 Roadmap: H1 Recap and H2 Plans. We’re excited to give you some updates on our 2021 annual plan. Back in February, we shared our plan for the first half of the year, and we’re here to report back on our progress and share our plans for the rest of the year. In the first six months of 2021, we welcomed five…
- Introducing the Trust and Safety Curriculum. Today, we are proud to publish the first two chapters of TSPA’s Trust and Safety Curriculum: Creating and Enforcing Policy, and Transparency Reporting. This curriculum is written by TSPA’s Curriculum Working Group, and we are excited to share what they have produced. Online trust and safety is still a relatively new profession, and people who…
- TSPA’s 1-Year Anniversary. TSPA turns one today, and we’d like to pause and celebrate the community we are building together. As we reflect on this past year, it’s safe to say that the Year 1 we got was really different from the Year 1 we’d planned. For so many of our members, the past 365 days have…
Workshops, seminars, and events that are relevant to trust & safety professionals; some are hosted by TSPA and TSF, while others are not.
When: October 26–27, 2021
EU Disinfo Lab’s annual 2-day conference in Brussels will cover topics such as ‘Disinfo for hire: micro-influencers, troll armies, and influence buying in the modern disinformation economy’, ‘Measuring the impact of disinformation’, ‘How should platforms tackle disinformation in global political speech?’, ‘Climate Change and Anti-Science Disinformation’ and ‘Facing Disinformation as a Cybersecurity Threat’ among other panel discussions.
When: October 28, 2021
For this Tech Against Terrorism & GIFCT webinar, we investigate terrorist use of the internet for financing purposes and the challenges it poses to the existing global counter-terrorist financing framework. Together with an expert panel, we will examine how terrorists and violent extremists exploit not only financial and payment services, but also social media platforms and online marketplaces, to raise funds online. We will also examine how tech companies can respond to this threat, and outline how the global counter-terrorist financing framework could adapt to these emerging challenges.
When: October 28, 2021
People who use messaging services need to be able to exercise agency in how they communicate. This includes being able to manage privacy trade-offs and also to address unwanted or abusive content such as spam, mis- and disinformation, harassment, and sexually exploitative content.
In the current debates around addressing child sexual abuse material (CSAM) in end-to-end encrypted (E2EE) environments, technical experts have proposed a variety of approaches to addressing abusive content, including user reporting, metadata analysis, and automated scanning of user-generated content.
Given that there are many different kinds of users with unique needs and perceived risks to their online communications, how can we enable meaningful user choice and control around E2EE communications to address unwanted or abusive content?
Towards psychological herd immunity: Cross-cultural evidence for two prebunking interventions against COVID-19 misinformation
When: October 28, 2021
Misinformation about the novel coronavirus (COVID-19) is a pressing societal challenge. Across two studies, we assess the efficacy of two ‘prebunking’ interventions aimed at improving people’s ability to spot manipulation techniques commonly used in COVID-19 misinformation across three different languages. We find that Go Viral!, a novel five-minute browser game, (a) increases the perceived manipulativeness of misinformation about COVID-19, (b) improves people’s attitudinal certainty (confidence) in their ability to spot misinformation and (c) reduces self-reported willingness to share misinformation with others. The first two effects remain significant for at least one week after gameplay. We also find that reading real-world infographics from UNESCO improves people’s ability and confidence in spotting COVID-19 misinformation. Limitations and implications for fake news interventions are discussed.
When: November 1–5, 2021
The explosive growth of the tech sector has allowed private sector companies to amass an extraordinary amount of power, to the point where these entities exercise control over virtually every aspect of our day-to-day lives. In response, scholars, regulators, and civil society advocates have advanced a range of proposals aimed at boosting public accountability across this sector.
This event will host a series of conversations aimed at framing our understanding of power and accountability in the tech space and generating common understandings of the goal of regulation in this space. The speakers will address a range of topics related to the consolidation of power and will reflect a diversity of perspectives on these vital issues.
When: November 3, 2021
For TSPA members only, and geared towards managers of trust and safety teams.
Content moderators arguably do one of the most important jobs on the internet, and their work is incredibly difficult. In recent years, the need for content moderation has grown exponentially, as has concern about moderators’ wellness and resilience. Managers play a significant role in supporting team wellbeing and preventing the escalation of adverse reactions to disturbing content. This session will give managers a greater understanding of how to lead and develop resilient content moderators, how to spot signs that team members may need wellbeing support, and the tools to have supportive wellbeing conversations.
When: November 4, 2021
As algorithms increasingly automate decision-making processes, filter information flows, and mediate our social interactions, ethical concerns immediately follow. Questions of fairness, accountability, and transparency permeate the growing set of concerns around the social and ethical implications of algorithms. As a growing community of scholars grapples with the ethics of algorithms, how can their findings be applied by technologists engaged in machine learning and the cutting edge of AI development in order to minimize unintended bias and the risk of harm? Put simply, how can businesses apply algorithmic ethics?
When: November 4, 2021
This session explores the concept of infodemics during the COVID-19 pandemic, focusing on the worldwide proliferation of false or inaccurate information throughout the SARS-CoV-2 health crisis.
When: November 10, 2021
The DSI Race + Data Science Lecture Series aims to advance research in the areas of race and data, engineering, and computational science. This event features guest speaker Danaë Metaxa, PhD Candidate in Computer Science, Stanford University.
When: November 12, 2021
The Stanford Digital Economy Best Practices Conference is the premier educational event for in-house counsel and practitioners who work in or for Internet, e-commerce, mobile, social networking, and cloud companies. Leading experts from industry, legal practice, and academia will address current issues facing the industry and offer practical solutions for the many legal uncertainties that arise when doing business online. The program will feature a roundtable of general counsel from leading e-commerce companies.