Community Manager, Trust & Safety

  • Individual Contributor
  • Anywhere
  • Experience level: 3-5 years

Stack Overflow

This content was reproduced from the employer’s website on July 25, 2021. Please visit their website below for the most up-to-date information about this position.

Stack Overflow is the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. More than 50 million professional and aspiring programmers visit Stack Overflow each month to help solve coding problems, develop new skills, and find job opportunities.

Additionally, Stack Exchange is a community of 170+ sites that cover topics from parenting, to DevOps, to crypto, to role-playing games. Our network of communities hosts millions of users every month from all over the world who are working to establish the largest knowledge base of questions and answers that the world has ever seen.

We have a network of over 500 community-elected moderators who volunteer their time handling issues raised by site users. Moderators monitor our sites for posts and comments that have been flagged for moderator attention, resolve disputes between users, and escalate serious issues to the Community Management team.

As Community Manager, Trust & Safety, you'll help build a safe environment for our users by identifying and addressing challenges to the safety and integrity of our communities. You'll assess the behavior, social contracts, and rules that drive our distinct communities in order to defend our users from all manner of unwanted content and behavior. You'll be on the frontlines talking to users and moderators to understand community dynamics and what needs to be improved. You'll collaborate with data scientists, user researchers, community operations, and legal to craft and implement policies. This team fosters a close relationship with our Product Teams to provide input into features and help design & deliver major safety changes.

What you’ll do:

Day-to-day, you will monitor and manage the current health of our platform, along with user feedback on Trust & Safety issues, to keep people safe and foster consistently positive experiences. You'll help craft and implement policies that take into account the needs of our global community, and analyze existing policies for improvement. We have an international audience, so navigating different cultural contexts is common. At times you'll act as an escalation point for our community operations & support teams on all product policy matters, including responding to emergencies and content safety-related cases. This may include exposure to sensitive or graphic content, including but not limited to vulgar or derogatory language, violent threats, hate speech, and other forms of abuse.

You’ll be responsible for advocating for product features that encourage positive behavior and reduce harm and abuse. You will need a proactive, detail-oriented attitude to work with team members, product, people enforcing policies (both employees and volunteer moderators), and other teams to advance online safety. This team is expected to identify potential harm & safety concerns around product changes and then help design & implement mitigations and protections. We also communicate about these initiatives with our users & moderators.

This is not a support or ticketing-based role; we’re looking for an experienced self-starter with a passion for fostering positive behavior and reducing harm on an online platform at scale. We empower our users to largely self-govern, and our elected moderators handle most of the exceptions that users can’t resolve themselves. You’ll be there to help guide them toward solutions and resolutions when things need further escalation, and to reduce friction and abuse introduced by product & systems design.

What you’ll need to have:

  • 3-5 years in a similar online safety role within an organization with millions of users
  • Experience in enforcement, online behavior, abuse prevention, or policy development in an online social setting. Previous academic research into policy, behavioral sciences, online abuse & moderation, or related fields is a nice-to-have
  • Experience with process modeling to propose solutions to address product and user needs by ensuring fairness and protecting user safety
  • Deep understanding of technology, Internet, and platform content issues as well as platform moderation
  • Good communication skills, able to explain decisions and policy both internally and externally in clear and understandable terms
  • Strong problem-solving skills, with the ability to triage effectively and identify longer-term solutions
  • Exceptional interpersonal skills and the ability to provide balanced, actionable feedback that influences decision-making without managerial authority
  • Experience with SQL is a plus
  • An understanding of Stack Overflow and other Stack Exchange network sites is a significant plus

What you’ll get in return:

  • Competitive base salary
  • 20 days paid vacation
  • Flexible hours
  • Stock options
  • Completely free health insurance (no copay, no premiums)
  • Gym membership reimbursement
  • Transportation reimbursement or home internet reimbursement, depending on whether you work remotely or commute to our London or NYC office
  • Employment is conditioned upon successful completion of a background check and upon having the appropriate legal right to work.

To apply for this job please visit stackoverflow.com.