GPA · August 28, 2020

Safety Tools and Policies for Elected Officials and Candidates

Ensuring a safe and positive experience on Facebook for Elected Officials and Candidates

Whether you are a candidate running for office or an elected official already in office, Facebook can help you communicate and engage with constituents or potential voters, share information about your work and receive feedback. There are many tools available to help you maximize your presence on the platform and to ensure that your constituents feel informed about and involved in the important work you do.

However, we know that some individuals use the platform to behave inappropriately towards public figures. We understand that, as a political representative, you might experience such behavior on your own Page or on the platform more broadly.

At Facebook, we employ a range of measures to support politicians with safety, security and reporting malicious behavior, including policies specific to public figures, mechanisms for reporting content and a variety of safety and moderation tools.

Our policies: what is and isn’t acceptable and when you should report something

We recognize how important it is for Facebook to be a place where people feel empowered to communicate, and we take seriously our role in keeping abuse off our platforms. That's why we've developed a set of Community Standards, routinely reviewed and updated, that outline what is and isn't allowed on Facebook. Our policies are based on feedback from our community and the advice of experts in fields such as technology, public safety and human rights. To track our progress and demonstrate our continued commitment to making Facebook and Instagram safe and inclusive, we regularly release the Community Standards Enforcement Report, which shares metrics on how we are doing at preventing and taking action on content that goes against our Community Standards. These actions may include removing content or covering it with a warning screen.

Bullying and harassment can happen in many different forms, from making threats to releasing personally identifiable information. This kind of behavior prevents people from feeling safe and respected on Facebook, and we don’t tolerate it.

In our policies, we distinguish between public figures and private individuals because we want to allow discussion, which can sometimes include critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe, as well as certain attacks where the public figure is directly tagged in the post or comment. For example, gendered cursing directed at public figures will be removed when they are purposefully exposed to it. We will also remove posts that include this language alongside the name of a public figure when the target reports them to us, even where the public figure is not purposefully exposed.

We also recently made changes to our Violence & Incitement Policy, expanding the definition of “lethal violence” to include more forms of violence and renaming the category “high-severity.” We will now remove more threats of violence, even those that lack credibility, to ensure everyone is protected from threats of high-severity violence. Over the past year, our conversations with external experts, academics and others have demonstrated that violent statements online have a negative effect on people regardless of their credibility. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we will remove language that incites or facilitates violence.

We will also remove threats of mid-severity violence against vulnerable people (a group that includes nationally elected politicians), private individuals and all minors. These changes reflect our effort to strengthen protections against violent speech and make Facebook a safer space.

We support political expression and robust political debate, but there is no place on Facebook for incitement of violence or threatening behavior.

Reporting content that may violate these rules

Every piece of content on Facebook has a report button that sends it for review against our Community Standards. In addition to removing content that violates our policies, we refer cases to the appropriate law enforcement authorities when we become aware of an imminent threat. Our Community Operations teams are available 24/7, and we now have 35,000 people worldwide working on safety and security issues. We also continue to invest in automated techniques and AI so that we can remove as much violating content as quickly and proactively as possible.

Learn more about how to report things on Facebook in our Help Center.

Using our Page moderation tools to protect your Page and ensure you and your followers have a positive experience

To help ensure that negative content does not appear on your Page in the first place, we have developed a range of tools that allow public figures to moderate and filter the content that people put on their Pages.*

People who help manage Facebook Pages can hide or delete individual comments. They can also proactively moderate comments and posts by visitors by turning on the profanity filter or by blocking specific words or lists of words that they don’t want to appear on their Page. Page admins can also remove or ban people from their Pages using the tools available to them as administrators. For more guidance on protecting your Page, we’ve created the Safety Guide for Page Admins.
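For offices that moderate at scale, the same core actions are also exposed programmatically through the Facebook Graph API. Below is a minimal sketch in Python of how a team might hide a comment, delete a comment or ban a user from a Page; the API version, access token and IDs are placeholders, and the exact Page permissions your app needs (for example, pages_manage_engagement) are an assumption that depends on your setup.

```python
import requests

# Placeholders: substitute your own API version, Page access token and IDs.
GRAPH = "https://graph.facebook.com/v8.0"
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # must carry Page moderation permissions

def hide_comment(comment_id: str) -> dict:
    """Hide a comment on a Page post (it stays visible to its author and their friends)."""
    resp = requests.post(
        f"{GRAPH}/{comment_id}",
        data={"is_hidden": "true", "access_token": PAGE_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()

def delete_comment(comment_id: str) -> dict:
    """Remove a comment from a Page post entirely."""
    resp = requests.delete(
        f"{GRAPH}/{comment_id}",
        params={"access_token": PAGE_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()

def ban_user(page_id: str, user_id: str) -> dict:
    """Ban a user from the Page via the Page's /blocked edge."""
    resp = requests.post(
        f"{GRAPH}/{page_id}/blocked",
        data={"user": user_id, "access_token": PAGE_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()
```

Most Page admins will be better served by the built-in dashboard tools described above; the programmatic route mainly suits teams that handle high comment volume or want to fold moderation into existing workflows.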

Understanding what we do and don’t allow on the platform, how to report violations and how to keep your own Page a safe place for community discussion is key to having a positive experience on the platform. If you have any issues, concerns or questions, you can reach out to our team.

*There may be restrictions on the ability of government or political officials to take these actions pursuant to applicable laws and regulations. Please consult with your ethics or legal counsel if you have questions.