Inside Facebook: Protecting Your Brand on Social Media

The following is a resource announcement (Q4 2019) made available to The Social Group™ as a Facebook Partner Agency.
One of our goals is for Facebook to be a platform that gives people a voice while keeping them, and businesses like yours, safe. That’s why our latest Good Questions, Real Answers blog post focuses on brand safety, which the Interactive Advertising Bureau (IAB) defines as “…keeping a brand’s reputation safe when they advertise online.” At Facebook, we work to create transparent policies and relevant controls so that you feel informed and in control of your brand’s reputation. Today, we’ll address some questions around how content is removed from our platforms to maintain a safe space; how we work with brand safety leaders in the industry; and additional details around a new control we’re testing, White Lists.

How much content does AI remove automatically?

In Q2 2019, we began removing some posts automatically, but only when the content is identical or near-identical to text or images previously removed by our content review team for violating our policies, or when it very closely matches common attacks that violate our policies. We do this only in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of examples of violating content and common attacks. In all other cases, content is still sent to our review teams to make a final determination. While our systems’ ability to correctly detect violations continues to improve, people will continue to play an important part in keeping our platform safe: both the people who report content to us and the people on our team who review that content.
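To make the idea of “near-identical” matching a bit more concrete, here is a highly simplified, hypothetical sketch in Python. It compares new posts against previously removed posts using word-shingle overlap; the normalization, shingle size, threshold, and function names are illustrative assumptions only, not a description of Facebook’s actual systems, which are far more sophisticated.

```python
# A minimal, hypothetical sketch of near-duplicate text matching.
# NOT Facebook's production system; the shingling approach and the
# 0.9 threshold are illustrative assumptions only.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles from lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the shingle sets of two texts."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def auto_action(new_post: str, removed_posts: list, threshold: float = 0.9) -> str:
    """Remove automatically only when a post closely matches a prior
    removal; otherwise route it to human review."""
    if any(similarity(new_post, old) >= threshold for old in removed_posts):
        return "remove_automatically"
    return "send_to_human_review"
```

In this toy version, anything that does not clear the similarity threshold falls through to human review, mirroring the point above that automation handles only the clearest repeat cases.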

We use automation and artificial intelligence to stop spam attacks, remove fake accounts, and identify additional instances of content we’ve already removed for violating policies, including child nudity, sexual activity, terrorism and, now, hate speech. But a lot of content requires context and nuance, such as determining whether a particular comment is bullying. That’s why we have people look at those reports and make decisions based on our Community Standards.

Do you release any information about the types of content that are removed?

The Community Standards Enforcement Report (CSER) holds the company accountable for showing progress in removing harmful content from our services. Each report details how we’re doing at enforcing our policies by providing metrics across a number of policy areas, including the prevalence of harmful content, the amount of content we took action on, and how effectively we proactively detected harmful content.

In November, Instagram was included in the report for the first time, and we released metrics on how well we’re enforcing our policies in four areas: child nudity and child exploitative imagery, drug and gun sales, terrorism, and suicide and self-injury content. Facebook also shared metrics for these areas, among others. You can see the report and metrics here.

Why don’t you report the time it takes to remove content?

We measure how often violating content is seen on Facebook (views) rather than reporting on time (how quickly we removed the content), because one post that we pull down in 2 hours could have been seen by 1 million people, while another post that took us 24 hours may have been seen by only 100 people. The prevalence number is based on how often violating content is seen on Facebook relative to how often any content is seen on Facebook: we estimate the views of violating content, not the amount of violating content, and divide them by the views of all content at a given moment. So while we work to ensure violating content is up for as little time as possible, what really matters is how many people could have seen the post. We believe prevalence is a meaningful measurement of people’s experience on Facebook.
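As a rough worked example of that prevalence idea, the Python sketch below uses hypothetical numbers (and is a simplification, not the published methodology) to divide estimated views of violating content by views of all content, and to show why a fast takedown seen by a million people weighs far more than a slow takedown seen by a hundred.

```python
# Simplified, illustrative prevalence calculation (hypothetical numbers,
# not Facebook's actual methodology): views of violating content divided
# by views of all content.

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that were views of violating content."""
    return violating_views / total_views

fast_takedown_views = 1_000_000   # removed in 2 hours, seen 1,000,000 times
slow_takedown_views = 100         # removed in 24 hours, seen 100 times
total = 500_000_000               # hypothetical total content views

print(prevalence(fast_takedown_views, total))  # 0.002   -> 0.2% of all views
print(prevalence(slow_takedown_views, total))  # 2e-07   -> 0.00002% of all views
```

Even though the first post came down twelve times faster, it contributes ten thousand times more to prevalence, which is why views rather than takedown time is the headline measure.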

What’s your approach to collaborating with brand safety industry bodies?

We collaborate with industry partners to share knowledge, build consensus and work towards making all online platforms safer for businesses.

In addition to our work with the Global Alliance for Responsible Media, we recently completed JICWEBS’ Digital Trading Standards Group Brand Safety audit, receiving the IAB UK Gold Standard. Industry bodies like these are a valuable source of feedback, and working with them allows us to share knowledge industry-wide and collaboratively make Facebook and all online platforms safer for people and businesses.

When will Facebook be rolling out White Lists?

We recently announced that we’re starting with a small test for select advertisers. We plan to learn from this test before rolling White Lists out more broadly next year. The test will apply to all sites and apps on Audience Network and to Facebook Pages for in-stream ads. Advertisers are responsible for building their own White Lists, and each advertiser has access only to their own list.

We understand that for advertisers, the work of protecting your brand’s integrity is never done. We’ll keep finding new ways to ensure our platforms continue to safely give people a voice while helping brands thrive.

If you’re interested in exploring how The Social Group™ can help amplify and quantify your brand across various social platforms, send us a message!