Picture credit: Facebook


Inside Facebook: What happens when you report a post

Gadget spoke to Facebook Africa’s team in Nairobi to find out what happens when offending posts are reported on the platform. By BRYAN TURNER.

The report button is where many users go when they come across an offending post on Facebook. But what happens after you tap “report”?

Representatives from the platform assured journalists at a workshop in Nairobi last week that they’ve been refining Facebook’s Community Standards to make sure users are protected. Fadzai Madzingira, public policy associate manager for content at Facebook, described these Community Standards as being “the law of Facebook, even when the content doesn’t violate the law of the country [where it is shown]”.

“When my team writes about standards, we have to think about scale. We also have to keep in mind that 80% of our users are outside of North America. That’s why our Content Policy team in EMEA is made up of 13 people from 10 different countries across the region. This way we have a finger on the pulse in many different parts of the world.”

Madzingira explained why Facebook’s policies are global, rather than fragmented and custom-tailored to each country where it operates.

“If you have a Zimbabwean sitting in London, talking to a South African sitting in Tokyo on Facebook, which country’s laws would apply? This is why we have a global set of rules, called our Community Standards,” says Madzingira.

She pointed out that users in the Sub-Saharan Africa region do not report offending posts as often as users in the rest of the world.

Once users decide to report an offending post, they are asked to select why the post is offensive, with reasons ranging from hate speech to nudity. The site also urges the user to call emergency services if someone involved is in immediate danger. The flow takes a few steps to clarify which part of the post is offensive before the user can submit the report.

“We need our community to report content to us if they think it breaks our rules, so we can remove it,” says Madzingira. “If you do not click ‘submit for review’, it won’t be reviewed, even after going through the reporting flow. That’s why it’s so important to go through the flow, to highlight to us what part of Facebook’s content policies you think it violates, so we can investigate and, if necessary, take action.”
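The flow itself is straightforward to picture. As a rough illustration, the Python sketch below models the steps described above; the category names, fields and methods are our own assumptions for illustration, not Facebook’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReportReason(Enum):
    # Hypothetical reasons, standing in for the options in the real flow.
    HATE_SPEECH = auto()
    NUDITY = auto()
    VIOLENCE = auto()
    HARASSMENT = auto()

@dataclass
class Report:
    post_id: str
    reason: ReportReason | None = None
    detail: str | None = None        # which part of the post offends
    submitted: bool = False

    def choose_reason(self, reason: ReportReason) -> None:
        self.reason = reason

    def add_detail(self, detail: str) -> None:
        self.detail = detail

    def submit_for_review(self) -> None:
        # As Madzingira notes, nothing is reviewed until this final step.
        if self.reason is None:
            raise ValueError("Select a reason before submitting.")
        self.submitted = True

report = Report(post_id="example-post")
report.choose_reason(ReportReason.HATE_SPEECH)
report.add_detail("The caption targets a protected group.")
report.submit_for_review()   # only now does the report enter the queue
```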

Once the report has been submitted, it makes its way to one of the 15,000 content reviewers around the world, who makes a decision based on the Community Standards. These content reviewers have each had more than 80 hours of training on the content policy. If a reviewer can’t make a decision, the report is escalated to policy experts. If a report about borderline content goes all the way to the top, it may end up changing the policy for future decisions.
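That escalation path can be sketched the same way. The hypothetical snippet below shows a report passing from a reviewer to policy experts when no decision can be made, with borderline cases feeding back into the policy itself; the function names and decision labels are invented for illustration.

```python
def policy_experts_review(report):
    # Placeholder: experts weigh the case against the policy's intent.
    # A borderline case may also propose a change to the policy itself.
    return "remove", "clarify borderline exemption"

def update_community_standards(change):
    print(f"Community Standards updated for future decisions: {change}")

def route_report(report, reviewer_decision):
    """reviewer_decision is 'remove', 'keep', or 'unclear'."""
    if reviewer_decision in ("remove", "keep"):
        return reviewer_decision          # reviewer applied the Standards
    # Reviewer could not decide: send the case higher up.
    expert_decision, policy_change = policy_experts_review(report)
    if policy_change:
        update_community_standards(policy_change)
    return expert_decision

print(route_report({"post_id": "example-post"}, "unclear"))  # -> remove
```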

“There’s a lot of content we are now able to remove faster, before people see it and report it to us,” says Toby Partlett, policy comms manager for EMEA at Facebook. “For example, the vast majority of content which breaks our rules on adult nudity is now detected proactively by artificial intelligence technology. We’ve trained our systems to know what that looks like, to deal with it before it gets to the reporting stage.”

“It’s not as easy as ‘remove all nudity’,” adds Madzingira. “We’ve worked with curators from art galleries and prominent artists to create policies that make the distinction between gratuitous nudity and artistic expression. These systems then have to be trained to spot content which might violate these policies.”
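As a rough sketch of how such a proactive check might gate content, the snippet below assumes a trained classifier that returns per-category scores; the category names and threshold are invented for illustration, not Facebook’s.

```python
THRESHOLD = 0.9   # assumed confidence bar for acting without a human

def proactive_check(image_scores: dict[str, float]) -> str:
    # 'gratuitous_nudity' vs 'artistic_nudity' mirrors the policy
    # distinction Madzingira describes; real categories are unknown.
    if image_scores.get("gratuitous_nudity", 0.0) >= THRESHOLD:
        return "remove"             # taken down before anyone reports it
    if image_scores.get("artistic_nudity", 0.0) >= THRESHOLD:
        return "allow"              # artistic expression is permitted
    return "queue_for_human_review" # uncertain cases go to reviewers

print(proactive_check({"gratuitous_nudity": 0.95}))  # -> remove
```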

Dealing with posted content, which Facebook says includes over 1-billion pictures uploaded every day, is one aspect of the Content Policy. But what about moderating live streams?

After the mass shooting that was live-streamed in Christchurch, New Zealand earlier this year, Facebook says it has been working hard to reduce abuse of its live platform.

“We’re working on ways to proactively detect violating content quickly, so we can remove it before people see it,” says Partlett. “To do this, we need to train our AI technologies to spot this type of content, based on certain signals. This is difficult though, as thankfully, there aren’t many of these events. That’s why we are working with law enforcement in the UK and US to get similar footage from training exercises to train our AI technologies.

“We hope this will improve our artificial intelligence technology, helping us more quickly identify and remove dangerous content before it has a chance to be viewed.”
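One plausible shape for that kind of live monitoring is sampling frames as they arrive and cutting the broadcast when a violation signal crosses a threshold. The sketch below is a hedged illustration under those assumptions; the classifier and threshold are stand-ins, not Facebook’s system.

```python
from typing import Callable, Iterable

def monitor_stream(frames: Iterable[bytes],
                   classify: Callable[[bytes], float],
                   threshold: float = 0.98) -> bool:
    """Return True if the stream should be cut for a violation."""
    for frame in frames:
        if classify(frame) >= threshold:
            return True   # stop the broadcast before more people see it
    return False

# Toy usage with a stand-in classifier that never flags anything:
frames = [b"frame1", b"frame2", b"frame3"]
print(monitor_stream(frames, classify=lambda f: 0.1))  # -> False
```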
