
Facebook tackles fakes

Facebook has responded vigorously to mounting criticism over fake news and the enforcement of its community standards.

As one of the world’s biggest communication platforms, Facebook acts as a virtual space for around a quarter of the world’s population, but until now the company has been loath to police its own world. It has policed non-violent issues like nudity vigorously, but has been slow to respond to widely shared content like fake news and the promotion of violence.

This came to a head in Myanmar, where messages, posts and comments attacking the Rohingya, a Muslim minority group, created an atmosphere in which violence flourished.

“We were too slow to get started in Myanmar,” admitted Facebook CEO Mark Zuckerberg in a statement last week.

This statement came in the wake of a report on Facebook-incited violence in Myanmar, released earlier this month. Facebook commissioned Business for Social Responsibility (BSR), an independent non-profit organisation with expertise in human rights practices, to evaluate the extent of Facebook’s involvement in the recent spread of violence in Myanmar. The focus of BSR’s report was on Facebook’s role in the persecution of the Rohingya.

A strong message throughout the report was that Facebook struggles to manage the spread of fake news, creating an atmosphere that promotes violence. The misinformation ultimately distorts what Facebook users in Myanmar perceive to be true and false.

“Facebook has become a means for those seeking to spread hate and cause harm,” said BSR in its 60-page report, which gives Facebook a long list of potential fixes to implement. The organisation warned that, unless the situation was remedied before Myanmar’s parliamentary elections in 2020, the vote would be “likely to be a flashpoint for hate speech, harassment, misinformation, incitement to violence, and other actions designed to undermine the political process”.

Zuckerberg said that Facebook had taught him painful lessons about the way people connect: “One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.”

According to a Facebook statement released with the BSR report, challenges remain in remedying the situation, because Myanmar has not standardised on Unicode for the digital representation of the Burmese language. Facebook says this makes it difficult to detect posts containing hate speech. As a fix, the company will encourage the country’s transition to Unicode, and will not allow new users to use non-Unicode typefaces.

Unicode is an international standard for representing languages and scripts, in which each letter, digit or symbol is assigned a unique number, known as a code point.
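To make the idea concrete, the minimal Python sketch below prints the code point that Unicode assigns to each character of the Burmese word for “Myanmar”. Under a non-Unicode encoding, the same visible letters map to different numbers, which is why automated hate-speech detection struggles when text arrives in mixed encodings. The example is purely illustrative and has no connection to Facebook’s tooling.

```python
# Illustrative only: Unicode assigns every character a unique code point.
# The string below is the Burmese word for "Myanmar".
burmese = "မြန်မာ"

for ch in burmese:
    # ord() returns the character's Unicode code point as an integer;
    # it is printed in the conventional U+XXXX hexadecimal notation.
    print(f"{ch!r} -> U+{ord(ch):04X}")
```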

Facebook also said in a statement issued on Thursday that it realises that nuances exist in how the platform is used in different countries. In Myanmar, it is primarily used as an information source; users in Myanmar often refer to Facebook and the Internet interchangeably. This is due to Facebook’s Free Basics service, which allows users to make use of the platform without paying for data.

This highlights the power Facebook has over the South-East Asian country’s communications and access to information. For this reason, the company said, it would be launching the Facebook Journalism Project, which includes safety training for Myanmar’s journalists.

While this news comes as a relief to some, many within Myanmar are less convinced. An activist who goes only by the name Zarni has called the report “a complete whitewash”. He said that Facebook was “resorting to PR” instead of taking stronger measures to stop the spread of fake news on its platform.

Zuckerberg’s four strategies against fake news

Zuckerberg outlined four strategies for addressing fake news: updating community standards, proactively identifying harmful content, discouraging sensationalist content, and increasing the transparency of Facebook’s internal affairs to the public.

The community standards have been adapted to local cultures, he said. The teams responsible are based in more than 10 offices across six countries to reflect “the different cultural norms of our community”.

Policies on what can and cannot be posted are revised by “getting input from outside experts and organisations” with close relationships to human rights and free expression.

Zuckerberg referred to a social boundary which posts may not cross without being removed from the platform. Most of the time, the removal of posts that cross this boundary can be automated by the company’s trained AI systems.

This shift from reactive to proactive moderation is expected to become more apparent in other areas: “we have trained our systems to proactively detect the vast majority of problematic content,” he said, pointing out that “99% of the terrorist content we remove is flagged by our systems before anyone on our services reports it to us”.
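As a rough illustration of the reactive-versus-proactive distinction, the sketch below scores a post at submission time and removes it before any user report can arrive. It is a toy: the scoring function, threshold values and flagged terms are invented for clarity and bear no relation to Facebook’s actual systems.

```python
# Toy sketch of proactive moderation: score a post BEFORE publication,
# instead of waiting for user reports. Not Facebook's actual system;
# the classifier, thresholds and terms below are invented for illustration.

def harm_score(text: str) -> float:
    """Stand-in for a trained classifier returning a probability of harm."""
    flagged_terms = {"attack", "exterminate"}  # hypothetical vocabulary
    words = text.lower().split()
    hits = sum(word in flagged_terms for word in words)
    return min(1.0, 5 * hits / max(len(words), 1))

REMOVE_AT = 0.8   # hypothetical confidence needed for automatic removal
REVIEW_AT = 0.5   # below this, the post is published without review

def moderate(post: str) -> str:
    score = harm_score(post)
    if score >= REMOVE_AT:
        return "removed proactively"        # flagged before any report
    if score >= REVIEW_AT:
        return "queued for human review"    # borderline: a person decides
    return "published"

print(moderate("lovely sunset over the lake"))  # -> published
```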

Sensationalist content, referred to as “borderline content”, will be penalised with reduced distribution and engagement.

“Sensationalist and provocative content is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarisation,” said Zuckerberg.

Setting an example for transparency and accountability, Facebook will release the minutes of meetings that concern changes to the community standards, and will report on the extent to which those standards are working for its users.

“We plan to expand this work to share more information on our policy-making and appeals processes, as well as working on additional research projects,” said Zuckerberg.
