Facebook announced new measures on Wednesday, August 15th, to tackle hate speech in Myanmar amid criticism that the social media platform is not doing enough to counter the ethnic violence against the Rohingya Muslim minority.
In a blog post, the company said employees travelled to Myanmar over the summer to gain a clearer understanding of the situation on the ground.
According to the Washington Post, Facebook has also hired more than 60 Myanmar-language experts to review content and plans to expand that number to 100 by the end of the year.
The initiative came as lawmakers, human rights activists and the United Nations slammed Facebook's role in the Myanmar crisis.
Meanwhile, Facebook has vowed to step up its defence against the spread of controversial or false information on its network.
Facebook product manager Sara Su said that human reviewers alone cannot catch all harmful content, which is why much of Facebook's effort relies on artificial intelligence (AI).
Analysts warn, however, that AI is far from capable of monitoring and assessing hate speech or false information. Facebook CEO Mark Zuckerberg has said it will take five to ten years to train AI to recognise such nuances.
The company said it is also implementing in Myanmar a recently updated policy addressing “credible violence,” which sets standards for removing content that has the “potential to contribute to imminent violence or physical harm.”