
Facebook will remove content that incites violence in the real world

The last two years at Facebook have brought the Cambridge Analytica scandal, the milestone of 2 billion active users, the rise of WhatsApp and Instagram, and the ubiquitous spread of ephemeral content of the kind that marks a generational shift. They have also, undoubtedly, been the years in which Zuckerberg has become increasingly cornered by the controversy surrounding disinformation and disguised propaganda on his social network.

In the middle of that debate, an interview Facebook's CEO gave to Recode has put him back at the center of controversy, thanks to his statements about Facebook users who deny the Holocaust:

“I do not think they are intentionally getting it wrong.”

The remark came in a specific context: Facebook's decision not to ban Infowars, a far-right outlet whose output includes, among other things, conspiracy theories. Both the context and the phrase rekindled the controversy over Facebook's role in freedom of expression, disinformation and hate speech.

Facebook, as a company and as a social communications platform, has taken a position that shows Zuckerberg's statements were not made in the heat of the moment. A graphic, presented during a session with journalists at its Menlo Park campus, better explains how Facebook gauges the level of disinformation in a publication.


  • True. Objectively truthful information with no disinformation interest behind it. No action is taken against it.
  • Mistaken. False information, but with no disinformation interest behind it. The reach of these publications is reduced.
  • Hoax. This covers hate speech and outright fabricated news, such as a photomontage designed to stir controversy by passing itself off as authentic. Facebook may step in and remove this content on the grounds that it violates its terms of use.
  • Propaganda / Cherry-picking. The most delicate quadrant, in Facebook's own words: it must be weighed without losing sight of freedom of expression, since such pieces are not false in themselves, although they do carry a bias. The user is informed about the context of the outlet that published the piece, or is shown related articles on the topic so that it can be contrasted with other media.

Where would the case Zuckerberg commented on fit, that of offensive speech based on personal beliefs? According to him, these are unintentional mistakes, so removal would not be considered. But there is a “but”.

Facebook's difficult approach and its immersion in the real world

Zuckerberg's “but”, under which this offensive content could be removed, applies when it endangers people's physical safety in the real world, and not only in messages exchanged on Facebook.

“We are moving towards a policy of taking down disinformation that signals or induces violence; if it causes real harm, real physical harm, or if it attacks people, that content should not be on the platform.”

This is an exceptionally complicated undertaking. For two years Facebook has struggled to mitigate a problem that affects its platform exclusively. Now it also wants to watch what happens in the physical world in order to act accordingly. Recall, for example, the Pizzagate case, in which a man was sentenced to four years in prison for opening fire in a pizzeria that was the epicenter of a hoax… spread on Facebook. Would it also act preemptively against false content like this?

The measure will begin to be applied in Sri Lanka, a country that ended up blocking access to social networks after outbreaks of anti-Muslim violence

Facebook has not detailed what level of “physical danger” someone must be exposed to before it will consider removing the content involved, but it has said it is working with civil society groups to understand how this misinformation can serve as fuel for local tensions. Even so, the company admits it still has to establish concrete criteria for determining what constitutes violence and what does not.

Zuckerberg's company has specified that it will begin applying this measure in Sri Lanka, and that it may later study the need to extend it to other countries. The choice is not accidental: a few months ago, the Sri Lankan government blocked social networks across the country and accused Facebook of spreading hate messages (videos inciting violence between Buddhists and Muslims) following deadly riots there.

We still need to determine what constitutes violence and what does not, and in which other countries we could act

The company has also been challenged over its role in Myanmar, where Facebook was accused of playing a key part in spreading hate speech at a time when 650,000 Rohingya had to flee to Bangladesh. The UN saw signs of genocide in the actions of the Burmese army, and the Buddhist clergy found in the social network a loudspeaker for its Islamophobic discourse.

With this measure, Facebook seems willing to acknowledge its responsibility as a transmission belt in extreme scenarios: “We are starting to work in countries where we have seen recent cases in which misinformation has been perceived to encourage physical violence beyond the online sphere,” the company said. How the policy evolves, and in which other countries it may end up being applied, remains an open question today.
