Social media platforms remove evidence of war crimes

Social media platforms remove online content that they classify as terrorist, violent, or hateful, Human Rights Watch said in a report released today. In doing so, they risk making this content unavailable for investigations into serious crimes, including war crimes. It is understandable that the platforms remove content that incites or promotes violence, but they should ensure that the material is archived so that it can, if necessary, be used to hold those responsible to account.

The 42-page report "Video Unavailable: Social Media Platforms Remove Evidence of War Crimes" urges all stakeholders, including social media platforms, to work together to develop an independent mechanism to preserve potential evidence of serious crimes. All parties involved should ensure that the content remains available for national and international criminal investigations, as well as for research by non-governmental organizations, journalists, and academics. Human rights groups have been pushing social media companies since 2017 to improve transparency and accountability around content removal.

"Some of the content that Facebook, YouTube and other platforms are removing is critical and irreplaceable evidence of human rights violations," said Belkis Wille, crisis region researcher at Human Rights Watch. "As prosecutors, researchers and journalists increasingly rely on photos and videos posted publicly on social media, these platforms should do more to ensure that access to potential evidence of serious crimes is preserved."

Social media content, especially photos and videos posted by perpetrators, victims, and witnesses of human rights abuses, has increasingly come into focus in prosecutions of war crimes and other serious crimes, including at the International Criminal Court (ICC) and in national proceedings in Europe. This content also helps the media and civil society document atrocities and other human rights violations, such as a chemical weapons attack in Syria, the brutal crackdown on protests by security forces in Sudan, and police violence in the United States.

For this report, Human Rights Watch interviewed seven people working in civil society organizations, three lawyers, two archivists, a statistician, two journalists, a former prosecutor with experience in international courts, five people involved in international investigations, three national law enforcement officers, a European Union official, and a Member of the European Parliament.

Human Rights Watch also reviewed content from Facebook, Twitter, and YouTube that it has cited in its reports since 2007 to support allegations of human rights violations. Of a total of 5,396 pieces of content referenced in 4,739 reports (the vast majority published within the last five years), 619 (or 11 percent) have been removed.

In letters sent to Facebook, Twitter, and Google in May 2020, Human Rights Watch provided links to the removed content and asked whether it could regain access to the material for archiving purposes. None of the companies complied with this request.

In recent years, social media companies like Facebook, YouTube, and Twitter have stepped up efforts to remove posts from their platforms that they believe violate their policies, community guidelines, or terms of use. This includes content classified as terrorist or glorifying violence, as well as hate speech, hateful conduct, and threats of violence.

Companies remove posts that are flagged by users and reviewed by content moderators. Increasingly, however, they also use algorithms to identify and remove objectionable posts. In some cases, this happens so quickly that content is taken down before any user has seen it. Governments around the world have encouraged this trend and urged companies to remove dangerous content as quickly as possible. It is unclear whether, and for how long, social media companies store the various types of content they block or remove from their sites.

Companies are entitled to promptly take down content that incites violence, otherwise harms individuals, or threatens national security or public order, as long as the standards they apply comply with international human rights and due process principles. However, permanently removing such content can make it unavailable for criminal investigations, preventing it from being used to hold those responsible for crimes to account.

There is still no mechanism in place to preserve and archive removed social media content that could provide critical evidence of human rights violations, nor is there a mechanism to ensure access for those investigating international crimes. In most countries, national law enforcement agencies can use search warrants, subpoenas, and court orders to compel social media companies to release the content. International investigators, however, have limited ability to access the content because they lack the necessary powers.

Independent organizations and journalists have long played an important role in documenting atrocities around the world, especially in cases where the relevant judicial authorities have conducted no investigations. In some cases, this documentation has led to legal proceedings. However, these organizations also have no way to access removed content, and, like criminal investigators, they never learn of content that automated systems remove before it is even visible to users.

A European law enforcement officer investigating war crimes told Human Rights Watch that "removed content is now part of my daily work. I am constantly confronted with potentially crucial evidence that I can no longer access."

Holding individuals accountable for serious crimes can deter future human rights violations and promote respect for the rule of law. Prosecutions can also help restore victims' dignity by recognizing their suffering, and they create a historical record of events, which can guard against revisionism by those who deny that atrocities were committed. International law obliges states to prosecute genocide, crimes against humanity, and war crimes.

It is critical that social media companies and all relevant stakeholders work together to develop a plan for an independent mechanism to preserve and archive content removed from social media platforms. In such an archive, the content should be properly catalogued, and access for research and investigation purposes should be guaranteed, in accordance with human rights and data protection standards.

In parallel to these efforts, social media platforms should be more transparent about their existing methods of removing content, including their use of algorithms. They should ensure that their systems are not overbroad or biased, and that they provide meaningful ways to appeal content removals.

"We recognize that the task for social media companies is not easy, including striking the right balance between protecting freedom of expression and privacy and removing content that can cause serious harm," said Wille. "Consultations drawing on the experience of other historical archives could lead to a real breakthrough. They could help the platforms protect freedom of speech and public safety while ensuring that accountability efforts are not hampered."