Disturbing Videos, Images and Posts Proliferate on Google, Facebook, Instagram and YouTube Portraying Violence
Violent Images Run Contrary to Facebook CEO Mark Zuckerberg’s Claim That Company Takes Down “99 Percent” of Terrorist Content Before Users Can See It
May 17, Washington, DC – After months of promises from digital platforms that they would stop being a megaphone for terrorists and their supporters, a new investigation has found that Google, Facebook, Instagram and YouTube continue to host hundreds of violent and disturbing terrorist videos, images and posts. Many remain online for weeks and are seen by thousands of users – giving terrorists a valuable platform to spread hate and recruit members.
The “Fool Me Once…” report, the result of an investigation by the Digital Citizens Alliance and the Global Intellectual Property Enforcement Center (GIPEC), found posts showing violence against terrorist targets, instructions on how to manipulate digital platforms to spread terror messages online, and recruiting videos. Among the disturbing and offensive videos, images and posts found on platforms are:
- Numerous images of violence, including beheadings, victims being thrown off rooftops, and the caging, maiming and torture of terrorist targets.
- Posts on how to use digital platforms to serve the terrorist agenda. In one instance, terrorists posted “A Guide to Social Media Platforms and how to manipulate each Social Media Platform.”
- Recruiting videos promoting terrorist groups. In one example, Digital Citizens monitored a YouTube video promoting the Islamic State and watched as its viewership increased from 15,000 on April 26 to over 34,000 by May 8.
While Facebook CEO Mark Zuckerberg claimed during recent congressional testimony that the company’s systems take down “99 percent” of terrorist content, the investigation found numerous examples of disturbing images. In one example detailed in the report, a May 1 post shows the horrifying ways in which the Islamic State tortures, maims, and executes its victims.
“If digital platforms are unable to effectively police their own content when they are under scrutiny from Congress and state policymakers and facing serious trust issues with their users, it’s time for someone else to do it for them,” said Tom Galvin, executive director of the Digital Citizens Alliance. “That means Congress, state attorneys general and regulators need to step in and protect Americans. Fool us once, shame on the platforms. Fool us twice, shame on us.”
“Social media companies such as Google and Facebook say there is no place for terrorism and hate speech on their platforms,” said Eric Feinberg, CEO of GIPEC. “They continue to tell the public that they are employing human and technological solutions to stop terrorist accounts and posts on their platforms. This research shows that despite their promises, their efforts are not cutting it and they still have a long way to go.”
The investigation underscores that the business model utilized by these digital platforms is at the root of offensive, illegal and illicit content. That business model – to harvest user information so it can be sold to advertisers and third parties and to enable nearly anyone to post content with essentially no ramifications – creates the ideal environment for terrorists, criminals and bad actors.
A recently surfaced 2016 memo, in which a top Facebook executive stated that the company shouldn’t let negative consequences get in the way of its mission, demonstrates that the company is focused on its business vision above all else. “Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people,” wrote Facebook Vice President Andrew “Boz” Bosworth.
These flaws have led to a downturn in trust in digital platforms. A majority of Americans view these digital platforms as irresponsible companies that should be regulated, and 71 percent of Americans said their trust in the platforms had dropped in the last year.
While Digital Citizens holds out hope that digital platforms will address the issues of offensive, illegal and illicit content, it also believes it’s time for Congress, regulators and state policymakers to look at steps they can take. It seems the window for companies to act on their own to satisfy the public is closing.
There are multiple avenues for government to pursue and it could be a mix of federal and state actions in the United States:
- Congress could revise privacy and other laws that govern how content is treated online to force platforms to take greater legal responsibility.
- Regulators and watchdog agencies such as the Federal Trade Commission could investigate to understand just how much digital platforms know about the offensive and illegal content on their platforms. There is reason to wonder: in 2011, Google paid $500 million to settle a U.S. investigation into whether the company knew, as early as 2003, that rogue online pharmacies were illegally marketing medications through Google AdWords.
- States could intervene as they have with big tobacco and other industries – especially as we learn the impact of digital platforms on young teens. For example, state attorneys general could work together to negotiate agreements in which digital platforms agree to take more steps to block illegal and offensive content.
And in Europe, new regulations are due to take effect this month to protect user privacy, and other efforts are underway to force digital platforms to be more accountable for the content that appears on their sites.
“After everything that has happened in the last two years, the discovery of offensive and hateful Jihadi videos and posts on leading digital platforms is an indicator that perhaps the companies simply don’t have the capability or the will to police themselves,” added Galvin. “In that case, someone will have to do it for them.”