Social media firms should face fines for hate speech failures, urge UK MPs
By Natasha Lomas, TechCrunch
Social media giants Facebook, YouTube and Twitter have once again been accused of taking a “laissez-faire approach” to moderating hate speech content on their platforms.
This comes after a ramping up of political rhetoric against social platforms in the UK in recent months, following a terror attack in London in March — after which Home Secretary Amber Rudd called for tech firms to do more to help block the spread of terrorist content online.
In a highly critical report looking at the spread of hate, abuse and extremism on Facebook, YouTube and Twitter, a UK parliamentary committee has suggested the government look at imposing fines on social media firms for content moderation failures.
It’s also calling for a review of existing legislation to ensure clarity about how the law applies in this area.
“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe,” the committee writes in the report.
Last month, the German government backed a draft law which includes proposals to fine social media firms up to €50 million if they fail to remove illegal hate speech within 24 hours after a complaint is made.
A European Union-wide Code of Conduct on swiftly removing hate speech, which was agreed between the Commission and social media giants a year ago, does not include any financial penalties for failure — but there are signs some European governments are becoming convinced of the need to legislate to force social media companies to improve their content moderation practices.
The UK Home Affairs committee report describes it as “shockingly easy” to find examples of material intended to stir up hatred against ethnic minorities on all three of the social media platforms it looked at for the report.
It urges social media companies to introduce “clear and well-funded arrangements for proactively identifying and removing illegal content — particularly dangerous terrorist content or material related to online child abuse”, calling for similar co-operation and investment to combat extremist content as the tech giants have already put into collaborating to tackle the spread of child abuse imagery online.
The committee’s investigation, which started in July last year following the murder of a UK MP by a far right extremist, was intended to be more wide-ranging. However, because the work was cut short by the UK government calling an early general election, the committee says it has published specific findings on how social media companies are addressing hate crime and illegal content online — having taken evidence for this from Facebook, Google and Twitter.
“It is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe,” it writes.
The committee flags multiple examples where it says extremist content was reported to the tech giants but these reports were not acted on adequately — calling out Google, especially, for “weakness and delays” in response to reports it made of illegal neo-Nazi propaganda on YouTube.
It also notes the three companies refused to tell it exactly how many people they employ to moderate content, and exactly how much they spend on content moderation.
The report makes especially uncomfortable reading for Google, with the committee directly accusing it of profiting from hatred — arguing it has allowed YouTube to be “a platform from which extremists have generated revenue”, and pointing to the recent spate of advertisers pulling their marketing content from the platform after it was found being displayed alongside extremist videos. Google responded to the high-profile backlash from advertisers by pulling ads from certain types of content.
“Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves,” the committee writes.
“If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar content to stop extremists re-posting or sharing illegal material under a different name. We believe that the government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.”
The committee suggests social media firms should have to contribute to the cost of policing their platforms, which currently falls on the taxpayer — pointing to how football clubs are required under UK law to pay for policing inside their stadiums and the immediate surrounding areas as an equivalent model.
It is also calling for social media firms to publish quarterly reports on their safeguarding efforts, including —
- analysis of the number of reports received on prohibited content
- how the companies responded to reports
- what action is being taken to eliminate such content in the future
“It is in everyone’s interest, including the social media companies themselves, to find ways to reduce pernicious and illegal material,” the committee writes. “Transparent performance reports, published regularly, would be an effective method to drive up standards radically and we hope it would also encourage competition between platforms to find innovative solutions to these persistent problems. If they refuse to do so, we recommend that the government consult on requiring them to do so.”
The report, which is replete with pointed adjectives like “shocking”, “shameful”, “irresponsible” and “unacceptable”, follows several critical media reports in the UK which highlighted examples of moderation failures and showed extremist and paedophilic content continuing to spread on social media platforms.
Responding to the committee’s report, a YouTube spokesperson told us: “We take this issue very seriously. We’ve recently tightened our advertising policies and enforcement; made algorithmic updates; and are expanding our partnerships with specialist organisations working in this field. We’ll continue to work hard to tackle these challenging and complex problems”.
In a statement, Simon Milner, director of policy at Facebook, added: “Nothing is more important to us than people’s safety on Facebook. That is why we have quick and easy ways for people to report content, so that we can review, and if necessary remove, it from our platform. We agree with the Committee that there is more we can do to disrupt people wanting to spread hate and extremism online. That’s why we are working closely with partners, including experts at Kings College, London, and at the Institute for Strategic Dialogue, to help us improve the effectiveness of our approach. We look forward to engaging with the new Government and parliament on these important issues after the election.”
Nick Pickles, Twitter’s UK head of public policy, provided this statement: “Our Rules clearly stipulate that we do not tolerate hateful conduct and abuse on Twitter. As well as taking action on accounts when they’re reported to us by users, we’ve significantly expanded the scale of our efforts across a number of key areas. From introducing a range of brand new tools to combat abuse, to expanding and retraining our support teams, we’re moving at pace and tracking our progress in real-time. We’re also investing heavily in our technology in order to remove accounts who deliberately misuse our platform for the sole purpose of abusing or harassing others. It’s important to note this is an ongoing process as we listen to the direct feedback of our users and move quickly in the pursuit of our mission to improve Twitter for everyone.”
The committee says it hopes the report will inform the early decisions of the next government — with the UK general election due to take place on June 8 — and feed into “immediate work” by the three social platforms to be more pro-active about tackling extremist content.
Commenting on the publication of the report yesterday, Home Secretary Amber Rudd told the BBC she expected to see “early and effective action” from the tech giants.
For more on this story go to: https://techcrunch.com/2017/05/02/social-media-firms-should-face-fines-for-hate-speech-failures-urge-uk-mps/