
Instagram Suicides, Facebook Violence and Twitter Censorship: Who’s to Blame?


As UK MPs Propose Regulations on Social Media to Curb Suicide and Extremism, We Discuss the Best Way to Tackle These Issues

Discussion of the effects of social media has intensified since the parents of 14-year-old Molly Russell claimed that Instagram was partly to blame for her suicide. They believe that access to images and posts about self-harm and suicide encouraged Molly to take her own life in 2017, and that Instagram should have done more to remove graphic content.

But they’re not the only ones: a growing number of politicians and members of the public blame social media for wider problems in society and call for greater government regulation and oversight.

In January, a cross-party group of MPs published a report titled Impact of Social Media and Screen-Use on Young People’s Health, which suggests that social media companies should have a legal duty of care to their users. The report recommends:

  • Establishing a regulator to minimise harm from social media and enforce relevant laws.
  • Implementing consistent rules and policies across social networks, search engines and video-sharing sites.
  • Creating a new partnership between the government, tech companies and law enforcement to tackle online child exploitation.

What the report fails to mention is that these ideas are nothing new. Similar measures have been in place for years and are far from perfect.

Facebook employs dedicated teams of moderators and relies on self-reporting from users, but violent and extremist content still slips through:

A prime example is the 2017 Chicago torture incident, in which a young autistic man was gagged, bound and tortured while his attackers broadcast the assault to over 16,000 people on Facebook Live. The video stayed up for 30 minutes before being taken down, all while the victim was physically attacked and racially abused by his four attackers.


If Facebook’s moderation policy had worked as intended, the footage would have been taken down and local police notified immediately. Clearly, the moderators were not able to do their job, and the attackers were able to avoid immediate arrest.

As more and more people use social media to share content, discuss opinions and talk to each other, it’s just not realistic to monitor and moderate it all.

Similarly, it would be impossible to implement consistent rules across social networks, search engines and video-sharing sites, as much of their policy rests on subjective judgements (e.g. what counts as ‘hate speech’) or on national laws, which vary greatly. Twitter is a prime example: it enforces a number of rules on its platform that are not part of international law or official company policy.


For example, Twitter recently notified a number of users that they had violated Pakistani laws on blasphemy and political speech, even though they were not Pakistani citizens and were not even speaking about Pakistan. One notable example was a tweet from the progressive Muslim scholar Imam Mohamad Tawhidi asking Australian police to investigate extremism in mosques following a deadly knife attack in Melbourne in November.


He responded by pointing out that the laws of the Islamic Republic of Pakistan shouldn’t affect him:

Tawhidi, Mohamad (@imamofpeace):

“Twitter’s Legal Department has sent me an Email informing me that I have broken Pakistan’s Law. I have attached the Tweet in question, so judge for yourselves. I am not from Pakistan nor am I a Pakistani citizen. Pakistan has no authority over what I say. Get out of here.”

3 December 2018, 9:07 AM. Tweet.

If Twitter is willing to enforce Pakistani law across the globe, how can we hope to impose universal regulations on social media? Rules should remain specific to individual organisations or nations.

While most people can agree that governments partnering with tech companies to influence and control speech on a global scale is bad, partnerships designed to help law enforcement, for example, are usually received positively. The ‘Impact of Social Media’ report also calls for new partnerships between the UK government and tech companies, aimed at tackling online child exploitation.


However, critics point out that the UK government doesn’t have the best record on combatting child abuse and exploitation, especially regarding FGM (Female Genital Mutilation) and child sexual abuse. While FGM has been banned in the UK since 1985, it was only in 2012 that the first case was brought to court, and “It is thought over 60,000 women in the UK have been mutilated and more than 20,000 girls are at risk.”


Similarly, the UK government is perceived as soft on paedophiles; many high-profile perpetrators worked within the BBC itself. The problem was (and potentially still is) so severe that it required a multi-million-pound investigation, now known as Operation Yewtree. It doesn’t help that the publicly funded British Broadcasting Corporation continued to defend paedophilia as a “sexual orientation” in opinion pieces like ‘Paedophiles need help, not condemnation – I should know’ as recently as 2017.

The failings of the UK government are too many to detail here, but the common theme is failure on the part of officials and law enforcement, NOT a lack of co-operation from tech companies.

It is my opinion that the measures proposed in the new Impact of Social Media and Screen-Use on Young People’s Health report don’t address the real causes of the issues it discusses.


It is true that suicide is a growing problem in the UK, especially among men aged 18-40, among whom it is the leading cause of death. But is banning images of scars on Instagram going to stop people from taking their own lives?

Not while the root causes of depression and mental illness persist. What we need are better mental healthcare services and a society that makes suicidal people (especially young men) feel valued.


It is true that social media giants like Twitter, Facebook and Google are not as transparent as they should be, and are rarely held accountable for their actions. But is more moderation and observation going to fix this, when the existing problems were caused by the companies’ own moderation teams acting without accountability?

What we need is more transparency, and better means of penalising social media companies for causing active harm or unfairly privileging one ideology or narrative in what is the public square of our era.


It is true that child exploitation is a serious problem, especially online, as more parents allow their children to browse the internet and play online with little education or supervision. But is founding a (likely very expensive) government partnership the best way to tackle it when the problem is linked to modern parenting?

It’s not right to lay the blame for child exploitation solely at the feet of social media companies when parents are partly responsible. Vulnerable children should not be allowed online without proper guidance and supervision, and this needs to be factored into any effort to tackle child exploitation.

In short, social media companies are NOT politically neutral, NOT effective at combatting extremism and mental health issues, and NOT transparent and accountable, and it’s very unlikely they ever will be.

So, why do we keep trying to make that happen with more government influence?


The measures proposed in Impact of Social Media and Screen-Use on Young People’s Health are not going to solve the issues of suicide, extremism or child abuse. How could they, when they fail to address the root causes of these issues?


Banning graphic images won’t reduce suicides, more moderation isn’t going to solve issues with poor moderation, and more government task-forces aren’t going to solve issues that have persisted through previous government task-forces. It doesn’t take a genius to notice this.


What these propositions will do, however, is make the politicians who proposed them look good. After all, they are taking action to tackle serious issues. Who cares if the policy actually makes things better when you get to scribble your name in the margins of history?
