
Sticks and Stones in the Information Age 

Since November 2017, Molly Russell’s death has pressured social media companies to ‘think of the children’.

The inquest into her death, concluded this September, found that depressive content viewed by 14-year-old Molly contributed to her suicidal thoughts. Her experience resonates with many. A national survey conducted by The Cybersmile Foundation indicates that 89% of 16-24-year-olds feel social media negatively impacts their mental health. And as Molly’s father, Ian, has so bravely shown, parents fear how easily their children can access harmful content. But to date, no social media company has adequately answered the following question:

“How best to safeguard the vulnerable from online harms, on platforms hosting user-generated content?”

Just over a week ago, Culture Secretary Michelle Donelan (the seventh office holder since Molly’s death) announced that content encouraging self-harm is to be criminalised. Companies will be required to remove such content from their platforms or face multi-million-pound fines, and identifiable users sharing it face prosecution. These are positive steps towards tougher safeguards that work to prevent a repeat of Molly’s death.

But this was just one of several amendments made to the government’s Online Safety Bill.  

The Bill has endured a laboured development, beset by long delays and strong disagreements. Its intentions: to introduce an independent regulator to set new safety standards, and rules to minimise the visibility of harmful content. It is the first attempt globally to regulate a wide spectrum of online harms in one framework. This spectrum includes harms with clear definitions, like terrorist content, hate crimes, or encouraging self-harm. But it also covers harms with less clear definitions, like disinformation, intimidation, or trolling.

For critics, to “think of the children” is a rhetorical tactic promoting censorship by stealth. Where should a line be drawn on political opinions, and who has the right to draw it? Surely, all user-generated content could be considered harmful, to some degree, by some users. During this year’s first Conservative Party leadership election, Kemi Badenoch tweeted, “We should not be legislating for hurt feelings.” Another amendment made to the Bill this week bolsters Badenoch’s narrative.

The controversial “legal but harmful” provision has been replaced: a duty of care that would have required companies to introduce mitigation processes for any legal content considered harmful by any user. Now, companies need only remove content that is illegal or that breaches their own terms of service, and provide users with filtering tools for greater control over what they view. A compromise that still leaves children exposed to online harms.

Other amendments made this week include new child risk assessments that companies must publish, and plans to enforce age-appropriate protections. But companies’ capability to remove harmful content has been questioned. In November, Meta cut 11,000 jobs soon after Twitter announced it had halved its workforce. These cuts at the largest companies, made amid economic uncertainty, have already damaged their content moderation teams.

Their interest in the Bill will lie with the new costs and complexities incurred in regulating content. The replacement of the legal but harmful provision will come as a relief to companies; they are no longer expected to be judge, jury, and executioner for content with less clear definitions. But taking the other amendments into account, they are still expected to do more with less.

Recent displays by executive leaders have also cast doubt on companies’ credibility in ensuring online safety. Elon Musk’s desire to see a ‘counter-narrative against the woke mind virus’ stands at odds with safeguarding expectations. His amnesty plan for suspended Twitter users certainly seems irresponsible. But in a blog published this week, Twitter assured its users that the company’s terms of service will continue to be upheld. Automated detection technology and education will play an important role in encouraging positive conduct amongst bad actors. But only time will tell if companies can reduce the visibility of harmful content on their platforms and mitigate its negative impacts.

The Bill in its current form feels like a missed opportunity. The legal but harmful provision was the means to better and more proactive company governance, one that could have been practised globally. Can adults, let alone children, use filters to shield themselves from troll content? Are companies’ terms of service comprehensive enough to mitigate instances of intimidation? With online harms proving to do more than hurt feelings, more must be done to protect those most vulnerable in the information age.