Michael Baker

Can a new ‘webfare state’ solve the crisis of trust and truth online?

It was Beveridge’s identification in 1942 of “five evils” afflicting the nation that led to the creation of the UK’s welfare state, and the LSE’s Truth, Trust and Technology Commission has used the same framework to map the problems caused by our modern crisis of trust and truth online.

It demonstrates the astounding progress we have made as a society that the five modern ills the LSE identifies – confusion, cynicism, fragmentation, irresponsibility and apathy – compare so favourably with the problems of squalor, ignorance, want, idleness and disease that Beveridge sought to banish.

Nevertheless, the technological revolution has reshaped our communications networks and torn apart our media model to such an extent that it challenges our ability to have an informed – and reasoned – debate. The LSE’s report is an exhaustive review of the evidence, and its description of the problem provides a compelling framework for policymakers, technology companies, media and civil society to use as a vehicle for action.

The LSE’s proposed answer, a new “Independent Platform Agency”, is in step with public opinion and with other recent reviews and reports. More than half of those surveyed in Ofcom research say they want to see tighter regulation. But the test of any proposed solution is whether it can tackle the problems of today and confront the dangers of tomorrow.

A new centralised institution to protect our ‘webfare’ seems anachronistic in our networked era where we trust our peers more than governments and institutions, which were found so wanting in their oversight of the banks, the clergy and national celebrities.

The LSE’s proposed agency would observe trends in news and information and provide policy advice to regulators, including recommending tighter legislation if platforms like Facebook don’t act to improve the quality of information online. It would also coordinate media literacy programmes for children and adults. It’s a top-down approach which characterises citizens as highly passive. Instead of providing frameworks to help them to act – and in so doing reduce confusion and apathy – it concentrates on what platforms, governments and regulators can do on their behalf.

Imagine how consumers might have responded differently to the Facebook-Cambridge Analytica scandal if, unhappy with the way their data was being sold or the way they were being targeted by news or ads, they could have easily ported their information to an alternative platform that paid for their data. While no such platform exists yet, new data rules such as the GDPR enshrine users’ right to move their data. Legislating to give users ownership of that data, and the right to be paid for it, might boost competition to platforms like Facebook.

Not enough credit is given to the public – and our existing institutions – for our ability to adapt. As the LSE authors recognise, one positive outcome of a greater understanding of ‘fake news’ is that people are becoming more likely to question what they see online. And media organisations staffed by professional journalists are reinventing themselves as brands people can trust as sources of authoritative, verified news. People working in public relations are finding that it is still possible to debunk “Twitter myths”. People rightly expect evidence and sources, and the online community does step up to correct others when an untruth is repeated elsewhere.

What’s more, one of the great benefits of ‘fragmentation’ is the dramatic collapse in the cost of publication and broadcasting, putting both within everyone’s reach. This enables the public to hold institutions – and each other – to account in previously unimaginable ways. You only need to look at the way Uber drivers and riders rate each other to recognise that we are witnessing an accountability and peer-review revolution. If anything, this encourages people to be more responsible rather than less.

Are we seeing market forces and consumer demands reshaping networks in response to the problems that have arisen through our communications revolution? And if so, should we concentrate our efforts on areas where the market and individual user are poorly equipped to protect themselves?

The reality is that an era of deepfakes is dawning, in which organised deception, particularly using video, will be within reach of everyone from practical jokers looking for cheap laughs to foreign powers attempting to sow discord. These fakes are almost impossible to detect without technology or expertise, and this is therefore where users and citizens are most vulnerable. This is the new – arguably more dangerous – frontier in the battle for truth and trust online. Tackling deepfakes is in the interests of all responsible actors and could provide common ground to bring agencies together. While we should be wary of encouraging an arbitration of ‘truth’, if something can be shown to be false it would not be unreasonable to know where it was posted from and when it arrived online. Perhaps if there is to be a new ‘webfare state’, this is where it should begin.