Today’s papers report that the Government is planning to abandon controversial plans to ban social media companies from hosting “legal but harmful” speech.
Instead, social media companies will need to give users the option to screen out harmful (but legal) content that they don’t wish to see. This is part of a broader effort to make Facebook, Twitter et al actually enforce their terms of service, especially with regard to things like minimum age limits.
Critics of the move claim that by watering down the Online Safety Bill in this way, the Government is reducing protections for users online; Lucy Powell reportedly said that it “gives a free pass to abusers and takes the public for a ride.”
Honest opponents of Nadine Dorries’ plans to regulate legal (but harmful) speech should acknowledge that they’re probably right, at least in part. But so is Paul Scully when he says that the change “will stop Silicon Valley using the Bill as an excuse to delete legitimate opinions or censoring people with whom they don’t agree.”
There is no perfect balance to be struck between freedom and security, in speech as anywhere else. Whatever relative priority we collectively decide to give each, the result will always either restrict what many consider to be legitimate speech or expose others to what they at least consider to be unacceptable offence, and in many cases real distress.
Where we draw the line between freedom and security is an extremely important decision that says a lot about – and substantially impacts – what sort of society we are. Which is why that debate ought to be conducted openly, and the outcome legislated for clearly by our political institutions.
The whole concept of “legal but harmful” blurred this line unacceptably. It allowed politicians to offer up rubrics about freedom of speech whilst outsourcing the enforcement of the nation’s actual speech codes to private companies. As a consequence, the exact location of those boundaries would be opaque, enforced differently on different media, and always upheld (or not) with half an eye on the bottom line.
Social media companies are private organisations; there is a separate (and important) debate about the extent to which they should be free to impose their own limits on speech when they constitute such an important part of the modern public square.
But the debate about the Online Safety Bill was not about that; it was about content which the State feels strongly enough should not be online that it mandates those companies to remove it. It seems fair to argue that if the Government feels so strongly about such content, it should step up and have a proper debate about making it illegal.
If it is not prepared to do that, then it has no business tasking corporations to shadow-ban such content on the sly. Given the growing centrality of the internet to political life, “you can say what you want, but not online” is no foundation for genuine freedom of speech.