Toby Young is the General Secretary of the Free Speech Union.
In an article published last week, Chris Philp says the House of Commons committee that is scrutinising the Online Safety Bill has heard evidence from a ‘wide range of people… welcoming this pioneering internet safety law’.
Had the committee invited the Free Speech Union to give evidence – or, indeed, Index on Censorship, Big Brother Watch, or any other pro-free speech organisations – the response would have been less enthusiastic.
I take issue with the minister’s claim that the Bill will ‘enhance’ freedom of speech.
As is well known, the Bill will require the big social media companies like YouTube, Facebook and Twitter to address ‘legal but harmful’ content, including ‘content that is harmful to adults’. If they fail to do so, they will face fines of up to ten per cent of their annual global turnover, which in Facebook’s case amounts to $11.7 billion, based on its 2021 revenue.
Philp rightly points out that while these ‘Category 1’ providers will have an obligation to stipulate how they intend to respond to ‘legal but harmful’ content in their terms and conditions, they will not be required to remove it. But he is being disingenuous when he says they could ‘choose to allow it on their platforms’.
The different responses available to the providers in the Bill are described in such a way as to deter them from taking such a laissez-faire approach. These are the four choices, as set out in Clause 13:
- (a) taking down the content;
- (b) restricting users’ access to the content;
- (c) limiting the recommendation or promotion of the content;
- (d) recommending or promoting the content.
Choosing to allow content that is ‘legal but harmful’ to adults on their platforms is not one of the four options. Rather, if the companies do not want to remove or restrict this content they will have to recommend or promote it.
It is frankly inconceivable that YouTube, Facebook or Twitter will choose option (d) after the Government has designated the content in question ‘harmful’.
Had the Free Speech Union been invited to give evidence to the Bill committee it would have urged the Government to add a fifth option – ‘do nothing’. But I doubt that would have been taken up because the Government is clearly hoping it can get social media companies to remove tendentious material without making it unlawful.
In this way it can claim, as Philp does, that it is making the internet ‘safe’ without trespassing on our free speech.
This goes to the heart of what’s wrong with this Bill. Not only is the concept of ‘legal but harmful’ content a weaselly way of trying to restrict free speech – of trying to square a circle that cannot be squared – but it is a breach of the fundamental principle of English Common Law that unless something is explicitly prohibited it is permitted.
It introduces a new grey area in which certain speech is deemed by the Government to be harmful and social media companies are encouraged to remove it, but it is not explicitly forbidden.
This sets a dangerous precedent – will the concept of ‘legal but harmful’ be extended to what we’re allowed to say in print? – and is bound to have a chilling effect on free speech.
To make matters worse, there is no concrete, non-circular definition of content that is ‘legal but harmful’ to adults in the Bill. Rather, this will be included in supplementary legislation – a statutory instrument to be brought forward by the Culture Secretary.
We do not yet know what content will be included in this Index Librorum Prohibitorum, which makes informed discussion of the Bill difficult, but in the DCMS press release about the latest version of the Bill (which uses the phrase ‘legal but harmful’ eight times) ‘harassment’ is given as an example.
That set alarm bells ringing. The Free Speech Union has just successfully concluded a protracted dispute with Essex University over its anti-discrimination and harassment policies which were invoked to no-platform two feminist law professors three years ago.
After we threatened Essex with a lawsuit, it reluctantly agreed to amend its policies so they cannot be used in the same way again. But the practice of shutting down opposition to trans rights dogma on the grounds that challenging it constitutes ‘harassment’ of trans people is widespread across the higher education sector.
I am concerned that if ‘harassment’ is designated as ‘legal but harmful’ in the secondary legislation, that will be exploited by trans rights activists to get social media platforms to censor people defending women’s rights.
Even if the statutory instrument brought forward by Nadine Dorries is not overly censorious, what guarantee do we have that her successors will be equally restrained?
The power the Bill confers on the Secretary of State at DCMS to draw up a list of ‘legal but harmful’ material he or she would like social media platforms to restrict is likely to be taken advantage of by a future Labour government to stifle criticism of its agenda. In that respect, this Bill is a hostage to fortune.
In his article, Philp points to the protections the Bill will provide for free speech, namely, additional safeguards for journalistic content and content of democratic importance, as well as an obligation on Category 1 providers to ‘have regard’ for freedom of speech. ‘These provisions taken together will strengthen free speech online, not undermine it,’ he writes.
But these clauses are woefully inadequate. In a recent article for The Critic, I explained how the provisions in the Bill designed to protect journalistic content would require only the slightest of tweaks to bring in state regulation of the press through the back door – something John Whittingdale has also expressed concern about.
And ‘content of democratic importance’ is likely to be defined quite narrowly, just protecting commentary on current party political disputes, not contributions to broader, cultural debates.
A parent wanting to challenge the promotion of gender ideology at her child’s primary school would in all likelihood not be protected by this clause.
As for the duty to ‘have regard’ for free speech, any parliamentarian reading this will know that ‘have regard’ is the least onerous of legal duties – the kind of meaningless concession that troublesome backbenchers are fobbed off with by wily ministers.
If Twitter considers the free speech implications of permanently banning JK Rowling from its platform and then completely disregards them, it would be fully discharging its ‘have regard’ duty.
Had I been asked to give evidence to the committee, I would have urged the Government to strengthen this clause so it can compete on a level playing field with the duty the Bill imposes on social media companies to protect their adult users from ‘legal but harmful’ content.
But it would have fallen on deaf ears. The Online Safety Bill is designed to strong-arm social media companies to remove vast swathes of lawful speech, and to pretend otherwise is misleading.