PARLIAMENTARY DEBATE
Online Harms Consultation - 15 December 2020 (Commons Chamber)
Those interactions may be virtual, but they are causing real harm. More than three quarters of UK adults express concerns about logging on, while a declining number of parents believe the benefits for their children of being online outweigh the risks. Trust in tech is falling. That is bad for the public and bad for the tech companies, so today the Government are taking decisive action to protect people online.
Through our full response to the online harms White Paper, we are proposing groundbreaking regulations that will make tech companies legally responsible for the online safety of their users. That world-leading regime will rebuild public trust and restore public confidence in the tech that has not only powered us through the pandemic, but will power us into the recovery.
I know that this legislation is keenly anticipated on both sides of the House. I want to reassure hon. Members that, when drafting our proposals, I sought to strike a very important balance between shielding people, particularly children, from harm and ensuring a proportionate regime that preserves one of the cornerstones of our democracy—freedom of expression. I am confident that our response strikes that balance.
Under our proposals, online companies will face a new and binding duty of care to their users, overseen by Ofcom. If those platforms fail in that duty of care, they will face steep fines of up to £18 million or 10% of annual global turnover. A number of people, including Ian Russell, the father of Molly Russell, have expressed concerns about that point; I want to reassure him and Members of this House that the maximum fine will be the higher of those two numbers, and platforms will no longer be able to mark their own homework.
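The "higher of the two" cap reduces to a one-line rule. The following is a minimal illustrative sketch, not part of the statement itself: the function name and example turnover figures are hypothetical, and only the £18 million and 10% figures come from the announcement.

```python
# Illustrative sketch of the fine cap described in the statement:
# the higher of a fixed GBP 18 million or 10% of annual global turnover.
# Function name and example figures are hypothetical.

FIXED_CAP_GBP = 18_000_000   # fixed cap of GBP 18 million
TURNOVER_RATE = 0.10         # 10% of annual global turnover

def maximum_fine(annual_global_turnover_gbp: float) -> float:
    """Return the maximum fine: the higher of the fixed cap
    or 10% of annual global turnover."""
    return max(FIXED_CAP_GBP, TURNOVER_RATE * annual_global_turnover_gbp)

# A platform with GBP 1bn turnover faces a cap of GBP 100m,
# since 10% of turnover exceeds the GBP 18m fixed figure.
print(maximum_fine(1_000_000_000))  # 100000000.0
# A smaller platform with GBP 50m turnover faces the fixed cap.
print(maximum_fine(50_000_000))     # 18000000
```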
To hold major platforms to their responsibilities, I can also announce to the House that they will be required to publish annual transparency reports to track their progress, which could include the number of reports of harmful content received and the action taken as a result. This will be a robust regime, requiring those at the top to take responsibility. I can therefore confirm that we will legislate to introduce criminal sanctions for senior managers, with Parliament taking the final decision on whether to introduce them. Of course, we hope not to use those powers, and for tech companies to engineer the harm out of their platforms from the outset, but people should have no doubt that they remain an option and we will use them if we need to.
Together, those measures make this the toughest and most comprehensive online safety regime in the world. They will have a clear and immediate effect: a 13-year-old should no longer be able to access pornographic images on Twitter; YouTube will not be allowed to recommend videos promoting terrorist ideologies; and antisemitic hate crime will need to be removed without delay. Those are just a few examples, but the House will take a keen interest in the details of the legislation, so I shall lay out a few key areas of action.
Our first focus is on illegal content, including child sexual abuse, terrorism and posts that incite violence and hatred. Sadly, many Members present today have been the target of online abuse, some of which might have been illegal, such as threats of violence. Unfortunately, that is particularly true for female Members of the House. This is not a problem suffered only by people in the public eye; close to half of adults in the United Kingdom say that they have been exposed to hateful content online in the past year.
Under the new laws, all companies in scope will need to take swift and effective action to remove criminal posts—if it is illegal offline, it is illegal online. Users will be better able to report this abhorrent content and can expect to receive more support from platforms. Crucially, the duty of care will apply even when communications are end-to-end encrypted. Encryption cannot serve as blanket protection for criminals. Given the severity of certain threats, Ofcom will also be given powers to require companies to use technology proactively to identify and remove illegal content involving child sexual abuse or terrorism—that is a power of last resort.
Of course, not all harmful content is illegal. Every day, people are exposed to posts, images and videos that do not break any laws, but still cause a significant amount of harm. We all know that cyber-bullying can ruin a child’s life, but I want first to address one particularly horrific form of legal content. Sadly, too many Members present will be aware of cases in which children are drawn into watching videos that can encourage self-harm. Some find themselves bombarded with that content, sometimes ultimately ending in tragedy. It is unforgivable that that sort of content should be circulating unchecked on social media. Given the severity of its consequences, I believe that there is a strong case for making it illegal.
I can therefore announce that the Government have asked the Law Commission to examine how the criminal law will address the encouragement or assistance of self-harm. This is an incredibly sensitive area. We need to take careful steps to ensure that we do not inadvertently punish vulnerable people, but we need to act now to prevent future tragedies.
Many Members are particularly concerned about the effect online harm has on children. We have reserved our strongest and toughest protections for them. All companies will need to consider seriously the risks their platforms may pose to children and to take action. They will no longer be able to abdicate responsibility by claiming that children do not use their services when that is manifestly untrue—we all know examples of that—and we also expect them to prevent children from accessing services that pose the highest risk of harm, including online pornography. Cutting-edge age assurance or verification technologies will be a vital part of keeping children safe online.
At the same time, we are going further than any other country to tackle other categories of legal but harmful content accessed by adults. Major platforms will face additional obligations to enforce their own terms and conditions against things such as dangerous vaccine misinformation and cyber-bullying. Where the platforms fall short, they will face the legal consequences.
I know that some hon. Members are worried that the regulations may impose undue burdens on smaller, low-risk companies, so I can reassure them that we have included exemptions for such companies. As a result, less than 3% of UK businesses will fall within the scope of the legislation.
In this House we have always ardently championed freedom of expression. Robust and free debate is what gives our democracy its historic strength. So let me be clear: the purpose of the proposed regime is not to stop adults accessing content with which they disagree. It is not our job to protect people against being offended. I will not allow this legislation to become a weapon against free debate. Therefore, we will not prevent adults from accessing or posting legal content. Companies will not be able arbitrarily to remove controversial viewpoints, and users will be able to seek redress if they feel that content has been removed unfairly.
Nor will I allow this legislation to stifle media freedoms or become a charter to impose our world view and suppress that of others. I can confirm that news publishers’ own content on their sites is not in scope, nor are the comments of users on that content. This legislation is targeted exactly where it needs to be and tightly focused on delivering our core manifesto pledge to empower adult users to stay safe online while ensuring that children are protected.
We have engaged extensively to get to this point and this process is by no means over. We want all parliamentarians to feed into this significant piece of work and will continue to listen to their concerns as we go through pre-legislative scrutiny and beyond. However, I am confident that today’s measures mark a significant step in the continual evolution of our approach to life online, and it is fitting that this should be a step that our country takes. The world wide web was, of course, invented by a Brit, and now the UK is setting a safety standard for the rest of the world to follow. I commend this statement to the House.
We have been calling on the Government to introduce this legislation for almost two years. The publication of the online harms White Paper seems almost a lifetime ago. The legislation is long overdue, and I would like the Secretary of State to tell us when in 2021 the House can expect to see the Bill, because until it is on the statute book, the real harm that he has just described, which has been able to flourish online through a lack of regulation, will continue. Ireland has already published its legislation. France has produced legislation dealing with hate speech. Germany has had legislation in place since 2018, and the European Commission is expected to publish its proposed Digital Services Act today.
The Secretary of State has said that the UK will lead the way with this legislation, but I am afraid that the response today is lacking in ambition. It feels like a missed opportunity. This is a once-in-a-generation chance to legislate for the kind of internet we want to see that keeps both children and adult citizens safe and allows people to control what kind of content they see online. Instead, the Government have been timid, or maybe the Secretary of State was persuaded by Sheryl Sandberg and Nick Clegg in his meeting with them last month to water down the original proposals. Social media platforms have failed for years to self-regulate. The Secretary of State knows that, everyone in this House knows that, and the public know that.
On legal but harmful material, why are companies being left to set their own terms and conditions and then judged on their own enforcement of those terms and conditions? It is exactly the wrong incentive. It will actively encourage less strict terms and conditions, so the platforms can more easily say that they are being properly enforced. When the Secretary of State says that companies will no longer be marking their own homework, I am afraid that he is wrong, because that is exactly what they will be doing.
The financial penalties described are welcome, but the Government have given in to big tech lobbying on criminal liability: liability for senior executives for repeated breaches should have been properly built into the forthcoming legislation and implemented straight away. Instead, it will be left hanging to a possible future date through additional secondary legislation. Ireland’s legislation will include criminal sanctions rather than the vague threat that the Secretary of State has decided on. Will he explain what is to be gained by waiting? Never mind one last chance—repeat offenders have had chance after chance after chance.
The Secretary of State has referred to the novel concept of age assurance. Is that the same as age verification—the age verification that has been accepted by both the platforms and users as being unenforceable—or is it something different?
We know that online harms can easily become real harm. Encouragement and assistance of self-harm is one example, as the Secretary of State has mentioned. Harmful anti-vaccination disinformation impacting on public health is another. The Government have said today that they are asking the Law Commission to examine how criminal law will address the issue of encouragement or assistance of self-harm, but the Government could have asked the Law Commission to do that nearly two years ago when the White Paper was published. They have not done the hard work of deciding what should perhaps be illegal, which would have made their response today a better one.
There are other notable absences from the response, including reference to financial harm and online scams. This is a growing area of concern for millions of people across the United Kingdom, so why has it been ignored in the response? The Secretary of State has referred to failing public trust in tech. He says that he wants to rebuild it, but, sadly, today’s statement does not live up to that aspiration.
On criminal liability, I want tech firms to comply with this, and if they do not do so, they will face steep fines. If they still do not comply, Members should be in no doubt that their senior managers will face criminal sanction. We will take the power in this Bill—we will not have to come back to the House for primary legislation—and enact it through secondary legislation.
The hon. Lady asks about what we have been doing so far. We have taken many steps already to protect people online. For example, just a couple of months ago, the Information Commissioner’s age appropriate design code was put before Parliament. Today, alongside this full response to the White Paper, we are publishing, through the Home Office, an interim code of practice on online child sexual exploitation and abuse, and we will do so similarly in relation to terrorist content and activity online. We will expect tech firms to start complying with that now. It is clear what the Government’s intent is and if those firms fail to comply, we will have the powers through this legislation to ensure that that happens.
The hon. Lady asks about letting tech firms mark their own homework. We are empowering Ofcom to hold these tech firms to account. First of all, we will make sure that the terms and conditions are robust, and if they are not, those firms will face consequences. If they do not enforce those terms and conditions, they will face consequences, and the House will set out what those legal but harmful things are through secondary legislation. We will propose the sort of harms that those tech firms should guard against. Members will be able to vote on them, and those firms will have to take action appropriately. I believe that this marks a significant step forward, and Opposition Members should welcome this important step in protecting children, particularly online.
The Secretary of State mentioned transparency reports. How can we ensure that these transparency reports do not become another exercise in public relations for the tech firms? Will there be independent outside academic oversight? When it comes to the news publisher exemption, will it also apply to video sharing?
Finally, does the Secretary of State recognise that a system of dynamic, ongoing enforcement through a financial services-style compliance regime for tier 1 social media companies provides a good belt and braces alongside retrospective enforcement action? And what pre-legislative scrutiny is planned?
My hon. Friend talks about video sharing; the exemption for news publishers to protect freedom of speech will apply to all their output and will include that.
My hon. Friend asks about disinformation; if disinformation—for example, anti-vax content—causes harm to individuals, it will be covered by the legislation, and I very much expect to set that out as one of the priority areas that would have to be addressed in secondary legislation.
At a time when anti-vax disinformation floods social media, when hate is spouted at minority groups under the cowardly veil of anonymity, often without consequence for the perpetrators, and when more children than ever before are using the internet and need to be shielded from harmful content, the proposed online harms Bill is welcome.
We welcome, too, the requirement that companies must accept a duty of care, and the fact that Ofcom will be the independent regulator—but it must be a regulator with teeth. As Dame Melanie Dawes, Ofcom’s chief executive, told the Digital, Culture, Media and Sport Committee a short while ago, Ofcom needs much-enhanced powers to be effective; what additional powers will the regulator have?
To enjoy maximum support in the House, the Bill must, while balancing the right to free expression, tackle illegal content as well as content that is potentially harmful but not illegal. In particular, companies must protect all children from harm, and the Government are right to recognise that.
The covid pandemic and lockdown have seen a surge in homophobia and transphobia online. The TIE—Time for Inclusive Education—campaign stated that 72% of LGBT+ young people had reported attacks or cyber-bullying, with organisations such as the so-called LGB Alliance leading the onslaught. In that context, surely there is a case for looking again at social media anonymity. Noms de plume are fine, but we believe that users’ identities should be known to the social media publishers—they should not be completely anonymous in all circumstances. Does the Secretary of State agree with that?
Social media disinformation has been especially pernicious during the covid pandemic. Experts tell us that the disinformation during this crisis is unparalleled in the internet era, and the consequences of online harm can be catastrophic, undermining public trust, faith in health officials and acceptance of the value of the vaccine now being rolled out.
In principle, we welcome much in the proposals. Of course, the proof of the pudding will be in the eating—exactly how tough the Government are prepared to be in reality, how hard they will be on the social media companies, and whether they will enforce some of the proposals—but we welcome it.
The hon. Gentleman talked about Ofcom; over the years, we have seen Ofcom rise to the challenge of increased responsibilities, and I am confident that it will continue to do so. We will of course look to Ofcom to bring in independent expertise to help it in that process. It will clearly require a step change from Ofcom, but Dame Melanie Dawes and others are very much alert to that.
The hon. Gentleman talked about misinformation and disinformation. There are three things that we have to do to address those. First, we have to rely on trusted sources. We are so fortunate in this country to have well-established newspapers and broadcasters that are committed to public service information. We have seen that through the covid crisis, which is why we have supported them through this period. Secondly, we have to rebut false information. Through the Cabinet Office, we are working 24/7 to do that. Finally, we have to work with the tech companies themselves. For example, the Health Secretary and I have recently secured commitments to remove misinformation and disinformation within 48 hours and, crucially, not to profit from it.
As for the hon. Gentleman’s central concern, I think these measures really do mark a step change in our approach to tech firms. The old certainties are changing, and we are taking decisive action.
My hon. Friend makes a very important point about antisemitic abuse. I have met organisations to discuss that in framing the legislation. Most antisemitism is illegal and should be addressed through the provisions made for illegality. Beyond that, we will be setting out, as a priority, harms to be addressed through this legislation.
The hon. Gentleman asked why we are delaying taking powers. We are not delaying taking powers: from the get-go, these enormous fines of up to 10% of global turnover will be imposed. If that is still not effective, we will have taken the power to use criminal sanctions for senior managers, and it will simply be a case of passing secondary legislation to bring that into force. As it is such a big step to have criminal liability, if we can avoid criminal liability I would like to do so. I believe the fines will be sufficient, but if they are not, then we will have taken those powers.
May I also return to the point made by the right hon. Member for Barking (Dame Margaret Hodge) about anonymity, because it is key? Whether it is hate speech, extremism, antisemitism or grooming sites, the perpetrators hide behind anonymity. When they get taken down, they reappear under a different name. Is it not possible to require them to prove their identity to the platform providers only, so that whistleblowers need not reveal themselves publicly, and so that perpetrators cannot get away with it, cannot keep reposting, and can be referred to the police where necessary?
Does the Minister agree with the Committee’s recommendation that Twitter should remedy that omission, so that the protected characteristic of sex is protected by its hateful conduct policy? Does he agree that all the protected characteristics deserve equal protection in any online harms legislation?
The Government’s full response states that “the regulator will have the power to require companies to use automated technology…to identify illegal child sexual exploitation and abuse content or activity on their services, including, where proportionate, on private channels.”
Will he confirm that that means major platforms will need to use this automated technology on the end-to-end encrypted private channels? What proportionality test is he applying here, given that child sexual abuse is clearly so abhorrent and wrong in all circumstances? When will it ever be disproportionate to pursue this?
As I said to the Chair of the Select Committee, the right hon. Member for Normanton, Pontefract and Castleford (Yvette Cooper), end-to-end encryption presents a whole other level of challenge. The Home Secretary and I are actively engaging with Facebook, for example, to discourage it from using end-to-end encryption unless it can put appropriate protections in place. Those conversations are ongoing.
Virtual participation in proceedings concluded (Order, 4 June).
Contains Parliamentary information licensed under the Open Parliament Licence v3.0.