PARLIAMENTARY DEBATE
Online Safety Bill (Sixteenth sitting) - 28 June 2022 (Commons/Public Bill Committees)
Chairs: † Sir Roger Gale, Christina Rees
Members: † Ansell, Caroline (Eastbourne) (Con)
† Bailey, Shaun (West Bromwich West) (Con)
† Blackman, Kirsty (Aberdeen North) (SNP)
† Carden, Dan (Liverpool, Walton) (Lab)
† Davies-Jones, Alex (Pontypridd) (Lab)
† Double, Steve (St Austell and Newquay) (Con)
† Fletcher, Nick (Don Valley) (Con)
† Holden, Mr Richard (North West Durham) (Con)
† Keeley, Barbara (Worsley and Eccles South) (Lab)
† Leadbeater, Kim (Batley and Spen) (Lab)
† Miller, Dame Maria (Basingstoke) (Con)
† Mishra, Navendu (Stockport) (Lab)
Moore, Damien (Southport) (Con)
Nicolson, John (Ochil and South Perthshire) (SNP)
† Philp, Chris (Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport)
† Russell, Dean (Watford) (Con)
† Stevenson, Jane (Wolverhampton North East) (Con)
Clerks: Katya Cassidy, Kevin Maddison, Seb Newman, Committee Clerks
† attended the Committee
Public Bill Committee
Tuesday 28 June 2022
(Morning)
[Sir Roger Gale in the Chair]
Online Safety Bill
New Clause 4
Duty to disclose information to OFCOM
“(1) This section sets out the duties to disclose information to OFCOM which apply in relation to all regulated user-to-user services.
(2) A regulated user-to-user service must disclose to OFCOM anything relating to that service of which that regulator would reasonably expect notice.
(3) This includes —
(a) any significant changes to its products or services which may impact upon its performance of its safety duties;
(b) any significant changes to its moderation arrangements which may impact upon its performance of its safety duties;
(c) any significant breaches in respect of its safety duties.”—(Barbara Keeley.)
This new clause creates a duty to disclose information to Ofcom.
Brought up, and read the First time.
Good morning, Sir Roger. The new clause would require regulated companies to disclose proactively to the regulator material changes in their operations that may impact on safety, and any significant breaches in respect of their safety duties. Category 1 services should be under regulatory duties to disclose proactively to the regulator matters about which it could reasonably expect to be informed. For example, companies should notify Ofcom about significant changes to their products and services, or to their moderation arrangements, that may impact on the child abuse threat and the company’s response to it. A similar proactive duty already applies in the financial services sector. The Financial Conduct Authority handbook states:
“A firm must deal with its regulators in an open and cooperative way, and must disclose to the FCA appropriately anything relating to the firm of which that regulator would reasonably expect notice.”
The scope of the duty we are suggesting could be drawn with sufficient clarity so that social media firms properly understand their requirements and companies do not face unmanageable reporting burdens. Such companies should also be subject to red flag disclosure requirements, whereby they would be required to notify the regulator of any significant lapses in, or changes to, systems and processes that compromise children’s safety or could put them at risk. For example, if regulation had been in place over the last 12 months, Facebook might reasonably have been expected to report on the technology and staffing issues to which it attributes its reduced detection of child abuse content.
Experience from the financial services sector demonstrates the importance of disclosure duties as a means of regulatory intelligence gathering. Perhaps more importantly, they provide a useful means of hard-wiring regulatory compliance into company decisions on the design and operation of their sites.
I do not have any queries or problems with the new clause; it is good. My question for the Minister is—I am not trying to catch anyone out; I genuinely do not know the answer—if a company makes significant changes to something that might impact on its safety duties, does it have to do a new risk assessment at that point, or does it not have to do so until the next round of risk assessments? I do not know the answer, but it would be good if the direction of travel was that any company making drastic changes that massively affected security—for example, Snapchat turning on the geolocation feature when it did an update—would have to do a new risk assessment at that point, given that significant changes would potentially negatively impact on users’ safety and increase the risk of harm on the platform.
The Bill already has extremely strong information disclosure provisions. I particularly draw the Committee’s attention to clause 85, which sets out Ofcom’s power to require information by provision of an information notice. If Ofcom provides an information notice—the particulars of which are set out in clause 86—the company has to abide by that request. As the Committee will recall, the strongest sanctions are reserved for the information duties, extending not only to fines of up to 10% of global revenue or service discontinuation—unplugging the website, as it were—but also to personal criminal liability for named executives, with prison sentences of up to two years. We take those information duties extremely seriously, which is why the sanctions are as strong as they are.
The hon. Member for Aberdeen North asked what updates would occur if there were a significant design change. I draw the Committee’s attention to clause 10, which deals with children’s risk assessment duties, but there are similar duties in relation to illegal content and the safety of adults. The duty set out in clause 10(2), which cross-refers to schedule 3, makes the position clear: the relevant words are “suitable and sufficient”. Clearly, if there were a massive design change that would, in this case, adversely affect children, the risk assessment would not be suitable and sufficient if it were not updated to reflect that design change. I hope that answers the hon. Lady’s question.
Turning to the particulars of the new clause, if we incentivise companies to disclose information they have not been asked for by Ofcom, there is a danger that they might, through an excessive desire to comply, over-disclose and provide a torrent of information that would not be very helpful. There might also be a risk that some companies that are not well intentioned would deliberately dump enormous quantities of data in order to hide things within it. The shadow Minister, the hon. Member for Worsley and Eccles South, mentioned an example from the world of financial services, but the number of companies potentially within the scope of the Bill is so much larger than even the financial services sector. Some 25,000 companies may be in scope, a number that is much larger—probably by one order of magnitude, and possibly by two—than the financial services sector regulated by the FCA. That disparity in scale makes a significant difference.
Given that there are already strong information provision requirements in the Bill, particularly clause 85, and because of the reasons of scale that I have mentioned, I will respectfully resist the new clause.
Question put, That the clause be read a Second time.
Brought up, and read the First time.
Throughout these debates it has been clear that we agree on both sides that the Online Safety Bill must be a regime that promotes the highest levels of transparency. This will ensure that platforms can be held accountable for their systems and processes. Like other regulated industries, they must be open and honest with the regulator and the public about how their products work and how they keep users safe.
As we know, platforms duck and dive to avoid sharing information that could make life more difficult for them or cast them in a dim light. The Bill must give them no opportunity to shirk their responsibilities. The Bill enables the largest platforms to carry out a risk assessment safe in the knowledge that it may never see the light of day. Ofcom can access such information if it wants, but only following a lengthy process and as part of an investigation. This creates no incentive for platforms to carry out thorough and proper risk assessments. Instead, platforms should have to submit these risk assessments to Ofcom not only on request but as a matter of course. Limiting this requirement to only the largest platforms will not overload Ofcom, but will give it the tools and information it needs to oversee an effective regime.
In addition, the public have a right to know the risk profile of the services they use. This happens in all other regulated industries, with consumers having easy access to the information they need to make informed decisions about the products they use. At present, the Bill does not give users the information they deserve about what to expect online. Parents in particular will be empowered by information about the risk level of platforms their children use. Therefore, it is imperative that risk assessments are made publicly available, as well as submitted to the regulator as a matter of course.
With iPhones, if a kid wants an app, they have to request it from their parent, and the parent needs to approve whether or not they get it. I find myself baffled by some of those apps because they are not ones that I have ever heard of or come across. To find out whether they have that level of functionality, I have to download and use the app myself in the way that, hopefully, my children would use it, so that I can judge whether it is safe for them.
A requirement for category 1 providers to be up front and explain the risks and how they manage them, and even how people interact with their services, would increase the ability of parents to be media literate. We can be as media literate as we like, but if the information is not there and we cannot find it anywhere, we end up having to make incredibly restrictive decisions in relation to our children’s ability to use the internet, which we do not necessarily want to make. We want them to be able to have fun, and the information being there would be very helpful, so I completely agree on that point.
My other point is about proportionality. The Opposition moved new clause 4, relating to risk assessments, and I did not feel able to support it on the basis of the arguments that the Minister made about proportionality. He made the case that Ofcom would receive 25,000 risk assessments and would be swamped by the number that it might receive. This new clause balances that, and has the transparency that is needed.
It is completely reasonable for us to put the higher burden of transparency on category 1 providers and not on other providers because they attract the largest market share. A huge percentage of the risk that might happen online happens with category 1 providers, so I am completely happy to support this new clause, which strikes the right balance. It answers the Minister’s concerns about Ofcom being swamped, because only category 1 providers are affected. Asking those providers to put the risk assessment on their site is the right thing to do. It will mean that there is far more transparency and that people are better able to make informed decisions.
Clause 13(2) already includes a duty “to summarise in the terms of service the findings of the most recent adults’ risk assessment of a service”, including the levels of risk, and the nature and severity of those risks. That relates specifically to adults, but there is an equivalent provision relating to children as well.
In addition to the duty to summarise the findings of the most recent risk assessment in relation to adults in clause 13(2), clause 11 contains obligations to specify in the terms of service, in relation to children, where children might be exposed to risks using that service. I suggest that a summary in the terms of service, which is an easy place to look, is the best way for parents or anybody else to understand what the risks are, rather than having to wade through a full risk assessment. Obviously, the documents have not been written yet, because the Bill has not been passed, but I imagine they would be quite long and possibly difficult to digest for a layperson, whereas a summary is more readily digestible. Therefore, I think the hon. Lady’s request as a parent is met by the duties set out in clause 11, and the duties for adults are set out in clause 13.
So what are the issues with the new clause? First, for the reasons that I have set out, the Bill already addresses the point. However, exposing the entire risk assessment publicly also carries some risks itself. For example, if the risk assessment identifies weaknesses or vulnerabilities in the service—ways that malfeasant people could exploit it to get at children or do something else that we would consider harmful—then exposing to everybody, including bad actors, the ways of beating the system and doing bad things on the service would not necessarily be in the public interest. A complete disclosure could help those looking to abuse and exploit the systems. That is why the transparency duties in clause 64 and the duties to publish accessible summaries in clauses 11 and 13 meet the objectives—the quite proper objectives—of the shadow Minister, the hon. Member for Worsley and Eccles South, and the hon. Member for Aberdeen North, without running the risks that are inherent in new clause 9, which I would therefore respectfully resist.
Question put, That the clause be read a Second time.
Brought up, and read the First time.
Good morning, Sir Roger. As my hon. Friend the Member for Worsley and Eccles South mentioned when speaking to new clause 11, Labour has genuine concerns about supply chain risk assessment duties. That is why we have tabled new clause 13, which seeks to ensure enforcement of liability for supply chain failures that amount to a breach of one of the specified duties, drawing on existing legislation.
As we know, platforms, particularly those supporting user-to-user generated content, often employ services from third parties. At our evidence sessions we heard from Danny Stone of the Antisemitism Policy Trust that this has included Twitter explaining that racist GIFs were not its own but were provided by another service. The hands-off approach that platforms have managed to get away with for far too long is exactly what the Bill is trying to fix, yet without this important new clause we fear there will be very little change.
We have already raised issues with the reliance on third party providers more widely, particularly content moderators, but the same problems also apply to some types of content. Labour fears a scenario in which a company captured by the regulatory regime established by the Bill will argue that an element of its service is not within the ambit of the regulator simply because it is part of a supply chain, represented by, but not necessarily the responsibility of, the regulated services.
The contracted element, supported by an entirely separate company, would argue that it is providing business-to-business services. That is not user-to-user generated content per se but content designed and delivered at arm’s length, provided to the user-to-user service to deploy to its users. The result would likely be a lengthy, costly and unhelpful legal process during which systems could not be effectively regulated. The same may apply in relation to moderators, where complex contract law would need to be invoked.
We recognise that in UK legislation there are concerns and issues around supply chains. The Bribery Act 2010, for example, says that a company is liable if anyone performing services for or on the company’s behalf is found culpable of specific actions. We therefore strongly urge the Minister to consider this new clause. We hope he will see the extremely compelling reasons why liability should be introduced for platforms failing to ensure that associated parties, considered to be a part of a regulated service, help to fulfil and abide by relevant duties.
As drafted, the Bill ensures legal certainty and clarity over which companies are subject to duties. Clause 180 makes it clear that the Bill’s duties fall on companies with control over the regulated service. The point about who is in control is very important, because the liability should follow the control. These companies are responsible for ensuring that any third parties, such as contractors or individuals involved in running the service, are complying with the Bill’s safety duties, so that they cannot evade their duties in that way.
Companies with control over the regulated service are best placed to keep users safe online, assess risk, and put in place systems and processes to minimise harm, and therefore bear the liability if there is a transgression under the Bill as drafted. Further, the Bill already contains robust provisions in clause 161 and schedule 14 that allow Ofcom to hold parent and subsidiary companies jointly liable for the actions of other companies in a group structure. These existing mechanisms promote strong compliance within groups of companies and ensure that the entities responsible for breaches are the ones held responsible. That is why we feel the Bill as drafted achieves the relevant objectives.
Question put, That the clause be read a Second time.
New clause 15—Media literacy strategy—
“(1) OFCOM must prepare a strategy which sets out how they intend to undertake their duty to promote media literacy in relation to regulated user-to-user services and regulated search services under section (Duty to promote media literacy: regulated user-to-user services and search services).
(2) The strategy must—
(a) set out the steps OFCOM propose to take to achieve the pursuit of the objectives set out in section (Duty to promote media literacy: regulated user-to-user services and search services),
(b) set out the organisations, or types of organisations, that OFCOM propose to work with in undertaking the duty;
(c) explain why OFCOM considers that the steps it proposes to take will be effective;
(d) explain how OFCOM will assess the extent of the progress that is being made under the strategy.
(3) In preparing the strategy OFCOM must have regard to the need to allocate adequate resources for implementing the strategy.
(4) OFCOM must publish the strategy within the period of 6 months beginning with the day on which this section comes into force.
(5) Before publishing the strategy (or publishing a revised strategy), OFCOM must consult—
(a) persons with experience in or knowledge of the formulation, implementation and evaluation of policies and programmes intended to improve media literacy;
(b) the advisory committee on disinformation and misinformation, and
(c) any other person that OFCOM consider appropriate.
(6) If OFCOM have not revised the strategy within the period of 3 years beginning with the day on which the strategy was last published, they must either—
(a) revise the strategy, or
(b) publish an explanation of why they have decided not to revise it.
(7) If OFCOM decides to revise the strategy they must—
(a) consult in accordance with subsection (3), and
(b) publish the revised strategy.”
This new clause requires Ofcom to publish a strategy related to their duty to promote media literacy of the public in relation to regulated user-to-user services and search services.
New clause 16—Media literacy strategy: progress report—
“(1) OFCOM must report annually on the delivery of the strategy required under section (Duty to promote media literacy: regulated user-to-user services and search services).
(2) The report must include—
(a) a description of the steps taken in accordance with the strategy during the year to which the report relates; and
(b) an assessment of the extent to which those steps have had an effect on the media literacy of the public in that year.
(3) The assessment referred to in subsection (2)(b) must be made in accordance with the approach set out by OFCOM in the strategy (see section (Duty to promote media literacy: regulated user-to-user services and search services)(2)(d)).
(4) OFCOM must—
(a) publish the progress report in such manner as they consider appropriate; and
(b) send a copy of the report to the Secretary of State who must lay the copy before Parliament.”
This new clause is contingent on NC15.
Good media literacy is our first line of defence against bad information online. It can make the difference between decisions based on sound evidence and decisions based on poorly informed opinions that can harm health and wellbeing, social cohesion and democracy. Clause 103 of the draft Bill proposed a new media literacy duty for Ofcom to replace the one in section 11 of the Communications Act 2003, but sadly the Government scrapped it from the final Bill.
Media literacy initiatives in the Online Safety Bill are now mentioned only in the context of risk assessments, but there is no active requirement for internet companies to promote media literacy. The draft Bill’s media literacy provision needed to be strengthened, not cut. New clauses 14, 15 and 16 would introduce a new, stronger media literacy duty on Ofcom, with specific objectives. They would require the regulator to produce a statutory strategy for delivering on it and then to report on progress made towards increasing media literacy under the strategy. There is no logical reason for the Minister not to accept these important new clauses or work with Labour on them.
Over the past few weeks, we have debated a huge range of issues that are being perpetuated online as we speak, from vile, misogynistic content about women and girls to state-sponsored disinformation. It is clear that the lessons have not been learned from the past few years, when misinformation was able to significantly undermine public health, most notably throughout the pandemic. Harmful and, more importantly, false statistics were circulated online, which caused significant issues in encouraging the uptake of the vaccine. We have concerns that, without a robust media literacy strategy, the consequences of misinformation and disinformation could go further.
The issues that Labour has raised about the responsibility of those at the top—the Government—have been well documented. Only a few weeks ago, we spoke about the Secretary of State actually contributing to the misinformation discourse by sharing a picture of the Labour leader that was completely out of context. How can we be in a position where those at the top are contributing to this harmful discourse? The Minister must be living in a parallel universe if he cannot see the importance of curbing these harmful behaviours online as soon as possible. He must know that media literacy is at the very heart of the Bill’s success more widely. We genuinely feel that a strengthened media literacy policy would be a huge step forward, and I sincerely hope that the Minister will therefore accept the justification behind these important new clauses.
I pay tribute to all those involved in media literacy—all the educators at all levels, including school teachers delivering it as part of the curriculum, school teachers delivering it not as part of the curriculum, and organisations such as CyberSafe Scotland in my constituency, which is working incredibly hard to upskill parents and children about the internet. They also include organisations such as the Silver City Surfers in Aberdeen, where a group of young people teaches groups of elderly people how to use the internet. All those things are incredibly helpful and useful, but we need to ensure that Ofcom is at the top of that, producing materials and taking its duties seriously. It must produce the best possible information and assistance for people so that up-to-date media literacy training can be provided.
As we have discussed before, Ofcom’s key role is to ensure that when threats emerge, it is clear and tells people, “This is a new threat that you need to be aware of,” because the internet will grow and change all the time, and Ofcom is absolutely the best placed organisation to be recognising the new threats. Obviously, it would do that much better with a user advocacy panel on it, but given its oversight and the way it will be regulating all the providers, Ofcom really needs to take this issue as seriously as it can. It is impossible to overstate the importance of media literacy, so I give my wholehearted backing to the three new clauses.
We all know that the Bill will not eliminate all risk online, and it will not entirely clean up the internet. Therefore, ensuring that platforms have robust tools in place, and that users are aware of them, is one of the strongest tools in the Bill to protect internet users. As my hon. Friend the Member for Pontypridd said, including the new clauses in the Bill would help to ensure that we all make decisions based on sound evidence, rather than on poorly informed opinions that can harm not just individuals but democracy itself. The new clauses, which would place a duty on Ofcom to promote media literacy and publish a strategy, are therefore crucial.
I am sure we all agree about the benefits of public health information that informs us of the role of a healthy diet and exercise, and of ways that we can adopt a healthier lifestyle. I do not want to bring up the sensitive subject of the age of members of the Committee, as it got me into trouble with some of my younger colleagues last week, but I am sure many of us will remember the Green Cross Code campaign, the stop smoking campaigns, the anti-drink driving ads, and the powerful campaign to promote the wearing of seatbelts—“Clunk click every trip”. These were publicly funded and produced information campaigns that have stuck in our minds and, I am sure, protected thousands of lives across the country. They laid out the risks and clearly stated the actions we all need to take to protect ourselves.
When it comes to online safety, we need a similar mindset to inform the public of the risks and how we can mitigate them. Earlier in Committee, the right hon. Member for Basingstoke, a former Secretary of State for Digital, Culture, Media and Sport, shared her experience of cyber-flashing and the importance of knowing how to turn off AirDrop to prevent such incidents from occurring in the first place. I had no idea about this simple change that people can make to protect themselves from such an unpleasant experience. That is the type of situation that could be avoided with an effective media literacy campaign, which new clauses 14 to 16 would legislate for.
I completely agree that platforms have a significant duty to design and implement tools for users to protect themselves while using platforms’ services. However, I strongly believe that only a publicly funded organisation such as Ofcom can effectively promote their use, explain the dangers of not using them and target such information at the most vulnerable internet users. That is why I wholeheartedly support these vital new clauses.
Ofcom already has a statutory duty to promote media literacy in relation to electronic media, which includes everything in scope of the Bill and more beyond. That is set out in the Communications Act 2003, so the statutory duty exists already. The duty proposed in new clause 14 is actually narrower in scope than the existing statutory duty on Ofcom, and I do not think it would be a very good idea to give Ofcom an online literacy duty with a narrower scope than the one it has already. For that reason, I will resist the amendment, because it narrows the duties rather than widens them.
I would also point out that a number of pieces of work are being done non-legislatively. The campaigns that the hon. Member for Batley and Spen mentioned—dating often, I think, back to the 1980s—were of course done on a non-legislative basis and were just as effective for it. In that spirit, Ofcom published “Ofcom’s approach to online media literacy” at the end of last year, which sets out how Ofcom plans to expand, and is expanding, its media literacy programmes, which cover many of the objectives specified in the new clause. Therefore, Ofcom itself has acted already—just recently—via that document.
Finally, I have two points about what the Government are doing. First, about a year ago the Government published their own online media literacy strategy, which has been backed with funding and is being rolled out as we speak. When it comes to disinformation more widely, which we have debated previously, we also have the counter-disinformation unit working actively on that area.
Therefore, through the Communications Act 2003, the statutory basis exists already, and on a wider basis than in these new clauses; and, through the online media literacy strategy and Ofcom’s own approach, as recently set out, this important area is well covered already.
Question put, That the clause be read a Second time.
I tabled new clause 17, in relation to protected characteristics, because of some of the points that Danny Stone made in the evidence session about the algorithmic prompts in search functions. I missed the relevant evidence session because, unfortunately, I was in the Chamber at the time, responding to the Chancellor of the Exchequer.
We have an issue with the algorithmic prompts in search functions. There is an issue if someone puts in something potentially derogatory, or something relating to someone with a protected characteristic. For example, if someone were to type “Jews are”, the results that they get from those algorithmic prompts can be overwhelmingly racist, overwhelmingly antisemitic, overwhelmingly discriminatory. The algorithm should not be pushing those things.
To give organisations like Google some credit, if something like that is highlighted to them, they will address it. Some of them take a long time to sort it, but they will have a look at it, consider sorting it and, potentially, sort it. But that is not good enough. By that point, the damage is done. By that point, the harm has been put into people’s minds. By that point, someone who is from a particular group and has protected characteristics has already seen that Google—or any other search provider—is pushing derogatory terms at people with protected characteristics.
I know that the prompts work like that because of artificial intelligence; firms are not intentionally writing these terms in order to push them towards people, but the AI allows that to happen. If such companies are going to be using artificial intelligence—some kind of software algorithm—they have a responsibility to make sure that none of the content they are generating on the basis of user searches is harmful. I asked Google about this issue during one of our evidence sessions, and the response they gave was, “Oh, algorithmic prompts are really good, so we should keep them”—obviously I am paraphrasing. I do not think that is a good enough argument. I do not think the value that is added by algorithmic prompts is enough to counter the harm that is caused by some of those prompts.
As such, the new clause specifically excludes protected characteristics from any algorithm that is used in a search engine. The idea is that if a person starts to type in something about any protected characteristic, no algorithmic prompt will appear, and they will just be typing in whatever they were going to type in anyway. They will not be served with any negative, harmful, discriminatory content, because no algorithmic prompt will come up. The new clause would achieve that across the board for every protected characteristic term. Search engines would have to come up with a list of such terms and exclude all of them from the work of the algorithm in order to provide that layer of protection for people.
I do not believe that that negative content could be in any way balanced by the potential good that could arise from somebody being able to type “Jews are” and getting a prompt that says “funny”. That would be a lovely, positive thing for people to see, but the good that could be caused by those prompts is outweighed by the negativity, harm and pain that is caused by the prompts we see today, which platforms are not quick enough to act on.
As I say, the harm is done by the time the report is made; by the time the concern is raised, the harm has already happened. New clause 17 would prevent that harm from ever happening. It would prevent anybody from ever being injured in any way by an algorithmic prompt from a search engine. That is why I have tabled that new clause, in order to provide a level of protection for any protected characteristic as defined under the Equality Act 2010 when it comes to search engine prompts.
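To make the mechanism described here concrete, the sketch below shows one way a search service might implement the exclusion the new clause envisages: suppressing all autocomplete prompts whenever the partial query mentions a protected characteristic. This is a minimal illustrative sketch only; the term list and the backend_suggest stand-in are hypothetical, and nothing here reflects the Bill’s text or any provider’s actual system.

```python
# Minimal sketch, not any provider's actual system: suppress algorithmic
# prompts whenever the partial query mentions a protected characteristic,
# so the user simply types their query with no completion offered.

# Hypothetical, heavily abbreviated term list; a real list would need to
# cover every protected characteristic under the Equality Act 2010.
PROTECTED_CHARACTERISTIC_TERMS = {
    "jews", "muslims", "christians", "women", "men", "gay", "lesbian",
    "disabled", "transgender", "elderly",
}


def backend_suggest(partial_query: str) -> list[str]:
    """Hypothetical stand-in for the engine's AI-driven prompt generator."""
    return ["example completion 1", "example completion 2"]


def safe_suggest(partial_query: str) -> list[str]:
    """Return no prompts at all if the query mentions a protected characteristic."""
    tokens = (t.strip(".,!?'\"") for t in partial_query.lower().split())
    if any(token in PROTECTED_CHARACTERISTIC_TERMS for token in tokens):
        return []  # no algorithmic prompt is served for this query
    return backend_suggest(partial_query)


print(safe_suggest("jews are"))    # [] - no prompts offered
print(safe_suggest("weather in"))  # normal suggestions returned
```

The design point the sketch illustrates is that the filter sits in front of the prompt generator, so harmful completions for those queries are never generated in the first place, rather than being taken down after someone reports them.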
Danny Stone told us on 26 May:
“Search returns are not necessarily covered because, as I say, they are not the responsibility of the internet companies, but the systems that they design as to how those things are indexed and the systems to prevent them going to harmful sites by default are their responsibility, and at present the Bill does not address that.”––[Official Report, Online Safety Public Bill Committee, 26 May 2022; c. 130, Q207.]
The hon. Member for Aberdeen North mentioned the examples from Microsoft Bing that Danny gave in his evidence—“Jews are” and “gays are”. He gave other examples of answers returned by search services, such as asking Amazon Alexa, “Is George Soros evil?”, to which the response was, “Yes, he is”, and, “Are the White Helmets fake?”, to which the response was, “Yes, they are set up by an ex-intelligence officer.” Such returns are problematic because just one person giving an answer to Amazon could prompt that response. The second example, about the White Helmets, came from a comment on a website that was picked up. Clearly, that is an issue.
Danny Stone’s view is that it would be wise to have something that forces search companies to have appropriate risk assessments in place for the priority harms that Parliament sets, and to enforce those terms and conditions consistently. It is not reasonable to exempt major international and ubiquitous search services from risk assessing and having a policy to address the harms caused by their algorithms. We know that leaving it up to platforms to sort this out themselves does not work, which is why Labour is supporting the new clause proposed by our SNP colleague.
Clause 23 sets out the illegal content risk assessment duties for search services, and it requires those assessments to be carried out
“taking into account (in particular) risks presented by algorithms used by the service”.
Clause 25 relates to children’s risk assessment duties, and subsection (5)(a) states that children’s risk assessment duties have to be carried out
“taking into account (in particular) risks presented by algorithms”.
The risks presented by algorithms are expressly accounted for in clauses 23 and 25 in relation to illegal acts and to children. Those risk assessment duties flow into safety duties as we know.
By coincidence, yesterday I met with Google’s head of search, who talked about the work Google is doing to ensure that its search work is safe. Google has the SafeSearch work programme, which is designed to make the prompts better constructed.
In my view, the purpose of the new clause is covered by existing provisions. If we were to implement the proposal—I completely understand and respect the intention behind it, by the way—there could be an unintended consequence in the sense that it would ban any reference in the prompts to protected characteristics, although people looking for help, support or something like that might find such prompts helpful.
Through a combination of the existing duties and the list of harms, which we will publish in due course, as well as legislating via statutory instrument, we can ensure that people with protected characteristics, and indeed other people, are protected from harmful prompts while not, as it were, throwing the baby out with the bathwater and banning the use of certain terms in search. That might cause an unintended negative consequence for some people, particularly those from marginalised groups who were looking for help. I understand the spirit of the new clause, but we shall gently resist it.
I want to make the Minister aware of this. If he turns on Google SafeSearch, which excludes explicit content, and googles the word “oral” and looks at the images that come up, he will see that those images are much more extreme than he might imagine. My point is that, no matter the work that the search services are trying to do, they need to have the barriers in place before that issue happens—before people are exposed to that harmful or illegal content. The existing situation does not require search services to have enough in place to prevent such things from happening. The Minister was talking about moderation and things that happen after the fact, which is good as far as it goes, but it does not protect people from the harm that might occur. I very much wish to press the new clause to a vote.
Question put, That the clause be read a Second time.
“(1) The Secretary of State must produce a report setting out any steps the Secretary of State has taken to tackle the presence of disinformation on Part 3 services.
(2) The purpose of the report is to assist OFCOM in carrying out its regulatory duties under this Act.
(3) The first report must be submitted to OFCOM and laid before Parliament within six months of this Act being passed.
(4) Thereafter, the Secretary of State must submit an updated report to OFCOM and lay it before Parliament at least once every three months.”
Misinformation and disinformation arise during periods of uncertainty, either acutely, such as during a terror attack, or over a long period, as with the pandemic. That often includes information gaps and a proliferation of inaccurate claims that spread quickly. Where there is a vacuum of information, we can have bad actors or the ill-informed filling it with false information.
Information incidents are not dealt with effectively enough in the Bill, which is focused on regulating the day-to-day online environment. I accept that clause 146 gives the Secretary of State powers of direction in certain special circumstances, but their effectiveness in real time would be questionable. The Secretary of State would have to ask Ofcom to prioritise its media literacy function or to make internet companies report on what they are doing in response to a crisis. That is just too slow, given the speed at which such incidents can spread.
The new clause might involve Ofcom introducing a system whereby emerging incidents could be reported publicly and different actors could request the regulator to convene a response group. The provision would allow Ofcom to be more proactive in its approach and, in what I hope would be rare moments, to provide clear guidance. That is why the new clause is a necessary addition to the Bill.
Many times, we have seen horrendous incidents unfold on the internet, in a very different way from how they ever unfolded in newspapers, on news websites or among people talking. We have seen the untold and extreme harm that such information incidents can cause, as significant, horrific events can be spread very quickly. We could end up in a situation where an incident happens and, for example, a report spreads that a Muslim group was responsible when there is absolutely no basis of truth to that. A vacuum can be created and bad actors step into it in order to spread discrimination and lies, often about minority groups who are already struggling. That is why we move the new clause.
For the avoidance of doubt, new clause 45, which was tabled by Labour, is also to be debated in this group. I am more than happy to support it.
The Government’s methods of tackling disinformation are opaque, unaccountable and may not even work. New clause 45, which would require reporting to Parliament, may begin to address this issue. When Ministers are asked how they tackle misinformation or disinformation harms, they refer to some unaccountable civil service team involved in state-based interference in online media.
I thank those at Carnegie UK Trust for their support when researching the following list, and for supporting my team and me to make sense of the Bill. First, we have the counter-disinformation unit, which is based in the Department for Digital, Culture, Media and Sport and intends to address mainly covid issues that breach companies’ terms of service and, recently, the Russia-Ukraine conflict. In addition, the Government information cell, which is based in the Foreign, Commonwealth and Development Office, focuses on war and national security issues, including mainly Russia and Ukraine. Thirdly, there is the so-called rapid response unit, which is based in the Cabinet Office, and mainly tackles proactive counter-messaging.
Those teams appear to nudge service providers in different ways where there are threats to national security or the democratic process, or risks to public health, yet we have zero record of their effectiveness. The groups do not publish logs of action to any external authority for oversight of what they raise with companies using the privileged authority of Her Majesty’s Government, nor do they publish the effectiveness of their actions. As far as we know, they are not rooted in expert independent external advice. That direct state interference in the media is very worrying.
In our recent debate on amendment 83, which calls on the Government to include health misinformation and disinformation in the Bill, the Minister clearly set out why he thinks the situation is problematic. He said,
“We have established a counter-disinformation unit within DCMS whose remit is to identify misinformation and work with social media firms to get it taken down. The principal focus of that unit during the pandemic was, of course, covid. In the past three months, it has focused more on the Russia-Ukraine conflict, for obvious reasons.
In some cases, Ministers have engaged directly with social media firms to encourage them to remove content that is clearly inappropriate. For example, in the Russia-Ukraine context, I have had conversations with social media companies that have left up clearly flagrant Russian disinformation. This is, therefore, an area that the Government are concerned about and have been acting on operationally already.”––[Official Report, Online Safety Public Bill Committee, 14 June 2022; c. 408.]
Until we know more about those units, the boundary between their actions and those of a press office remains unclear. In the new regulatory regime, Ofcom needs to be kept up to date on the issues they are raising. The Government should reform the system and bring those units out into the open. We support Carnegie’s longer term strategic goal to set up a new external oversight body and move the current Government functions under Ofcom’s independent supervision. The forthcoming National Security Bill may tackle that, but I will leave that for the Minister to consider.
There must be a reporting system that requires the Government to set out their operational involvement with social media companies to address misinformation and disinformation, which is why we have tabled new clause 45. I hope the Minister will see that the current efforts in these units are hugely lacking in transparency, which we all want and have learned is fundamental to keep us all safe online.
Moreover, Ofcom will be able to require platforms to issue a public statement about the steps they are taking to respond to a threat to public health or safety or to national security. As we discussed, it is appropriate that the Secretary of State will make those directions, given that the Government have the access to intelligence around national security and the relevant health information. Ofcom, as a telecoms regulator, obviously does not have access to that information, hence the need for the Secretary of State’s involvement.
Ofcom already has reporting duties under the Bill’s framework to carry out reviews of the prevalence and severity of content harmful to children and adults on regulated services. Under clause 135, Ofcom must also produce its own transparency report, in addition to which there will be an advisory committee on dis- and misinformation, set out in clause 130, to provide advice to Ofcom about how these issues can be addressed.
The shadow Minister, the hon. Member for Pontypridd, has already made reference to DCMS’s counter-disinformation unit. She has quoted me extensively—I thank her for that—setting out the work it has been doing. She asked about further reporting in terms of oversight of that counter-disinformation unit. Obviously, setting out the full details of what it does could provide inappropriately detailed information to hostile states, such as Russia, that are trying to pump out that disinformation. However, the activities of the CDU are of course open to parliamentary scrutiny in the usual way, whether that is through oral questions, Backbench Business and Opposition day debates, or scrutiny by Select Committees, just as every other area of Government activity is open to parliamentary scrutiny using any of the means available.
On the regular reports sought through new clause 45, we think the work of the CDU is already covered in the way I have just set out. It would not be appropriate to lift up the hood to the point that the Russians and others can see exactly what is going on. Ofcom is already required to consult with the Secretary of State and relevant experts when developing its codes of practice, which gives the Secretary of State an appropriate mechanism.
I have been brief in the interest of time, but I hope I have set out how the Bill as drafted already provides a response to mis- and disinformation. I have also pointed out the existing parliamentary scrutiny to which the Government in general and the CDU in particular are subject. I therefore ask the hon. Member for Aberdeen North to withdraw the new clause.
Question put, That the clause be read a Second time.
I think you are probably getting fed up with me, Sir Roger, so I will try my best not to speak for too long. The new clause is one of the most sensible ones we have put forward. It simply allows Ofcom to ask regulated services to submit to Ofcom
“a specific piece of research held by the service”
or
“all research the service holds”
on a specific topic. It also allows Ofcom to produce a report into
“how regulated services commission, collate, publish and make use of research.”
The issues that we heard raised by Frances Haugen about the secretive nature of these very large companies gave us a huge amount of concern. Providers will have to undertake risk assessments on the basis of the number of users they have, the risk of harm to those users and what percentage of their users are children. However, Ofcom is just going to have to believe the companies when they say, “We have 1 million users,” unless it has the ability to ask for information that proves the risk assessments undertaken are adequate and that nothing is being hidden by those organisations. In order to find out information about a huge number of the platforms, particularly ones such as Facebook, we have had to have undercover researchers posing as other people, submitting reports and seeing how they come out.
We cannot rely on these companies, which are money-making entities. They exist to make a profit, not to make our lives better. In some cases they very much do make our lives better—in some cases they very much do not—but that is not their aim. Their aim is to try to make a profit. It is absolutely in their interests to underplay the number of users they have and the risk faced by people on their platforms. It is very much in their interest to underplay how the algorithms are firing content at people, taking them into a negative or extreme spiral. It is also in their interests to try to hide that from Ofcom, so that they do not have to put in the duties and mitigations that keep people safe.
We are not asking those companies to make the information public, but if we require them to provide to Ofcom their internal research, whether on the gender or age of their users, or on how many of their users are viewing content relating to self-harm, it will raise their standards. It will raise the bar and mean that those companies have to act in the best interests—or as close as they can get to them—of their users. They will have to comply with what is set out in the Bill and the directions of Ofcom.
I see no issue with that. Ofcom is not going to share the information with other companies in a way that could subvert competition law. Ofcom is a regulator; it literally does not do that. Our proposal would mean that Ofcom has the best, and the most, information in order to take sensible decisions to properly regulate the platforms. It is not a difficult provision for the Minister to accept.
We know that there is research being undertaken all the time by companies that is never published—neither publicly nor to the regulator. As the hon. Member for Aberdeen North said, publishing research undertaken by companies is an issue championed by Frances Haugen, whose testimony last month the Committee will remember. A few years ago, Frances Haugen brought to the public’s attention the extent to which research is held by companies such as Facebook—as it was called then—and never reaches the public realm.
Billions of members of the public are unaware that they are being tracked and monitored by social media companies as subjects in their research studies. The results of those studies are only published when revealed by brave whistleblowers. However, their findings could help charities, regulators and legislators to recognise harms and help to make the internet a safer place. For example, Frances Haugen leaked one Facebook study that found that a third of teenage girls said Instagram made them feel worse about their bodies. Facebook’s head of safety, Antigone Davis, fielded questions on this issue from United States Senators last September. She claimed that the research on the impact of Instagram and Facebook on children’s health was “not a bombshell”. Senator Richard Blumenthal responded:
“I beg to differ with you, Ms Davis, this research is a bombshell. It is powerful, gripping, riveting evidence that Facebook knows of the harmful effects of its site on children and that it has concealed those facts and findings.”
It is this kind of cover-up that new clause 19 seeks to prevent.
I remind the Committee of one more example that Frances Haugen illustrated to us in her evidence last month. Meta conducts frequent analyses of the estimated age of its users, which is often different from the ages they submit when registering, both among adults and children. Frances told us that Meta does this so that adverts can be targeted more effectively. However, if Ofcom could request this data, as the new clause would require, it would give an important insight into how many under-13s were in fact creating accounts on Facebook. Ofcom should be able to access such information, so I hope hon. Members and the Minister will support the new clause as a measure to increase transparency and support greater protections for children.
However, I am honestly a bit perplexed by the two speeches we have just heard, because the Bill sets out everything the hon. Members for Aberdeen North and for Worsley and Eccles South asked for in unambiguous, black and white terms on the face of the Bill—or black and green terms, because the Bill is published on green paper.
Clause 85 on page 74 outlines the power Ofcom has to request information from the companies. Clause 85(1) says very clearly that Ofcom may require a person
“to provide them with any information”—
I stress the word “any”—
“that they require for the purpose of exercising, or deciding whether to exercise, any of their online safety functions.”
Ofcom can already request anything of these companies.
For the avoidance of doubt, clause 85(5) lists the various purposes for which Ofcom can request information, and clause 85(5)(l)—on page 75, line 25—includes
“the purpose of carrying out research, or preparing a report, in relation to online safety matters”.
Ofcom can request anything, expressly including requesting information to carry out research, which is exactly what the hon. Member for Aberdeen North quite rightly asks for.
The hon. Lady then said, “What if they withhold information or, basically, lie?” Clause 92 on page 80 sets out the situation when people commit an offence. The Committee will see that clause 92(3)(a) states that a person “commits an offence” if
“the person provides information that is false in a material respect”.
Again, clause 92(5)(a) states that a person “commits an offence” if
“the person suppresses, destroys or alters, or causes or permits the suppression, destruction or alteration of, any information required to be provided.”
In short, if the person or company who receives the information request lies, or falsifies or destroys information, they are committing an offence that will trigger not only civil sanctions—under which the company can pay a fine of up to 10% of global revenue or be disconnected—but a personal offence that is punishable by up to two years in prison.
I hope I have demonstrated that clauses 85 and 92 already clearly contain the powers for Ofcom to request any information, and that if people lie, destroy information or suppress information—as the hon. Member for Aberdeen North rightly says they do at the moment—that will be a criminal offence with full sanctions available. I hope that demonstrates to the Committee’s satisfaction that the Bill does this already, and that it is important that it does so for the reasons that the hon. Lady set out.
The new clause would allow Ofcom to require a service to submit “all research the service holds on a topic specified by OFCOM.”
Ofcom could say, “We would like all the research you have on the actual age of users.”
My concern is that clause 85(1) allows Ofcom to require companies to provide it
“with any information that they require for the purpose of exercising, or deciding whether to exercise, any of their online safety functions.”
Ofcom might not know what information the company holds. I am concerned that Ofcom is able to say, as it is empowered to do by clause 85(1), “Could you please provide us with the research piece you did on under-age users or on the age of users?”, instead of having a more general power to say, “Could you provide us with all the research you have done?” I am worried that the power in clause 85(1) is more specific.
I can categorically say to the hon. Lady that the general ability of Ofcom is to ask for any relevant information—the word “any” does appear—and even if the information notice does not specify precisely what report it is, Ofcom does have that power and I expect it to exercise it and the company to comply. If the company does not, I would expect it to be prosecuted.
Clause, by leave, withdrawn.
New Clause 23
Priority illegal content: violence against women and girls
“(1) For the purposes of this Act, any provision applied to priority illegal content should also be applied to any content which—
(a) constitutes,
(b) encourages, or
(c) promotes
violence against women or girls.
(2) ‘Violence against women and girls’ is defined by Article 3 of the Council of Europe Convention on Preventing Violence Against Women and Domestic Violence (‘the Istanbul Convention’).” —(Alex Davies-Jones.)
This new clause applies provisions to priority illegal content to content which constitutes, encourages or promotes violence against women and girls.
Brought up, and read the First time.
This new clause would apply provisions applied to priority illegal content also to content that constitutes, encourages or promotes violence against women and girls. As it stands, the Bill is failing women and girls. In an attempt to tackle that alarming gap, the new clause uses the Istanbul convention definition of VAWG, given that the Home Secretary has so recently agreed to ratify the convention—just a decade after it was signed.
The Minister might also be aware that GREVIO—the Group of Experts on Action against Violence against Women and Domestic Violence—which monitors the implementation of the Istanbul convention, published a report in October 2021 on the digital dimension of violence against women and girls. It stated that domestic laws are failing to place the abuse of women and girls online
“in the context of a continuum of violence against women that women and girls are exposed to in all spheres of life, including in the digital sphere.”
The purpose of naming VAWG in the Bill is to require tech companies to be responsible for preventing and addressing VAWG as a whole, rather than limiting their obligations only to specific criminal offences listed in schedule 7 and other illegal content. It is also important to note that the schedule 7 priority list was decided on without any consultation with the VAWG sector. Naming violence against women and girls will also ensure that tech companies are held to account for addressing emerging forms of online hate, which legislation is often unable to keep up with.
We only need to consider accounts from survivors of online violence against women and girls, as outlined in “VAWG Principles for the Online Safety Bill”, published in September last year, to really see the profound impact that the issue is having on people’s lives. Ellesha, a survivor of image-based sexual abuse, was a victim of voyeurism at the hands of her ex-partner. She was filmed without her consent and was later notified by someone else that he had uploaded videos of her to Pornhub. She recently spoke at an event that I contributed to—I believe the right hon. Member for Basingstoke and others also did—on the launch of the “Violence Against Women and Girls Code of Practice”. I am sure we will come to that code of practice more specifically on Report. Her account was genuinely difficult to listen to.
This is an issue that Ellesha, with the support of EVAW, Glitch, and a huge range of other organisations, has campaigned on for some time. She says:
“Going through all of this has had a profound impact on my life. I will never have the ability to trust people in the same way and will always second guess their intentions towards me. My self confidence is at an all time low and although I have put a brave face on throughout this, it has had a detrimental effect on my mental health.”
Ellesha was informed by the police that they could not access the websites where her ex-partner had uploaded the videos, so she was forced to spend an immense amount of time trawling through all of the videos uploaded to simply identify herself. I can only imagine how distressing that must have been for her.
Pornhub’s response to the police inquiries was very vague in the first instance, and it later ignored every piece of subsequent correspondence. Eventually the videos were taken down, most likely by the ex-partner himself when he was released from the police station. Ellesha was told that Pornhub had only six moderators at the time—just six for the entire website—and both the platform and her ex-partner ultimately got away with allowing the damaging content to remain, even though the account was in his name and easily traced back to his IP address. That just is not good enough, and the Minister must surely recognise that the Bill fails women in its current form.
If the Minister needs any further impetus to genuinely consider the amendment, I point him to a BBC report from last week that highlighted how much obscene material of women and girls is shared online without their consent. The BBC’s Angus Crawford investigated Facebook accounts and groups that were seen to be posting pictures and videos of upskirting. Naturally, Meta—Facebook’s owner—said that it had a grip on the problem and that those accounts and groups had all been removed, yet the BBC was able to find thousands of users sharing material. Indeed, one man who posted videos of himself stalking schoolgirls in New York is now being investigated by the police. This is the reality of the internet; it can be a powerful, creative tool for good, but far too often it seeks to do the complete opposite.
I hate to make this a gendered argument, but there is a genuine difference between the experiences of men and women online. Last week the Minister came close to admitting that when I queried whether he had ever received an unsolicited indecent picture. I am struggling to understand why he has failed to consider these issues in a Bill proposed by his Department.
The steps that the Government are taking to tackle violence against women and girls offline are broadly to be commended, and I welcome a lot of the initiatives. The Minister must see sense and do the right thing by also addressing the harms faced online. We have a genuine opportunity in the Bill to prevent violence against women and girls online, or at least to diminish some of the harms they face. Will he please do the right thing?
Tackling violence against women and girls has been a long-standing priority of the Government. Indeed, a number of important new offences have already been and are being created, with protecting women principally in mind—the offence of controlling or coercive behaviour, set out in the Serious Crime Act 2015 and amended in the Domestic Abuse Act 2021; the creation of a new stalking offence in 2012; a revenge porn offence in 2015; and an upskirting offence in 2019. All of those offences are clearly designed principally to protect women and girls, who are overwhelmingly the victims of those offences. Indeed, the cyber-flashing offence created by clause 156—the first time we have ever had such an offence in this jurisdiction—will, again, overwhelmingly benefit women and girls, who are the victims of that offence.
All of the criminal offences I have mentioned—even if they are not mentioned in schedule 7, which I will come to in a moment—will automatically flow into the Bill via the provisions of clause 52(4)(d). Criminal offences where the victim is an individual, which these clearly all are, automatically flow into the provisions of the Bill, including the offences I just listed, which have been created particularly with women in mind.
I remind the Committee of the Domestic Abuse Act 2021, which was also designed to protect women. Increased penalties for stalking and harassment have been introduced, and we have ended the automatic early release of violent and sex offenders from prison—something I took through Parliament as a Justice Minister a year or two ago. Previously, violent and sex offenders serving standard determinate sentences were often released automatically at the halfway point of their sentence, but we have now ended that practice. Rightly, a lot has been done outside the Bill to protect women and girls.
Let me turn to what the Bill does to further protect women and girls. Schedule 7 sets out the priority offences—page 183 of the Bill. In addition to all the offences I have mentioned previously, which automatically flow into the illegal safety duties, we have set out priority offences whereby companies must not just react after the event, but proactively prevent the offence from occurring in the first place. I can tell the Committee that many of them have been selected because we know that women and girls are overwhelmingly the victims of such offences. Line 21 lists the offence of causing
“intentional harassment, alarm or distress”.
Line 36 mentions the offence of harassment, and line 37 the offence of stalking. Those are obviously offences where women and girls are overwhelmingly the victims, which is why we have picked them out and put them in schedule 7—to make sure they have the priority they deserve.
I accept the point about having this “on the face of the Bill”. That is why clauses 10 and 12 use the word “characteristic”, which we debated previously. The risk assessment duties, which are the starting point for the Bill’s provisions, must specifically and expressly—it is on the face of the Bill—take into account characteristics, first and foremost gender, but also racial identity, sexual orientation and so on. Those characteristics must be expressly addressed by the risk assessments for adults and for children, in order to make sure that the particular vulnerabilities of people with those characteristics, the extra levels of abuse they suffer, and the special protections they need are recognised and addressed. That is why those provisions are in the Bill, in clauses 10 and 12.
A point was raised about platforms not responding to complaints raised about abusive content that has been put online—the victim complains to the platform and nothing happens. The hon. Members for Pontypridd and for Aberdeen North are completely right that this is a huge problem that needs to be addressed. Clause 18(2) places a duty—they have to do it; it is not optional—on these platforms to operate a complaints procedure that is, in paragraph (c),
“easy to access, easy to use (including by children)”
and that, in paragraph (b),
“provides for appropriate action to be taken”.
They must respond. They must take appropriate action. That is a duty under clause 18. If they do not comply with that duty on a systemic basis, they will be enforced against. The shadow Minister and the hon. Member for Aberdeen North are quite right. The days of the big platforms simply ignoring valid complaints from victims have to end, and the Bill will end them.
In conclusion, we will be publishing a list of harms, including priority harms for children and adults, which will then be legislated for in secondary legislation. The list will be constructed with the vulnerability of women and girls particularly in mind. When Committee members see that list, they will find it reassuring on this topic. I respectfully resist the new clause, because the Bill is already incredibly strong in this important area as it has been constructed.
Question put, That the clause be read a Second time.
New clause 24 would enable users to bring civil proceedings against providers when they fail to meet their duties under part 3 of the Bill. As has been said many times, power is currently skewed significantly against individuals and in favour of big corporations, leading people to feel that they have no real ability to report content or complain to companies because, whenever they do, there is no response and no action. We have discussed how the reporting, complaints and super-complaints mechanisms in the Bill could be strengthened, as well as the potential merits of an ombudsman, which we argued should be considered when we debated new clause 1.
In tabling this new clause, we are trying to give users the right to appeal through another route—in this case, the courts. As the Minister will be aware, that was a recommendation of the Joint Committee, whose report stated:
“While we recognise the resource challenges both for individuals in accessing the courts and the courts themselves, we think the importance of issues in this Bill requires that users have a right of redress in the courts. We recommend the Government develop a bespoke route of appeal in the courts to allow users to sue providers for failure to meet their obligations under the Act.”
The Government’s response to that recommendation was that the Bill would not change the current situation, which allows individuals to
“seek redress through the courts in the event that a company has been negligent or is in breach of its contract with the individual.”
It went on to note:
“Over time, as regulatory precedent grows, it will become easier for individuals to take user-to-user services to court when necessary.”
That seems as close as we are likely to get to an admission that the current situation for individuals is far from easy. We should not have to wait for the conclusion of the first few long and drawn-out cases before it becomes easier for people to fight companies in the courts.
Some organisations have rightly pointed out that a system of redress based on civil proceedings in the courts risks benefiting those with the resources to sue—as we know, that is often the case. However, including that additional redress system on the face of the Bill should increase pressure on companies to fulfil their duties under part 3, which will hopefully decrease people’s need to turn to the redress mechanism.
If we want the overall system of redress to be as strong as possible, individuals must have the opportunity to appeal failures of a company’s duty of care as set out in the Bill. The Joint Committee argued that the importance of the issues dealt with by the Bill requires that users have a right of redress in the courts. The Government did not respond to that criticism in their formal response, but it is a critical argument. A balancing act between proportionate restrictions and duties versus protections against harms is at the heart of this legislation, and has been at the heart of all our debates. Our position is in line with that of the Joint Committee: these issues are too important to deny individuals the right to appeal failures of duty by big companies through the courts.
Secondly, the new clause creates a right of action that does not currently exist: a right of individual action where the company is in breach of one of the duties set out in part 3 of the Bill. Individuals being able to sue for a breach of a statutory duty that we are creating is not the way in which we are trying to construct enforcement under the Bill. We will get social media firms to comply through Ofcom acting as the regulator, rather than via individuals litigating these duties on a case-by-case basis. A far more effective way of dealing with the problems, as we discussed previously when we debated the ombudsman, is to get Ofcom to deal with this on behalf of the whole public on a systemic basis, funded not by individual litigants’ money, which is what would happen, at least in the first instance, if they had to proceed individually. Ofcom should act on behalf of us all collectively—this should appeal to socialists—using charges levied from the industry itself.
That is why we want to enforce against these companies using Ofcom, funded by the industry and acting on behalf of all of us. We want to fix these issues not just on an individual basis but systemically. Although I understand the Opposition’s intent, the first part simply declares what is already the law, and the second bit takes a different route from the one that the Bill takes. The Bill’s route is more comprehensive and will ultimately be more effective. Perhaps most importantly of all, the approach that the Bill takes is funded by the fees charged on the polluters—the social media firms—rather than requiring individual citizens, at least in the first instance, to put their hand in their own pocket, so I think the Bill as drafted is the best route to delivering these objectives.
Question put, That the clause be read a Second time.
New clause 25 would place an obligation on Ofcom to report annually to Parliament with an update on the effectiveness of the Online Safety Bill, which would also indicate Ofcom’s ability to implement the measures in the Bill to tackle online harms.
As we have discussed, chapter 7 of the Bill compels Ofcom to compile and issue reports on various aspects of the Bill as drafted. Some of those reports are to be made public by Ofcom, and others are to be issued to the Secretary of State, who must subsequently lay them before Parliament. However, new clause 25 would place a direct obligation on Ofcom to be transparent to Parliament about the scale of harms being tackled, the type of harms encountered and the effectiveness of the Bill in achieving its overall objectives.
The current proposal in clause 135 for an annual transparency report is not satisfactory. Those transparency reports are not required to be laid before Parliament. The clause places only vague obligations on reporting, and it will not give Parliament the breadth of information needed to assess the Online Safety Bill’s effectiveness.
Clause 149 is welcome. It will ensure that a review conducted by the Secretary of State in consultation with Ofcom is placed before Parliament. However, that review is a one-off that will provide just a small snapshot of the Bill’s effectiveness. It may not fully reflect Ofcom’s concerns as the regulator, and most importantly it will not disclose the data and information that Parliament needs to accurately assess the impact of the Bill.
New clause 25 will place a number of important obligations on Ofcom to provide us with that crucial information. First, Ofcom will report annually to Parliament on the overall effectiveness of the Act. That report will allow Ofcom to explore fully where the Act is working, where it could be tightened and where we have left gaps. Throughout the Bill we are heaping considerable responsibility on to Ofcom, and it is only right that Ofcom is able to feed back publicly and state clearly where its powers allow it to act, and where it is constrained and in need of assistance.
Secondly, new clause 25 will compel Ofcom to monitor, collate and publish figures relating to the number of harms removed by category 1 services—an important indicator of the scale of the issue and of whether the Act is working.
Thirdly, we need to know how often Ofcom is intervening, compared with how often the platforms themselves are acting. That crucial figure will allow us to assess the balance of regulation, which assists not only us in the UK but countries looking at the legislation as a guide for their own regulation.
Finally, Ofcom will detail the harms removed by type to identify any areas where the Act may be falling short, and where further attention may be needed.
I hope the Committee understands why this information is absolutely invaluable, when we have previously discussed our concerns that this groundbreaking legislation will need constant monitoring. I hope it will also understand why the information needs to be transparent in order to instil trust in the online space, to show the zero-tolerance approach to online harms, and to show countries across the globe that the online space can be effectively regulated to protect citizens online. Only Parliament, as the legislature, can be an effective monitor of that information. I hope I can count on the Government’s support for new clause 25.
New clause 25 is an important addition that would offer an overview of the effectiveness of the Bill and act as a warning bell for any unaddressed historical or emerging harms. Not only would such a report benefit legislators, but the indicators included in the report would be helpful for both Ofcom and user advocacy groups. We cannot continue to attempt to regulate the internet blind. We must have the necessary data and analysis to be sure that the provisions in the Bill are as effective as they can be. I hope the Minister can support this new clause.
I was being slightly facetious there, because the hon. Member for Batley and Spen is quite right to raise the issue. However, the duty she is seeking to create via new clause 25 is already covered by the duties in the Office of Communications Act. The reports that Ofcom publishes under that duty will include its new duties under the Bill. Having made that clear, I trust that new clause 25 can be withdrawn.
Question put, That the clause be read a Second time.
This new clause would require the Secretary of State to publish and lay before Parliament a report on the harms caused to users by synthetic media content, also known as deepfakes. The report must contain particular reference to the harms caused to those working in the entertainment industry.
The Government define artificial intelligence as
“technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”.
That kind of technology has advanced rapidly in recent years, and commercial AI companies can be found across all areas of the entertainment industries, including voice, modelling, music, dance, journalism and gaming—the list goes on.
One key area of development is AI-made performance synthetisation, which is the process of creating a synthetic performance. That has a wide range of applications, including automated audiobooks, interactive digital avatars and “deepfake” technology, which often, sadly, has more sinister implications. Innovation for the entertainment industry is welcome and, when used ethically and responsibly, can have various benefits. For example, AI systems can create vital sources of income for performers and creative workers. From an equalities perspective, it can be used to increase accessibility for disabled workers.
However, deepfake technology has received significant attention globally due to its often-malicious application. Deepfakes have been defined as,
“realistic digital forgeries of videos or audio created with cutting-edge machine learning techniques.”
An amalgamation of artificial intelligence, falsification and automation, deepfakes use deep learning to replicate the likeness and actions of real people. Over the past few years, deepfake technology has become increasingly sophisticated and accessible. Various apps that utilise deepfake technology can be downloaded for free or at low cost.
Deepfakes can cause short-term and long-term social harms to individuals working in the entertainment industry, and to society more broadly. Currently, deepfakes are mostly used in pornography, inflicting emotional and reputational damage, and in some cases violence towards the individual—mainly women. The US entertainment union, the Screen Actors Guild, estimates that 96% of deepfakes are pornographic and depict women, and 99% of deepfake subjects are from the entertainment industry.
However, deepfakes used without consent pose a threat in other key areas. For example, deepfake technology has the power to alter the democratic discourse. False information about institutions, policies and public leaders, powered by a deepfake, can be exploited to spin information and manipulate belief. Deepfakes have the potential to sabotage the image and reputation of a political candidate and may alter the course of an election. They could be used to impersonate business leaders and executives to facilitate fraud, and they also have the potential to accelerate the already declining trust in the media.
Alongside the challenges presented by deepfakes, there are issues around consent for performers and creative workers. In a famous case, the Canadian voiceover artist Bev Standing won a settlement after TikTok synthesised her voice without her consent and used it for its first ever text-to-speech voice function. Many artists in the UK are also having their image, voice or likeness used without their permission. AI systems have also started to replace jobs for skilled professional performers because using them is often perceived to be a cheaper and more convenient way of doing things.
Audio artists are particularly concerned by the development of digital voice technology for automated audiobooks, using the same technology used for digital voice assistants such as Siri and Alexa. It is estimated that within one or two years, high-end synthetic voices will have reached human levels. Equity recently conducted a survey on this topic, which found that 65% of performers responding thought that the development of AI technology poses a threat to employment opportunities in the performing arts sector. That figure rose to 93% for audio artists. Pay is another key issue; it is common for artists to not be compensated fairly, and sometimes not be paid at all, when engaging with AI. Many artists have also been asked to sign non-disclosure agreements without being provided with the full information about the job they are taking part in.
Government policy making is non-existent in this space. In September 2021 the Government published their national AI strategy, outlining a 10-year plan to make Britain a global AI superpower. In line with that strategy, the Government have delivered two separate consultations looking at our intellectual property system in relation to AI.
Before we leave the room, my understanding is that it is hoped that the Committee will report the Bill this afternoon. That is a matter for the usual channels; it is nothing to do with the Chair. However, it is of course an open-ended session, so if you are getting close to the mark, you may choose to go on. If that poses a problem for Ms Rees, I am prepared to take the Chair again to see it through if we have to. On the assumption that I do not, thank you all very much indeed for the courtesy you have shown throughout this session, which has been exemplary. I also thank the staff; thank you very much.
Adjourned till this day at Two o’clock.
Contains Parliamentary information licensed under the Open Parliament Licence v3.0.