PARLIAMENTARY DEBATE
Online Safety Bill (Tenth sitting) - 14 June 2022 (Commons/Public Bill Committees)
Debate Detail
Chair(s) † Sir Roger Gale, Christina Rees
Members: † Ansell, Caroline (Eastbourne) (Con)
† Bailey, Shaun (West Bromwich West) (Con)
† Blackman, Kirsty (Aberdeen North) (SNP)
Carden, Dan (Liverpool, Walton) (Lab)
† Davies-Jones, Alex (Pontypridd) (Lab)
† Double, Steve (St Austell and Newquay) (Con)
† Fletcher, Nick (Don Valley) (Con)
Holden, Mr Richard (North West Durham) (Con)
† Keeley, Barbara (Worsley and Eccles South) (Lab)
† Leadbeater, Kim (Batley and Spen) (Lab)
† Miller, Dame Maria (Basingstoke) (Con)
Mishra, Navendu (Stockport) (Lab)
† Moore, Damien (Southport) (Con)
† Nicolson, John (Ochil and South Perthshire) (SNP)
† Philp, Chris (Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport)
† Russell, Dean (Watford) (Con)
† Stevenson, Jane (Wolverhampton North East) (Con)
Clerks: Katya Cassidy, Kevin Maddison, Seb Newman, Committee Clerks
† attended the Committee
Public Bill Committee
Tuesday 14 June 2022
(Afternoon)
[Sir Roger Gale in the Chair]
Online Safety Bill
“any of the following provisions of the Suicide Act 1961—
(a) section 2;
(b) section 3A (inserted by section (Communication offence for encouraging or assisting self-harm) of this Act).”
‘(1) In the Suicide Act 1961, after section 3 insert—
“3A Communication offence for encouraging or assisting self-harm
(1) A person (“A”) commits an offence if—
(a) A sends a message,
(b) the message encourages or could be used to assist another person (“B”) to inflict serious physical harm upon themselves, and
(c) A’s act was intended to encourage or assist the infliction of serious physical harm.
(2) The person referred to in subsection (1)(b) need not be a specific person (or class of persons) known to, or identified by, A.
(3) A may commit an offence under this section whether or not any person causes serious physical harm to themselves, or attempts to do so.
(4) A person guilty of an offence under this section is liable—
(a) on summary conviction, to imprisonment for a term not exceeding 12 months, or a fine, or both;
(b) on indictment, to imprisonment for a term not exceeding 5 years, or a fine, or both.
(5) “Serious physical harm” means serious injury amounting to grievous bodily harm within the meaning of the Offences Against the Person Act 1861.
(6) No proceedings shall be instituted for an offence under this section except by or with the consent of the Director of Public Prosecutions.
(7) If A arranges for a person (“A2”) to do an act and A2 does that act, A is also to be treated as having done that act for the purposes of subsection (1).
(8) In proceedings for an offence to which this section applies, it shall be a defence for A to prove that—
(a) B had expressed intention to inflict serious physical harm upon themselves prior to them receiving the message from A;
(b) B’s intention to inflict serious physical harm upon themselves was not initiated by A; and
(c) the message was wholly motivated by compassion towards B or to promote the interests of B’s health or wellbeing.”’
“I know that every attempt my brother considered at ending his life, from his early 20s to when he died in April, aged 40, was based on extensive online research. It was all too easy for him to find step-by-step instructions so he could evaluate the effectiveness and potential impact of various approaches and, most recently, given that he had no medical background, it was purely his ability to work out the quantities of various drugs and likely impact of taking them in combination that equipped him to end his life.”
It is so easy when discussing the minutiae of the Bill to forget its real-world impact. I have worked on the new clause with Samaritans, the leading charity in trying to create a suicide-safer internet, and I use that quote with its permission. It is axiomatic that suicide and self-harm have a devastating impact on people’s lives. The Bill must ensure that the online space does not aid the spread of content that would promote this behaviour in any way.
There has rightly been much talk about how children are affected by self-harm content online. However, it should be stressed that they do not suffer exclusively because of that content. Between 2011 and 2015, 151 patients who died by suicide were known to have visited websites that encouraged suicide or shared information about methods of harm, and 82% of those patients were aged over 25. It is likely that, as the Bill stands, suicide-promoting content will be covered on category 1 services, as it will be designated as harmful. Crucially, however, unless this amendment is passed, that content will not be covered on smaller sites. As Samaritans has identified, it is precisely on these smaller fora and websites that harm proliferates. The 151 patients who took their own life after visiting harmful websites may have been part of a handful of people using those sites, which would not fall under the definition of category 1, as I am sure the Minister will confirm.
None of us on the Opposition Benches seeks to make political capital out of any of the things we propose. All of us, on both sides of the House, are here with the best of intentions, to try to ensure that we get the best possible Bill. We all want to be able to vote for the Bill at the end of the day. Indeed, as I said, I have worked with two friends on the Conservative Benches—with the hon. Member for Watford on the Joint Committee on the draft Bill and with the hon. Member for Wolverhampton North East on the Select Committee on Digital, Culture, Media and Sport—and, as we know, they have both voted for various proposals. It is perhaps part of the frustration of the party system here that people are forced to go through the hoops and pretend that they do not really agree with things that they actually do agree with.
Let us try to move on with this, in a way that we have not done hitherto, and see if we can agree on amendments. We will withdraw amendments if we are genuinely convinced that they have already been considered by the Government. On the Government side, let them try to accept some of our amendments—just begin to accept some—if, as with this one, they think they have some merit.
I was talking about Samaritans, and exactly what it wants to do with the Bill. It is concerned about harmful content remaining after the Bill is passed. This feeds into potentially the most significant flaw in the Bill: it does not mandate risk assessments based exclusively on risk. By adding the qualifications of size and scope, the Bill wilfully lets some of the most harmful content slip through its fingers—wilfully, but I am sure not deliberately. Categorisation will be covered by a later amendment, tabled by my hon. Friend the Member for Aberdeen North, so I shall not dwell on it now.
In July 2021, the Law Commission for England and Wales recommended the creation of a new narrow offence of the “encouragement or assistance” of serious self-harm with “malicious intent”. The commission identified that there is
“currently no offence that adequately addresses the encouragement of serious self-harm.”
The recommendation followed acknowledgement that
“self-harm content online is a worrying phenomenon”
and should have a
“robust fault element that targets deliberate encouragement of serious self-harm”.
Currently, there are no provisions in the Bill to create a new offence of assisting or encouraging self-harm.
In conclusion, I urge the Minister to listen not just to us but to the expert charities, including Samaritans, and to people with lived experience of self-harm and suicide who are calling for regulation of these dangerous sites.
I, too, pay tribute to Samaritans for all the work it has done in supporting the Bill and these amendments to it. As colleagues will be aware, new clause 36 follows a recommendation from the Law Commission dating back to July 2021. The commission recommended the creation of a new, narrow offence of the “encouragement or assistance” of serious self-harm with “malicious intent”. It identified that there is
“currently no offence that adequately addresses the encouragement of serious self-harm.”
The recommendation followed acknowledgement that
“self-harm content online is a worrying phenomenon”
and should have a
“robust fault element that targets deliberate encouragement of serious self-harm”.
Currently, there are no provisions in the Bill to create a new offence of assisting or encouraging self-harm, despite the fact that other recommendations from the Law Commission report have been brought into the Bill, such as creating a new offence of cyber-flashing and prioritising tackling illegal suicide content.
We all know that harmful suicide and self-harm content is material that has the potential to cause or exacerbate self-harm and suicidal behaviours. Content relating to suicide and self-harm falls into both categories in the Bill—illegal content and legal but harmful content. Encouraging or assisting suicide is also currently a criminal offence in England and Wales under the Suicide Act 1961, as amended by the Coroners and Justice Act 2009.
Content encouraging or assisting someone to take their own life is illegal and has been included as priority illegal content in the Bill, meaning that platforms will be required to proactively and reactively prevent individuals from encountering it, and search engines will need to structure their services to minimise the risk to individuals encountering the content. Other content, including content that positions suicide as a suitable way of overcoming adversity or describes suicidal methods, is legal but harmful.
The Labour party’s Front-Bench team recognises that not all content falls neatly into the legal but harmful category. What can be helpful for one user can be extremely distressing to another. Someone may find it extremely helpful to share their personal experience of suicide, for example, and that may also be helpful to other users. However, the same material could heighten suicidal feelings and levels of distress in someone else. We recognise the complexities here and the difficulty of drawing that line, but the need to delineate harmful from helpful content relating to suicide and self-harm should not detract from tackling legal but clearly harmful content.
In its current form, the Bill will continue to allow legal but clearly harmful suicide and self-harm content to be accessed by over-18s. Category 1 platforms, which have the highest reach and functionality, will be required to carry out risk assessments of, and set out in their terms and conditions their approach to, legal but harmful content in relation to over-18s. As the hon. Member for Ochil and South Perthshire outlined, however, the Bill’s impact assessment states that “less than 0.001%” of in-scope platforms
“are estimated to meet the Category 1 and 2A thresholds”,
and estimates that only 20 platforms will be required to fulfil category 1 obligations. There is no requirement on the smaller platforms, including those that actively encourage suicide, to do anything at all to protect over-18s. That simply is not good enough. That is why the Labour party supports new clause 36, and we urge the Minister to do the right thing by joining us.
If Members have been listening to me carefully, they will know that the Government are doing further work or are carefully listening in a few areas. We may have more to say on those topics as the Bill progresses; it is always important to get the drafting of the provisions exactly right. I hope that that has indicated to the hon. Gentleman our willingness to listen, which I think we have already demonstrated well.
On new clause 36, it is important to mention that there is already a criminal offence of inciting suicide. It is a schedule 7 priority offence, so the Bill already requires companies to tackle content that amounts to the existing offence of inciting suicide. That is important. We would expect material that encourages children to self-harm to be listed as a primary priority harm relating to children, where, again, there is a proactive duty to protect them. We have not yet published that primary priority harm list, but it would be reasonable to expect material that encourages children to self-harm to be on it. Again, although we have not yet published the list of content that will be on the adult priority harm list—obviously, I cannot pre-empt the publication of that list—one might certainly wish for content that encourages adults to self-harm to appear on it too.
The hon. Gentleman made the point that duties relating to adults would apply only to category 1 companies. Of course, the ones that apply to children would apply to all companies where there was significant risk, but he is right that were that priority harm added to the adult legal but harmful list, it would apply only to category 1 companies.
Those category 1 companies are likely to be small in number, as I think the shadow Minister said, but I would imagine—I do not have the exact number—that they cover well over 90% of all traffic. However, as I hinted on the Floor of the House on Second Reading—we may well discuss this later—we are thinking about including platforms that may not meet the category 1 size threshold but none the less pose high-level risks of harm. If that is done—I stress “if”—it will address the point raised by the hon. Member for Ochil and South Perthshire. That may answer the point that the hon. Member for Batley and Spen was going to raise, but if not, I happily give way.
On amendment 142 and the attendant new clause 36, the Government agree with the sentiment behind them—namely, the creation of a new offence of encouraging or assisting serious self-harm. We agree with the substance of the proposal from the hon. Member for Ochil and South Perthshire. As he acknowledged, the matter is under final consideration by the Law Commission and our colleagues in the Ministry of Justice. The offence initially proposed by the Law Commission was wider in scope than that proposed under new clause 36. The commission’s proposed offence covered the offline world, as well as the online one. For example, the new clause as drafted would not cover assisting a person to self-harm by providing them with a bladed article because that is not an online communication. The offence that the Law Commission is looking at is broader in scope.
The Government have agreed in principle to create an offence based on the Law Commission recommendation in separate legislation, and once that is done the scope of the new offence will be wider than that proposed in the new clause. Rather than adding the new clause and the proposed limited new offence to this Bill, I ask that we implement the offence recommended by the Law Commission, the wider scope of which covers the offline world as well as the online world, in separate legislation. I would be happy to make representations to my colleagues in Government, particularly in the MOJ, to seek clarification about the relevant timing, because it is reasonable to expect it to be implemented sooner rather than later. Rather than rushing to introduce that offence with limited scope under the Bill, I ask that we do it properly as per the Law Commission recommendation.
Once the Law Commission recommendation is enacted in separate legislation, to which the Government have already agreed in principle, it will immediately and automatically flow through to be incorporated into clause 52(4)(d), which relates to illegal content, and under clause 176, the Secretary of State may, subject to parliamentary approval, designate the new offence as a priority offence under schedule 7 via a statutory instrument. The purpose of amendment 142 can therefore be achieved through an SI.
The Government entirely agree with the intention behind proposed new clause 36, but I think the way to do this is to implement the full Law Commission offence as soon as we can and then, if appropriate, add it to schedule 7 by SI. The Government agree with the spirit of the hon. Gentleman’s proposal, but I believe that the Government already have a plan to do a more complete job of creating the new offence.
Amendment, by leave, withdrawn.
“1A An offence under section 13 of the Criminal Justice Act (Northern Ireland) 1966 (c. 20 (N.I.)) (assisting suicide etc).”
This amendment adds the specified offence to Schedule 7, with the effect that content amounting to that offence counts as priority illegal content.
In future, if new Scottish or Northern Irish offences are created, the Secretary of State will be able to consult Scottish or Northern Irish Ministers and, by regulations, amend schedule 7 to add the new offences that may be appropriate if conceived by the devolved Parliament or Assembly in due course. That, I think, answers the question asked by the hon. Lady earlier in our proceedings. As I say, we consulted the devolved Administrations extensively and I hope that the Committee will assent readily to the amendments.
With reference to some of the later paragraphs, I am keen for the Minister to explain briefly how this will work in the case of Scotland. We believe that the revenge porn offence in Scotland is more broadly drawn than the English version, so the level of protection for women in England and Wales will be increased. Can the Minister confirm that?
The Bill will not apply the Scottish offence to English offenders, but it means that content that falls foul of the law in Scotland, but not in England or Wales, will still be relevant regulated content for service providers, irrespective of the part of the UK in which the service users are located. That makes sense from the perspective of service providers, but I would be grateful for clarity from the Minister on this point.
I appreciate that the Minister has worked with the devolved Administrations to table the amendments. I also appreciate the way in which amendment 126 is written, such that the Secretary of State “must consult” Scottish Ministers and the Department of Justice in Northern Ireland before making regulations that relate to legislation in either of the devolved countries. I am glad that the amendments have been drafted in this way and that the concern that we heard about in evidence no longer seems to exist, and I am pleased with the Minister’s decision about the way in which to make any future changes to legislation.
I agree with the position put forward by the hon. Member for Pontypridd. My understanding, from what we heard in evidence a few weeks ago, is that, legally, all will have to meet the higher bar of the offences, and therefore anyone anywhere across the UK will be provided with the additional level of protection. She is right that the offence might not apply to everyone, but the service providers will be subject to the requirements elsewhere. That is my view as well. Once again, I thank the Minister.
Amendment 116 agreed to.
Amendments made: 117, in schedule 7, page 183, line 29, at end insert—
“4A An offence under section 50A of the Criminal Law (Consolidation) (Scotland) Act 1995 (racially-aggravated harassment).”
This amendment adds the specified offence to Schedule 7, with the effect that content amounting to that offence counts as priority illegal content.
Amendment 118, in schedule 7, page 183, line 36, at end insert—
“5A An offence under any of the following provisions of the Protection from Harassment (Northern Ireland) Order 1997 (S.I. 1997/1180 (N.I. 9))—
(a) Article 4 (harassment);
(b) Article 6 (putting people in fear of violence).”
This amendment adds the specified offences to Schedule 7, with the effect that content amounting to those offences counts as priority illegal content.
Amendment 119, in schedule 7, page 184, line 2, at end insert—
“6A An offence under any of the following provisions of the Criminal Justice and Licensing (Scotland) Act 2010 (asp 13)—
(a) section 38 (threatening or abusive behaviour);
(b) section 39 (stalking).”
This amendment adds the specified offences to Schedule 7, with the effect that content amounting to those offences counts as priority illegal content.
Amendment 120, in schedule 7, page 184, line 38, at end insert—
“12A An offence under any of the following provisions of the Criminal Justice (Northern Ireland) Order 1996 (S.I. 1996/3160 (N.I. 24))—
(a) Article 53 (sale etc of knives);
(b) Article 54 (sale etc of knives etc to minors).”
This amendment adds the specified offences to Schedule 7, with the effect that content amounting to those offences counts as priority illegal content.
Amendment 121, in schedule 7, page 184, line 42, at end insert—
“13A An offence under any of the following provisions of the Firearms (Northern Ireland) Order 2004 (S.I. 2004/702 (N.I. 3))—
(a) Article 24 (sale etc of firearms or ammunition without certificate);
(b) Article 37(1) (sale etc of firearms or ammunition to person without certificate etc);
(c) Article 45(1) and (2) (purchase, sale etc of prohibited weapons);
(d) Article 63(8) (sale etc of firearms or ammunition to people who have been in prison etc);
(e) Article 66A (supplying imitation firearms to minors).”
This amendment adds the specified offences to Schedule 7, with the effect that content amounting to those offences counts as priority illegal content.
Amendment 122, in schedule 7, page 184, line 44, at end insert—
“14A An offence under any of the following provisions of the Air Weapons and Licensing (Scotland) Act 2015 (asp 10)—
(a) section 2 (requirement for air weapon certificate);
(b) section 24 (restrictions on sale etc of air weapons).”
This amendment adds the specified offences to Schedule 7, with the effect that content amounting to those offences counts as priority illegal content.
Amendment 123, in schedule 7, page 185, line 8, at end insert—
“16A An offence under any of the following provisions of the Sexual Offences (Northern Ireland) Order 2008 (S.I. 2008/1769 (N.I. 2))—
(a) Article 62 (causing or inciting prostitution for gain);
(b) Article 63 (controlling prostitution for gain).”—(Chris Philp.)
This amendment adds the specified offences to Schedule 7, with the effect that content amounting to those offences counts as priority illegal content.
Amendments made: 124, in schedule 7, page 185, line 14, at end insert—
“18A An offence under section 2 of the Abusive Behaviour and Sexual Harm (Scotland) Act 2016 (asp 22) (disclosing, or threatening to disclose, an intimate photograph or film).”
This amendment adds the specified offence to Schedule 7, with the effect that content amounting to that offence counts as priority illegal content.
Amendment 125, in schedule 7, page 185, line 28, at end insert—
“20A An offence under section 49(3) of the Criminal Justice and Licensing (Scotland) Act 2010 (articles for use in fraud).”—(Chris Philp.)
This amendment adds the specified offence to Schedule 7, with the effect that content amounting to that offence counts as priority illegal content.
Amendment proposed: 59, in schedule 7, page 185, line 39, at end insert—
“Animal Welfare
22A An offence under any of the following provisions of the Animal Welfare Act 2006—
(a) section 4 (unnecessary suffering);
(b) section 5 (mutilation);
(c) section 7 (administration of poisons);
(d) section 8 (fighting);
(e) section 9 (duty of person responsible for animal to ensure welfare).
22B An offence under any of the following provisions of the Animal Health and Welfare (Scotland) Act 2006—
(a) section 19 (unnecessary suffering);
(b) section 20 (mutilation);
(c) section 21 (cruel operations);
(d) section 22 (administration of poisons);
(e) section 23 (fighting);
(f) section 24 (ensuring welfare of animals).
22C An offence under any of the following provisions of the Welfare of Animals Act (Northern Ireland) 2011—
(a) section 4 (unnecessary suffering);
(b) section 5 (prohibited procedures);
(c) section 7 (administration of poisons);
(d) section 8 (fighting);
(e) section 9 (ensuring welfare of animals).
22D For the purpose of paragraphs 22A, 22B or 22C of this Schedule, the above offences are deemed to have taken place regardless of whether the offending conduct took place within the United Kingdom, if the offending conduct would have constituted an offence under the provisions contained within those paragraphs.”—(Alex Davies-Jones.)
This amendment adds certain animal welfare offences to the list of priority offences in Schedule 7.
Question put, That the amendment be made.
“Human trafficking
22A An offence under section 2 of the Modern Slavery Act 2015.”
This amendment would designate human trafficking as a priority offence.
Our amendment seeks to deal explicitly with what Meta and other companies refer to as “domestic servitude”, which we know better as human trafficking. This abhorrent practice has sadly been part of our society for hundreds if not thousands of years, and today, human traffickers are aided by various apps and platforms. The same platforms that connect us with old friends and family across the globe have been hijacked by the very worst people in our world, who are using them to create networks of criminal enterprise, none more cruel than human trafficking.
Investigations by the BBC and The Wall Street Journal have uncovered how traffickers use Instagram, Facebook and WhatsApp to advertise, sell, and co-ordinate the trafficking of young women. One would think that this issue would be of the utmost importance to Meta—Facebook, as it was at the time—yet, as the BBC reported,
“the social media giant only took ‘limited action’ until ‘Apple Inc. threatened to remove Facebook’s products from the App Store, unless it cracked down on the practice’.”
Those of us who have sat on the DCMS Committee and the Joint Committee on the draft Bill—I and my friends across the aisle, the hon. Members for Wolverhampton North East and for Watford—know exactly what it is like to have Facebook’s high heid yins before you. They will do absolutely nothing to respond to legitimate pressure. They understand only one thing: the force of law and of financial penalty. Only when its profits were in danger did Meta take the issue seriously.
The omission of human trafficking from schedule 7 is especially worrying because if it is not directly addressed as priority illegal content, we can be certain that it will not be prioritised by the platforms. We know that from their previous behaviour.
We understand that it is difficult to try to regulate in respect of human trafficking on platforms. It requires work across borders and platforms, with moderators speaking different languages. We established that Facebook does not have moderators who speak different languages. On the Joint Committee on the draft Bill, we discovered that Facebook does not moderate content in English to any adequate degree. Just look at the other languages around the world—do we think Facebook has moderators who work in Turkish, Finnish, Swedish, Icelandic or a plethora of other languages? It certainly does not. The only language that Facebook tries to moderate—deeply inadequately, as we know—is English. We know how bad the moderation is in English, so can the Committee imagine what it is like in some of the world’s other languages? The most terrifying things are allowed to happen without moderation.
Regulating in respect of human trafficking on platforms is not cheap or easy, but it is utterly essential. The social media companies make enormous amounts of money, so let us shed no tears for them and the costs that will be entailed. If human trafficking is not designated a priority harm, I fear it will fall by the wayside, so I must ask the Minister: is human trafficking covered by another provision on priority illegal content? Like my hon. Friend the Member for Aberdeen North, I cannot see where in the Bill that lies. If the answer is yes, why are the human rights groups not satisfied with the explanation? What reassurance can the Minister give to the experts in the field? Why not add a direct reference to the Modern Slavery Act, as in the amendment?
If the answer to my question is no, I imagine the Minister will inform us that the Bill requires platforms to consider all illegal content. In what world is human trafficking that is facilitated online not a priority? Platforms must be forced to be proactive on this issue; if not, I fear that human trafficking, like so much that is non-priority illegal content, will not receive the attention it deserves.
More widely, although we appreciate that the establishment of priority offences online is the route the Government have chosen to go down with the Bill, we believe the Bill remains weak in relation to addressing harms to adults and wider societal harms. Sadly, the Bill remains weak in its approach and has seemingly missed a number of known harms to both adults and children that we feel are a serious omission. Three years on from the White Paper, the Government know where the gaps are, yet they have failed to address them. That is why we are pleased to support the amendment tabled by the hon. Members for Ochil and South Perthshire and for Aberdeen North.
Human trafficking offences are a serious omission from schedule 7 that must urgently be rectified. As we all know from whistleblower Frances Haugen’s revelations, Facebook stands accused, among a vast array of social problems, of profiting from the trade and sale of human beings—often for domestic servitude—by human traffickers. We also know that, according to internal documents, the company has been aware of the problems since at least 2018. As the hon. Member for Ochil and South Perthshire said, we know that a year later, on the heels of a BBC report that documented the practice, the problem was said to be so severe that Apple itself threatened to pull Facebook and Instagram from its app store. It was only then that Facebook rushed to remove content related to human trafficking and made emergency internal policy changes to avoid commercial consequences described as “potentially severe” by the company. However, an internal company report detailed that the company did not take action prior to public disclosure and threats from Apple—profit over people.
In a complaint to the US Securities and Exchange Commission first reported by The Wall Street Journal, whistleblower Haugen wrote:
“Investors would have been very interested to learn the truth about Facebook almost losing access to the Apple App Store because of its failure to stop human trafficking on its products.”
I cannot believe that the Government have failed to commit to doing more to tackle such abhorrent practices, which are happening every day. I therefore urge the Minister to do the right thing and support amendment 90.
Turning to the priority offences set out in schedule 7—I saw this when I was a Home Office Minister—modern slavery is generally associated with various other offences that are more directly visible and identifiable. Modern slavery itself can be quite hard to identify. That is why our approach is, first, to incorporate modern slavery as a regular offence via clause 52(4)(d) and, secondly, to specify as priority offences those things that are often identifiable symptoms of it and that are feasibly identified. Those include many of the offences listed in schedule 7, such as causing, inciting or controlling prostitution for gain, as in paragraph 16 on sexual exploitation, which is often the manifestation of modern slavery; money laundering, which is often involved where modern slavery takes place; and assisting illegal immigration, because modern slavery often involves moving somebody across a border, which is covered in paragraph 15 on assisting illegal immigration, as per section 25 of the Immigration Act 1971.
Modern slavery comes into scope directly via clause 52(4)(d) and because the practicably identifiable consequences of modern slavery are listed as priority offences, I think we do have this important area covered.
Amendment, by leave, withdrawn.
Schedule 7, as amended, agreed to.
Clause 53
“Content that is harmful to children” etc
Question proposed, That the clause stand part of the Bill.
The Bill does not adequately address the risks caused by the design—the functionalities and features of services themselves—or those created by malign contact with other users, which we know to be an immense problem. Research has found that online grooming of young girls has soared by 60% in the last three years—and four in five victims are girls. We also know that games increasingly have addictive gambling-style features. Those without user-to-user functionalities, such as Subway Surfers, which aggressively promotes in-app purchases, are currently out of scope of the Bill.
Lastly, research by Parent Zone found that 91% of children say that loot boxes are available in the games they play and 40% have paid to open one. That is not good enough. I urge the Minister to consider his approach to tackling harmful content and the impact that it can have in all its forms. When considering how children will be kept safe under the new regime, we should consider concerns flagged by some of the civil society organisations that work with them. Organisations such as the Royal College of Psychiatrists, The Mix, YoungMinds and the Mental Health Foundation have all been instrumental in their calls for the Government to do more. While welcoming the intention to protect children, they note that it is not clear at present how some categories of harm, including material that damages people’s body image, will be regulated—or whether it will be regulated at all.
While the Bill does take steps to tackle some of the most egregious, universally damaging material that children currently see, it does not recognise the harm that can be done through the algorithmic serving of material that, through accretion, will cause harm to children with particular mental health vulnerabilities. For example, beauty or fitness-related content could be psychologically dangerous to a child recovering from an eating disorder. Research from the Mental Health Foundation shows how damaging regular exposure to material that shows conventionally perfect images of bodies, often digitally edited and unattainable, is to children and young people.
This is something that matters to children, with 84% of those questioned in a recent survey by charity The Mix saying the algorithmic serving of content was a key issue that the Bill should address. Yet in its current form it does not give children full control over the content they see. Charities also tell us about the need to ensure that children are exposed to useful content. We suggest that the Government consider a requirement for providers to push material on social media literacy to users and to provide the option to receive content that can help with recovery where it is available, curated by social media companies with the assistance of trusted non-governmental organisations and public health bodies. We also hope that the Government can clarify that material damaging to people’s body image will be considered a form of harm.
Additionally, beyond the issue of the content itself that is served to children, organisations including YoungMinds and the Royal College of Psychiatrists have raised the potential dangers to mental health inherent in the way services can be designed to be addictive.
I urge the Minister to address those issues and to consider how the Government can go further, whether through this legislation or further initiatives, to help to combat some of those issues.
“an appreciable number of children”
is included as
“content that is harmful to children”.
That is completely reasonable. However, subsection (5) excludes illegal content and content with a “potential financial impact”. I appreciate that these provisions are drafted in quite a complicated way, but it would be useful to have an understanding of what that means. If it means that content cannot be treated as harmful because it is financial in nature, that is a problem, because that explicitly excludes gambling-type sites, loot boxes and anything of that sort, which by their nature are intentionally addictive and try to get children or adults to part with significant amounts of cash. If they are excluded, that is a problem.
How will clause 53 be future-proofed? I am not suggesting that there is no future proofing, but it would be helpful to me and fellow Committee members if the Minister explained how the clause will deal with new emerging harms and things that may not necessarily fall within the definitions that we set initially. How will those definitions evolve and change as the internet evolves and changes, and as the harms with which children are presented evolve and change?
And finally—I know that the Minister mentioned earlier that saying, “And finally”, in a speech is always a concern, but I am saying it—I am slightly concerned about the wording in subsection (4)(c), which refers to
“material risk of significant harm to an appreciable number of children”,
because I am not clear what an “appreciable number” is. If there is significant harm to one child from content, and content that is incredibly harmful to children is stumbled upon by a child, is it okay for that provider to have such content? It is not likely to accessed by an “appreciable number of children” and might be accessed by only a small number, but if the Minister could give us an understanding of what the word “appreciable” means in that instance, that would be greatly appreciated.
The shadow Minister asked about the definition of harm, and whether all the harms that might concern Parliament, and many of us as parents, will be covered. It may be helpful to refer to the definition of harm provided in clause 187, at the top of page 153. Committee members will note that the definition is very wide and that subsection (2) defines it as “physical or psychological harm”, so I hope that partly answers the shadow Minister’s question.
“psychological harm amounting to at least serious distress”.
The definition of harm in clause 187 that I read out is the definition of harm used elsewhere in the Bill. However, as I said before in the House and in the evidence session, the Government’s belief and intention is that epilepsy trolling would fall within the scope of clause 150, because giving someone an epileptic fit, as my hon. Friend said, is physically damaging but also causes psychological harm.
Despite the fact that the definition of harm in clause 187 does not apply in clause 150, which has its own definition of harm, I am absolutely categoric that epilepsy trolling is caught by clause 150 because of the psychological harm it causes. I commend my hon. Friend the Member for Watford for being so attentive on the question of epilepsy, and also in this debate.
Returning to the definition of harm in clause 187, besides the wide definition covering physical and psychological harm, clause 187(4) makes it clear that harm may also arise not just directly but if the content prompts individuals to
“act in a way that results in harm to themselves or that increases the likelihood of harm to themselves”.
Clause 187(4)(b) covers content where the
“individuals do or say something to another individual that results in”
that individual suffering harm. I hope the shadow Minister is reassured that the definition of harm that applies here is extremely wide in scope.
There was a question about media literacy, which I think the hon. Member for Batley and Spen raised in an intervention. Media literacy duties on Ofcom already exist in the Communications Act 2003. The Government published a comprehensive and effective media literacy strategy about a year ago. In December—after the first version of the Bill was produced, but before the second and updated version—Ofcom updated its policy in a way that went beyond the duties contained in the previous version of the Bill. From memory, that related to the old clause 103, in the version of the Bill published in May last year, which is of course not the same clause in this version of the Bill, as it has been updated.
The hon. Member for Aberdeen North raised, as ever, some important points of detail. She asked about future proofing. The concept of harm expressed in the clause is a general concept of harm. The definition of harm is whatever is harmful to children, which includes things that we do not know about at the moment and that may arise in the future. Secondly, primary priority content and priority content that is harmful can be updated from time to time by a statutory instrument. If some new thing happens that we think deserves to be primary priority content or priority content that is harmful to children, we can update that using a statutory instrument.
The hon. Lady also asked about the exclusions in clause 53(5). The first exclusion, in subsection (5)(a), is illegal content, which is excluded because it is already covered elsewhere in the Bill, in clause 52. The second limb, subsection (5)(b), covers some financial offences. Those are excluded because financial services are separately regulated. The hon. Lady used the example of gambling. Gambling is separately regulated by the Gambling Act 2005, a review of which is imminent. There are already very strong provisions in that Act, which are enforced by the regulator, the Gambling Commission, including a hard-edged prohibition on gambling by anyone under 18.
The other question raised by the hon. Member for Aberdeen North was about the definition of “an appreciable number”. I have a couple of points to make. By definition, anything that is illegal is covered already in schedule 7 or through clause 52(4)(d), which we have mentioned a few times. Content that is
“primary priority content that is harmful to children”
or
“priority content that is harmful to children”
is covered in clause 53(4)(a) and (b), so we are now left with the residue of stuff that is neither illegal nor primary priority content; it is anything left over that might be harmful. By definition, we have excluded all the serious harms already, because they would be either illegal or in the priority categories. We are left with the other stuff. The reason for the qualifier “appreciable” is to make sure that we are dealing only with the residual non-priority harmful matters. We are just making sure that the duty is reasonable. What constitutes “appreciable” will ultimately get set out through Ofcom guidance, but if content affected only a tiny handful of users and was not a priority harm, and was therefore not considered by Parliament to be of the utmost priority, the duty would be unlikely to apply to such a very small number of users. Because it is just the residual category, that is a proportionate and reasonable approach to take.
Question put and agreed to.
Clause 53 accordingly ordered to stand part of the Bill.
Clause 54
“Content that is harmful to adults” etc
“(2A) Priority content designated under subsection (2) must include content that contains health-related misinformation and disinformation, where such content is harmful to adults.”
This amendment would amend Clause 54 so that the Secretary of State’s designation of “priority content that is harmful to adults” must include a description of harmful health related misinformation or disinformation (as well as other priority content that might be designated in regulations by the Secretary of State).
The Bill requires category 1 service providers to set out how they will tackle harmful content on their platforms. In order for this to work, certain legal but harmful content must be designated in secondary legislation as
“priority content that is harmful to adults.”
As yet, however, it is not known what will be designated as priority content or when. There have been indications from Government that health-related misinformation and disinformation will likely be included, but there is no certainty. The amendment would ensure that harmful health-related misinformation and disinformation would be designated as priority content that is harmful to adults.
With a third of internet users unaware of the potential for inaccurate or biased information online, it is vital that this amendment on health-related misinformation and disinformation is inserted into the Bill during Committee stage. It would give Parliament the time to scrutinise what content is in scope and ensure that regulation is in place to promote proportionate and effective responses. We must make it incumbent on platforms to be proactive in reducing that pernicious form of disinformation, designed only to hurt and to harm. As we have seen from the pandemic, the consequences can be grave if the false information is believed, as, sadly, it so often is.
The Government’s chosen approach to regulating the online space has left too much up to secondary legislation. We are also concerned that health misinformation and disinformation—a key harm, as we have all learned from the coronavirus pandemic—is missing from the Bill. That is why we too support amendment 83. The impact of health misinformation and disinformation is very real. Estimates suggest that the number of social media accounts posting misinformation about vaccines, and the number of users following those accounts, increased during the pandemic. Research by the Centre for Countering Digital Hate, published in November 2020, suggested that the number of followers of the largest anti-vaccination social media accounts had increased by 25% since 2019. At the height of the pandemic, it was also estimated that there were 5.4 million UK-based followers of anti-vaccine Twitter accounts.
Interestingly, an Ofcom survey of around 200 respondents carried out between 12 and 14 March 2021 found that 28% of respondents had come across information about covid-19 that could be considered false or misleading. Of those who had encountered such information, respondents from minority ethnic backgrounds were twice as likely to say that the claim made to them made them think twice about the issue compared with white respondents. The survey found that of those people who were getting news and information about the coronavirus within the preceding week, 15% of respondents had come across claims that the coronavirus vaccines would alter human DNA; 18% had encountered claims that the coronavirus vaccines were a cover for the implant of trackable microchips, and 10% had encountered claims that the vaccines contained animal products.
Public health authorities, the UK Government, social media companies and other organisations all attempted to address the spread of vaccine misinformation through various strategies, including moderation of vaccine misinformation on social media platforms, ensuring the public had access to accurate and reliable information and providing education and guidance to people on how to address misinformation when they came across it.
Although studies do not show strong links between susceptibility to misinformation and ethnicity in the UK, some practitioners and other groups have raised concerns about the spread and impact of covid-19 vaccine misinformation among certain minority ethnic groups. Those concerns stem from research that shows historically lower levels of vaccine confidence and uptake among those groups. Some recent evidence from the UK’s vaccine roll-out suggests that that trend has continued for the covid-19 vaccine.
Data from the OpenSAFELY platform, which includes data from 40% of GP practices in England, covering more than 24 million patients, found that up to 7 April 2021, 96% of white people aged over 60 had received a vaccination compared with only 77% of people from a Pakistani background, 76% from a Chinese background and 69% of black people within the same age group. A 2021 survey of more than 172,000 adults in England on attitudes to the vaccine also found that confidence in covid-19 vaccines was highest in those of white ethnicity, with some 92.6% saying that they had accepted or would accept the vaccine. The lowest confidence was found in those of black ethnicity, at 72.5%. Some of the initiatives to tackle vaccine misinformation and encourage vaccine take-up were aimed at specific minority ethnic groups, and experts have emphasised the importance of ensuring that factual information about covid-19 vaccines is available in multiple different languages.
Social media companies have taken various steps to tackle misinformation on their platforms during the covid-19 pandemic, including removing or demoting misinformation, directing users to information from official sources and banning certain adverts. So, they can do it when they want to—they just need to be compelled to do it by a Bill. However, we need to go further. Some of the broad approaches to content moderation that digital platforms have taken to address misinformation during the pandemic are discussed in the Parliamentary Office of Science and Technology’s previous rapid response on covid-19 and misinformation.
More recently, some social media companies have taken specific action to counter vaccine misinformation. In February 2021, as part of its wider policies on coronavirus misinformation, Facebook announced that it would expand its efforts to remove false information about covid-19 vaccines, and other vaccines more broadly. The company said it would label posts that discuss covid-19 vaccines with additional information from the World Health Organisation. It also said it would signpost its users to information on where and when they could get vaccinated. Facebook is now applying similar measures to Instagram.
In March 2021, Twitter began applying labels to tweets that could contain misinformation about covid-19 vaccines. It also introduced a strike policy, under which users that violate its covid-19 misinformation policy five or more times would have their account permanently suspended.
YouTube announced a specific ban on covid-19 anti-vaccination videos in October 2020. It committed to removing any videos that contradict official information about the vaccine from the World Health Organisation. In March, the company said it had removed more than 30,000 misleading videos about the covid-19 vaccine since the ban was introduced. However, as with most issues, until the legislation changes, service providers will not feel truly compelled to do the right thing, which is why we must legislate and push forward with amendment 83.
“priority content that is harmful to adults”
content that he or she considers to present
“a material risk of significant harm to an appreciable number of adults”.
We have discussed this issue in other places before, but I am deeply concerned about freedom of speech and people being able to say what they think. What is harmful to me may not be harmful to any other colleagues in this place. We would be leaving it to the Secretary of State to make that decision. I would like to hear the Minister’s thoughts on that.
Over the past two years, the Department for Digital, Culture, Media and Sport has worked together with other Departments to develop a strong operational response to this issue. We have established a counter-disinformation unit within DCMS whose remit is to identify misinformation and work with social media firms to get it taken down. The principal focus of that unit during the pandemic was, of course, covid. In the past three months, it has focused more on the Russia-Ukraine conflict, for obvious reasons.
In some cases, Ministers have engaged directly with social media firms to encourage them to remove content that is clearly inappropriate. For example, in the Russia-Ukraine context, I have had conversations with social media companies that have left up clearly flagrant Russian disinformation. This is, therefore, an area that the Government are concerned about and have been acting on operationally already.
Obviously, we agree with the intention behind the amendment. However, the way to handle it is not to randomly drop an item into the Bill and leave the rest to a statutory instrument. Important and worthy though it may be to deal with disinformation, and specifically harmful health-related disinformation, there are plenty of other important things that one might add that are legal but harmful to adults, so we will not accept the amendment. Instead, we will proceed as planned by designating the list via a statutory instrument. I know that a number of Members of Parliament, probably including members of this Committee, would find it helpful to see a draft list of what those items might be, not least to get assurance that health-related misinformation and disinformation is on that list. That is something that we are considering very carefully, and more news might be forthcoming as the Bill proceeds through Parliament.
I hope that I have set out my approach. We have heard the calls to publish the list so that parliamentarians can scrutinise it, and we also heard them on Second Reading.
I will now turn to the question raised by my hon. Friend the Member for Don Valley regarding freedom of expression. Those on one side of the debate are asking us to go further and to be clearer, while those on the other side have concerns about freedom of expression. As I have said, I honestly do not think that these legal but harmful provisions infringe on freedom of speech, for three reasons. First, even when the Secretary of State decides to designate content and Parliament approves of that decision through the affirmative procedure—Parliament gets to approve, so the Secretary of State is not acting alone—that content is not being banned. The Bill does not say that content designated as legal but harmful should immediately be struck from every corner of the internet. It simply says that category 1 companies—the big ones—have to do a proper risk assessment of that content and think about it properly.
Secondly, those companies have to have a policy to deal with that content, but that policy is up to them. They could have a policy that says, “It is absolutely fine.” Let us say that health disinformation is on the list, as one would expect it to be. A particular social media firm could have a policy that says, “We have considered this. We know it is risky, but we are going to let it happen anyway.” Some people might say that that is a weakness in the Bill, while others might say that it protects freedom of expression. It depends on one’s point of view, but that is how it works. It is for the company to choose and set out its policy, and the Bill requires it to enforce it consistently. I do not think that the requirements I have laid out amount to censorship or an unreasonable repression of free speech, because the platforms can still set their own terms and conditions.
There is also the general duty to have regard to free speech, which is introduced in clause 19(2). At the moment, no such duty exists. One might argue that the duty could be stronger, as my hon. Friend suggested previously, but it is unarguable that, for the first time ever, there is a duty on the platforms to have regard to free speech.
There is also the duty to have regard to freedom of expression, and there is a protection of democratic and journalistic importance in clauses 15 and 16. Although those clauses are not perfect and some people say they should be stronger, they are at least better than what we have now. When I say that this is good for freedom of speech, I mean that nothing here infringes on freedom of speech, and to the extent that it moves one way or the other, it moves us somewhat in the direction of protecting free speech more than is the case at the moment, for the reasons I have set out. I will be happy to debate the issue in more detail either in this Committee or outside, if that is helpful and to avoid trying the patience of colleagues.
We also share the concerns expressed by the hon. Member for Don Valley about the Secretary of State’s potential powers, the limited scope and the extra scrutiny that Parliament might have to undertake on priority harms, so I hope he will support some of our later amendments.
Question put, That the amendment be made.
Clause 54 ordered to stand part of the Bill.
Clause 55
Regulations under sections 53 and 54
“and other stakeholders, including organisations that campaign for the removal of harmful content online”.
This amendment requires the Secretary of State to consult other stakeholders before making regulations under clause 53 or 54.
Clause stand part.
Clause 56 stand part.
Labour has said time and again that it should not be for the Secretary of State of the day to determine what constitutes harmful content for children or adults. Without the important consultation process outlined in amendment 62, there are genuine concerns that that could lead to a damaging precedent whereby a Secretary of State, not Parliament, has the ability to determine what information is harmful. We all know that the world is watching as we seek to work together on this important Bill, and Labour has genuine concerns that without a responsible consultation process, as outlined in amendment 62, we could inadvertently be suggesting to the world that this fairly dogmatic approach is the best way forward.
Amendment 62 would require the Secretary of State to consult other stakeholders before making regulations under clauses 53 and 54. As has been mentioned, we risk a potentially dangerous course of events if there is no statutory duty on the Secretary of State to consult others when determining the definition of harmful content. Let me draw the Minister’s attention to the overarching concerns of stakeholders across the board. Many are concerned that harmful content for adults requires the least oversight, although there are potential gaps that mean that certain content—such as animal abuse content—could completely slip through the net. The amendment is designed to ensure that sufficient consultation takes place before the Secretary of State makes important decisions in directing Ofcom.
It is welcome that clause 56 will force Ofcom, as the regulator, to carry out important reviews that will assess the extent to which content is harmful to children and adults when broadly appearing on user-to-user services. As we have repeatedly said, transparency must be at the heart of our approach. While Labour does not formally oppose the clause, we have concerns about subsection (5), which states:
“The reports must be published not more than three years apart.”
The Minister knows that the Bill has been long awaited, and we need to see real, meaningful change and updates now. Will he tell us why it contains a three-year provision?
Amendment 62 is excellent, and I am more than happy to support it.
On the substance of amendment 62, tabled by the shadow Minister, I can confirm that the Government are already undertaking research and working with stakeholders on identifying what the priority harms will be. That consideration includes evidence from various civil society organisations, victims organisations and many others who represent the interests of users online. The wider consultation beyond Ofcom that the amendment would require is happening already as a matter of practicality.
We are concerned, however, that making this a formal consultation in the legal sense, as the amendment would, would introduce delays, because a whole sequence of things has to happen after Royal Assent. First, we have to designate the priority harms by statutory instrument, and then Ofcom has to publish its risk assessments and codes of practice. If we insert a formal legal consultation step into that, it would add at least four or even six months to the process of implementing the Act. I know that that was not the hon. Lady’s intention and that she is concerned about getting the Act implemented quickly. For that reason, the Government do not want to insert a formal legal consultation step into the process, but I am happy to confirm that we are already engaging in consultation on an informal basis and will continue to do so. I ask respectfully that amendment 62 be withdrawn.
The purpose of clauses 55 and 56 has been touched on already, and I have nothing in particular to add.
Question put, That the amendment be made.
Clauses 55 and 56 ordered to stand part of the Bill.
Question proposed, That the clause stand part of the Bill.
We do, however, have some concerns about the exact principles and minimum standards for the user verification duty, which I will address when we consider new clause 8. We also have concerns about subsection (2), which states:
“The verification process may be of any kind (and in particular, it need not require documentation to be provided).”
I would be grateful if the Minister could clarify exactly what that process will look like in practice.
Lastly, as Clean Up the Internet has said, we need further clarification on whether users will be given a choice of how they verify and of the verification provider itself. We can all recognise that there are potential downsides to the companies that own the largest platforms—such as Meta, Google, Twitter and ByteDance—developing their own in-house verification processes and making them the only option for users wishing to verify on their platform. Indeed, some users may have reservations about sharing even more personal data with those companies. Users of multiple social media platforms can find it inconvenient and confusing, and could be required to go through multiple different verification processes on different platforms to achieve the same outcome of confirming their real name.
There is a risk of the largest platforms seeking to leverage their dominance of social media to capture the market for ID verification services, raising competition concerns. I would be grateful if the Minister could confirm his assessment of the potential issues around clause 57 as it stands.
I underline that again by saying that recent research from Compassion in Politics showed that more than one in four people were put off posting on social media because of the fear of abuse, particularly from anonymous posters. Far from the status quo promoting freedom of speech, it actually deters freedom of speech, as we have said in other debates, and it particularly affects women. The Government are to be applauded for this measure.
In the work I was doing with the FA and the Premier League around this very issue, I particularly supported their call for a twin-track approach to verified accounts that said that they should be the default and that people should automatically be able to opt out of receiving posts from unverified accounts. The Bill does not go as far as that, and I can understand the Government’s reasons, but I gently point out that 81% of the people who took part in the Compassion in Politics research would willingly provide identification to get a verified account if it reduced unverified posts. They felt that was important. Some 72% supported the idea if it reduced the amount of anonymous posting.
I am touching on clause 58, but I will not repeat myself when we debate that clause. I hope that it will be possible in the code of practice for Ofcom to point out the clear benefits of having verified accounts by default and perhaps urge responsible providers to do the responsible thing and allow their users to automatically filter out unverified accounts. That is what users want, and it is extraordinary that large consumer organisations do not seem to want to give consumers what they want. Perhaps Ofcom can help those organisations understand what their consumers want, certainly in Britain.
To be honest, I do not know how children’s identities could be verified, but giving them access to the filters that would allow them to block unverified accounts, whether or not they are able to verify themselves—because they are children and therefore may not have the identity documentation they need—would be very helpful.
I appreciate the points that the right hon. Member was making, and I completely agree with her on the requirement for user verification, but I have to say that I believe there is a place for anonymity on the internet. I can understand why, for a number of people, that is the only way that they can safely access some of the community support that they need.
If you will allow me to say a couple of things about the next clause, Sir Roger, Mencap raised the issue of vulnerable users, specifically vulnerable adult users, in relation to the form of identity verification. If the Minister or Ofcom could give consideration to perhaps including travel passes or adult passes, it might make the internet a much easier place to navigate for people who do not have control of their own documentation—they may not have access to their passport, birth certificate, or any of that sort of thing—but who would be able to provide a travel pass, because that is within their ownership.
Some of those processes exist already in relation to age verification, and I think that some companies are already active in this area. I do not think that it would be appropriate for us, in Parliament, to specify those sorts of details. It is ultimately for Ofcom to issue that guidance under clause 58, and it is, in a sense, up to the market and to users to develop their own preferences. If individual users prefer to verify their identity once and then have that used across multiple platforms, that will itself drive the market. I think that there is every possibility that that will happen. [Interruption.]
On resuming—
The shadow Minister, the hon. Member for Pontypridd, expressed concerns about privacy. That is of course why the list of people Ofcom must consult—at clause 58(3)(a)—specifies the Information Commissioner, to ensure that Ofcom’s guidance properly protects the privacy of users, for the reasons that the shadow Minister referred to in her speech.
Finally, on competition, if anyone attempts to develop an inappropriate monopoly position in this area, the Competition and Markets Authority’s usual powers will apply. On that basis, I commend the clause to the Committee.
Question put and agreed to.
Clause 57 accordingly ordered to stand part of the Bill.
Clause 58
OFCOM’s guidance about user identity verification
Question proposed, That the clause stand part of the Bill.
Question put and agreed to.
Clause 58 accordingly ordered to stand part of the Bill.
Clause 59
Requirement to report CSEA content to the NCA
Question proposed, That the clause stand part of the Bill.
It is welcome that regulated services will have to report all child sexual exploitation and abuse material that they detect on their platform. The Government’s decision to move away from the approach of a regulatory code of practice to a mandatory reporting requirement is an important improvement to the draft Bill.
For companies to report child sexual exploitation and abuse material correctly to the mandatory reporting body, they will need access to accurate datasets that will determine whether something that they are intending to report is child sexual exploitation and abuse content. What guidance will be made available to companies so that they can proactively detect CSEA, and what plans are in place to assist companies to identify potential CSEA that has not previously been identified? The impact assessment mentions that, for example, BT is planning to use the Internet Watch Foundation’s hash list, which is compliant with UK law enforcement standards, to identify CSEA proactively. Hashing is a technology used to prevent access to known CSEA; a hash is a unique string of letters and numbers which is applied to an image and which can then be matched every time a user attempts to upload a known illegal image to a platform. It relies, however, on CSEA already having been detected. What plans are in place to assist companies to identify potential CSEA?
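[The hash-matching mechanism described above can be sketched in a few lines of code. This is a minimal illustration only: it uses a plain SHA-256 digest, which matches byte-identical files, whereas real deployments such as the Internet Watch Foundation hash list typically use perceptual hashes (for example PhotoDNA) that tolerate resizing and re-encoding. The image bytes and hash list below are hypothetical.]

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Return a hex digest serving as the image's unique fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes, known_hashes: set[str]) -> bool:
    """Block the upload if its fingerprint matches a known illegal image."""
    return image_hash(upload) in known_hashes

# Hypothetical usage: in practice the hash list is supplied by a trusted
# body and the "known" image has already been identified by investigators.
known = {image_hash(b"previously-identified image bytes")}
assert should_block(b"previously-identified image bytes", known)
assert not should_block(b"a new, never-seen image", known)
```

[As the exchange above notes, this approach relies on the material having been detected at least once before: a never-seen image produces a new hash and passes through, which is why classifier-based proactive detection is discussed alongside it.]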
Finally, it is important that the introduction of mandatory reporting does not impact on existing international reporting structures. Many of the largest platforms in the scope of the Bill are US-based and required under US law to report CSEA material detected on their platform to the National Centre for Missing and Exploited Children, which ensures that information relevant to UK law enforcement is referred to it for investigation.
The hon. Lady also asked about the technologies available to those companies, including hash matching—comparing images against a known database of child sexual exploitation images. A lot of technology is being developed that can proactively spot child sexual exploitation in new images that are not on the hash matching database. For example, some technology combines age identification with nude image identification; by putting them together, we can identify sexual exploitation of children in images that are new and are not yet in the database.
To ensure that such new technology can be used, we have the duties under clause 103, which gives Ofcom the power to mandate—to require—the use of certain accredited technologies in fighting not just CSEA, but terrorism. I am sure that we will discuss that more when we come to that clause. Combined, the requirement to proactively prevent CSEA and the ability to specify technology under clause 103 will mean that companies will know about the content that they now, under clause 59, have to report to the National Crime Agency. Interestingly, the hon. Member for Worsley and Eccles South mentioned that that duty already exists in the USA, so it is good that we are matching that requirement in our law via clause 59, which I hope that the Committee will agree should stand part of the Bill.
Question put and agreed to.
Clause 59 accordingly ordered to stand part of the Bill.
Clause 60
Regulations about reports to the NCA
Question proposed, That the clause stand part of the Bill.
Clause 60 sets out that the Secretary of State’s regulations must include
“provision about cases of particular urgency”.
Does the Minister have an idea what that will look like? What plans are in place to ensure that law enforcement can prioritise the highest risk and harm cases?
Under the new arrangements, the National Crime Agency as the designated body, the Internet Watch Foundation as the appropriate authority for notice and takedown in the UK, and Ofcom as the regulator for online harms will all hold a vast amount of information on the scale of the threat posed by child sexual exploitation and illegal content. How will the introduction of mandatory reporting assist those three organisations in improving their understanding of how harm manifests online? How does the Minister envisage the organisations working together to share information to better protect children online?
I would appreciate the Minister explaining how clause 61 will work in a Scottish context, because that clause talks about the Crime and Courts Act 2013. Does a discussion need to be had with Scottish Ministers, and perhaps Northern Ireland Ministers as well, to ensure that information sharing takes place seamlessly with devolved areas with their own legal systems, to the same level as within England and Wales? If the Minister does not have an answer today, which I understand that he may not in detail, I am happy to hear from him later; I understand that it is quite a technical question.
On the questions raised by the hon. Member for Aberdeen North, the Secretary of State might consult Scottish Ministers under clause 63(6)(c), particularly those with responsibility for law enforcement in Scotland, and the same would apply to other jurisdictions. On whether an amendment is required to cover any matters to do with the procedures in Scotland equivalent to the matter covered in clause 61, we do not believe that any equivalent change is required to devolved Administration law. However, in order to be absolutely sure, we will get the hon. Lady written confirmation on that point.
“provision about cases of particular urgency”.
I asked the Minister what that would look like.
Also, we think it is pretty important that the National Crime Agency, the Internet Watch Foundation and Ofcom work together on mandatory reporting. I asked him how he envisaged them working together to share information, because the better they do that, the more children are protected.
As the clause states, the regulations can set out expedited timeframes in cases of particular urgency. I understand that to mean cases where there might be an immediate risk to a child’s safety, or where somebody might be at risk in real time, as opposed to something historic—for example, an image that might have been made some time ago. In cases where it is believed abuse is happening at the present time, there is an expectation that the matter will be dealt with immediately or very close to immediately. I hope that answers the shadow Minister’s questions.
Question put and agreed to.
Clause 60 accordingly ordered to stand part of the Bill.
Clause 61 ordered to stand part of the Bill.
Clause 62
Offence in relation to CSEA reporting
Amendments 1 to 5 relate to the maximum term of imprisonment on summary conviction of an either-way offence in England and Wales. Amendments 1 to 4 insert a reference to the general limit in a magistrates’ court, meaning the time limit in section 224(1) of the Sentencing Code, which, currently, is 12 months.
“general limit in a magistrates’ court”
to account for any future changes to the sentencing limit in the magistrates’ court. The 2022 Act includes a secondary power to switch, by regulations, between a 12-month and a six-month maximum sentence in the magistrates’ court, so we need to use the more general language in this Bill to ensure that changes back and forth can be accommodated. If we simply fixed a number, it would fall out of sync with any switches made under the 2022 Act.
Amendment 1 agreed to.
Question proposed, That the clause, as amended, stand part of the Bill.
Takeovers, mergers and acquisitions are commonplace in the technology industry, and many companies are bought out by others based overseas, particularly in the United States. Once a regulated service has been bought out by a company based abroad, what plans are in place to ensure either that the company continues to report to the National Crime Agency or that it is enabled to transition to another mandatory reporting structure, as may be required in another country in the future? That is particularly relevant as we know that the European Union is seeking to introduce mandatory reporting functions in the coming years.
Clause 63 provides definitions for the terms used in chapter 2 of part 4, in relation to the requirement to report CSEA. In summary, a UK provider of a regulated service is defined as a provider that is
“incorporated or formed under the law of any part of the United Kingdom”
or where it is
“individuals who are habitually resident in the United Kingdom”.
The shadow Minister asked about the test and what counts, and I hope that provides the answer. We are defining CSEA content as content containing CSEA that a company becomes aware of. A company can become aware of that by any means, including through the use of automated systems and processes, human moderation or user reporting.
With regard to the definition of UK-linked CSEA, which the shadow Minister also asked about, that refers to content that may have been published and shared in the UK, or where the nationality or location of a suspected offender or victim is in the UK. The definition of what counts as a UK link is quite wide, because it includes not only the location of the offender or victim but where the content is shared. That is a wide definition.
“the place where the content was published, generated, uploaded or shared.”
The word “generated”—I am reading from clause 63(6)(a), at the top of page 56—is clearly in the past tense and would include the circumstance that the hon. Lady described.
Finally, on companies being taken over, if a company ceases to be UK-linked, we would expect it to continue to discharge its reporting duties, which might include reporting not just in the UK but to its domestic reporting agency—we have already heard the US agency described and referenced.
I hope that my answers demonstrate that the clause is intended to be comprehensive and effective. It should ensure that the National Crime Agency gets all the information it needs to investigate and prosecute CSEA in order to keep our children safe.
Question put and agreed to.
Clause 62, as amended, accordingly ordered to stand part of the Bill.
Clause 63 ordered to stand part of the Bill.
Clause 64
Transparency reports about certain Part 3 services
This amendment would change the requirement for transparency report notices from once a year to twice a year.
Clause stand part.
Amendment 55, in schedule 8, page 188, line 42, at end insert—
“31A The notice under section 64(1) must require the provider to provide the following information about the service—
(a) the languages in which the service has safety systems or classifiers;
(b) details of how human moderators employed or engaged by the provider are trained and supported;
(c) the process by which the provider takes decisions about the design of the service;
(d) any other information that OFCOM considers relevant to ensuring the safe operation of the service.”
This amendment sets out details of information Ofcom must request be provided in a transparency report.
That schedule 8 be the Eighth schedule to the Bill.
Clause 65 stand part.
Increasing the frequency of transparency reports from annual to biannual will ensure that platforms stay on the pulse of emergent risks, allowing Ofcom to do the same in turn. The amendment would also mean that companies focus on safety, rather than just profit. As has been touched on repeatedly, that is the culture change that we want to bring about. It would go some way towards preventing complacency about reporting harms, perhaps forcing companies to revisit the nature of harm analysis, management and reduction. In order for this regime to be world-leading and ambitious—I keep hearing the Minister using those words about the Bill—we must demand the most that we can from the highest-risk services, including on the important duty of transparency reporting.
Moving to clauses 64 and 65 stand part, transparency reporting by companies and Ofcom is important for analysing emerging harms, as we have discussed. However, charities have pointed out that platforms have a track record of burying documents and research that point to risk of harm in their systems and processes. As with other risk assessments and reports, such documents should be made public, so that platforms cannot continue to hide behind a veil of secrecy. As I will come to when I speak to amendment 55, the Bill must be ambitious and bold in what information platforms are to provide as part of the clause 64 duty.
Clause 64(3) states that, once issued with a notice by Ofcom, companies will have to produce a transparency report, which must
“be published in the manner and by the date specified in the notice.”
Can the Minister confirm that that means regulated services will have to publish transparency reports publicly, not just to Ofcom? Can he clarify that that will be done in a way that is accessible to users, similarly to the requirements on services to make their terms of service and other statements clear and accessible? Some very important information will be included in those reports that will be critical for researchers and civil society when analysing trends and harms. It is important that the data points outlined in schedule 8 capture the information needed for those organisations to make an accurate analysis.
Amendment 55 sets out the details of the information that Ofcom must request to be provided in a transparency report in new paragraph 31A. First, transparency disclosures required by the Bill should include how large companies allocate resources to tackling harm in different languages—an issue that was rightly raised by the hon. Member for Ochil and South Perthshire. As we heard from Frances Haugen, many safety systems at Meta have only a subset of detection systems for languages other than English. Languages such as Welsh have almost no safety systems live on Facebook. That is neither fair nor safe.
When we consider that more than 250 languages are spoken in London alone, the inconsistency of safety systems becomes very concerning. Charities have warned that people accessing Facebook in different languages are being exposed to very different levels of risk, with some versions of Facebook having few or none of the safety systems that protect other versions of the site in different languages.
When giving evidence to the Committee last month, Richard Earley disclosed that Meta regulated only 70 languages. Given that around 3 billion people use Facebook on a monthly basis across the world, that is clearly inadequate.
“the languages in which the service has safety systems or classifiers”.
We need to see what they are doing on this issue. It is an easily reported piece of information that will have an outsized impact on safety, even for English speakers. It will help linguistic groups in the multilingual UK and around the world.
Reporting on language would not be a big burden on companies. In her oral evidence, Frances Haugen told the Committee that large platforms can trivially produce this additional data merely by changing a single line of code when they do their transparency reports. We must not become wrapped up in the comfort of the language we all speak and ignore the gaping loophole left for other languages, which allows harms to slip through.
Under questioning from my hon. Friend the Member for Pontypridd last month, Richard Earley admitted that he had no idea how many human moderators work for Facebook directly or how many abide by a UK standard code of conduct. That is disgraceful, yet Frances Haugen said obtaining that information would be a simple matter, because it could be retrieved by changing a single line of code. In fact, she said:
We therefore have a duty to keep users safe, and the Bill must ensure that platforms do the right thing.
The third additional transparency disclosure is to show how companies make decisions about service design. Preventing harm to the public would be impossible unless both the regulator and civil society know what is happening inside these large tech companies. We know that if something cannot be detected, it clearly cannot be reported. Knowing how companies make decisions will allow for greater scrutiny of the information they disclose. Without it, there is a risk that Ofcom receives skewed figures and an incomplete picture. Amendment 55 would be a step in the right direction towards making the online environment more transparent, fair and safe for those working to tackle harms, and I hope the Minister will consider its merits.
Amendment 54 seeks to increase the frequency of transparency reporting from once a year to twice a year. To be honest, we do not want to do this unreasonably frequently, and our sense is that once a year, rather than twice a year, is the right regularity. We therefore do not support the amendment. However, Members will notice that there is an ability in clause 64(12) for the Secretary of State, by regulation, to
“amend subsection (1) so as to change the frequency of the transparency reporting process.”
If it turns out in due course that once a year is not enough and we would like to do it more frequently—for example, twice a year—there is the power for those regulations to be used so that the reporting occurs more frequently. The frequency is not set in stone.
I turn to amendment 55, which sets out a number of topics that would be included in reporting. It is important to say that, as a quick glance at schedule 8 shows, the remit of the reports is already extremely wide in scope. Hon. Members will see that paragraph 5 specifies that reports can cover
“systems and processes for users to report content which they consider to be illegal”
or “harmful”, and so on. Paragraph 6 mentions:
“The systems and processes that a provider operates to deal with illegal content, content that is harmful to children”,
and so on. Therefore, the topics that amendment 55 speaks to are already covered by the schedule, and I would expect such things to be reported on. We have given Ofcom the explicit powers to do that and, rather than prescribe such details in the Bill, we should let Ofcom do its job. It certainly has the powers to do such things—that is clearly set out in the schedule—and I would expect, and obviously the Opposition would expect, that it will do so. On that basis, I will gently resist amendments 54 and 55.
I recommend that the Minister thinks again about requiring platforms to provide Ofcom with details on the human moderators they employ or engage and how those moderators are trained and supported. It is a bit like when we find out about factories producing various items under appalling conditions in other parts of the world—we need transparency on these issues to make people do something about it. These platforms will not do anything about it. Under questioning from my hon. Friend the Member for Pontypridd, Richard Earley admitted that he had no idea how many human moderators were working for Facebook. That is appalling and we must do something about it.
Question put, That the amendment be made.
Amendment proposed: 55, in schedule 8, page 188, line 42, at end insert—
Question put, That the amendment be made.
Schedule 8 agreed to.
Clause 65 ordered to stand part of the Bill.
Question proposed, That the clause stand part of the Bill.
Clause 67 stand part.
That schedule 9 be the Ninth schedule to the Bill.
We have a few concerns—which were also outlined in evidence by Professor Clare McGlynn—about the definition of “provider pornographic content” in clause 66(3). It is defined as
“pornographic content that is published or displayed on the service by the provider of the service or by a person acting on behalf of the provider (including pornographic content published or displayed…by means of software or an automated tool or algorithm”.
That definition separates provider porn from content that is uploaded or shared by users, which is outlined in clause 49(2). That separation is emphasised in clause 66(6), which states:
“Pornographic content that is user-generated content in relation to an internet service is not to be regarded as provider pornographic content in relation to that service.”
However, as Professor McGlynn emphasised, it is unclear exactly what will be covered by the words
“acting on behalf of the provider”.
I would appreciate some clarity from the Minister on that point. Could he give some clear examples?
Schedule 9 is an important schedule, which outlines the providers of internet services that are not subject to the duties on regulated provider pornographic content. Those are exemptions that Labour welcomes being clarified in the Bill. For that reason, we have tabled no amendments at present.
In seriousness, this is an example of the Government moving the Bill on in response to widespread parliamentary and public commentary. It is right that we extend the duties to cover commercial pornographic content as well as the user-to-user pornography covered previously. I thank the Opposition parties for their support for the inclusion of those measures.
My final point on this important clause is in response to a question that the shadow Minister raised about clause 66(3), which makes reference to
“a person acting on behalf of the provider”.
That is just to ensure that the clause is comprehensively drafted without any loopholes. If the provider used an agent or engaged some third party to disseminate content on their behalf, rather than doing so directly, that would be covered too. We just wanted to ensure that there was absolutely no loophole—no chink of light—in the way that the clause was drafted. That is why that reference is there.
I am delighted that these clauses seem to command such widespread support. It therefore gives me great pleasure to commend them to the Committee.
Question put and agreed to.
Clause 66 accordingly ordered to stand part of the Bill.
Clause 67 ordered to stand part of the Bill.
Schedule 9 agreed to.
Clause 68
Duties about regulated provider pornographic content
“(2A) A duty to verify that every individual featured in regulated provider pornographic content is an adult before the content is published on the service.
(2B) A duty to verify that every individual featured in regulated provider pornographic content that is already published on the service when this Act is passed is an adult and, where that is not the case, remove such content from the service.
(2C) A duty to verify that each individual appearing in regulated provider pornographic content has given their permission for the content in which they appear to be published or made available by the internet service.
(2D) A duty to remove regulated provider pornographic content featuring an individual if that individual withdraws their consent, at any time, to the pornographic content in which they feature remaining on the service.”
This amendment creates a duty to verify that each individual featured in pornographic content is an adult and has agreed to the content being uploaded before it is published. It would also impose a duty to remove content if the individual withdraws consent at any time.
Amendment 115, in clause 68, page 60, line 17, after “(2)” insert “to (2D)”.
Clause stand part.
New clause 2—Duties regarding user-generated pornographic content: regulated services—
“(1) This section sets out the duties which apply to regulated services in relation to user-generated pornographic content.
(2) A duty to verify that each individual featuring in the pornographic content has given their permission for the content in which they feature to be published or made available by the service.
(3) A duty to remove pornographic content featuring a particular individual if that individual withdraws their consent, at any time, to the pornographic content in which they feature remaining on the service.
(4) For the meaning of ‘pornographic content’, see section 66(2).
(5) In this section, ‘user-generated pornographic content’ means any content falling within the meaning given by subsection (4) and which is also generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, and which may be encountered by another user, or other users, of the service.
(6) For the meaning of ‘regulated service’, see section 2(4).”
I am moving a series of targeted amendments, tabled by my right hon. Friend the Member for Kingston upon Hull North (Dame Diana Johnson), which I hope that all hon. Members will be able to support because this is an issue that goes beyond party lines. This is about children who have been sexually abused, women who have been raped, and trafficking victims who have been exploited, who have all suffered the horror of filmed footage of their abuse being published on some of the world’s biggest pornography websites. This is about basic humanity.
Currently, leading pornography websites allow members of the public to upload pornographic videos without verifying that everyone in the film is an adult, that they gave their permission for it to be uploaded to a pornography website, or even that they know the film exists. It is sadly not surprising that because of the absence of even the most basic safety measures, hugely popular and profitable pornography websites have been found hosting and profiting from filmed footage of rape, sex trafficking, image-based sexual abuse and child sexual abuse. This atrocious practice is ongoing and well documented.
In 2019, PayPal stopped processing payments for Pornhub—one of the most popular pornography websites in the world—after an investigation by The Sunday Times revealed that the site contained child abuse videos and other illegal content. That included an account on the site dedicated to posting so-called creepshots of UK schoolgirls. In 2020, The New York Times documented the presence of child abuse videos on Pornhub, prompting Mastercard, Visa and Discover to block the use of their cards for purchases on the site.
New York Times reporter Nicholas Kristof wrote of Pornhub:
“Its site is infested with rape videos. It monetizes child rapes, revenge pornography, spy cam videos of women showering, racist and misogynist content, and footage of women being asphyxiated in plastic bags.”
That particular pornography website is now subject to multiple lawsuits launched against its parent company, MindGeek, by victims whose abuse was published on the site. Plaintiffs include victims of image-based sexual abuse in the UK, such as Crystal Palace footballer Leigh Nicol. Her phone was hacked, and private content was uploaded to Pornhub without her knowledge. She bravely and generously shared her experience in an interview for Sky Sports News, saying:
“The damage is done for me so this is about the next generation. I feel like prevention is better than someone having to react to this. I cannot change it alone but if I can raise awareness to stop it happening to others then that is what I want to do… The more that you dig into this, the more traumatising it is because there are 14-year-old kids on these websites and they don’t even know about it. The fact that you can publish videos that have neither party’s consent is something that has to be changed by law, for sure.”
I agree. It is grotesque that pornography website operators do not even bother to verify that everyone featured in films on their sites is an adult or even gave permission for the film to be uploaded. That cannot be allowed to continue.
These amendments, which I hope will receive the cross-party backing that they strongly deserve, would stop pornography websites publishing and profiting from videos of rape and child sexual abuse by requiring them to implement the most basic of prevention measures.
More needs to be done to tackle this problem. Pornography websites need to verify that every individual in pornographic videos published on their site is an adult and gave their permission for the video to be published, and enable individuals to withdraw their consent for pornography of them to remain on the site. These are rock-bottom safety measures for preventing the most appalling abuses on pornography websites.
We feel these amendments would address the specific issue of imagery or video content for which consent has not been obtained. Many of the people featured do not even know that the content was taken in the first place, before it is then uploaded to these websites. It would be the website’s duty to verify that consent had been obtained and that the people in the video were adults. That is why we urge hon. Members to back the amendments.
First, all material that contains the sexual abuse of children or features children at all—any pornographic content featuring children is, by definition, sexual abuse—is already criminalised through the criminal law. Measures such as the Protection of Children Act 1978, the Criminal Justice Act 1988 and the Coroners and Justice Act 2009 provide a range of criminal offences that include the taking, making, circulating, possessing with a view to distributing, or otherwise possessing indecent photos or prohibited images of children. As we would expect, everything that the hon. Lady described is already criminalised under existing law.
This part of the Bill—part 5—covers publishers and not the user-to-user stuff we talked about previously. Because they are producing and publishing the material themselves, publishers of such material are covered by the existing criminal law. What they are doing is already illegal. If they are engaged in that activity, they should—and, I hope, will—be prosecuted for doing it.
The new clause and the amendments essentially seek to duplicate what is already set out very clearly in criminal law. While their intentions are completely correct, I do not think it is helpful to have duplicative provisions that essentially try to do the same thing in a different piece of legislation. We have well established and effective criminal laws in these areas.
In relation to the separate question of people whose images are displayed without their consent, which is a topic that my right hon. Friend the Member for Basingstoke has raised a few times, there are existing criminal offences that are designed to tackle that, including the recent revenge pornography offences in particular, as well as the criminalisation of voyeurism, harassment, blackmail and coercive or controlling behaviour. There is then the additional question of intimate image abuse, where intimate images are produced or obtained without the consent of the subject, and are then disseminated, causing

“psychological harm amounting to at least serious distress”

to the subject. That will capture a lot of this as well.
My right hon. Friend the Member for Basingstoke has made a point about needing to remove the intent requirement: any sharing of an intimate image without consent should be criminalised. As we have discussed previously, that is being taken forward under the auspices of the Ministry of Justice in connection with the Law Commission’s proposed offence. That work is under way, and I would anticipate it delivering legislative results. I think that is the remaining piece of the puzzle. With the addition of that piece of legislation, I think we will cover the totality of possible harms in relation to images of people whose consent has not been given.
In relation to material featuring children, the legislative pattern is complete already; it is already criminal. We do not need to do anything further to add any criminal offences; it is already illegal, as it should be. In relation to non-consensual images, the picture is largely complete. With the addition of the intimate image abuse offence that my right hon. Friend the Member for Basingstoke has been rightly campaigning for, the picture will be complete. Given that that is already in process via the Law Commission, while I again agree with what the Opposition are trying to do here, we have a process in hand that will sort this out. I hope that that makes the Government’s position on the amendments and the new clause clear.
Clause 68 is extremely important. It imposes a legally binding duty to make sure that children are not normally able to encounter pornographic content in a commercial context, and it makes it clear that one of the ways that can be achieved is by using age verification. If Ofcom, in its codes of practice, directs companies to use age verification, or if there is no other effective means of preventing children from seeing pornographic content, the clause makes it clear that age verification is expressly authorised by Parliament in primary legislation. There would be no basis on which a porn provider could seek to challenge Ofcom legally, because it is there in black and white in the Bill. It is clearly Parliament’s intention that hard-edged age verification will be legal. By putting that measure in the Bill as an example of the way that the duty can be met, we immunise the measure from legal challenge should Ofcom decide it is the only way of delivering the duty. I make that point explicitly for the avoidance of doubt, so that if this point is ever litigated, Parliament’s intention is clear.
Amendment, by leave, withdrawn.
Clause 68 ordered to stand part of the Bill.
Ordered, That further consideration be now adjourned—(Steve Double.)
OSB69 Full Fact (supplementary submission)
OSB70 Care Quality Commission (CQC)
OSB71 Oxford University's Child-Centred AI initiative, Department of Computer Science
OSB72 British Retail Consortium (BRC)
OSB73 Claudine Tinsman, doctoral candidate in Cyber Security at the University of Oxford
OSB74 British Board of Film Classification (BBFC)
OSB75 Advertising Standards Authority
OSB76 YoungMinds
Contains Parliamentary information licensed under the Open Parliament Licence v3.0.