PARLIAMENTARY DEBATE
Online Anonymity and Anonymous Abuse - 24 March 2021 (House of Commons Chamber)
That this House has considered online anonymity and anonymous abuse.
In recent weeks, we have been rightly concerned about safety in our towns and cities, yet people face danger and harassment not just in the physical world, but on the dark cyber streets and alleyways of the internet. Cowardly keyboard warriors stalk these streets and lurk in our phones. They bully with abandon, they spread racist and misogynistic abuse, they attack looks, weight, age, race, gender, disability, success as well as failure and the young and old alike. No one is safe from their violent hate. Anonymity provides the shadows where these people can hide. It facilitates and encourages online abuse.
My own experience of hate came after the birth of my daughter last year. The outpouring of venom because I took four weeks’ maternity leave was a shock. Attacking somebody for being a mum or suggesting that a mum cannot do the job of an MP is misogynistic and, quite frankly, ridiculous. But I would be lying if I said that I did not find it very upsetting, especially at a time when I could barely move and needed to work out how to feed my new baby. Other people have suffered more—from death and rape threats to all forms of intimidation and harassment in between. Nobody should have to put up with that. Seeing the bravery with which others have confronted this menace has prompted me to campaign for change, and I am not alone.
The racism and abuse levelled at footballers no longer come just from the terraces. Many England players who will run out for us tomorrow night have suffered unspeakable racist abuse. I fully support Harry Maguire’s calls for verified identification. I have spoken to the FA’s excellent Kick It Out, which has superb goals for social media companies to create robust and swift measures to take down abusive material, and for investigating authorities swiftly to identify the originators.
Katie Price launched a petition only a few weeks ago that already has over 160,000 signatures.
In Stroud, a robust military veteran has had years of deliberate online attempts to ruin his business and reputation. It has nearly broken him mentally at times. The Facebook page that attacks him has a spare one in case the first gets taken down. Another constituent has endured years of stalking and harassment. She is a retired social worker. She has found the police ill equipped to deal with such fast-moving tech, and even when the perpetrators put a picture of her garage door up online—indicating they knew where she lived—she still felt unprotected. A Gloucestershire journalist was recently told by an anonymous loon that she is single because she
“is self absorbed and looks like a slut”.
I have done enough domestic violence work as a lawyer to know that such attitudes and language are a short hop, skip and a jump to violence.
Of course, not all online nastiness is anonymous. One named man said of me on Facebook last week:
“She should be banished from our lovely Stroud…years ago she’d have been shot on the spot for her arrogance and hypocrisy…yet people voted for the ass licking vile piece of slime.”
Lovely—and I could go on and on. I do not have enough time to properly address other reports of dangerous antisemitism, fake news, vaccine misinformation, deliberate reputation ruining and online fraud. That is on top of the daily legal but harmful harassing-type behaviour, plus posts that have the veneer of a justified challenge but are really just deliberately sparking pile-ons and hate.
Constituents I have spoken to are clear that the reporting does not work, that the cost of legal remedies is out of the reach of normal people and that the law needs updating. We need to make social media known more for the good in our society, rather than as a toxic, unsafe hellhole. The Government’s online harms work, though overdue, is to be commended as a huge step in the right direction. That legislation will require media platforms to take more effective action against abuse, whether it is anonymous or not. Its aims of protecting children and empowering adults to stay safe online are noble, yet the White Paper barely addresses the issue of anonymity. There were no specific consultation questions about the issue. That should be rectified without delay.
As it stands, tech companies do not know who millions of their users are. No matter how good their intentions, the lack of basic information means that any attempt to police platforms and bring offenders to justice is a painful process, if it happens at all. Ofcom’s hands will be tied behind its back before it even starts.
I do not propose the banning of anonymous accounts. There are great benefits in anonymity that I know other Members will speak passionately about today. I would like to see tech companies move on this issue, as we should not always need the Government to intervene, although sadly it currently looks like they will have to.
Three simple steps would go a long way to prevent, deter and reduce online abuse. First, we should give social media users the option to verify their identity. Secondly, we should make it easy for everybody to see whether or not a user has chosen to verify their identity. Members of this House already use that function—my Twitter account has a prominent blue tick next to it, thereby providing confidence that the account is genuine and my details have been checked. Verification works: we should make it available to all. Finally, we should give users the option to block communication, comments and other interaction from unverified users as a category, if they wish.
Some people argue that such moves would undermine freedom of speech, but I disagree. No one would be prevented from using another name or being “Princess What’s-her-chops”, but it would make it harder for online abusers to hide in the shadows if they cause mayhem. Importantly, it would make abusers easier to catch and give social media users the power of choice. Some will be happy to interact with unverified users; others will not. But there must be a choice.
In any event, what greater impediment to freedom of speech is there than people worrying that what they say online will end up in a death threat or a rape threat? What personal freedoms have been lost through the damage done to mental health by online bullying? How many people have already looked at online abuse and hesitated before applying for public-facing jobs, or not applied at all? My proposals would protect freedom of expression and respect the choice of anonymity, but make it harder for abusers to hide in darkness and give individuals new powers to control how they interact with others. I urge everybody to look up the organisation Clean up the Internet, which was co-founded by one of my constituents, to see the proposals in more detail.
Mr Deputy Speaker, no one should face the abuse and horror that you will hear about today. For the victims of online harm, the abuse is not virtual. It does not stay in cyber-space. It impacts the real lives of real people in the real world. If we fail properly to investigate the impact and options surrounding anonymity, I fear we will render any forthcoming legislation and change—no matter how good it is—out of touch and out of date before the ink is dry. We have the expertise, support and drive to tackle online harms; let us be a beacon of light and illuminate the dark streets of social media. Let us really lead the world on tackling anonymous abuse.
The time limit is three minutes and I must ask hon. Members to observe it very strictly, because otherwise colleagues will simply not be able to get into the debate. They will be doing colleagues a favour if they can even manage to deliver their speeches in less than three minutes.
Legislating on online harms gives us a vital opportunity to call a halt to the extremism, misinformation and avalanche of harmful abuse that has become commonplace on social media. Whether on big platforms such as Twitter or fringe platforms such as Telegram, harmful content is now all-pervasive. Recently, another tsunami of racist abuse was directed at the footballers Marcus Rashford, Lauren James and Anthony Martial. Sometimes, the perpetrators can be identified, but too often those responsible do not reveal who they are. In the past, we argued that online anonymity supported open democratic debate; I am now convinced that anonymity encourages online harm that is not just hateful in itself but is used to spread lies about individuals and aims to undermine their credibility and so shut down their voices. Far from nurturing democratic debate, anonymity undermines democracy.
My work challenging Jew-hate reached a climax last autumn, with the publication of the Equality and Human Rights Commission report into antisemitism in the Labour party. The Community Security Trust found that my public comments at that time led to 90,000 mentions on social media. The vast majority were abusive, racist and misogynistic.
Let me share just a few; some are very offensive.
“I hope she dies soon. Dumb bitch”;
“nothing but a couple of shit-stirring…cum buckets, bought and paid for by Israel.”
I was told I was a “Mossad agent”, a “Zionist stooge”, a wrinkly “pedo-lover”. “Traitor.” “Snake.” “Rat.” “Shill.” “Nazi”. This abuse is aggressive, harmful, yet sometimes I have no idea who said it.
Ending anonymity for those who promulgate hate or harm is key to effectively combating it. We must compel social media companies to be able to identify all users. We know that is easily done. Take the online payment company PayPal. Everyone using PayPal must provide their identity when setting up an account. Users’ identity is not public, but it can be traced if required. If social media companies acted similarly, those who use online anonymity for good, such as whistleblowers, or victims of child abuse or domestic abuse, could continue to do so, but those who use anonymity to spread harmful content would be identifiable, and could be dealt with by the appropriate authorities. Knowing that would, at a stroke—
Of course plenty of people are anonymous without ever being abusive and, God knows, plenty of abuse comes from people who are perfectly open about who they are, but there is something of a media hierarchy in human nature. I think we all recognise that there are many people who would say things to someone on the phone that they would not say in person, who would put things in email that they would not say on the phone, would put things on Twitter that they would not write in an email, and yes, will post anonymously something they would never want to see their name written next to.
I do not want to ban anonymity, any more than my hon. Friend the Member for Stroud would. People have long sought its sense of freedom, its disinhibiting effect, its privacy and occasionally its hilarity and enjoyment, and there is nothing wrong with any of that. As long as there is no harm to anybody else, it is no business of the state. It is also important, of course, for activists in oppressive regimes, or for people seeking advice on sensitive issues, to discover a community out there, to know that they are not alone. But while in one context anonymity can give voice to the voiceless and empower the oppressed, in another it can coarsen public discourse and facilitate abuse. Surely it is possible for us to have the one without having to have the other. In this debate we will hear, indeed have already heard, about some really nasty abuses—in many cases, criminal abuses, where the issue with anonymity is really one about registration; it is about the impediments to enforcement action. Many of those cases will be about people in the public eye.
I am also concerned about lower-level effects—the impact on the general tone of public discourse, and the consequences for our social cohesion and mutual understanding. I am concerned not only, or even mainly, about public figures, but also about everybody else—about moderate, normal people of all views who fear to put their head above the parapet, and those deterred from entering public life in future for fear of what their children might see written about them on Twitter.
Free speech is at the heart of our traditions, but we have another long tradition that pamphlets declare who they are from—the imprint. Writers might write under pseudonyms, but someone—the publisher—is ultimately accountable. Social media platforms deny that responsibility, so anonymity could also make it easier for those foreign powers and others who want deliberately to confuse and divide us. It can be hard to know whether you are interacting with a person, a machine or something in between.
There are many possible permutations; there are also many pitfalls, and this warrants proper debate and deliberation. My proposal, like that of my hon. Friend, is a pretty mild one, and a safe one—that if you are on general-usage, mass-market social media using your real identity, you should have the right, if you choose, to hear only from other people using their real identity.
The first issue is impersonation online, which I experienced in 2019 just prior to the general election as I gave up my Twitter MP handle. Within an hour, someone had stolen it, followed a number of my colleagues and began to tweet out as though they were me under my Twitter name DrLisaCameronMP. It was not a parody account, which we can all relate to, but one which I believe was made in malice to impersonate an MP. We all know how many constituents first contact us through social media with crucial private issues; they would have been affected by this individual acting with impunity.
When we reported it, we were originally told that there was nothing that breached standards. However, this is a serious matter not just for public figures but for those who come in confidence to seek our help. My situation could not be rectified until I contacted the CEO of Twitter. Following the election, I even had difficulty getting my own handle back, as I was initially advised that it belonged to someone else. No one was held responsible and it would have taken a police investigation to find out. It should simply never be allowed to happen. Where it does, there should be some means of recourse due to the adverse impact on our most vulnerable constituents.
The second issue relates to my role as vice-chair of the all-party parliamentary group against antisemitism. Sadly, and as we might expect, antisemitism and extremism are a key concern in relation to online anonymity. Disguising one’s identity is not new for extremists, as the Antisemitism Policy Trust pointed out in its briefing on online anonymity. The Ku Klux Klan and others have long sought to cover their faces in order to carry out extreme acts. The internet now offers anonymous abusers and spreaders of radical and violent ideologies a degree of protection by allowing them to hide their identities. According to the Community Security Trust’s incident statistics for October 2020, nearly 40% of reported antisemitic abuse online during that month came from fully anonymous and partially anonymous users. That is an extremely worrying trend.
Placing sensible checks on anonymity and incentivising against harm from anonymous accounts can help victims regain a sense of control and confidence, and would surely disrupt what are presently significant levels of abuse. I urge that restrictions be applied to online abusive actions, much more so than they currently are. Existing legislation urgently needs to be updated through the proposed online safety Bill.
Social media companies too often mistake harmful hate speech for legitimate freedom of expression. A recent report by The Guardian revealed internal moderator guidelines at Facebook, reportedly leaked to the newspaper, that say that public figures are considered to be permissible targets for certain types of abuse, including calls for their death. More needs to be done not just to take down harmful content, but to ensure that social media companies do not amplify it in their systems. No one has a freedom of expression right to be promoted on TikTok’s “For You” page or the Facebook news feed. An internal company report in 2016 told Facebook that 64% of people who joined groups sharing extremist content did so at the prompting of Facebook’s recommendation tools. Another report from August last year noted that 70% of the top 100 most active civic groups in the USA are considered non-recommendable for issues such as hate, misinformation, bullying and harassment.
The business model of social media companies is based around engagement, meaning that people who engage in and with abusive behaviour will see more of it, because that is what the platform thinks they are interested in. When we talk about regulating harmful content online, we are mainly talking about that model and the money these companies make from all user-generated content as long as it keeps people on their site. Content that uses dehumanising language to attack others is not only hurtful to the victim but more likely to encourage others to do the same.
When Parliament debates the online harms Bill later this year, we will have to remind ourselves what the real-world consequences are of abuse on social media. In Washington DC on 6 January, we saw an attempted insurrection in the US Capitol, fuelled by postings on Facebook, that caused the deaths of five people. In the UK, we have seen significant increases in recorded hate crimes over the past 10 years, suicide rates are at a 20-year high, and over the past six years the number of hospital admissions because of self-injury in pre-teens has doubled. Arrests for racist and indecent chanting at football grounds more than doubled between 2019 and 2020, even though hundreds of matches were cancelled or played behind closed doors. These issues are too serious to be left to the chief execs of the big tech companies alone. Those people need to recognise the harm that their systems can create in the hands of people and organisations intent on spreading hate and abuse. We need to establish the standards that we expect them to meet, and empower the regulatory institutions we will need to ensure that they are doing so.
One of the things we have to recognise is that this is not an equal experience. Women, particularly women of colour, and people from non-binary backgrounds are especially at risk of being abused online. Some 82% of women politicians from around the world report experiencing psychological violence, and half of them have had rape or death threats. In the 2017 election, MPs who were women of colour were particularly targeted, receiving 35% more abuse than their white colleagues, with my right hon. Friend the Member for Hackney North and Stoke Newington (Ms Abbott) receiving half of all the abuse online during that election. It is little wonder that by 2019, many colleagues from across the House cited the abuse that they had faced as the reason they were standing down.
This is not just about people in the public domain. It is also about the experience of women and people of colour across our country, and we know that that has got worse during the pandemic, with a 50% increase in the abuse, according to Glitch!, which has been monitoring this. It is not just the words; it is the sheer volume of abuse we get. And it is not just online any more; it is leaching into our offline world, and it is increasingly not anonymous, with people feeling emboldened to use abuse as it becomes commonplace. Every year that we delay enacting this legislation is another year when we see voices being removed from our public domain, so let us kill the idea that this is about free speech. It is not free speech when 50% of the conversation is living in fear of what someone might do, or of being found or being terrorised, and it is not free speech when we are not hearing those voices—that diversity of voices that improves our debates and discussions.
I started off using kittens to try to take the heat out of conversations; now I have moved on to capybaras, but the problem in the last eight years has got worse. It has been state-sponsored, it is organised and it requires us to come together and hold the media companies accountable, just as we would hold a pub landlord accountable if we were being abused in a pub while just going about our business. The online harms Bill must recognise the intersectional nature of the issues we face. It must listen to organisations such as Glitch!, HOPE not hate and the Jo Cox Foundation—for goodness’ sake, it must listen to that—when they argue that we must recognise who is being targeted. In a free and fair democracy, we must fight to reclaim not just our streets but our social media too.
Human rights are often about a balance of rights. The right to anonymity in what someone says has to be balanced against the right of the people they abuse to speak freely themselves and the need to hold them to account for making their speech less free. These are of course difficult balances to strike, but if we care about everyone’s freedom of speech, we cannot avoid them.
Freedom of speech is not unrestricted in other arenas, and it should not be unrestricted on social media either. That restriction often comes via the criminal law, including online, but there is much we should not tolerate that falls short of criminal behaviour, damaging individuals and damaging us all. I agree with my hon. Friend the Member for Stroud and my right hon. Friend the Member for East Hampshire (Damian Hinds) that in addressing anonymous abuse of an individual, we perhaps should start, counter-intuitively, by looking not at the merits of anonymity, but at the merits of verifiable identity. Whether it is in online banking, shopping or combating deep fakes, it will increasingly help to be able to demonstrate who we are, and if we can establish reliable ways of proving identity, we should be able to choose to interact online only with others whose identity can be verified or who are willing to reveal it. However, anonymous content that damages us all, from disinformation to extremism, is a different problem. Here I think we should consider the disclosure of identity only with judicial sanction, in the same way as other intrusions into privacy such as search warrants or phone tapping, which require the authority of a judge.
Of course, all of this needs much more thought and debate, and the forthcoming online safety Bill should be an opportunity for both. Determining what the duty of care at the heart of the Bill requires online platforms to do, both for those who need the protection of anonymity and for those who need protection from anonymity, is a real challenge, but I think it is now one that we must grasp and do so in the course of this Bill, not put off again.
Since I entered politics, I myself have been the victim of a consistent and vicious campaign of abuse. It is largely based on my appearance. When I post the video of this speech online, invariably someone will post a gif or a comment designed to hurt me. Sometimes this is done through anonymous accounts and sometimes through real profiles. We live in a world where online platforms embolden people to be nasty, vindictive, spiteful and cruel, and very few public figures escape it.
Only last week, following a match with Slavia Prague, the Glasgow Rangers footballers, Glen Kamara and Kemar Roofe, were subjected to a torrent of some of the vilest racist abuse I have ever witnessed on Facebook, Instagram and Twitter. While this was on an unprecedented scale, the club tells me that every single black player at Rangers football club has been racially abused online at some point this season, yet when confronted, the social media companies wash their hands of responsibility and continue to facilitate this hate on their platforms. Rather than joining the rest of society in tackling racial inequality and prejudice, they are actually enabling it. It is time they were called out and held accountable.
I also want to raise a very sinister and very concerning trend seen in Northern Ireland. It involves threats of social media campaigns being waged against politicians and against journalists in a bid to destroy them personally and silence them publicly. What is most concerning is the strength of the links suggesting that a political party that enjoys significant electoral support is behind this campaign of intimidation. We need the social media companies to recognise this bot intimidation, and we need the police to be empowered to stop it.
In conclusion, we can continue to talk about these issues, but what we want to see is a legislative basis for enforcing measures for social media firms to require ID to verify accounts and for a swifter response and better co-operation with the police in tackling this online pandemic of hate. If that does not happen, we will have failed every victim, whether a teenager, TV presenter or international footballer, and allowed the continuation of hurt and harm against them. It is time for this Government to act.
When social media users are anonymous, they feel much more able to behave poorly and bully and abuse other users. It is not just me saying that; repeated studies show that anonymity makes user behaviour increasingly aggressive and violent. Anonymity also makes it much harder to enforce rules against such behaviour. If a troll is eventually banned, they simply create a new anonymous account under a new name and their behaviour continues unabated. Online anonymity is a key factor in the spread of disinformation, conspiracy theories and extremism. Organised disinformation networks exploit the ability to create fake accounts and false identities at scale, using these networks to create false and misleading content and to spread and amplify it.
Tackling this effectively will take a really fine balance. I have absolutely no appetite to see free speech curtailed, but if we hope to challenge disinformation and extreme content, we need to address this issue head-on. If we look at platforms such as BitChute and some of the people who use them, we see that they have weaponised free speech and are acting as if there are no boundaries at all. People use these platforms to propagate images and text that directly undermine the rights of other people and minority communities with hateful content. We need to have a proper discussion and understanding of this. Anonymity wrapped in the flag of free speech is being used right now to justify the sharing of extremely hateful content and to directly assault our democratic values.
How do we tackle this? I do not believe that we should just ban anonymity, but my hon. Friend the Member for Stroud makes exactly the right points on this issue. We should mandate that social media platforms give people the option to verify their identity. Users should have the option to block interaction with others who choose not to verify themselves, and it should be clear to everyone who is verified. Doing that would put the power back into the hands of users who want to enjoy the amazing benefits of social media but not see hate speech or fake news. Frankly, if someone is not prepared to stand up and put their name to their words, why should anyone listen to them?
The last year has shown the capacity of social media and digital platforms to benefit society. We have all seen how they can be misused. Online abuse and even online hate, including racism, antisemitism and misogyny, is prolific, with horrific impacts on individuals’ mental health. We have even seen cases that have been associated with suicide. In addition to harming individuals, propaganda, misinformation and fake news are threatening the health of populations and the security of democracies across the world.
Anonymous social media accounts are used by many abusers to hide and get away with their abuse. Research by Clean up the Internet has shown that the majority of abuse and misinformation spread online comes from these anonymous accounts. We should be in no doubt: anonymous accounts embolden bullies and facilitate abusers. This is ruining lives.
It is not only the personal damage caused by anonymous accounts: they are polluting public space, entrenching divisions and trashing our democracy. Research by the think-tank Compassion in Politics found that nearly one in three people are put off posting on social media sites for fear of abuse. Voices are being shut out from online debates because, for many, the space in which those debates are taking place is becoming increasingly toxic. Damaging lies are spun without any recourse.
The Centre for Research and Evidence on Security Threats has concluded that the use of fake social media accounts to influence the outcome of UK elections is “systemic” and that the level of influence of those accounts is “considerably more extensive” than is widely understood. Of course, this follows on from the work by the Digital, Culture, Media and Sport Committee and the US Senate committees.
We have arrived at this critical juncture largely because of the legal framework in which social media companies have been allowed to grow and prosper. A hands-off approach gave fledgling social media companies the room they needed to experiment with algorithms, turn profits from advertising revenue, and engage a larger user base with little social responsibility. The experience over the last decade or so shows that things cannot continue as they are. We need a new legal framework. I hope the online safety Bill will take the bold step of making sure that anonymity is sorted.
I only ever use Facebook, which is a great way to communicate with people, and most people on there use it sensibly and safely, but we all know that hiding behind a keyboard can bring out the worst in people. Lots of us have been guilty of reacting too quickly to a comment, saying the wrong thing or reading it in the wrong way, but that happens in real life anyway. Maybe some of us should actually be breathalysed before logging on to Facebook, as I am sure alcohol can sometimes play a part. Hiding behind a screen sometimes means that people act differently from the way they act in person. It can bring out a nasty side. If people want to be nasty, it is their right to be nasty, but they should stand up and be counted and identified in doing so.
As MPs we get our fair share of abuse online. The sad thing is that I have sort of accepted that, as I am an MP, it is an occupational hazard. When I say abuse, I mean personal threats—threatening messages about me, my wife, my family, my children and my friends. Sadly, it is even worse for female MPs and MPs of colour. It is unacceptable. I know that MPs are not the nation’s favourite people, but does that justify the level of abuse that they get? I do not think it does.
Some people wish to remain anonymous when reaching out for support or whistleblowing, and that is acceptable. This could be a person wishing to reach out to warn somebody about a new partner in their life who has been abusive or a criminal in the past. We need to find a place online for these people to go to make sure that they are listened to without being ridiculed or laughed at, and feel safe. Maybe we could look at creating an online safe place.
Lockdown has meant that many of us have spent more and more time online, and sadly, for many of us, that has meant more abuse, more threats and the like. In my constituency, I have a little girl called Jossie who has Down’s syndrome. Jossie is a beautiful, bright and loving little five-year-old girl. Her mother campaigns online to raise awareness of Down’s syndrome, and proudly displays Jossie’s pictures online. Trolls in my area thought it was a good idea to lift Jossie’s picture off the internet, put a noose over her head and make fun of it to Jossie’s sister. This would not happen in the outside world without immediate, swift consequences, and the online world should be no different. I stand with Jossie.
There may be genuine and valid reasons why some people need to protect their identity online—those at risk of being traced by former partners in an abusive situation, professionals such as teachers, or those whose views may put them in serious danger of harm or torture from their own Government regime. Those who do not need protection, however, are the individuals who choose to use online platforms to bully, intimidate, troll and abuse others. Of course, many people online behave like that without feeling the need to hide behind fake profiles.
Social media gives a great platform to public figures such as MPs, but the downside to having a blue tick on Twitter, especially for women, is that we experience a disproportionate amount of online abuse almost daily. My right hon. Friends the Members for Hackney North and Stoke Newington (Ms Abbott) and for Barking (Dame Margaret Hodge), my hon. Friend the Member for Birmingham, Yardley (Jess Phillips) and the hon. Member for Bishop Auckland (Dehenna Davison) spring to mind when I think of online abuse—and indeed, as the hon. Member for Stroud mentioned, she herself was trolled for daring to be pregnant.
I cannot speak in a debate such as this without raising the huge problem of racism on online platforms. We have all seen it in all its disgusting forms all over social media, from trolling and vile language to the use of insulting stereotypes and images. Sadly, though, some forms of racism have been overlooked and even deemed acceptable by many. Racism is not only about the colour of our skin; Islamophobia and antisemitism have shamed political parties in recent years, and the Labour party has been in the spotlight over the recent findings of the Equality and Human Rights Commission. That is, of course, deeply shameful, as is the treatment of the former MPs who felt that they had no choice but to leave this party.
While I welcome the measures put in place by the leader of my party and the general secretary and his team, many of us have been reporting online abuse by members of our party for several years—personal abuse, sometimes by those with blue ticks themselves, and abuse by party members using anonymous, fake accounts. I know who they are, and it takes only a few seconds to find the posts that they write and share which blatantly clash with the values of this political party.
One of the worst examples for me was the member who mocked up photos and memes of me dressed in the striped clothes worn by Jews in concentration camps. Other members reported his behaviour, yet nothing at all was done until he posted support for a different political party, when he was swiftly expelled. I look forward to the changes that my party and social media platforms will bring in on online abuse, and to the end of these anonymous accounts that are not verified.
My constituent Frankie was subject to online stalking, which started on social media but escalated to severe harassment in the workplace, including incredibly offensive emails sent to both her and her colleagues. That caused serious distress for Frankie. While the police were able to track down her online stalker, because the stalker was based in a different jurisdiction—they were based in Scotland, a very long way from East Surrey, and had happened upon her almost by accident—nothing could be done, leaving that person free to go on and abuse again.
Although I welcome the online harms work that we are doing and the work of the Law Commission to improve the duty of care and to look at how we can boost our offences regime for online communications, I refer Members to the Law Commission’s language about the practical barriers to enforcement. It talks about the sheer scale of abusive and offensive online communications. That means that we must seek to prevent some of these communications; we cannot just use an offence regime to police them.
I want to touch briefly on free speech. We know that women are more abused online, we know that that inhibits them from expressing their views, and we know that that puts them off standing for office. I just wonder whether we are ensuring that we look at whose free speech we are protecting.
We also know that online anonymity can be very beneficial to people, whether they are whistleblowers, domestic abuse victims or people who just enjoy the uninhibited conversations that they can find online, but we know that, as more and more people go online, we will have more and more of these cross-jurisdiction problems. From 2 billion in 2015, we expect 7 billion people to be online in 2030. We cannot just act unilaterally on this issue; we need to look at what we can do to work with our partners across the world, because more and more of these trolls may not be based in the UK.
From stalking to harassment, grooming, scamming, extremism, fake news and political interference, we know that the anonymity of online interaction is doing harm, and there is therefore much merit in the proposals put forward by my hon. Friend the Member for Stroud. We should be considering those and all other policies that we can to try to inhibit some of the anonymous abuse that we have seen.
Online abuses have a real-world impact. We cannot just take refuge in dualism. We cannot say that the online world is different from the real world, because it is not. As we know, we have seen the final, awful result of some of this, which is that young people and other people have taken their own lives.
On anonymity, the hon. Gentleman, the hon. Member for Barrow and Furness (Simon Fell) and the hon. Member for East Surrey (Claire Coutinho) made the very sensible point that, of course, anonymity has its place in all this. It can provide much-needed support for whistleblowers, for minorities finding a safe online space for self-expression, and for victims of physical abuse connecting with a support network. The conversation about anonymity is difficult and we may have to make trade-offs in what the Bill contains. We may not like some of the trade-offs, but at the end of the day, it would be much more worrying if the online safety Bill did not get to the heart of the issue. I sincerely hope that it will.
Let us face it: we already take a variable approach. If somebody wants to buy alcohol—booze—they have to prove that they are of a certain age; they have to prove who they are. If someone wants to get a certain job, take out some sort of loan or make a financial transaction, they have to prove who they are, so there is a precedent for such proof. Lastly, we have the technical know-how to tackle this issue. It is out there and we can use it.
I want to make it very clear to the Government that it will not be easy but we have to get the identification issue sorted out. It would be a hugely valuable achievement to pull this one off, because I never again want to hear the sort of stuff that I have been hearing in this debate, such as how the hon. Member for Stroud and many others have had this dreadful stuff flung at them. It is just the lowest of the low.
We have heard many reasons today why anonymity is important, such as if someone is a whistleblower, if they are a victim of sexual assault or if they are simply writing a blog about something that is not their day job. These are all reasons why anonymity is important, but a person must waive that right when they commit a crime. At the moment, it is far too difficult for the police to find out who the perpetrators are. It can take up to a year in some cases, even in very serious cases, to track down the authors of these offending tweets and so on. That is what we have to change.
My hon. Friend the Member for Folkestone and Hythe (Damian Collins) was absolutely right when he said that we need to see more action from social media companies, but we also need to clear up the law, so that it is simple and easy for the police to track down perpetrators. Tackling anonymity is important, because anonymity not only shields criminals but also encourages lawlessness. My hon. Friend the Member for Barrow and Furness (Simon Fell) talked earlier about the Law Commission’s finding that anonymity facilitates and encourages abusive behaviours. My hon. Friend the Member for Stroud was absolutely right when she said that it is just a hop, skip and a jump between people behaving badly online and that translating into violence on the street.
We have spoken a lot this week, and in preceding weeks, about how we absolutely need to tackle violence against women. When we see data from Amnesty International on the number of women affected, it does even more to build the case that we need to tackle anonymity. We need to make sure people are held accountable for what they do online, and we need to make sure that the absolutely shocking stories we have heard today cannot be repeated. The Government are doing a lot in the online harms Bill, and this would be a worthy addition.
As many people live increasingly online, never more so than during the past year, it is time to get serious about making the online world safer and protecting pluralism, democracy and the mental health of the many people who engage online. As I suspect most Members will know, particularly female Members, online abuse from anonymous accounts has sadly become part of daily life for most roles in public life. Social media is a vital tool for engagement, to listen as well as to broadcast, so it is a problem when people have to wade through pointless abuse, perhaps gendered, sectarian or racist, particularly from accounts that appear to have been set up for precisely that reason.
Although there are tools to report and block accounts, it usually has to be done by the victim of abuse, after the abuse has been experienced and the damage done. This has a cumulative effect, and it is easy to see how it is being harnessed politically in something of a war of attrition against opposing political voices. This is a new manifestation of a phenomenon familiar in Northern Ireland and elsewhere, of low-level, persistent abuse and intimidation that is designed to wear down an opponent to the extent that they decide that expressing a particular viewpoint just is not worth the hassle.
Misinformation is an additional and pressing concern with far-reaching consequences, whether through sowing distrust and hate or more literal and immediate impacts, as with systematic misinformation about the covid-19 pandemic. Most concerning of all is the harmful impact that anonymous abuse can have on young or vulnerable people. Research shows that bullying and receiving content they wish they had not seen is widespread among young people, and it is mainly sent by people who are not prepared to attach their own name. We have heard tragic stories of online bullying of adults and young people—bullying that may continue 24 hours a day, following people into their homes and bedrooms, in a way that is impossible to escape.
I welcome the moves in the online harms Bill towards greater regulation and against anonymity where it is used for harmful purposes. Many Members have outlined some of the reasons why people may choose and need to act anonymously, but the internet is essentially a large public space with an antisocial behaviour problem. We design and shape our physical environment to promote safety. If the companies have failed to do that, the Government should step in. Slightly tangentially, Australia’s regulation of news content shows that, although it is an international problem, national Governments can tackle it.
The online world is a great place to engage, communicate and connect, but we need action now to harness it for good and to protect vulnerable people and pluralism.
The White Paper sets out a sensible approach to what will be quite a dramatic overhaul of how our online world is regulated. It is right to take this action, and the severity of the issue justifies the impact of the proposed regulation on the platforms that facilitate online communications; fines of up to £18 million or 10% of annual turnover for non-compliance are being proposed.
This debate is well timed, as hon. Members will have seen a lot of misinformation, most of which originates online, regarding the efficacy and safety of covid-19 vaccines. This is a dangerous situation, and I am concerned. The actions and comments of some in the EU that have fuelled these malicious claims are, frankly, immoral, endangering not just their neighbour but their own citizens, too. Fortunately, 94% of people in the UK have had, or are willing to have, their dose, so the Government have obviously won this battle and have been more successful than many others in doing so.
It is not just misinformation. From racist or homophobic abuse to content designed to attack the self-esteem of vulnerable people, we have seen time and again that a dangerous minority of anonymous web users can do some truly horrid things from behind the safety of their screens. I welcome the Government’s proposals, and believe that the approach of regulating and placing some of the responsibility on the firms and platforms facilitating such communications is the correct one.
I will use the short time that I have left to make some observations about the proposals on verified ID. If verification is optional, I am concerned about cross-platform penetration—for example, Google search reporting on the results of tweets and Facebook posts. There will be some technical difficulty in making verification apply consistently when content is presented across platforms.
Secondly, we all know the power of some of the big tech firms such as Facebook, Google and Apple. I believe that they currently have what is necessary to identify a lot of these online trolls, and that the Government forcing a duty of care on them is the biggest and most important step that we will take—bringing them to the table, making them responsible, and calling them out when they do not co-operate with law enforcement.
One form of abuse that was new to me last year came to light when my constituent, Helen Mort, told me of the harrowing time that she had faced because of the actions of someone online who is still unknown despite their cruel behaviour. She told me of her shock and fear when she learned that non-sexual images of her had been uploaded to a porn website, the users of which were then invited to edit the photos, merging Helen’s face with explicit and violent sexual images.
Deepfakes were made by unknown people from original, non-intimate images taken from Helen’s social media without her consent and superimposed on sexual content. When she spoke to the police about the images, she was told that they could not act because there was no crime. In England and Wales, under section 33 of the Criminal Justice and Courts Act 2015 it is an offence to non-consensually distribute a private sexual photograph or film with the intent to cause distress to the person depicted. There is no offence if the original image is not private or sexual.
Clearly, that leaves people vulnerable to online abuse, suffering humiliation and distress, while the perpetrators of such acts can rest easy, knowing that there is no criminal offence with which they can be charged. The call for evidence in the Government’s recently reopened consultation on violence against women and girls is comprehensive in covering many types of online and image-based abuse, but it omits the creation of deepfakes. I hope that the Minister will agree with and pass on my concerns about that omission, and ask for it to be included in the Government’s consideration of the issues.
I have discussed the matter with the right hon. Member for Basingstoke (Mrs Miller), who I understand cannot be present because she is chairing another debate. She has done important work on this issue, and I know that she shares my concerns. When the online harms Bill eventually arrives, we must look to it to outlaw the making, taking and sharing of intimate images without consent.
Although the delay to the Bill has been frustrating, it will enable us to take account of the Law Commission’s current review on this area, which is due to be published later this year. I hope that the Minister, when winding up, will agree that when the Bill comes forward it must be used as a vehicle for implementing improvements to the law to protect people in a position like Helen’s.
Similarly, a petition to make verified identity a requirement for opening a social media account was created on 5 March by Katie Price, who is campaigning on the abuse faced by her son Harvey, and it already has nearly 170,000 signatures. In an evidence session last year, Katie told the Committee,
“if you go and get a mortgage, a car on HP, bank credit or whatever anyone wants, they want to know your name and address. They do credit checks…It is simple. Why can’t they do it on social media, so that if someone is being abusive then you have their address and you can find them?”
All this points to a growing public concern that anonymity allows people to get away with—and worse, encourages—horrendous abuse of a kind that would simply not be tolerated offline. Petitioners know that whatever measures we take will never be enough to stop the most determined trolls, but it is the same principle we implicitly understand when we lock our doors or install an alarm system: no home security measures will stop determined burglars, but they deter most would-be criminals.
I am co-chair of the all-party parliamentary group against antisemitism, and we have seen that anonymity also has an impact on online hate speech. In October 2020, nearly 40% of reported antisemitic abuse online came from fully anonymous or partially anonymous users. It is a worrying trend, and the Antisemitism Policy Trust has written a really important briefing on this issue, which I encourage the Minister to read. Several studies have shown that anonymity can make user behaviour more aggressive by creating environments less constrained by social norms. The Law Commission’s investigation found that
“anonymity online often facilitates and encourages abusive behaviours. Combined with an online disinhibition effect, abusive behaviours, such as pile-on harassment, are much easier to engage in on a practical level.”
We have tiptoed around this issue of online harms for far too long. The Government must now set out a clear plan to address online abuse and hate speech, and give people back control of their online lives.
If I am honest, online abuse was not something that I experienced personally before becoming a politician, but in the past 16 months, I have felt the full force of it. Friends and colleagues tell me that it has become more commonplace over the past year, as people have been locked behind closed doors, struggling through the confusing ups and downs we have all experienced since covid hit; it seems to have become more acceptable to take it out on others online. On online platforms and public fora, egged on by an often equally anonymous audience, some people say things they would never normally dream of saying publicly, purely because their username gives them a level of anonymity. For those on the receiving end—even those who have strong, resilient support networks to protect them—the effect can be devastating, leading to mental and physical health crises and even suicide.
Abuse like this is cowardly. These people have no respect for differences or democracy. They do not represent the vast majority of the population, yet despite the fact that we all recognise how abhorrent it is, it still goes on. Before things go too far; before more people suffer; before we hand a life of unhindered online bullying to our children, we need to take action. The online harms Bill will help to tackle anonymous abuse by putting a duty of care on companies towards their users, overseen by an independent regulator. There will be clear safety standards, mandatory reporting requirements and strong enforcement powers to deal with non-compliance. It will remove the excuse that is sometimes rolled out that this is freedom of speech, or that such behaviour cannot be tackled when an account is anonymous.
We need to be absolutely clear: abuse is not the same as freedom of speech and being anonymous does not give anyone the right to abuse anyone else.
Throughout the pandemic, we have seen how social media can be a force for good. It has kept families and friends connected. It has allowed our children to continue their education from home. It has helped many elderly people living on their own to tackle feelings of loneliness and isolation. But these days, social media is often used for the wrong reasons: as a platform for abuse, trolling and bullying, often by anonymous users who hide behind a screen to bombard people with vile and sometimes threatening words.
The abuse has affected every type of user, including parents, celebrities and, indeed, Members of this House. Although I have experienced hateful bullying and online interactions since I was elected, I know that it is absolutely nothing in comparison to some of the atrocious abuse experienced by other Members, from the newest to some of the longest serving. Sadly, for many female Members, this abuse often goes further and results in harassment and threats. I give huge credit to my hon. Friends the Members for Wrexham (Sarah Atherton) and for Ynys Môn (Virginia Crosbie), and all others in the House who, despite constant abuse from faceless trolls and bullies, have continued to work tirelessly for the good of their constituents and communities. For their resilience alone, they are a credit to this House.
Of course, this is not a partisan issue. The shocking abuse that is received by the right hon. Member for Hackney North and Stoke Newington (Ms Abbott) has been well documented for its appalling and abhorrent nature. Not only does such abuse hurt the individual; it damages our political discourse and our ability to debate. It ultimately damages our democracy, as it acts as a deterrent to people becoming engaged in politics at all.
Abusive behaviour on social media is indiscriminate and impacts many of our nation’s celebrities, sports stars and public figures, particularly if they are female or part of the BAME or LGBT communities. In the sporting world, sadly not a week goes by without a new instance of a black footballer being racially abused. Even our nation’s heroes are not immune, with trolling even aimed at the late Captain Sir Tom Moore and his family—a man who served this country, raised tens of millions for the NHS and lifted the spirits of a nation. Despite all his efforts, he and his family were still subjected to online abuse.
I have to ask, when is enough? Where do we draw the line? How many more individuals—whether close friends or public figures—do we have to lose before we start to address this horrific online abuse and hate? I think we all agree that the answer is now, and it is long overdue. Sadly, I say all this secure in the knowledge that the tweets will have already started, attacking me for having the audacity to stand up and say, “Enough.” Enough.
The online space has allowed some individuals to mask their characters and express hate in ways that would be utterly unacceptable in the real world. I know at first hand the level of abuse, hate and online threats that I have faced over the years, and it is happening online today. I know at first hand how a single tweet—which was up for less than eight minutes—can be misrepresented and exaggerated to wrongfully define an individual and cause an avalanche of hate and abuse. Due to that one single tweet, an individual was sent to prison for threats that he made to my life and Leave.EU was sued for misrepresenting me.
But what will happen to the hundreds of anonymous accounts whose Islamophobic, racist, misogynistic and hate-filled threats are left unchallenged on social media—the tweets of me wearing a hijab, falsely labelling me as an Islamist paedophile admirer, those describing me as a cancer that will lead to my destruction, and the hundreds of others that are still online today? In the real world we have hate crime laws and defamation laws, but for the anonymous trolls the online space has become a free-for-all.
This debate is not just about my experiences, or those of fellow parliamentarians, of the online abuse that we face daily; it is also about the experiences of ordinary people whose stories we never hear. We have the privilege of sharing our experiences in places such as Parliament, yet we still face this abuse, which is without any consequences. What hope do my constituents have?
The online harms Bill is meant to be a once-in-a-generation opportunity to legislate for the online world, so that we can protect freedom of expression without allowing hate, misinformation, and bullying to go unchecked. However, the Government have failed to deliver such legislation, and have watered down the contents of the Bill, leaving it with little effect. The very prejudices that we have fought to tackle over years have resurfaced online. It is almost as if the racism that was once expressed on the street has just moved anonymously online.
Parliament’s primary role might not be to change the moral conscience of those in society, but it is definitely to legislate against harm, and to protect all citizens of this country. I will end with this quote from the great Dr Martin Luther King which, sadly, I believe is still relevant today:
“Morality cannot be legislated, but behaviour can be regulated. Judicial decrees may not change the heart, but they can restrain the heartless.”
I am fortunate that I have good people around me, both inside and outside this House, who I can turn to when I receive such abuse. However, there are people across this country who have no one to turn to, and nobody to remind them of their good qualities. As we all know, online abuse can cause irreparable emotional and psychological damage to victims. It is not a surprise that as online abuse has increased, mental health issues have increased among young people. This is a growing pandemic, and I am pleased that the online safety Bill will begin to address those painful issues by holding internet companies to account.
My constituents have regularly shown their support for greater accountability from social media platforms and online abusers, and I have been blown away on numerous occasions by the online support I received when I have shared my experiences. Those outpourings of love and support are testament to the fact that social media can be a safe and positive place for everyone.
In their manifesto, the Government made a commitment to make the UK the safest place in the world to be online. That commitment shows their intent to bring about positive change, and I applaud Ministers for condemning such online abuse. I welcome the review of law enforcement powers to tackle illegal anonymous abuse online, and I look forward to the Law Commission’s recommendations for reform in that area. I hope that Ministers will seriously consider, where appropriate, whether to bring those recommendations into law as part of the online safety Bill. We must all join together to say, “Enough is enough”, and we need to take action on this now.
Let me cite two examples. Patricia Devlin, an award-winning crime reporter who works for the Sunday World newspaper, received a threat by direct message to her personal Facebook account. The sender threatened to rape her newborn son. It was signed with the name of the neo-Nazi terror group Combat 18, which has in the past had links to loyalist paramilitaries in Northern Ireland.
It is not just journalists who work for national media outlets who are targeted. The political editor of the Liverpool Echo, Liam Thorp, has exposed the scale of threats against and abuse of journalists working in the regional media. A perpetrator was jailed for two and a half years over death threats made to Liam and another Echo employee.
The NUJ has broadly welcomed the “National Action Plan for the Safety of Journalists” that the Government published this month, which stated:
“That there is a problem is undeniable. Too many journalists currently working in the UK do not feel safe from threats, abuse and physical harm”.
The words of Reporters Without Borders are cited in the plan:
“Harassing journalists has never been as easy as it is now.”
The NUJ will work with the Government and other partners on this issue to ensure both the safety of journalists and the swift implementation of the strategy.
The cornerstones of any strategy to ensure that we have thriving, high-quality national and local journalism to uphold our democracy have to be not only firm legislative protections but also a fully resourced, independent media. An effective online safety Bill will be a critical component, but so will the securing of a stable future funding source for quality journalism. That is why the NUJ has thrown itself into the debate about not only the contents of the new legislation but also the new funding opportunities needed to secure, for the long term, the quality journalism that is so critical to upholding our sense of community and our current democratic system.
The circumstances of the pandemic have been a petri dish for conspiracies, disinformation and hate speech, especially towards east and south-east Asian people, as powerfully highlighted in this place by the campaigning work of my hon. Friend the Member for Luton North (Sarah Owen). It can seem that social media companies are too big, too global and too all-encompassing for us to be able to regulate or hold them to account, but we must demand standards and protections to ensure online safety.
The Community Security Trust noted in its most recent report that 44% of the 789 recorded antisemitic incidents between January and June 2020 occurred online, and the anti-extremism campaign HOPE not hate says that
“the far right’s use of the web to promote, plan and assist in terrorism is something HOPE not hate has increasingly witnessed in recent years.”
The Antisemitism Policy Trust is clear that anonymity encourages conversations to get more extreme, as it eliminates people’s desire to conform to social standards. The lessening of anonymity leads to more dialogue within social norms. Anonymity is the soil—or perhaps the manure—for growing extremism and abuse.
Of course, online abuse is not solely the preserve of anonymous accounts. Indeed, I have been subjected to a barrage of antisemitic, biphobic and other hateful harassment—including from candidates standing in this May’s local elections—co-ordinated by Warrington Conservative association in WhatsApp chats that were later leaked. Those people have been emboldened by a political culture that has become utterly toxic, fuelled and accelerated by anonymous online abuse becoming normalised.
In addition to the harassment—particularly of women and minority communities—that we have heard about, I am especially concerned about the increasing sexual exploitation and abuse of children and the increasing threat of far-right online radicalisation. Children now are vulnerable to grooming in ways that no previous generation has been. Expectations of internet access start earlier than ever, and as hardware is now miniaturised and literally mobile, it is more likely to be accessed away from parental oversight than a traditional family desktop computer was. The threat of communicating with strangers, of catfishing or of other unexpected contact from sexual predators is real.
Anonymity is not the only issue that must be tackled. We need to finely balance the many, varied and legitimate needs for anonymity with the need to address harms perpetrated by anonymous accounts. But the fact that it is difficult and complicated is not a reason not to tackle it; it makes the task more necessary and urgent. I welcomed the Government’s 2019 White Paper; the matter must not be delayed further and I hope that the Government will introduce legislation as soon as possible.
Many journalists, particularly freelancers, use social media platforms such as Twitter to promote their work and the work of their news organisation and, as such, many have been the subject of vile abuse, rape and even death threats, as my right hon. Friend mentioned earlier. Women in particular are targets, with those who write about traditionally male topics, such as sport, technology or gaming, often singled out for particular abuse.
Like the NUJ, I welcome the publication of the national action plan for the safety of journalists—it is long overdue—but there is still a lot of work to do. Let me draw to the attention of the Minister and the hon. Member for Stroud the fact that the internal moderation guidelines that were leaked to The Guardian revealed that Facebook’s bullying and harassment policy explicitly allows for public figures to be targeted in ways otherwise banned by the site, including calls for their death, which is incredible.
The NUJ supports the need for much greater transparency and accountability from the tech giants in tackling the online abuse of journalists and, indeed, of public figures. Social media messaging organisations are simply not acting to stop this abuse or to ban serial offenders, and the NUJ quite rightly insists that the new regulatory framework must make clear to companies their responsibilities to address this online harm. Crucially, sanctions must exist, even against the tech giants, or perhaps especially against the tech giants, and be applied if appropriate action is not taken. It is important to note that the online abuse of journalists is not always anonymous. The Government stated that improving public recognition of the value of journalists was a priority as part of their action plan. Public attacks by politicians on journalists also serve to undermine the public’s recognition of the value of journalists.
A thriving democracy requires a diverse press. Citizens making decisions in a democratic process must be properly informed. I hope that Ministers will meet with the NUJ ahead of the publication of the Bill to ensure that this is the case.
Social media platforms have connected us in a way that no one would have thought possible. From rapid instant messaging to sharing content about our everyday lives, we are immersed in readily accessible information and news in a way that our parents could only have dreamed of. In the past two decades, this immersion in an always-on, always-connected world has had another consequence. A 2019 study by the Turing Institute on online abuse estimates that up to 40% of people in the UK have seen or been exposed to abusive content, and 10% to 20% have been targeted by abusive content. Worse still, that same year, Oxford Internet Surveys found that 27% of respondents had seen content or imagery that was cruel or hateful and that 10% had received abusive emails.
The ethnic breakdown makes for even more depressing reading: 41% of black respondents received abusive emails compared with just 7% of white respondents. That should come as no surprise to Members who witness this kind of content on a daily basis. They know all too well the sheer scale and impact that this abuse can have, particularly as it is often anonymous and spread by accounts with peculiar user names and either eggs or silhouettes as profile pictures. While we must protect freedom of expression, there should be the same level of accountability for someone who commits abuse online as there is for someone who verbally abuses a person on the street.
I know the strength of feeling on this issue in my own area, which is why I am supporting Coventry Youth Activists’ campaign for a change in Facebook’s policy so that it recognises hate speech and discrimination specifically targeting disabled people. At the moment, that form of discrimination is reported under the “other” category, which makes it seem acceptable and less important than other types of abuse. Tech giants can and must do more to protect the rights of their users to feel safe, both online and in their everyday lives, and their users’ free expression should not mean that they are free from the consequences of their actions. Social media companies can absolutely do more to make the online environment safer. They have the tools at their disposal, and so do this Government. If the tech giants will not take action on it, it is up to the Government to give the regulators the resources and teeth they need to take appropriate action to safeguard all users, so that social media platforms can be places where people truly feel safe.
One important thing we must also do is acknowledge that we have all suffered. I doubt that anyone in this place has not been subjected to online abuse from some anonymous source. We have to stand up and call it out. We have to recognise the scale of the problem that society is facing and the threat it poses to all of us, specifically to our young people. I urge the Government to take this on board and think about it every time: not just in specific legislation about social media or about the media, but in every piece of legislation where it could be important.
The sadism comes in many forms. Recently we have learned of the phenomenon of flashing GIFs posted and targeted at children with epilepsy, with the intention of causing seizures. Let that sink in for a moment: trolls targeting epileptic kids with potentially dangerous consequences, again under the guise of anonymity. Paedophiles have exploited the internet since its inception. With false identities and ever more sophisticated software, young people can be entrapped by those pretending to be someone they are not.
As a member of the Digital, Culture, Media and Sport Committee, I have listened to evidence from the social media companies—TikTok, Facebook, Twitter and Instagram. They all promise to target the problem, but they are all woefully unresponsive. I asked a TikTok boss recently why he was not more tenacious in tackling anti-vaccination disinformation targeted at young people. He said that his company had thousands of moderators hot on the heels of the anti-vaxxers. It only took me a few moments to find a TikTok account with hundreds of thousands of followers posting lies with impunity.
Politicians are not meant to moan, or we get called self-indulgent, but we all know the shocking abuse sent to women politicians in particular. When the First Minister Nicola Sturgeon revealed personal details of tragic miscarriages she had suffered, she was deluged by trolls mocking her or saying that she was lying about her family grief. A Holyrood magazine survey found that a third of MSPs had received a death threat, with a third of female MSPs receiving a threat of a sexual nature in addition. Before the 2019 election, a survey showed that many female MPs who were not standing again cited the abuse they received online as a major factor in their decision.
Misogyny, homophobia and racism have sadly always been a part of politics, but they are now magnified by the perfect poisonous storm: a huge worldwide audience and anonymity. I have had a wee taste of it myself recently. Last year I praised online a young trans constituent of mine who had bravely spoken of her life in a BBC documentary. Almost instantly, a sinister organisation called the LGB Alliance began trolling me. It offered the reward of a retweet to anyone who donated money to it in my name. I was, as a result, deluged by abuse from anonymous accounts. I was called a rape enabler, a misogynist and, although I am an openly gay man, a homophobe. I was also called a “greasy bender”.
The LGB Alliance was thrown off the country’s two largest crowdfunders as a hate group, but Twitter would not take its account down, despite it clearly and egregiously violating Twitter’s own rules. I am a man in my 50s. The experience was not pleasant, but I was acutely conscious of all the young trans people reading the venom and despairing. They have, after all, been subjected to a vile online onslaught in recent weeks and months.
There can be no accountability in anonymity. Social media is now so ingrained in our lives that it cannot be allowed to continue without some form of verification. The cowards who send death threats or seizure-inducing messages or who attempt to groom children would not dare be so bold if we knew who they were, or at least if the social media companies knew who they were. Of course, the trolling does not exclusively come from the UK. Abuse, misinformation and disinformation flood in from Russian bots, attempting to undermine our values and democracy, and over the course of the pandemic they have threatened our health with covid disinformation.
Like the hon. Member for Stroud (Siobhan Baillie), I believe that verification is the best way to protect people online. That does not mean that people—especially those who are vulnerable—should not be able to use a nom de plume, but social media users should have the option to prove their identity. That could come with an equivalent to the blue tick for verified profiles on Twitter. Such verification would allow us to know who we are interacting with online and know that those we talk to are who they say they are. Users would also then be able to decide whether to block all non-verified users. This would offer protection to parents worried about their children’s safety online, to those who wish to avoid Russian and other bots, and to all of us who would choose to talk only to real people.
Who among us does not seek to stem the avalanche of poison, abuse and disinformation? Will the social media companies embrace such proposals and self-enforce? The evidence suggests not. The Government need to get tough. Soft-touch regulation is not working.
This has been a difficult debate to listen to, particularly the brave speeches from my right hon. Friend the Member for Barking (Dame Margaret Hodge) and my hon. Friends the Members for Bradford West (Naz Shah) and for Warrington North (Charlotte Nichols). The issues that this debate raises will resonate across the country because of the prevalence of online abuse. There were excellent contributions from Members across the House, including very thoughtful ones from my hon. Friends the Members for Canterbury (Rosie Duffield), for Oldham East and Saddleworth (Debbie Abrahams) and for Newcastle upon Tyne North (Catherine McKinnell), my right hon. Friend the Member for Hayes and Harlington (John McDonnell), and my hon. Friends the Members for Easington (Grahame Morris) and for Coventry North West (Taiwo Owatemi).
My hon. Friend the Member for Walthamstow (Stella Creasy) mentioned in her speech the charity Glitch, which found that almost half of women have reported experiencing online abuse since the beginning of the covid pandemic. Women from all walks of life are subjected to it, but particularly those in public life, from MPs to journalists to TV presenters.
Today’s debate has been a small snapshot of what has become a tsunami of vitriol, hate, misogyny and racism online, and of the cesspit that parts of public engagement online have sadly become. Just ask my right hon. Friend the Member for Hackney North and Stoke Newington (Ms Abbott), the journalist Nadine White or the rugby presenter Sonja McLaughlan. Glitch’s research found that 84% of its respondents experienced online abuse from strangers—a shocking statistic. Members know that much of the abuse directed at us is first seen by our staff, and that can and does take a huge toll on them. Why would any girl or young woman stand for elected public office or become a journalist or a television presenter if this is what awaits them? It erodes our democracy and it erodes our society.
There will always be people who spew anonymous hate and abuse. In past years, it would have been a letter hand-written in capital letters and green ink, which required some effort and the cost of a stamp. In today’s world, people have the internet at their fingertips. They have become emboldened and they have been joined by many others, sadly mainly men, who take advantage of unregulated, globally powerful and very wealthy online platforms—which are, incidentally, also run by men—to attack and abuse women. Those attacks are copied and amplified to the point where they are normalised within online public discourse.
What should we as politicians and policy makers do to improve the situation? I will make two suggestions to the House. First, leadership should be demonstrated in this House—we heard lots of examples of that this afternoon. When my former colleague Paula Sherriff asked the Prime Minister to moderate
“offensive, dangerous or inflammatory language”,—[Official Report, 25 September 2019; Vol. 664, c. 793.]
the Prime Minister dismissed her request as “humbug”. Many of us in this Chamber were there to hear that at first hand. The way we conduct ourselves in this House has an impact on the world outside. When Government Ministers attack journalists and falsely accuse them of bad practice, they open them up to abuse, too.
Secondly, platforms that facilitate and permit abusers to operate need to be subject to the same regulatory framework that is applied to products on the market that carry a risk of harm, or cause harm to those who use them. We need a product safety regulatory approach for online platforms, just as we have for medicines or children’s toys, to name two examples. If a product carries a risk of harm, then the owner, creator or manufacturer—whatever we want to call them—should take steps to mitigate or remove that risk in order to operate. We have heard plenty of examples this afternoon of failures by companies to prevent their platforms from hosting and amplifying abuse and attacks. Algorithms promote this, because algorithms feed the platforms’ business models. Justifiable complaints are ignored, and community standards are weak and not enforced.
As several hon. Members have mentioned, the latest in a long line of shockingly negligent examples concerning Facebook emerged yesterday when it was revealed that its bullying and harassment policy explicitly allows for public figures to be targeted in ways otherwise banned on the site, including through death threats, according to internal moderator guidelines that were leaked to The Guardian. The public figures identified by Facebook include people whose claim to fame may simply be a large social media following or infrequent coverage in local newspapers.
We have heard lots of agreement across the House that anonymous accounts that attack people or spread hate should not be amplified, but nor should those that are not anonymous. While there are legitimate reasons for people to use a pseudonym online, it is clear that those accounts should be limited in what they do, and that other users should be able to limit their interaction with them. This could be done by the platforms now; it could have been done years ago. There is far more that the platforms could do using technology to prevent abuse from being hosted in the first place, and also to take it down quickly when it is reported.
This is why the online safety Bill is so vital and so urgent, and why it is so important that we get it right, so that it is not superseded by technological advancements. That is why Labour backs a principles-based approach. Despite insisting last week that the new law was a priority, the Government have yet again delayed the Bill. Although we expect it to be announced in the forthcoming Queen’s Speech, it is now likely that more than five years will have passed between the legislation first being promised and its coming into law. This delay is inexcusable, because we know that behaviour online can translate to behaviour offline.
The watering down of the measures initially proposed has dripped slowly, very slowly, since the White Paper was published two years ago next month. Why, if the Government are serious about clamping down on this abuse, would they keep in reserve criminal sanctions for senior company executives where there are repeated or aggravated breaches of the legislation, rather than implementing them immediately, as Labour has called for? Why would the Government water down the original proposals so that companies will be encouraged to set minimum terms and conditions, as they will be judged on how they enforce them? That will incentivise completely the wrong approach.
The Bill, whenever it eventually arrives, cannot be a lost opportunity to reset the dial on our online discourse. This is a once-in-a-generation opportunity. We all recognise and largely agree on what the problems are, and many of us on both sides of the House agree on the solutions. Plenty of them were suggested today. The Government need to stop delaying, stop watering down, catch up with other countries that have taken the lead and get on with this legislation. What have we gained by waiting? Nothing. What is to be gained by waiting? Nothing. This House and the public are impatient to hear from the Government on this legislation.
Let us be clear: free speech is crucial, and a climate of fear creates a crisis for freedom of speech. That is why the Government’s online safety Bill is so important. For the first time, the social networks on which so much of this abuse is hosted will be required to enforce terms and conditions that ban abuse and protect free speech. That is important not just for anonymous abuse; it is important in tackling the abuse that is far too prevalent from those who use their real names, too. Abuse online or offline, anonymous or obviously identifiable, is not acceptable, and this Government are balancing the benefits of anonymity for those who need it, free speech and the right of every citizen to feel safe. Nobody, not even the right hon. Member for Hayes and Harlington (John McDonnell), should be free anywhere to call for people to be lynched.
Several Members—my hon. Friends the Members for Ashfield (Lee Anderson) and for East Surrey (Claire Coutinho) and a host of others—have made compelling cases for the option to verify users’ identity when signing up to social media. Such an approach could bring benefits, not least the potential to more easily identify those involved in serious harm and abuse online. Ofcom will be exploring how platforms can meet the duty of care, and we must ensure that there are no safe spaces for criminals online, but at the same time we must be mindful of the arguments, particularly from those users who rely on anonymity to protect their safety online.
In December, we published the full Government response to the online harms White Paper consultation, which sets out the new expectations on companies to keep their users safe online. Services that host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content, such as child sexual abuse and terrorist material. All companies will need to tackle illegal anonymous abuse on their services. All companies will need to assess the likelihood of children accessing their services and, if that seems likely, ensure additional protections. Companies with the largest audiences and high-risk features will have to take action in respect of content or activity on their services that is legal, but harmful to adults. That is because certain functionalities, such as the ability to share content or contact users anonymously, are more likely to give rise to harm. The regulator will set out how companies can fulfil their duty of care in codes of practice, including what measures are likely to be appropriate in the context of private communications. Users will be able to report anonymous abuse effectively, and should expect swift and effective responses from platforms.
The online safety Bill will be ready this year. Of course, the precise timings are subject to parliamentary time, but in the meantime we are already working closely with Ofcom to limit the implementation period to as short a time as possible. We want all parliamentarians to feed into this significant piece of work, and I encourage the Members who have contributed today to do that. We will continue working with Members of both Houses, and we will continue to listen to their concerns as we move through the passage of this legislation.
We are also clear that companies should not wait for legislation to be in place to take action, so I want to talk briefly about the measures that platforms are taking. For example, Facebook allows users to protect themselves from unwanted interactions. Instagram allows businesses and creator accounts to switch off direct messages from people they do not follow. Twitter has introduced a feature to limit replies to followers, providing users with more control over who they interact with. All of this is good news, but more work needs to be done in this area to keep all users safe online.
I also want to mention the police’s legal powers to investigate abusive behaviours. The police can already identify individuals who attempt to use anonymity to escape sanctions for online abuse where the activity is illegal. The UK-US data access agreement, which will shortly come into effect, allows UK law enforcement agencies to directly request information. It will significantly reduce the time required to obtain data for cases involving serious crimes.
Of course, we recognise that the law in this area must be kept under review as well, ensuring that the police have the necessary tools at their disposal to investigate anonymous abuse online. The Government are undertaking a review with law enforcement to ensure that their current powers are sufficient to tackle illegal abuse online, anonymous or otherwise. The outcome of that work will inform the Government’s position in relation to illegal anonymous abuse online and the online safety regulatory framework, as will the Law Commission’s review of existing legislation on abusive and harmful communications, including deepfakes. That report is expected later this year.
The Government are committed to tackling harms online, including harms perpetrated anonymously. We know that those harms are not evenly spread. We know that they disproportionately affect women, they disproportionately affect minorities, they disproportionately affect trans people. As has been said, they also disproportionately affect journalists. Anonymous abuse can have a significant impact on victims, whether they are members of the public or high-profile public figures.
The regulatory framework and the criminal law reforms will better protect all users online while also safeguarding freedom of expression, because it is vital that we get this legislation right. We want all parliamentarians to feed into this significant and important piece of work. We will continue to work with Members of both Houses, and we are confident that our approach, through the online safety framework and the criminal law review, will tackle online abuse, including abuse perpetrated anonymously. I pay tribute to all those Members who have so powerfully brought the need to do so home to us all today.
We heard the Minister rightly recognise the calls for verification, although we did not quite hear about the steps that could be taken and the importance of anonymity. I hope that the Minister and the DCMS team will meet me and a small number of Members to discuss that further, and perhaps set up a meeting with the tech companies so that we can talk that through. My thanks to everybody who has contributed; I am really chuffed with the support today.
Question put and agreed to.
Resolved,
That this House has considered online anonymity and anonymous abuse.
Contains Parliamentary information licensed under the Open Parliament Licence v3.0.