PARLIAMENTARY DEBATE
Automated Facial Recognition Surveillance - 27 January 2020 (House of Commons Chamber)
Live facial recognition compares the images of people passing a camera with a specific and predetermined list of those sought by the police. It is then up to officers to decide whether to stop and speak to those flagged as a possible match. This replicates traditional policing methods such as using spotters at a football match. The technology can make the search for suspects quicker and more effective, but it must be used strictly within the law.
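To make the matching step described above concrete, here is a minimal sketch of a threshold-based watchlist comparison with the final decision left to an officer. The function names, the cosine-similarity measure and the threshold value are illustrative assumptions, not details of any deployed police system.

```python
# Illustrative sketch only: a minimal watchlist-matching loop with a
# human-in-the-loop decision. All names and values are hypothetical.
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed similarity cut-off; real systems tune this

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(live_embedding: np.ndarray,
                watchlist: dict[str, np.ndarray]) -> str | None:
    """Return the watchlist ID of the best match above threshold, else None."""
    best_id, best_score = None, MATCH_THRESHOLD
    for person_id, ref_embedding in watchlist.items():
        score = cosine_similarity(live_embedding, ref_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    # A flagged match only prompts an officer to decide whether to
    # stop and speak to the person; it is not treated as proof.
    return best_id
```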
The High Court has found that there is an appropriate legal framework for the police use of live facial recognition, and that includes police common-law powers, data protection and human rights legislation, and the surveillance camera code. Those restrictions mean that sensitive personal data must be used appropriately for policing purposes, and only where necessary and proportionate. There are strict controls on the data gathered. If a person’s face does not match any on the watchlist, the record is deleted immediately. All alerts against the watchlist are deleted within 31 days, including the raw footage, and police do not share the data with third parties.
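As a rough illustration of the retention rules just described, the sketch below encodes "delete immediately on no match; delete alerts and raw footage within 31 days" as code. The record structure and names are hypothetical, not the Met's actual implementation.

```python
# Illustrative sketch of the stated retention rules; fields and
# function names are invented for the example.
from dataclasses import dataclass
from datetime import datetime, timedelta

ALERT_RETENTION = timedelta(days=31)  # alerts and raw footage kept at most 31 days

@dataclass
class FaceRecord:
    captured_at: datetime
    matched_watchlist: bool

def should_delete(record: FaceRecord, now: datetime) -> bool:
    if not record.matched_watchlist:
        return True  # no watchlist match: delete the record immediately
    return now - record.captured_at >= ALERT_RETENTION  # alerts expire at 31 days
```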
The Metropolitan Police Service informed me of its plans in advance, and it will deploy this technology where intelligence indicates it is most likely to locate serious offenders. Each deployment will have a bespoke watchlist made up of images of wanted people, predominantly those wanted for serious and violent offences. It will also help the police to tackle child sexual exploitation and to protect the vulnerable. Live facial recognition is an important addition to the tools available to the police to protect us all and to keep murderers, drug barons and terrorists off our streets.
An independent review of the Met’s facial recognition trial was published last July, and its conclusions are damning. Does the Minister agree with the report that the legal basis for this roll-out is questionable at best and is likely to be in conflict with human rights law? According to an analysis of the Met’s test data, 93% of supposed matches in the four years of trials have been wrong. As well as being inaccurate, facial recognition technology has been shown to be much less accurate in identifying women and ethnic minorities than in identifying white men. This means that women and black, Asian and minority ethnic people are much more likely to be stopped without reason than white men. Given that a black person is already 10 times more likely to be stopped and searched than a white person, does the Minister share the Liberal Democrats’ concern that this technology will increase discrimination and further undermine trust in the police among BAME communities?
The biometrics commissioner, the Information Commissioner and the surveillance camera commissioner have all raised concerns about facial recognition surveillance, and all three have argued that its impact on human rights must be resolved before a wider roll-out. What steps has the Minister taken since those warnings to examine and address the human rights issues they raise?
Since the ICO report was published, we have had the judgment in a case brought against South Wales police’s deployment of this technology, in which the High Court found there is an appropriate legal basis for the operation of facial recognition. However, I understand that there may be an appeal, and there is a suspended judicial review into the Met’s operation, which may be restarted, so if Members do not mind, I will limit what I say about that.
As for disproportionality, there is no evidence of it at the moment; the Met has not found disproportionality in its data in the trials it has run, and certainly a Cardiff University review of the South Wales police deployment could not find any evidence of it at all. The hon. Lady is, however, right to say that in a country that prides itself on being an open and liberal society, we need to take care with people’s impressions of how technology may impinge upon that. As she will know, live facial recognition has an awful lot of democratic institutions looking at it, not only this House: the London Assembly has a policing ethics panel; we have the surveillance camera commissioner and the Information Commissioner; and there is a facial recognition and biometrics board at the National Police Chiefs’ Council, which brings people together to look at these issues. There is lots of examination to make sure that it is used appropriately, and I am pleased to say that the Met will be operating it on a very transparent basis. As I understand it, the Met will be publishing information about what data were gathered, the success rate and other information that will allow the public to have confidence that, where the technology is deployed to identify wanted criminals, it is having the effect intended.
As for unreliability, as technology is rolled out it obviously becomes more and more effective and reliable—[Interruption.] Well, I am the lucky owner of a telephone that allows me to make banking payments on the basis of recognising my face. That technology was not available in the last iteration of the phone—it is an iPhone—which used my thumb instead. So there are developments in technology. South Wales police found in trials that there was a 1:4,500 chance of triggering a false alert and more than an 80% chance of a correct alert. It is worth bearing in mind that even when the system does alert the police to a possible identification, the final decision as to whether to intervene with an individual is still taken by a human being.
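A back-of-the-envelope calculation shows how the quoted figures interact with crowd sizes, and why a low per-face false-alert rate can still sit alongside a high proportion of false matches of the kind cited earlier: wanted faces are rare in any crowd. The crowd size and the number of watchlist individuals present below are invented for illustration.

```python
# Worked example using the figures quoted in the debate; the crowd size
# and number of watchlist individuals present are hypothetical.
false_alert_rate = 1 / 4500   # quoted per-face chance of a false alert
true_alert_rate = 0.80        # quoted chance a watchlist face is correctly flagged
faces_scanned = 10_000        # assumed crowd passing the camera
watchlist_present = 2         # assumed wanted individuals in that crowd

expected_false = (faces_scanned - watchlist_present) * false_alert_rate
expected_true = watchlist_present * true_alert_rate
share_false = expected_false / (expected_false + expected_true)
print(f"Expected alerts: {expected_false:.1f} false, {expected_true:.1f} true")
print(f"Share of alerts that are false: {share_false:.0%}")  # ~58% in this toy case
```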
The Scottish Government are employing an approach that involves a comprehensive, up-to-date legislative framework and a regularly updated code of conduct with strong oversight through a commissioner. In that way, my colleagues in Edinburgh hope to ensure that the use of the technology is proportionate, necessary and targeted, and that it respects human rights, privacy and data protection rules. Will the Minister follow suit?
Finally, so far as I am aware, there is no evidence that the use of this technology in the manner contemplated is effective in fighting crime. If I am wrong about that, will the Minister direct me to the evidence that says that it is effective? If not, why not adopt less risky measures, such as following the Scottish Government’s example and employing more police officers in a meaningful way?
In the previous Parliament, the Science and Technology Committee looked at this issue as part of its review of the biometrics and forensics strategy. All the key stakeholders recognised that at the root of the issue was a biometrics strategy that was not fit for purpose and lacked the quality required to provide a regulatory framework for facial recognition technology. Can the Minister confirm whether that strategy has been updated since last April?
Has the Minister seen the concerns raised by the think-tank Future Advocacy that the deployment of this technology may infringe upon the rights of Muslim women who wear the niqab, and wider concerns about technology being less accurate, particularly with women and ethnic minorities?
The hon. Member for Newcastle upon Tyne Central (Chi Onwurah) made an important point. The embedding of bias in technology is a major issue that will worsen with the early widespread adoption of artificial intelligence. The Government will inherit these biases as a user of these technologies, so will my hon. Friend, noting that American studies show that the rate of false recognition for ethnic minority women was between 10 and 100 times that for Caucasians, look seriously at how those technologies are improving as he progresses the adoption of this technology?
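For clarity on how a disproportionality figure of that kind is derived, the sketch below computes per-group false-match rates and their ratio from invented trial counts; none of the numbers come from the American studies mentioned.

```python
# Illustrative sketch of computing a disproportionality ratio between
# demographic groups; all counts below are invented for the example.
def false_match_rate(false_matches: int, non_watchlist_faces: int) -> float:
    return false_matches / non_watchlist_faces

# Hypothetical trial counts for two demographic groups
rate_group_a = false_match_rate(false_matches=40, non_watchlist_faces=10_000)
rate_group_b = false_match_rate(false_matches=2,  non_watchlist_faces=10_000)

disproportionality = rate_group_a / rate_group_b
print(f"Group A's false-match rate is {disproportionality:.0f}x group B's")  # 20x here
```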
Contains Parliamentary information licensed under the Open Parliament Licence v3.0.