Various technology news feeds indicate that facial recognition software is being deployed increasingly throughout England and Wales. However, concerns over its accuracy and its ethical ramifications, including the potential breach of public privacy, are hotly debated.
Doubts over accuracy
Facial features from images captured in crowds, at events and on CCTV are compared against a database, whether that’s a collection of images stored on social media or one kept in a police system. This can then assist police in identifying suspects, providing ID in real time, while also helping to verify sightings of missing individuals.
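At its core, this matching step compares a numeric "embedding" of the captured face against stored embeddings and flags the closest one above a similarity threshold. The sketch below is a minimal, hypothetical illustration of that idea (made-up embeddings and threshold; real systems use learned embeddings with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, watchlist, threshold=0.8):
    """Return the best watch-list identity above the threshold, else None.

    probe: embedding of a face captured on CCTV (list of floats)
    watchlist: dict mapping identity -> stored embedding
    threshold: tunable; lowering it flags more genuine matches but
               also produces more false positives.
    """
    best_id, best_score = None, threshold
    for identity, stored in watchlist.items():
        score = cosine_similarity(probe, stored)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Illustrative, made-up 3-dimensional embeddings.
watchlist = {"suspect_A": [0.9, 0.1, 0.3], "suspect_B": [0.2, 0.8, 0.5]}
probe = [0.88, 0.12, 0.31]
print(match_face(probe, watchlist))  # -> suspect_A
```

The choice of threshold is exactly where the accuracy debate below plays out: it trades missed suspects against innocent people being wrongly flagged.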
However, there are increasing calls for the technology to be reviewed and its use limited, with some suggesting it may be discriminatory and inaccurate.
Trials within Britain are ongoing, but current findings illustrate the controversy, with low arrest numbers set against significant numbers of incorrect identifications in live surveillance projects. For example, news reports suggest 2,000 false positive incidents by South Wales Police at the 2017 UEFA Champions League final. Research also suggests disproportionate misidentification of ethnic minorities and women. The result? Ineffective capture of suspects, and innocent individuals being stopped and searched.
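A quick base-rate calculation shows why live trials can generate so many false alerts even when the underlying software is fairly accurate. Every figure below is hypothetical and purely illustrative (none comes from the trials reported above):

```python
# Base-rate illustration with hypothetical figures: when genuine
# watch-list matches are rare in a large crowd, even a low
# per-person error rate swamps the true alerts.
crowd_size = 170_000        # hypothetical large stadium event
watchlist_present = 50      # hypothetical watch-list members in the crowd
true_positive_rate = 0.90   # system flags 90% of genuine matches
false_positive_rate = 0.01  # and wrongly flags 1% of everyone else

true_alerts = watchlist_present * true_positive_rate
false_alerts = (crowd_size - watchlist_present) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"{true_alerts:.0f} true alerts, {false_alerts:.0f} false alerts, "
      f"precision {precision:.1%}")
```

Under these assumptions, roughly 1,700 of the alerts would be wrong and only about 2.6% correct, which is why headline false-positive counts alone say little without the surrounding crowd and watch-list sizes.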
The issue of discrimination is also coming under the spotlight, with the 2017 Remembrance Sunday incident being one example, where it was suggested that ‘fixated individuals’ were placed on a watch list based on criteria associated with mental health issues.
Trials of the technology, however, come at a high cost, with a £2.6m government grant to South Wales Police and over £200,000 to the Metropolitan Police covering this research.
A breach of privacy?
When facial recognition is used in public, rights are potentially breached because individuals reasonably expect privacy. The software can identify us and access personal information, using data without our consent or control. The general public does not expect to be recorded, observed or monitored, and loses its anonymity when this happens. The technology also makes it possible to pull up further records on identified individuals, and to analyse and assess in-depth statistics, data and personal information across a range of sources like never before. False identifications further undermine the privacy of innocent, unsuspecting individuals.
This raises the question of how facial recognition technology should be regulated and deployed. Police use must adhere to the Data Protection Act 2018 as well as the Surveillance Camera Code of Practice, but there is minimal guidance on the criteria for adding faces to a watch list or on regulating the technology itself.
Currently, there is limited coordination in its use, with police forces collating and storing data differently. Furthermore, images on watch lists are typically drawn from police databases, which raises the question of whether people with minor, old or no convictions will end up on such lists.
These ethical concerns have prompted calls for more guidance and an overarching framework to regulate the use of this technology: greater consistency, smaller watch lists and improved quality standards to ensure objective use rather than subjective deployment at the discretion of an individual officer or department.
Facial recognition is exciting and potentially useful, with a wide range of applications, but as current technology news illustrates, legislation needs to respond to protect human rights and privacy.