In charts: facial recognition technology — and how much do we trust it?
Facial-recognition technology poses ethical questions that trouble citizens and civil liberties campaigners all over the world.
Advocates say the technology improves public safety and security. But critics say it is intrusive and often inaccurate, and they call for regulation to prevent law enforcement agencies from misusing it. One of the biggest problems is the technology's demonstrated bias against women and people of colour.
Last month, the European Commission published draft regulations on the use of artificial intelligence that included restrictions on facial-recognition technologies. The Commission said Canada and Japan were also looking closely at the proposals.
In the US, technology companies including Microsoft and Amazon have already announced that they will temporarily stop providing law enforcement agencies with facial-recognition tools. In 2019, San Francisco became the first city in the US to ban their use by local authorities.
However, in the UK, a new biometric watchdog — set up to scrutinise how police and other authorities deploy biometrics and surveillance cameras — has said the use of facial recognition technologies should not be banned.
Fraser Sampson, Britain's commissioner for the retention and use of biometric material, has said usage should instead be left to the discretion of law enforcement agencies, rather than lawmakers. He argued that criminals are increasingly relying on sophisticated technology and police "need to match their technological capability". London's Metropolitan Police currently uses live facial-recognition technology.
In these charts, we display the latest data on the rise of facial recognition technology, its accuracy, and the scale of the public trust problem.
In North America, the value of the facial recognition technology market is forecast to grow rapidly.
Global demand is also growing fast, and specialist companies in several countries have raised increasing amounts of funding over the last four years. China and, to a lesser extent, the US lead the market. China has invested heavily in AI, including facial-recognition technology, and has quickly rolled out applications that have helped the government boost surveillance and exert control over the population. In the US, AI development is driven by the private sector, with a focus on commercial applications.
Trust depends on who is using facial recognition, and where. Some research suggests the technology can be biased.
In a 2018 test, the American Civil Liberties Union found Amazon's "Rekognition" software falsely matched 28 members of Congress to mugshots of people who had been arrested. More than a third of those false matches were people of colour. In a blog post, Amazon argued the test methodology was flawed.
Another study suggests that women and non-white groups are less likely to be correctly recognised in a mugshot than their white, male counterparts.
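To make the comparison concrete, the metric typically used in such studies is the false non-match rate (FNMR): the share of genuine comparisons the system fails to recognise, computed separately for each demographic group. The sketch below shows that calculation with invented numbers; the group labels and figures are purely illustrative and do not come from any study cited above.

```python
# Hypothetical illustration of a false non-match rate (FNMR) comparison
# across demographic groups. All counts below are invented for the sketch.
from collections import defaultdict

def fnmr_by_group(trials):
    """trials: iterable of (group, matched) pairs, where matched is True
    when the system correctly recognised the person in a mugshot.
    Returns the fraction of missed matches per group."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for group, matched in trials:
        totals[group] += 1
        if not matched:
            misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}

# Invented results: 100 genuine comparisons per group.
trials = (
    [("white men", True)] * 97 + [("white men", False)] * 3 +
    [("non-white women", True)] * 89 + [("non-white women", False)] * 11
)

rates = fnmr_by_group(trials)
print(rates)  # {'white men': 0.03, 'non-white women': 0.11}
```

A gap like the one in this made-up output (0.03 vs 0.11) is the kind of disparity the bias studies report: the same system, evaluated the same way, misses matches far more often for one group than another.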