Unintended features: facial recognition technology has been plagued by accusations of bias © Getty Images

From one angle, the pandemic looks like a vindication of “techno-solutionism”. From everyday developments such as teleconferencing to systems exploiting advanced artificial intelligence, platitudes about the power of innovation abound.

Such optimism smacks of short-termism. Desperate times often call for swift and sweeping solutions, but implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase and deploy such systems face costly repercussions, both financial and reputational.

Tech ethics, while a relatively new field, has suffered from perceptions that it is either the domain of philosophers or of PR people. This could not be further from the truth: as the pandemic continues, the importance of mapping out the potential harms of technologies only grows.

Take, for example, biometrics such as facial-recognition systems. These have a clear appeal for companies looking to check who is entering their buildings, how many people are wearing masks or whether social distancing is being observed. Recent advances in the field have combined technologies such as thermal scanning and “periocular recognition” (the ability to identify people wearing masks).

But the systems pose serious questions for those responsible for purchasing and deploying them. At a practical level, facial recognition has long been plagued by accusations of racial bias.

“It is challenging — you can see that from the number of examples [of errors],” says Rajarshi Gupta, head of artificial intelligence at cyber security company Avast. He points to a study by the American Civil Liberties Union which found 28 false matches between images of US members of congress and mugshots of people arrested for crimes. The legislators incorrectly matched were disproportionately people of colour.

There is also a significant risk of mission creep. Facial recognition, after all, has traditionally been sold as a tool for security and policing. In many cases, it would be simple for a company to pivot, post-pandemic, into using biometric tools for such purposes. That, in turn, raises the spectre of facial data being shared with other organisations, both private and public, without much transparency.

Biometrics is among the most striking examples of how Covid-19 has accelerated the need for tech ethics in business, but it is not the only one. Similar questions can be asked about employee tracking, rolled out in the name of productivity, if it is used without employees’ knowledge. Beyond the pandemic, there are further questions about the adoption of automated decision-making: algorithms that make decisions on loans, jobs and more.

Companies that buy high-risk systems and fail to consider the necessary safeguards can fall foul of a growing drive for ethical technologies, one that has outgrown the academic and activist communities where it began. Decisions such as IBM’s to stop selling facial-recognition technology to law enforcement agencies highlight the growing anger and power of this movement.

This developing focus on ethics is no fad, and companies that treat it as such put themselves at considerable risk. Leaving aside legislative changes (a possibility, though a slow-moving one in most jurisdictions), public censure and the potential loss of business are enough to require a close look at what systems are in place. It will not be enough simply to blame the creators of an algorithm that proves to be biased.

If this all sounds bleak for companies that have been looking to innovation to save them from reaching breaking point in the pandemic, there is some better news. There has been an acceleration in some of the most cutting-edge technologies, Gupta notes. “For the past couple of years, a number of very smart researchers have spent a lot of brainpower trying to make things more explainable,” he says of attempts to make AI decisions fully transparent. While there is a long way to go, Gupta is optimistic that investment in the field will increase and make ethical applications easier.

There is a growing number of tools businesses can adopt when considering which technologies to buy. In the UK, NHSX, the National Health Service’s innovation arm, developed an assessment template for healthcare providers purchasing AI products.

Another practical example, released in October, is an ethics return-on-investment calculator, developed by tech ethics consultancy Hattusia. The tool provides an easy way for companies to assess the potential implications of technologies for their bottom line, depending on relative risk. An accompanying report offers guidance for those who need to persuade senior management of the true value of tech ethics, and provides a template for building a responsible business team.

Social-networking platform Facebook dropped the internal motto of “move fast and break things” more than half a decade ago. In the heat of the pandemic, there is a growing risk of other companies dusting it off. To do so is to gamble on the wrong side of history, and to view ethics through a narrow lens.

The assumption that tech ethics is mutually exclusive with innovation is at best lazy; so is the view that ethical technology is an “optional” extra for companies that can afford it. “Ultimately, tech ethics is valuable because human lives are valuable, not because of the financial return,” notes Alice Thwaite, founder of Hattusia, in the ethics return-on-investment report. Other sectors, such as coal and oil, have faced a reckoning over the impact they have on society; there is no reason why technology should not do the same.

Copyright The Financial Times Limited 2024. All rights reserved.