Pandemic-led healthcare advances raise ethical and political hurdles
Never in the field of human health has so much data been provided by so many with so few safeguards as during the Covid-19 pandemic.
From passively sharing medical records to actively participating in clinical trials, from the uploading of diagnostic test results to the use of mobile phone apps to track individuals’ locations, the consumer digital age has been characterised by a surge in tools and insights to tackle infection.
The unprecedented power of technology has helped mitigate the worst effects of the pandemic, not least by enabling people to work and study remotely in ways that would have been unthinkable even a decade ago.
Similarly, it has offered direct benefits for health, with more effective control of transmission, accelerated scientific and medical insights and enhanced vaccine and drug development. Technology platforms have accelerated the shift to online medical consultations, and the data they collect have had far wider benefits.
But much has come through mandatory “test, track and trace” systems, requirements for isolation and proof of vaccination imposed by governments that restrict movement and enhance surveillance. They have used powers rarely deployed in peacetime by western democracies, while authoritarian regimes have adopted such measures still more aggressively.
The speed of these digital health advances has left checks and balances lagging, fuelling distrust of governments and companies alike. This risks undermining future gains unless health innovations are accompanied by new approaches to “data solidarity” to balance public and private benefits, according to the findings of The Lancet & Financial Times Commission on Governing Health Futures 2030.
Steve Davis, a consultant and author of Under Currents: Channeling Outrage to Spark Practical Activism, describes the digital revolution as “net-net one of the most powerful things that will ever happen to human health”. He argued recently that “there is a huge gap around understanding what is available, the ecosystems are fragile, there [are] no clear policies on data governance, on digital privacy, on managing disinformation”.
While many people willingly share their personal data through social networks mined by companies and governments, the pandemic has crystallised specific concerns around health. Medical data is perceived as particularly sensitive, and its forced extraction may create resentment and lead to inconvenience or discrimination.
In the UK, the Information Commissioner’s Office (ICO) last year began investigating claims that at least one large Covid-19 testing company had included a notification — buried in extensive terms and conditions — that it could retain its clients’ DNA and other genetic information to share with external researchers.
That highlighted worries about the potential commercial exploitation of information derived from government-imposed testing on travellers in the name of public health. Other concerns have emerged over wider sharing of data related to Covid-19 with law enforcement agencies, not always fully assuaged by the regulator. The ICO, for instance, says it has “received assurances that there is no automatic mass sharing of data from NHS Test and Trace to police forces”. It adds that “limited data can be shared under strict controls where police suspect self-isolation rules have been broken”.
In Singapore, authorities won initial praise for their prompt action to control the spread of coronavirus with the TraceTogether scheme in 2020. But, last year, new legislation was rushed through to add safeguards against surveillance after officials revealed that data collected for coronavirus control had been used in a criminal investigation.
The collection and use of health data are only set to grow, offering the potential to help disease prevention and treatment significantly. However, they also risk creating ever more divergent outcomes between richer and poorer regions and countries, older and younger people, and those whose data are better or less well integrated into health systems.
At the most basic level, much information is still not systematically collected, digitised or pooled — from details of patients’ discussions with doctors in the US, to medical records in poorer countries. Wilfred Njagi, chief executive of Villgro Africa, a Kenya-based investor in healthcare, says medical information from clinics in his country remains “a black hole — and an immense opportunity”.
Narrowing this “digital divide” will require substantial investment, though. Hila Azadzoy, managing director of the Global Health Initiative at Ada Health, which is experimenting with artificial intelligence to diagnose illness in Tanzania, Uganda and South Africa, says: “People agree that we need digital solutions. With the pandemic, health systems, governments and the private sector realise it is truly a must, not a nice-to-have.”
But many argue there should be tougher reassurances on confidentiality, given periodic data leaks and inappropriate sharing of sensitive information. For instance, Privacy International, an advocacy group, has highlighted the sale to advertisers of information on individuals’ mental health collected on apps in France, Germany and the UK.
Greater confidence also requires enhanced safeguards and scrutiny of AI based on imperfect information. In the US, for example, health insurers’ poorly constructed algorithms to identify and provide greater support for at-risk patients were found to discriminate against African-Americans.
Darlington Akogo, the founder of an AI-driven radiology diagnosis company in Ghana, is part of an international “Focus Group on Artificial Intelligence for Health” that is seeking to help regulators analyse and verify machine learning. “My optimism has increased, but so has my scepticism,” he says. “It’s clear we need AI to support healthcare in Africa. These tools have a lot of potential, but they may not be quite ready. We need more assessments before we put them into wider use.”
More rigorous evidence and scrutiny are also necessary to demonstrate the clear clinical and cost-effectiveness of much health technology. The evidence base in most such fields, including mental health, remains limited.
Tobias Silberzahn, a partner at consultants McKinsey, argues that one of the problems with digital health initiatives during the pandemic has been the failure to provide sufficient useful insights directly relevant to individuals, such as personalised guidance on treatment, delivered via an app tailored to their own risk factors and stage of infection.
He suggests future health programmes need to be “fun, convenient and effective”, and that there is substantial potential from integrating medical data with broader “wellness” insights, such as sleep, nutrition, stress and movements tracked by wearable devices.
But Pooja Rao, co-founder of Qure.ai, an Indian AI health company, suggests such integration of broader data needs to stress the primacy of individuals as the owners and controllers of their personal health information, with the right to shift it between different health systems. “There is a lack of trust in private actors and government,” she says.
That points to the need for new institutions, such as data trusts or co-operatives that supervise any wider sharing of health records, as well as the advent of an approach known as “participatory digital health tools”, developed directly with and for users.
As Amandeep Gill, chief executive of the International Digital Health & AI Research Collaborative, says: “We have a data privacy and security paradigm. We need to shift the conversation to a data empowerment paradigm, in which the citizen has more agency on choice around their data.”