Covid crisis offers lessons in digital health and data responsibility
When the Financial Times forged a partnership with The Lancet in 2019 to convene a commission of global experts on the governance of digital health, the theme was already topical. Then Covid-19 struck.
On a practical level, the pandemic made it more difficult for the commissioners and their secretariat to meet and work, let alone for FT staff to report and co-ordinate a series of accompanying articles and events to encourage wider debate.
Yet coronavirus accelerated the use of digital health technology and heightened the relevance of the conclusions, which were delivered on schedule last autumn despite the disruptions and pressures.
Technology’s effects on health have been profound, bringing many advantages. At a basic level, simple digital tools such as text messaging can provide essential health information in remote areas, while older phones can support telemedicine where inadequate facilities or physical danger impede access, as is the case in countries such as Afghanistan.
Electronic health records can integrate consultations, diagnoses and treatment records to better co-ordinate individual patient care between physicians.
At its most sophisticated, aggregated data offer the potential to connect patients everywhere with a broad range of insights and experts, efficiently and affordably. Artificial intelligence is aiding the scrutiny of vast quantities of information to support “intelligent drug design”, interpretation of radiology scans, diagnoses of symptoms and improved logistics and supplies of medical materials.
The pandemic intensified the applications of digital tools, such as the rapid collection and analysis of infection data, ways to track human movement and the management and analysis of patient records. It helped modelling, patient recruitment for clinical trials, scrutiny of existing and innovative treatments and the unprecedented speed of development of new vaccines.
But many of these applications have been imposed from the top down and unequally distributed, amid suspicion over how personal data are used. If safeguards imposed on companies are seen as lax, many governments — authoritarian and democratic alike — have proved still less cautious and more maverick with personal data, with arguably greater consequences.
That has sparked worries that insights collected in the name of public health could be used to restrict movement or shared with law enforcement and other agencies. Such practices threaten civil liberties and risk eroding faith in the state, along with willingness to comply with measures to curb the pandemic. Examples range from the UK’s contact-tracing app to similar technologies in India, Singapore, Russia and China.
Covid-19 has also turned a spotlight on broader concerns about inequality in health outcomes. Technology risks widening these divisions, whether through the “digital divide” in access to affordable internet, or via opaque algorithms trained on partial data that introduce racial and other biases into their analyses.
There are concerns, too, about new forms of “data colonisation”, which perpetuate an imbalance between the providers and beneficiaries of insights into health at the expense of the poor.
Another group struggling to make its voice heard is the younger generation: many health policies are shaped by older people, who typically wield greater power and resources, and target the diseases of old age.
Yet it is essential that such “digital natives” are integrated into discussions and the design of future health technology. Partly, that reflects principles of equity and good governance. Practically, it also helps ensure that prevention and wellbeing complement disease treatment, delivering the broadest benefit.
Alongside this call for intergenerational discussion, the FT-Lancet commission’s multiple recommendations include measures to improve investment in digital infrastructure, training to ensure regulators and others are equipped to properly oversee technology for health, and greater “data solidarity” through experimenting with new institutions such as trusts and co-operatives.
The aim is to ensure individuals’ medical information is used responsibly, and to balance legitimate caution over private gain with a desire for data to be pooled and shared for the public good — with adequate safeguards. Data should also be portable, with interoperability requirements so they can be moved easily between different systems.
Further insights and reflections from readers are welcome on the commission’s report, along with suggestions on next steps and emerging trends, as well as ways to test and develop the models it has outlined.
Andrew Jack is the FT’s global education editor