Tech’s power comes with great responsibility in fighting disease
When World Health Organization officials decided last year to convene a group to advise them on digital health, they were testing the limits of a longstanding taboo, just weeks before catastrophe struck.
The UN agency usually maintains its distance from commercial organisations — from food companies to the pharmaceutical industry — to avoid any potential conflicts of interest.
Yet, alongside representatives of governments and non-profit groups, the 20-strong digital health technical advisory group includes executives from two of China’s largest tech companies, Baidu and Tencent, and is chaired by Steve Davis, a former McKinsey consultant.
The identities of those around the table reflect how far technology has already intermingled with, and disrupted, traditional approaches to tackling health and disease. These trends have accelerated since the emergence of the coronavirus pandemic, with the digital world both boosting the virus — through an “infodemic” of fake medical news — and helping to mitigate its spread.
Ilona Kickbusch, co-chair of The Lancet and FT Commission “Governing health futures 2030: growing up in a digital world”, established in 2019 to explore best practice in the governance of digital health, says the pandemic upended the commission’s initial thinking, as well as its working practices. “We’ve got to look at these issues through the lens of Covid. All projections have gone out of the window,” she says.
Digital tools have proved of unprecedented value to public health officials, policymakers and patients struggling to respond to the pandemic. They have assisted with everything from the exchange of ideas to medical consultations at a time of physical travel restrictions and limits on face-to-face meetings. They have helped monitor, collect and crunch data, accelerating the ability to track the infection’s transmission in an effort to limit its spread.
But technology — especially social media — has also fuelled fake and questionable health information that has affected individuals’ attitudes to the disease and government handling of it. It has also led to an unprecedented sharing of personal data with few safeguards or controls, raising longer-term concerns over privacy and human rights.
Richard Sullivan, a professor at King’s College London, is concerned that continual public announcements on escalating transmission have distracted attention and resources from more complex, longer-term health conditions.
Heightened levels of concern about contracting coronavirus, coupled with calls for hospitals to defer other types of operations and consultations, are leading, for example, to a pent-up burden of other diseases that will flare up in the months ahead, says Professor Sullivan, who specialises in oncology.
He also warns about the increasingly blurred lines around access to data — including the apps to monitor public movement and infection reports. “In many countries, the pandemic has been used as a Trojan horse to radically increase state surveillance under the guise of public health, much of which, as history tells us, will not be rolled back post-pandemic,” he says. “That raises serious concerns among the intelligence community, while commercialisation [of app data] has seriously eroded public trust and confidence.”
In China and Russia, for instance, individuals considered to be infected are monitored by mobile phone and their movements restricted. In the UK, the government has authorised the police, for enforcement purposes, to have access to phone data on individuals told to self-isolate.
New drugs and vaccines — the focus of much effort and funding in tackling coronavirus — receive intense regulatory scrutiny and their developers must demonstrate substantial evidence of safety and efficacy before they can be deployed.
Yet far less scrutiny applies to digital tools, despite the vast and growing sums invested in them and the increasing convergence of their applications with human health. Robust studies demonstrating the value of artificial intelligence in diagnosis and screening remain limited, notably in lower- and middle-income countries.
Beyond technology used directly for treatment, the Ada Lovelace Institute, a UK think-tank, recently described a growing “datafication” of society through mobile phones, movement trackers and online searches. This raises questions about human rights, privacy and growing inequality linked to information on health.
According to Ms Kickbusch, there needs to be greater debate around data stewardship to establish ground rules on ownership rights of health information.
That may restrict the scope for commercialisation of patient data without approval or compensation, but may also reflect a need to share such data for the broader public good. These are some of the questions that The Lancet and FT Commission is exploring, and for which we are seeking the views of readers, including companies operating in this field.