Ethical and societal aspects of the digital disruption of health

Author: Dr. Tamar Sharon
Published on: 15-11-2020
Category: Columns

In the past decade we’ve witnessed a growing digitalization of health and medicine, ranging from wearables and apps for health monitoring to new forms of telemedicine and AI for diagnosis and treatment recommendation. These developments may contribute to improving individual and population health, but their disruptive nature also puts important values under strain, such as privacy, autonomy, solidarity, fairness and the common good. To unlock the potential of digital health, we should be mindful of this transformative impact and seek ways to safeguard these values as well.

Digital health: towards more personalized, preventive, participative and predictive healthcare

The phenomenon of digital health is not just about the use of digital technology in health. Two other aspects are important to mention. The first is datafication. Technologies like apps and wearables, social media and loyalty cards have enabled all kinds of activities and behaviors to be translated into quantifiable data – how much physical activity people get, the “mood” of populations according to Twitter data, people’s consumption patterns, and so on. Once these activities can be measured and compared with other types of data, they also become relevant for health and medicine. Today, virtually any type of data can be considered health data, or can be used to infer health-related information when combined with other datasets. The second is that many of the traditional practices of healthcare and medicine can now be carried out remotely. Health monitoring is being done to some extent by individuals themselves, using apps and wearables, and data that are relevant for medical research are being collected outside of the clinic and the laboratory with mobile devices. The Apple ResearchKit software, which allows researchers to collect data for clinical studies via people’s iPhones, is a good example of this. Among the benefits are continuous data collection and the possibility of recruiting large numbers of participants.

There is much potential in these developments for making healthcare more personalized, preventive, participative and predictive. With the increased collection of health-related data, some proponents envisage the creation of personal health maps. Deviations from what each person’s data look like in good health could indicate that a health problem is on its way, so that an intervention can be made before a patient experiences any symptoms. And virtual medical assistants may help people stay healthy by feeding them personalized advice based on their data. This heightened involvement and participation of citizens may increase their sense of autonomy and empowerment. This is often the promise that comes with mobile health apps. Other good examples are the PGOs (Persoonlijke Gezondheidsomgevingen), the personal health environments currently being set up in the Netherlands, where citizens will be able to store and manage all their health data in one place, giving them control over who can access these data.

Understanding the challenges of digital health

Despite this potential, digital health also raises a number of ethical and societal challenges. Take the promise of autonomy and empowerment. The ideal of empowerment promoted in digital health implies a certain understanding of health – as an individual responsibility, and often as a lifestyle choice. This is problematic, as we know that health and disease are often determined by factors beyond a person’s control, such as genetics (predispositions to certain conditions) and socio-economic status (junk food is cheaper than organic vegetables). Framing health as a matter of individual choice – a choice that everyone is equally free to make and is hence individually responsible for – is thus inaccurate. It can also have detrimental effects, stigmatizing and blaming those who do not make “good” choices, and eroding our sense of solidarity in health. We need to ask whether everyone can be empowered to take more responsibility for their health, and what the effects of this would be. For example, are citizens really ready for the responsibilities that come with managing all their health data in PGOs? What new pressures – from family members, or private actors who may want to access PGOs – will be delegated to individuals, pressures that were previously mitigated by patient confidentiality rules? Empowerment and responsibilization can be wonderful ideals, but in practice they may shift responsibilities from states and healthcare systems onto the shoulders of individuals, and this may actually be dis-empowering.

Another important concern raised by digital health, especially in relation to continuous, ubiquitous health monitoring, is privacy. We can understand privacy as a kind of shield or buffer that shelters us from the gaze of others, and that we need in order to develop as full-fledged individuals with our own identity. The legal scholar Julie Cohen calls privacy the “breathing room” needed for self-development. But this breathing room is under threat today from the constant data collection, profiling and nudging we are undergoing in all areas, including our health. For example, health-related data entered into a consumer app are often passed on to third parties, including advertisers and insurers. Health-related data that people post on Twitter are increasingly being used for medical research, for example in studies on depression. In these situations, even if people legally consent to their data being shared, by ticking an “I agree” box, and even if the data are publicly available, as on Twitter, this does not mean it is alright to repurpose these data – even if it is for medical research.

In order to be of any use, the large amounts of data generated in digital health need to be made manageable, analyzable and actionable. Artificial intelligence (AI) is very good at this, and in the past few years we have seen promising applications of AI in the medical field. But we have also seen that AI systems are vulnerable to bias, and that this can lead to discriminatory and unfair outcomes. Bias can creep into AI in a number of ways. First, the datasets used to train an AI may be unrepresentative of reality. For example, it has been found that the algorithms behind some melanoma-detection apps were trained on datasets containing far more light-skinned than dark-skinned samples. Such an app will inevitably be better at recognizing melanoma on light skin. Second, biases and prejudices exist in the real world, and an AI that learns from real-world data will incorporate and reproduce them. Last year, for example, it was revealed that an algorithm used for allocating care to patients in hospitals across the US had been systematically discriminating against black people. Not because its designers were racist, but because it used data that already reflected existing biases: in the US, black patients who are just as sick as white patients cost the health system less than their white counterparts. In such cases bias in AI can exacerbate existing inequalities in society. The fact that the recommendations and decisions made by AI are often opaque and difficult to explain complicates these issues, just as it raises additional ones concerning liability and responsibility when something goes wrong.
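To make the first mechanism concrete, here is a minimal sketch in Python. It uses entirely synthetic data; the single “contrast” feature, the group sizes and all numerical values are invented for illustration and do not describe any real melanoma app. It shows only the general pattern: a model tuned on an unrepresentative training set performs worse on the under-represented group.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, benign_mean, malignant_mean):
    # n benign and n malignant lesions, each reduced to one synthetic
    # "contrast" feature (purely illustrative).
    x = np.concatenate([rng.normal(benign_mean, 1.0, n),
                        rng.normal(malignant_mean, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = malignant
    return x, y

# Assumption: the feature separates the classes at different levels
# on light vs. dark skin.
x_light, y_light = make_group(1000, benign_mean=0.0, malignant_mean=2.0)
x_dark, y_dark = make_group(1000, benign_mean=1.5, malignant_mean=3.5)

# The training set mirrors the skewed datasets described above:
# overwhelmingly light-skin samples, only a handful of dark-skin ones.
x_train = np.concatenate([x_light, x_dark[::40]])
y_train = np.concatenate([y_light, y_dark[::40]])

# "Train" the simplest possible model: pick the decision threshold
# that maximizes accuracy on the (unbalanced) training set.
candidates = np.linspace(x_train.min(), x_train.max(), 200)
accuracies = [((x_train > t) == y_train).mean() for t in candidates]
threshold = candidates[int(np.argmax(accuracies))]

# Evaluate the same model separately on each group.
for name, x, y in [("light skin", x_light, y_light),
                   ("dark skin", x_dark, y_dark)]:
    print(f"{name}: accuracy = {((x > threshold) == y).mean():.2f}")
# Typical output: roughly 0.84 on the well-represented group and
# roughly 0.65 on the under-represented one: same model, unequal care.
```

The point is not the toy model but the pattern it exposes: an accuracy figure averaged over a whole dataset can hide exactly the group-level disparities that matter.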

The “Googlization of health”

One development that I am particularly interested in is what I call the “Googlization of health”. In recent years, the major tech corporations – Apple, Alphabet, Amazon and others – have begun taking an interest in health and medicine. Their expertise in data collection, management and analysis has made them attractive partners for data-driven personalized medicine. Examples include Apple ResearchKit studies, or some of the collaborations of Verily (Alphabet’s life-science branch) with hospitals and research institutes, like Project Baseline in the US or the ‘Parkinson op Maat’ study taking place at Radboud UMC.

These new types of collaborations may be very beneficial for medical research and healthcare, but they also carry risks. These concern not only data protection and privacy, which in many cases are handled properly in these collaborations, but also broader issues related to the growing power of these companies in all sectors of society. For example, will these corporations become gatekeepers of the valuable health datasets they are helping to compile? What new biases may be introduced into research that relies on technologies, like iPhones, that only certain socio-economic segments of the population use? What role will these companies, already so powerful in other sectors of society, start playing in setting research agendas and providing healthcare services? These questions have to do with health and medical research as a common good, and they must be taken into consideration as researchers and governments pursue these collaborations.

The future of digital health

We tend to think of technologies as neutral instruments that we use to fulfill predefined functions. But technologies always have a greater transformative potential. As they are introduced into a domain like health and medicine, they change relationships between doctors and patients, they redistribute responsibilities and they challenge some of our most fundamental values. For this reason, they should not be evaluated solely in terms of their accuracy, efficiency or efficacy. Rather, we should be alert to how they can undermine values like autonomy, solidarity, privacy, fairness and the common good, and develop new frameworks for clinical practice, technology design and regulation that ensure these values do not get traded off – even for better health.

Dr. Tamar Sharon

Dr. Tamar Sharon is associate professor in philosophy of technology at Radboud University Nijmegen, where she co-directs the interdisciplinary iHub for Security, Privacy and Data Governance. She studied history and political theory at Paris Jussieu and Tel Aviv University, and holds a PhD in interdisciplinary studies from Bar Ilan University in Israel (2011). Her research explores the impacts of new and emerging technologies in the health domain, with past research projects on human enhancement, “healthy citizenship” and self-quantification. Her current project studies the “Googlization of health” – the recent move of large tech corporations into the areas of health and medical research – and what this means for the common good. Tamar is a member of the WHO European Advisory Committee on Health Research and the Young Academy of Europe.