
December 18, 2023

Cancer researchers raise concerns about loss of ‘human touch’ with patient-facing AI

As artificial intelligence technologies play an increasingly important role in everyday human interactions, patients will likely deal with them to schedule appointments, monitor their health, learn about disease and connect to resources.

As these nonhuman communications factor more prominently into health care and, specifically, oncology care, an article published in JCO Oncology Practice addressed the need to maintain patient autonomy and human dignity when using artificial intelligence (AI) technology.

A potential ethical implication of patient-facing AI is the loss of the human touch and empathy in patient interactions. Source: Adobe Stock.

“We were inspired to write this paper because of the rapid advancements in AI technology and its increasing integration into health care, particularly in patient-facing applications, which we felt were significantly underdiscussed in the medical literature,” lead author Amar H. Kelkar, MD, a stem cell transplantation physician at Dana-Farber Cancer Institute, told Healio. “Given our dual roles as clinical researchers and bioethicists, we wanted to define what we viewed as current and future representations of ‘patient-facing AI’ while recognizing both the transformative potential of AI in improving patient care and highlighting specific ethical concerns associated with these technologies.”


Kelkar spoke with Healio about the ethical challenges posed by AI technology in telehealth, remote patient monitoring and health coaching, and discussed principles to guide the development and implementation of AI technology in oncology care.

Healio: What motivated you to write this paper?

Kelkar: We hoped to raise awareness among health care professionals and the public about the potential risks [of AI] to patient privacy, autonomy and dignity, and provide some preliminary guidance with a call to action for the deployment of ethical standards in the development of patient-facing AI.

Healio: How might AI-based telehealth and remote monitoring pose risks to confidentiality?

Kelkar: AI-based telehealth and remote monitoring systems can pose risks to confidentiality when they handle sensitive patient data. These systems may transmit and store patient information, including medical records and real-time health data. Inadequate security measures or breaches of these systems could lead to unauthorized access and exposure of confidential patient information, potentially resulting in privacy violations and identity theft. It's crucial for health care providers and technology developers to implement robust encryption and security protocols to mitigate risks before these technologies become more widely disseminated.

Healio: What are some of the ethical implications of AI-based health coaching programs? What will potentially be missing in these coaching interactions?

Kelkar: AI-based health coaching programs offer personalized support and guidance to patients, but they raise ethical concerns. One potential ethical implication is the loss of the human touch and empathy in patient interactions. Although AI can provide valuable information and reminders, it may lack the deep emotional understanding and connection that human coaches can offer. Additionally, there is a risk of overreliance on AI advice, potentially undermining patient autonomy if individuals follow AI recommendations without question. Striking a balance between AI assistance and human involvement is essential to ensure ethical health coaching.

Healio: How might the lack of transparency and explainability of AI models disrupt the bioethical principle of justice?

Kelkar: When AI models make decisions that impact health care — such as treatment recommendations — and these decisions are not transparent or explainable, they can unintentionally systematize bias while making it challenging to identify and rectify these issues. This can result in unequal access to health care resources and treatments, disadvantaging certain populations. To uphold justice, AI developers and health care institutions must prioritize transparency, accountability and fairness in their AI systems. Transparency, particularly with training data sets, is one important remedy.

Healio: What are the potential threats to human dignity and patient autonomy with patient-facing AI?

Kelkar: Patient-facing AI has the potential to threaten human dignity by depersonalizing health care interactions. Overreliance on AI may lead to patients feeling like they are reduced to data points or algorithms, rather than unique individuals with specific needs and emotions. Patient autonomy can also be compromised if individuals feel pressured to follow AI recommendations without human interaction or oversight, potentially undermining their ability to make informed decisions about their health. AI also has the potential to project an illusion of empathy, which can be harmful not only because humans deserve genuinely empathetic care, but also because patients may become less aware of, and less cautious about, potential harms. Maintaining human dignity and patient autonomy requires a careful balance between AI assistance and preserving the human element in health care.

Healio: What can oncologists do to educate their patients about interacting with AI?

Kelkar: Practicing oncologists can play a crucial role in educating their patients about patient-facing AI by staying informed about the latest AI applications in health care. They can initiate conversations with their patients about the benefits and limitations of AI, emphasizing that AI should complement, not replace, the patient-doctor relationship. Oncologists can also encourage patients to ask questions about AI recommendations, involve them in shared decision-making, and advocate for governmental oversight and guideline development by medical organizations. By fostering open communication and trust, oncologists can ensure that patients feel empowered and informed in their health care journey involving AI technologies.

Healio: Is there anything else you’d like to mention?

Kelkar: In addition to the ethical considerations highlighted in our paper, it's essential to emphasize the ongoing need for research and development of ethical guidelines specific to patient-facing AI. As technology evolves, so, too, should the ethical frameworks that govern its use in health care. Furthermore, collaboration between health care professionals, technologists, policymakers and patient advocacy groups is crucial to ensure that patient-facing AI technologies are developed and implemented in ways that prioritize patient well-being, equity and ethical integrity.

Reference:

Kelkar A, et al. JCO Oncol Pract. 2023;doi:10.1200/OP.23.00412.

For more information:

Amar H. Kelkar, MD, can be reached at Dana-Farber Cancer Institute, 450 Brookline Ave., Boston, MA 02215; email: amarh_kelkar@dfci.harvard.edu.