September 25, 2023
Artificial intelligence and the house of God

Those who follow my editorials in Healio Rheumatology will have no problem guessing what I am about to share with you regarding my perspective on artificial intelligence.

As discussed in our roundtable with Jonathan H. Chen, MD, Michael LeTang, MS, RN-BC, and Jacob M. van Laar, MD, there is much to be hopeful about as we learn how to incorporate AI into our daily practice routines. However, I also agree that AI, like any other groundbreaking advance, could be used for nefarious purposes, though I will leave that to others to sort out and regulate.

"The use of new and evolving platforms of AI in the office and hospital will enhance our efficiency and, hopefully, our communications with patients," said Leonard H. Calabrese, DO.
Image: Adobe Stock

My concerns surrounding AI are already a matter of public record, as I reviewed Dr. Eric Topol’s brilliant and prescient book, Deep Medicine, in my April 2019 editorial “AI and the Rheumatologist: An Opportunity for Better Care.” As I re-read that editorial, I realize my views have not changed much even though I am now engaged with AI and using ChatGPT a lot these days for a variety of purposes.

I remain hopeful that in the right hands the use of new and evolving platforms of AI in the office and hospital will enhance our efficiency and, hopefully, our communications with patients. In addition, AI clearly has great potential to enhance our medical communications by crafting more user-friendly and informative notes, as well as responding to patients’ queries on a variety of non-emergency topics with detail and authority, both of which can save us time.

However, as I said back in 2019, I still fear that the administrative core in medicine may try to exploit this technology by having practitioners see more patients in less time. Such a move would chill, rather than improve, the quality of care and would foster greater burnout, so let us not forget it as a potential threat.

Leonard H. Calabrese

That said, my primary concerns regarding AI surround its potential impact on medical humanism, which could be negatively affected if there is an overreliance on the technology in the dyad of practitioner and patient. Although much has been discussed regarding the potential for AI to be empathic, I do not fear it as a competitive force against empathic, in-person practitioner-patient communication.

In fact, I think that the entry of AI into medicine represents a wake-up call for all in health care to double down on our efforts to keep the pilot light of humanity flickering in our daily care, in a way that non-human technology still cannot replace. I abide by the estimate that 80% of empathic communication is nonverbal and demands the intimacy of human-to-human contact. Noting the strain in a person’s voice, their subtly depressed affect or anger, as well as their eyes, moist with tears not shed, all require face-to-face as opposed to face-to-computer presence.

Empathic communication comes naturally to some of us, but for many it is a skill to be honed over a lifetime of practice. I still find it ironic that we choose not to focus on this in our educational efforts to improve care.

In my early days of teaching in our medical school, I often referred to The House of God — a novel written by Stephen Bergman, MD, DPhil, under the pen name Samuel Shem — which was published more than 40 years ago but is nevertheless timely, I believe, to reflect upon when discussing AI. Unfortunately, in today’s era of identity politics, the book is rarely if ever mentioned. Even if read with an open mind, I would guess that many would still judge it too politically incorrect for the times.

My staunch personal view of the book, having read it twice, is that it is a story that is not only farcical, funny and sad all at the same time, but also one that can be viewed as a small flickering candle in the darkness of medicine’s current woes.

As I sat down to write this editorial, I wondered what the novel’s “Fat Man” would have thought about AI. For those of you who have not read the book, the Fat Man was the chief medical resident and what some would call a real “customer” of the first degree. He was tough as nails and cynical, yet also smart and kind. I asked ChatGPT what the Fat Man would likely say about AI and was provided a narrative which, while somewhat hackneyed, was also cute and, in my opinion, consistent with his ethos. You can see its response by following the first reference below.

I will end by quoting from The House of God and the man himself regarding his — and our — role in the care and caring of patients. The Fat Man said, “I make them feel that they’re still part of life, part of some grand nutty scheme, instead of alone with their diseases. With me, they still feel part of the human race.” And as narrator Roy Basch realized, “What these patients wanted was what anyone wanted — the hand in their hand, the sense that their doctor could care.”

I believe in this narrative far more now than when I first read the book as a resident, when I lacked the life experiences to process it the way I do now. Yes, bring AI into medical practice, but remember what the Fat Man said. That’s my take; what’s yours? Please share your thoughts with me at calabrl@ccf.org or at rheumatology@healio.com.