Issue: July 25, 2020
July 21, 2020

Can artificial intelligence help reduce disparities in medical care?

Read the Cover Story, "Pandemic spurs paradigm shift in artificial intelligence."

POINT

Accurate, reliable tools needed to eliminate bias

Any question about the utility of a tool is best answered by giving it a go.

Try the tool, compare it with others, change the design to improve it. One might indeed be able to drive a nail with a rock, pistol or hoe, but it should not take long to figure out that a hammer, a blob of metal on a stick, is better suited to driving nails.

Kenneth W. Goodman, PhD, FACMI, FACE

Computers and software are tools. Like other tools, they have evolved, and that evolution has been shaped by society, economics, politics and the law. Sometimes even ethics has played a role. We repeatedly see something interesting: Humans have a sad and destructive inclination to play favorites with no good reason. We have learned that our widespread failures of impartiality have corrupted society, economics, politics and the law. In the opposite direction, we now know that to improve society, economics, politics and the law, we need to make them impartial. Democracy is an attempt to do such a thing.

We have also learned — alas, it has taken us several thousand years — that striving for fairness, justice and impartiality is quite difficult, in part because unfairness, injustice and partiality are often cooked into the very systems we try to change. Machine learning algorithms have been found in many instances to incorporate bias against racial and ethnic minorities, women and others. The debate over AI bias anticipated our swift, recent and widespread recognition that health systems are also biased against minorities and that our society itself is partial and unjust.

Like a badly designed or manufactured hammer that makes it easy to bend a nail, biased AI training sets and algorithms make it easy to make mistakes. There is nothing about these tools that cannot be changed and improved. For our purposes, we not only need to make our AI tools more accurate and reliable; we must, and can, ensure that these measures of quality apply to all people. Ophthalmologists have led other specialties with exciting discoveries about AI’s diagnostic utility — and with no less troublesome discoveries of its shortcomings. There is therefore an opportunity for vision specialists to continue to lead, both by improving AI systems, databases and software and by ensuring that the elimination of bias is itself a mark of improvement.

Physicians have long known they must understand how their tools work in order to use them effectively. If one can learn optics, one can learn enough software engineering to see if computer programs are transparent, clearly annotated and documented, and fit for purpose. Absent that, one can demand that AI system vendors explain their products and warrant that they are safe, effective and therefore unbiased.


The addition of a claw to a hammer improves both the tool and our metaphor. Not only does the hammer now drive a nail; it also corrects a bent or crooked one by pulling it out.

Kenneth W. Goodman, PhD, FACMI, FACE, is founder and director, University of Miami Miller School of Medicine’s Institute for Bioethics and Health Policy.

COUNTER

Biases difficult to detect, hard to mitigate

Anthony Solomonides, PhD, FAMIA

Technology has changed both the content and the methods of work.

One can make the broader claim that technology has changed us: the ways we live, the ways we think, the ways we interact. To be clear, much of this is to the good, but it falls to me here to flip the coin and point out the deficits and, with the rise of artificial intelligence, to explore what we risk in the future: what we value now but may lose, and what we deprecate in the present yet, in some obscure way, risk carrying forward into our technological future.

In human terms, technology has the potential to deskill. Do you know someone who cannot do simple arithmetic without recourse to a calculator? For ophthalmologists and ophthalmic technicians, the capacity of image processing software to identify features invisible to an expert human eye raises the question: How welcome should this be? The answer must appear obvious: Why, as an ophthalmologist or optometrist, wouldn’t I welcome a device that helps me see more accurately? In a well-known experiment, low-intensity random noise was added to images; to the human eye, the images and their gross features looked the same, but the noise led to an erroneous machine interpretation. When it comes to responsibility, it is the physician who carries the burden. Still comfortable?
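The experiment described above is easy to reproduce in miniature. The sketch below is illustrative only: it substitutes a toy linear classifier for a real retinal imaging model, and every name, weight and noise level in it is an assumption rather than a detail from the study. It shows how a per-pixel perturbation far too small for a human to notice can nonetheless flip a model's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a fundus image: 32x32 grayscale pixels in [0, 1].
image = rng.uniform(0.4, 0.6, size=32 * 32)

# Toy stand-in for a trained classifier: a linear model with fixed weights.
weights = rng.normal(0.0, 0.05, size=image.size)

def p_disease(x):
    """Model's probability that the image shows disease."""
    return 1.0 / (1.0 + np.exp(-(weights @ x)))

# Fast-gradient-sign-style attack: nudge every pixel by the same tiny amount
# in whichever direction pushes this linear model across its decision boundary.
score = weights @ image
epsilon = (abs(score) + 0.1) / np.abs(weights).sum()  # smallest flipping step
noisy = np.clip(image - np.sign(score) * epsilon * np.sign(weights), 0.0, 1.0)

print(f"per-pixel change:  {epsilon:.4f} on a 0-1 scale")  # visually negligible
print(f"prediction before: {p_disease(image):.3f}")
print(f"prediction after:  {p_disease(noisy):.3f}")        # crosses 0.5
```

With a linear model the smallest decision-flipping step can be computed in closed form, as above; against the deep networks used in real imaging studies, published attacks find comparably small perturbations by following the model's gradient instead.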

Modern AI is typically based on “machine learning.” Software is “trained” on observational data sets and then tested on other independent data sets. As shown in several studies, if there is bias in the data, that bias may enter into and be codified by an algorithm. For example, if, as reported by WHO, women are less frequently referred for cataract surgery, the algorithm may interpret this as women having less need of cataract surgery or having to meet a higher threshold before being referred. In terms of race, bias has been shown to be present in clinical guidelines, eg, in the use of so-called “race corrections.” Whether an AI system is built based on expert opinion or through machine learning from observational examples, the danger of incorporation of this bias is clear: It is both difficult to detect and hard to mitigate.
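To make that mechanism concrete, here is a minimal sketch using entirely simulated data: we generate cataract referral records in which women of identical clinical severity were historically referred less often, then train a model on "who was referred" as if it meant "who needed surgery." All variable names and rates are illustrative assumptions, not figures from the WHO report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

# True clinical need is independent of sex in this simulation.
severity = rng.uniform(0.0, 1.0, n)
is_female = rng.integers(0, 2, n)

# Historical referral decisions: driven by severity, minus a penalty applied
# to women -- the simulated disparity.
hist_logit = 4.0 * (severity - 0.5) - 1.0 * is_female
referred = rng.uniform(0.0, 1.0, n) < 1.0 / (1.0 + np.exp(-hist_logit))

# Train on "who was referred" as if it were ground truth for "who needs surgery."
X = np.column_stack([severity, is_female])
model = LogisticRegression().fit(X, referred)

# Two patients with identical severity: the model reproduces the disparity.
print("P(refer | male, severity=0.6):  ",
      round(model.predict_proba([[0.6, 0]])[0, 1], 3))
print("P(refer | female, severity=0.6):",
      round(model.predict_proba([[0.6, 1]])[0, 1], 3))
```

Note that simply deleting the sex column would not cure the problem if correlated proxies for sex remain among the features, which is part of why such bias is both difficult to detect and hard to mitigate.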


My concluding concern is about human autonomy. From the earliest work in this area, those who sought to create artificial intelligence have rushed to warn us that it would be futile for humans to stand in the way of machines once they become more intelligent than we are. Let us consider autonomy and ask the question: What does it mean to be intelligent? Is it merely a matter of extraordinary skill in some area, say, advanced mathematics or ocular surgery, or does it also encompass the ability to understand the plight of a stranger or to love another person? As healers, perhaps physicians would find it easier than most to understand and respond to this question.

Anthony Solomonides, PhD, FAMIA, is program director, Outcomes Research and Biomedical Informatics, NorthShore University HealthSystem, Evanston, Illinois.