October 19, 2018

BLOG: Next patient: Green, yellow or red?

In the cover story of this issue of Ocular Surgery News, we explore the way hand-held devices are changing health care around the world, giving us the ability to screen millions of patients with high-tech imaging and disease monitoring at very little per-patient cost. In the developing world, and even in the U.S., remote monitoring promises to shorten the distance between doctors and patients and give us unprecedented population data on disease progression.

But in the developed world, the adoption of remote monitoring has fallen well behind what is technically possible. This is partly because indemnity payment systems (including Medicare) have not established a process to reimburse doctors for the effort of reviewing remotely collected data. Moreover, even if doctors were paid well for this, how many have extra time to review and report back to the patient on the volumes of data these remote monitoring devices can produce?

One of my partners, Sev Teymoorian, a glaucoma specialist, says he likes to evaluate every patient he sees with a single question in mind: Is the patient better, worse or about the same? “Better” means we can celebrate, whether we’ve removed a cataract, controlled leaking choroidal neovascularization or treated conjunctivitis. “Worse” clearly requires the physician to expend some effort to understand the problem’s root cause and to formulate and communicate a rational plan. “The same” might be an opportunity to engage in some preventive care education or just have a social visit with the patient.

In an ideal world, we’d know, walking into every patient’s exam room, whether he or she is better, worse or about the same. (One industry CEO friend calls this status green, yellow or red.) In an ideal world, this information gathering would be almost effortless, whether the data came from office tests or remote monitoring. We’d know just by glancing at a printed page or tree-friendly tablet screen. We’d see all the patient’s disease-specific data summarized in a neat row of colored indicators, as we’ve done with laboratory tests for generations.

But this is where most current technology is sorely lacking. Our information systems give us raw data, not green, yellow or red lights. The problem is that most electronic health record systems act like hollow containers for patient data. Most have been customized adequately to include specialty fields for eye-specific data such as visual acuity, refraction, IOP, etc., but they merely store that data rather than processing it in any useful way.

Let’s face it: much of the data analysis that gets us to green, yellow or red can fairly easily be automated, producing a summary comparison against historical data. As an example, when we look at a patient’s best corrected visual acuity, we compare it with past levels. We get concerned when we see a downward trend or a change from a previous baseline.

Manually browsing these data is an inefficient use of physician time. Most of us did not go to medical school to spend our days switching between hard-to-find, slow-to-load, click-intensive screens across multiple programs to gather a patient’s laboratory findings, imaging studies and exam data. With a little effort, all of this could arrive in the EHR system automatically as structured data to be analyzed. For BCVA, why couldn’t the EHR system report a yellow flag (perhaps with a downward or upward arrow) whenever BCVA falls more than one line below (or rises more than one line above) historical levels? The same could apply when refraction shows a significant change in sphere or cylinder of more than, say, 0.5 D.
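To make this concrete, here is a minimal sketch, in Python, of what such rules could look like. Everything in it is an illustrative assumption on my part: the function names, the logMAR-based definition of a “line” of acuity and the thresholds are not drawn from any vendor’s actual system.

```python
# Illustrative sketch only: function names, the logMAR definition of a
# "line" of acuity, and the thresholds are assumptions, not any EHR
# vendor's actual interface.

def flag_bcva(current_logmar: float, baseline_logmar: float) -> str:
    """Flag BCVA more than one line (0.1 logMAR) off baseline.

    Higher logMAR means worse acuity, so a positive change is a decline.
    """
    change = current_logmar - baseline_logmar
    if change > 0.1:       # more than one line worse
        return "yellow (down arrow)"
    if change < -0.1:      # more than one line better
        return "yellow (up arrow)"
    return "green"


def flag_refraction(curr_sphere: float, base_sphere: float,
                    curr_cyl: float, base_cyl: float,
                    threshold_d: float = 0.5) -> str:
    """Flag a change in sphere or cylinder greater than threshold_d diopters."""
    if (abs(curr_sphere - base_sphere) > threshold_d
            or abs(curr_cyl - base_cyl) > threshold_d):
        return "yellow"
    return "green"


# Acuity dropped from 20/20 (0.0 logMAR) to about 20/32 (0.2 logMAR): flag it.
print(flag_bcva(0.2, 0.0))                          # yellow (down arrow)
print(flag_refraction(-1.25, -0.50, -0.75, -0.50))  # yellow (0.75 D sphere shift)
```

Working in logMAR makes “one line” a fixed 0.1 step, which is why the comparison above reduces to a simple subtraction.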

How about using a flag for any IOP measurement that differs from the patient’s mean value by more than 1 standard deviation, rises above 21 mm Hg or exceeds that patient’s own “target IOP range”?
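A sketch of that IOP rule might look like the following; the function signature and the target ceiling in the example are hypothetical, and only the 21 mm Hg cutoff comes from the text above.

```python
import statistics

# Illustrative sketch: the 21 mm Hg ceiling is from the rule described
# above; the signature and target_max value are assumptions.

def flag_iop(current_iop: float, history: list[float], target_max: float) -> str:
    """Flag IOP above 21 mm Hg or the patient's target ceiling (red),
    or more than 1 SD away from the patient's own mean (yellow)."""
    if current_iop > 21 or current_iop > target_max:
        return "red"
    if len(history) > 1:
        mean = statistics.mean(history)
        sd = statistics.stdev(history)
        if sd and abs(current_iop - mean) > sd:
            return "yellow"
    return "green"


# History hovering around 15-17 mm Hg with a target ceiling of 18 mm Hg:
print(flag_iop(20.0, [15, 16, 17, 16, 15], target_max=18))  # red
print(flag_iop(17.9, [15, 16, 17, 16, 15], target_max=18))  # yellow (>1 SD above mean)
```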

What about visual fields and OCTs? These sophisticated machines record fixation losses, mean deviation, pattern standard deviation, nerve fiber layer thickness and more. Most EHR systems should be able to read these numbers, analyze the change from baseline (or the deviation from normal for first-time patients) and report them with an appropriate alert level.
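Here, too, a minimal sketch is possible. The thresholds below are invented purely for illustration and are not clinical recommendations.

```python
# Hypothetical thresholds for illustration only; these are not clinical
# recommendations, just one shape the baseline-change logic could take.

def flag_field_and_oct(md_db: float, md_baseline_db: float,
                       rnfl_um: float, rnfl_baseline_um: float) -> str:
    """Flag worsening visual field mean deviation (dB) or RNFL thinning (µm)."""
    md_change = md_db - md_baseline_db          # more negative = worse field
    rnfl_change = rnfl_um - rnfl_baseline_um    # negative = thinning
    if md_change < -2.0 or rnfl_change < -10.0:
        return "red"
    if md_change < -1.0 or rnfl_change < -5.0:
        return "yellow"
    return "green"


# Mean deviation fell from -2.0 dB to -4.5 dB while the RNFL thinned 8 µm:
print(flag_field_and_oct(-4.5, -2.0, 82.0, 90.0))  # red
```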

Currently, there are several promising software systems that automate patient care to some degree. Eyenuk created software that analyzes fundus photographs of patients with diabetes to classify their risk level as low, medium or high. Veracity is a cloud-based solution for aggregating data to plan cataract surgery; it raises alerts when exam and imaging data suggest steering away from (or toward) a particular lens choice. MDbackline is a system I created to automate web-based conversations with patients, allowing doctors to determine a patient’s level of cataract visual disability and pre-educate those patients about surgery (including premium lens options) before their first visit to the office. After surgery, it helps identify patients with problems early, so doctors and staff can intervene to ensure a desired outcome.

To make these systems even more effective, all of these green/yellow/red flags should be presented in a concise format on one screen or page in the EHR system. During an exam visit, these data could even guide an AI-driven EHR system to pre-populate an impression and plan, based on standard-of-care protocols that take into account the whole of a patient’s medical history.
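As a final sketch, rolling the individual flags into one summary line could be as simple as taking the worst flag; the format below is purely hypothetical.

```python
# Hypothetical dashboard summary: overall status is simply the worst
# individual flag (red > yellow > green). All inputs are illustrative.

def summarize(flags: dict[str, str]) -> str:
    rank = {"green": 0, "yellow": 1, "red": 2}
    worst = max(flags.values(), key=lambda f: rank[f.split()[0]])
    detail = "   ".join(f"{name}: {value}" for name, value in flags.items())
    return f"[{worst.split()[0].upper()}]  {detail}"


print(summarize({"BCVA": "yellow (down arrow)", "IOP": "green", "VF/OCT": "green"}))
# [YELLOW]  BCVA: yellow (down arrow)   IOP: green   VF/OCT: green
```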

Am I proposing that we relegate much of a doctor’s work to a machine? Sure, but no more than we have relegated the flying of airplanes to autopilots that now navigate challenging weather conditions more safely than human pilots. Doctors surely won’t be replaced by computers any time soon. Human judgment overseeing the machines, a warm smile and a live, free-form conversation are what many patients look forward to. If we do this right, we’ll be relying on green/yellow/red diagnostics without patients even realizing that AI is playing a role. In fact, the time we save collecting and viewing data could be spent on human interaction, doing what doctors do best: looking our patients in the eye and communicating an informed plan based on a comprehensive understanding of their well-being.

Disclosure: Hovanesian reports he is a consultant to Veracity and founder of MDbackline.