July 21, 2020

Pandemic spurs paradigm shift in artificial intelligence

The pandemic has accelerated digitalization in all fields, including health care.

Data, artificial intelligence, digital health systems and connectivity have been aiding the fight against COVID-19 in multiple ways, uncovering new possibilities and showing a clear road map of how AI can be integrated into the health care ecosystem to enhance safety, efficiency and effectiveness, and ultimately improve quality of patient care.

“This pandemic has put health care under stress but has also facilitated the analysis of where we are and what we are doing. It has been a powerful and beautiful wake-up call to see that the management not just of the disease, but of the patient, can be improved,” Ursula Schmidt-Erfurth, MD, PhD, professor and chair of the department of ophthalmology at University Eye Hospital, Vienna, said.

Digital methods enable new ways for patients to receive care and will retain their validity beyond the COVID-19 emergency.

“Significant efforts in the AI and big data space are already underway, and the pandemic has made us aware that a rapid acceleration in the pace of adoption of AI is mandatory,” she said.

“The immediate use and successful application of AI to tackle a major, global public health challenge in 2020 will likely increase the public and governmental acceptance of such technologies for other areas of health care, including chronic disease, in the future,” Daniel Shu Wei Ting, MD, PhD, assistant professor and head of AI and digital innovation in ophthalmology at Singapore National Eye Center, said. He is also an executive committee member of the American Academy of Ophthalmology AI task force and the STARD-AI task force.

A crisis can provide an opportunity, and this great crisis of 2020 provides a great opportunity for digital technology.

Optimize screening, monitoring, treatment

Digital methods of data analysis allow for remote screening, diagnosis and monitoring of patients, a great asset in the course of a pandemic, but also an opportunity under normal circumstances. They also ease referral processes from primary to tertiary care through the sharing and exchange of images.

“There are multiple opportunities with multiple advantages for the patients and the entire health care system. Starting from the first step of screening, we need efficient methods to identify early disease. This is particularly true in retina, where early detection and early treatment are key for good vision outcomes,” Schmidt-Erfurth said.

An estimated 200 million people worldwide are affected by early age-related macular degeneration, 300 million have diabetes, and 75% of those will develop diabetic eye disease. Screening is an enormous task that can only be met by systems of automated image analysis.

Another goal is to provide real-world treatment outcomes comparable to those of clinical trials.

“There is currently a gap, mostly due to undertreatment, and this means that we need to optimize monitoring frequency and precision, measuring the therapeutic response in terms of fluid resolution and fluid recurrence with objective, accurate and standardized methods,” Schmidt-Erfurth said.

Biomarkers of AMD progression

Schmidt-Erfurth and a team at the University of Vienna pioneered AI in ophthalmology, developing AI algorithms as early as 2013.

“We established a huge AI laboratory for image analysis. State funding allowed us to set up an interdisciplinary team of international computer science and retinal imaging experts who developed more than 20 validated deep learning algorithms for the identification and quantification of disease biomarkers,” Schmidt-Erfurth said.

To predict disease progression and monitor the effects of pharmacologic intervention, an algorithm was designed for fully automated detection and quantification of intraretinal and subretinal fluid.

“The inability to reliably identify, localize and quantify fluid on OCT results in variability in injection rates, often leading to undertreatment. The introduction of AI-based algorithms may allow retina specialists everywhere in the world to detect, localize and quantify fluid in a fast, reliable and automated manner, leading to better outcomes and health care savings,” she said.
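
In practice, once a network has labeled each voxel of an OCT volume as intraretinal fluid, subretinal fluid or background, quantification reduces to counting labeled voxels and scaling by voxel size. The short sketch below (Python with NumPy) illustrates that step on a random mask; the array dimensions, class labels and voxel volume are assumptions chosen for illustration, not parameters of the Vienna algorithm.

    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical per-voxel OCT segmentation: 0 = background, 1 = intraretinal fluid, 2 = subretinal fluid.
    mask = rng.integers(0, 3, size=(49, 512, 496))  # B-scans x A-scans x depth samples
    voxel_volume_nl = 0.0005  # assumed volume of a single voxel, in nanoliters

    # Fluid volume = number of voxels carrying each fluid label, scaled by voxel size.
    irf_nl = np.count_nonzero(mask == 1) * voxel_volume_nl
    srf_nl = np.count_nonzero(mask == 2) * voxel_volume_nl
    print(f"Intraretinal fluid: {irf_nl:.1f} nL, subretinal fluid: {srf_nl:.1f} nL")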

Both supervised and unsupervised learning are used in the search for biomarkers. In supervised learning, the intelligent system is instructed to search for biomarkers that are already known, such as fluid, atrophy or drusen. In unsupervised learning, the machine screens large data sets and recognizes patterns of micro-changes that are not visible by observation and have never been identified before.
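
The distinction can be made concrete with a small sketch: a supervised model is fit to labels that experts have already defined, whereas an unsupervised model groups scans without any labels, so its clusters may correspond to patterns no grader has named. The features, labels and library calls below (scikit-learn on synthetic data) are illustrative assumptions; the actual systems learn from OCT images with deep networks rather than hand-built feature tables.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Hypothetical per-scan features (eg, fluid volume, drusen area) for 200 eyes.
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label for a biomarker experts already annotate

    # Supervised: learn to detect a biomarker that has been defined and labeled in advance.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("Supervised training accuracy:", clf.score(X, y))

    # Unsupervised: no labels; group scans by structure in the data itself,
    # so clusters may reveal micro-patterns that were never explicitly defined.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("Scans per discovered cluster:", np.bincount(clusters))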

“This will allow us to eliminate previous bias in biomarker search, broaden the spectrum of relevant biomarkers and identify features that might shed new light on the pathogenesis of retinal diseases. It will also help identify new therapeutic targets, which will orient our research and development of new therapies,” Schmidt-Erfurth said.

Detection of DR

By 2040, approximately 600 million people will have diabetes. Screening for diabetic retinopathy, a leading cause of visual loss, is a widely recommended strategy to prevent diabetes-related visual impairment. Early detection of DR also prompts early education and systemic intervention to optimize glycemic and other vascular risk factor control before further complications develop. Many DR screening services worldwide, however, are constantly challenged by manpower shortages and financial constraints. Using cross-sectional training and testing data sets collected worldwide, researchers at the National University of Singapore developed a deep learning system, called SELENA, for the detection of diabetic retinopathy, glaucoma suspect and AMD.

“Deep learning has sparked the medical imaging field since 2016. It is an extremely powerful machine learning technique that has overcome many technical unmet needs in image recognition, speech recognition and natural language processing. Based on a data set of nearly 500,000 retinal images, SELENA has excellent diagnostic performance in detecting DR, with an area under the curve of 0.93, 91% sensitivity and 90% specificity. This is a multicenter AI collaborative research effort with close to 30 co-investigators worldwide. Second, it is capable of detecting the prevalence rates of any DR, referable DR and [vision-threatening] DR and the DR-associated vascular risk factors in a much shorter grading time, 2 months vs. 2 years,” Ting said.
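
For readers unfamiliar with those figures, the area under the curve, sensitivity and specificity of a binary referable-DR classifier are computed roughly as in the sketch below (Python with scikit-learn). The scores are simulated, not SELENA outputs, and the 0.5 operating threshold is an assumption for illustration.

    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, size=1000)  # 1 = referable DR according to grader consensus
    scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)  # simulated model probabilities

    auc = roc_auc_score(y_true, scores)          # threshold-free ranking performance
    y_pred = (scores >= 0.5).astype(int)         # assumed operating threshold
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                 # proportion of true referable cases detected
    specificity = tn / (tn + fp)                 # proportion of non-referable cases correctly cleared
    print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")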

The generalizability of SELENA was demonstrated in multiple countries, including Singapore, Australia, the United States, Mexico, China and Hong Kong, as well as in the low- to middle-income African population of Zambia. It has now been approved by the Singapore Health Sciences Authority and received a European CE mark as a fundus-based retinal screening device for DR, glaucoma suspect and AMD. The technical integration of SELENA is complete and has been tested clinically for operational flow, with real-world deployment expected in 2021. It is also listed as part of Singapore's national AI strategy.

“In Singapore, we started integrating the AI system into the Singapore Integrated Diabetic Retinopathy Programme in 2018. In fact, in a paper published in Lancet Digital Health, we also showed that the combination of human intelligence and AI yielded the best outcome from the health economic standpoint. We are expecting to see the patients’ outcomes in the next 3 to 5 years,” Ting said.

“Apart from fundus-based screening technologies, the team is actively researching various other clinical diseases (eg, myopia, systemic vascular diseases), imaging modalities (eg, OCT, genomics) and novel technical methods (eg, generative adversarial networks and explainable AI) to increase the diversity of the training and testing data sets and the explainability of the AI algorithms,” he said.

ROP diagnosis and severity

Automated image analysis and deep learning systems have the potential to overcome the multiple challenges of screening for retinopathy of prematurity, leading to improved and better targeted care, according to J. Peter Campbell, MD, MPH, assistant professor at Oregon Health & Science University (OHSU).

“There are a lot of babies who need to be screened, and ROP screening is inefficient in that sense because maybe 80% to 90% of the babies you screen do not need any sort of intervention. It is a stressful exam, usually performed in the neonatal ICU. Babies respond with slow heart rate and slow breathing and need careful monitoring. It is done by indirect ophthalmoscopy, the same way it was 60 years ago, and evaluation is subjective: Clinicians looking at the same baby, or picture of a baby, often don’t agree on what they are seeing,” he said.

As a result, infants with the same level of disease might be treated differently by different clinicians, based on subjective perceptions of disease severity. This runs the risk of overtreating infants who do not need treatment and of undertreating, treating too late or not treating at all infants who do, leading to further complications, including blindness.

Plus disease is the most important clinical feature determining the need for treatment in ROP, but subjective biases also affect plus disease diagnosis and measurement. A collaborative team from OHSU, Harvard, Northeastern University and the University of Illinois at Chicago developed a deep learning system that classifies plus disease into the three categories of no plus, pre-plus and plus, as defined by the International Classification of Retinopathy of Prematurity.

“A deep convolutional neural network (CNN) was trained using a data set of 5,511 retinal images. Each image was previously assigned a reference standard diagnosis combining the image-based diagnosis by three independent expert graders and the clinical diagnosis of a specialist. The system was able to classify unseen data as plus, pre-plus or no plus as accurately as, or more consistently than, international ROP experts,” Campbell said.
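
As a rough illustration of what training such a three-class network involves, the sketch below (PyTorch) attaches a no plus/pre-plus/plus output head to a standard convolutional backbone and runs a single training step on a dummy batch. The architecture, hyperparameters and random data are placeholder assumptions, not the group's published pipeline, and the 5,511-image data set is not reproduced here.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 3  # no plus, pre-plus, plus

    # Standard ResNet-18 backbone; in a real pipeline pretrained weights would typically be loaded.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the default classification head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One training step on a dummy batch standing in for labeled fundus images (3 x 224 x 224).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print("batch loss:", loss.item())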

AI was also used to develop an ROP severity score, running from 1 to 9. This tool enables quantitative disease monitoring and risk prediction, can help in the assessment of treatment response and post-treatment recurrences, and can also be used to collect and compare epidemiologic data.

Predicting glaucoma progression early

Artificial intelligence applied to detection of apoptosing retinal cells (DARC), an imaging method to track retinal neuron apoptosis in vivo, showed the ability to predict glaucoma progression at 18 months.

“We have now an AI-aided biomarker for predicting glaucoma progression, with potentially wide clinical application and research application in the testing of new drugs,” M. Francesca Cordeiro, MD, PhD, chair and professor of ophthalmology at Imperial College London, said.

Retinal ganglion cell apoptosis is one of the earliest hallmarks of glaucoma, and DARC “opens a window” into the degenerative processes triggered by the disease at a cellular level.

“By confocal scanning laser ophthalmoscopy, after injection of fluorescently labeled annexin V, we are able to observe individual nerve cells dying in the living eye at the early stages of glaucoma, many years before any visual field changes occur,” Cordeiro said.

A drawback of DARC is the need for trained observers to detect and manually count the individual apoptosing retinal cells, which appear as annexin-positive hyperfluorescent spots in the retina. To enable faster and objective measurement of DARC, a CNN-aided algorithm was trained and validated using candidate DARC spots identified by at least two of five trained observers. When applied to a cohort of glaucoma patients, it was able to accurately detect and measure signs of cell damage 18 months before changes were apparent on OCT.

“We were also able to establish a precise threshold value because every single patient who had a DARC count above 30 went on to progress 18 months later,” Cordeiro said.
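
Turning that observation into a decision rule is simple: flag an eye as likely to progress when its automated DARC count exceeds 30. The sketch below applies such a rule to made-up counts; the names and example values are hypothetical, and only the threshold itself comes from the finding described above.

    from dataclasses import dataclass

    DARC_THRESHOLD = 30  # count above which every patient in the reported cohort went on to progress

    @dataclass
    class Eye:
        patient_id: str
        darc_count: int  # annexin-positive spots counted by the CNN-aided algorithm

    def predicted_to_progress(eye: Eye) -> bool:
        return eye.darc_count > DARC_THRESHOLD

    cohort = [Eye("P01", 12), Eye("P02", 34), Eye("P03", 45)]
    for eye in cohort:
        print(eye.patient_id, "progression predicted" if predicted_to_progress(eye) else "stable")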

Such a powerful biomarker could speed up clinical trials on neuroprotective drugs, a promising new frontier never properly explored due to the slow progression of the disease, which requires many years of follow-up to show changes.

“Something we have been lacking in neuroprotection is good measures of how quickly people respond to successful treatment. Now we can shorten study times and set up smaller concept trials to then go on to larger ones when we have proved the efficacy,” she said.

Detecting systemic disease from the retina

AI-empowered DARC is now being tested as a method to rapidly detect cell damage caused by other neurodegenerative conditions, including AMD, multiple sclerosis, Parkinson’s disease and dementia.

“As an extension of the brain, the retina provides a platform from which to study diseases of the nervous system. In many neurodegenerative conditions, early diagnosis is often challenging due to the lack of tests with high sensitivity and specificity. Retinal biomarkers in vivo are an additional diagnostic tool which may avoid the use of brain scans and other invasive tests,” Cordeiro said.

“In a recent New England Journal of Medicine paper, we showed that a deep learning algorithm is effective in detecting papilledema that could be due to space-occupying lesions in the brain. In another paper, in Lancet Digital Health, we demonstrated the possibility of using deep learning to screen for referable chronic kidney disease. These are some of the noninvasive AI-based alternatives that could be considered in resource-constrained settings, especially in low- to middle-income countries,” Ting said.

Retinal analysis in the future will play an important role in other medical fields, such as internal medicine, endocrinology and neurology, according to Schmidt-Erfurth.

“Even in a simple color photograph of the retina, algorithms can identify age and hypertension and can measure the blood glucose level in a noninvasive, inexpensive way. There is a completely new horizon opening here. Automated algorithms can be used as a triage and screening tool by general practitioners and by non-medical professionals such as opticians and optometrists. They can help identify disease onset much earlier and organize specialist referral in a reliable, efficient and timely manner,” she said.

Transitioning from studies to clinical practice

The next challenge with AI in medicine is to translate the results of studies into practice.

“We are continuing to validate the technology while we seek regulatory approval and a pathway to clinical implementation. The FDA has assigned breakthrough status, which shortens the process, but there is still some way to go. In the U.S., AI devices are treated as software medical devices, and you have to define the indications for use, the intended population, the precise camera and mechanism with which it will be used, who will be doing the interpretation and demonstrate that it works,” Campbell said.

“It is early days for AI. What we are doing is still mostly at an academic level. In our university, we have started to use the fluid quantification tool in clinical studies to evaluate how anti-VEGF therapy can be optimized in terms of visual outcome, economic burden for the system and treatment burden for the patients,” Schmidt-Erfurth said.

Implementing AI-based solutions into clinical settings is challenging and requires a concerted effort from all stakeholders, including regulators, insurers, hospital managers, IT teams, physicians and patients, according to Ting.

“It also requires a realistic business model that needs to consider reimbursement, efficiency and the ability to improve clinical performance over time,” he said.

The challenges of data sharing, ownership

In order to build a robust deep learning system, two main components are needed: the “dictionary” (the data sets) and the “brain” (the CNN). Sharing large numbers of images and data from different centers is an obvious way to increase the amount of input data for network training.

“A simple analogy is: the more you read, the cleverer you get. The one caveat is that you need to read the right books. Thus, the ground truth and data sets will need to be robust and well phenotyped for different diseases. The performance of the network will depend on the number of images, the quality of the images and how representative the data are for the entire spectrum of the disease,” Ting said.

Data sharing also faces obstacles related to the regulations and privacy rules of individual countries.

“While regulations are aimed at ensuring patients’ privacy, they sometimes form barriers to effective research initiatives and patient care. AI research groups worldwide should continue to collaborate to overcome this barrier, aiming to harness the power of big data and deep learning to advance the discovery of scientific knowledge,” he said.

Data ownership is another critical issue.

“In the information age, data is the new oil, but the question is: Who owns the data? We see a lot of abuse coming from large IT initiatives where companies buy data without patient consent. Doctors have always been responsible for protecting patients’ records, and it is now the medical community that should take over control and establish the rules and regulations on how the patients’ personal data should be handled in medical AI,” Schmidt-Erfurth said.

Not a replacement, but a support

“Innovation always comes with disruption of the established, conventional settings. We have seen this multiple times in ophthalmology because we are very technology-dependent,” Schmidt-Erfurth said.

OCT, when it was introduced in the early 1990s, encountered a lot of resistance because ophthalmologists were skeptical about an imaging device that did not require their direct observation of the patient’s eye. Nevertheless, OCT has taken over the diagnostic field.

“We are taking the next logical step, which is to exploit the extensive imaging data set that OCT provides to train intelligent systems to detect pathological patterns and measure disease activity and therapeutic response more precisely than any person ever could do. This is the second experience in which doctors may feel they are losing control. It requires trust and a new mindset,” she said.

“We need to present new algorithms with a plausibility check that doctors can use as a decision support and not as a replacement for their own expert decision,” Schmidt-Erfurth said.

Ting said that the capabilities of deep learning, however, should not be construed as competence.

“What networks can provide is excellent performance in a well-defined task. Networks are able to classify DR and detect risk factors for AMD, but they are not a substitute for a retina specialist,” he said.

To improve clinical acceptance of deep learning systems, it is important to unravel the “black box” nature of deep learning.

“Deep learning has generated a lot of hype in the technical and medical world over the past 5 years. While it is heartening to see many robust AI algorithms in the medical field, it is more important to understand the limitations and intended use environment well to ensure successful clinical translation of the AI algorithms from the bench to bedside,” Ting said.

Click here to read the Point/Counter to this Cover Story.