Explainable machine learning can improve efforts to diagnose CKD early
Key takeaways:
- Explainable machine learning can offer accurate diagnoses and identify causes of chronic kidney disease in early stages.
- The authors said “a lack of trust” is slowing acceptance of AI among physicians.
Use of explainable machine learning combined with an AI classifier can aid health care providers in making accurate diagnoses and identifying root causes of chronic kidney disease at early stages, research data show.
“Chronic kidney disease (CKD) is increasingly recognized as a major health concern due to its rising prevalence,” Gangani Dharmarathne, of Australian Laboratory Services Global, Water and Hydrographic, and colleagues wrote in the journal Intelligent Systems With Applications. They added, “Early detection of CKD is crucial, and machine learning methods have proven effective in diagnosing the condition, despite their often opaque decision-making processes.”
Current diagnostic methods often miss early signs of kidney damage, the authors wrote. “Traditional biomarkers, such as serum creatinine and urine albumin, have limited sensitivity in detecting mild to moderate kidney impairment,” they noted. “This situation calls for more sophisticated diagnostic tools that accurately identify CKD in its initial stages. With its capability to analyze complex datasets and identify subtle patterns undetectable by traditional methods, machine learning offers a promising solution.”
Diagnostic tools
In an interview with Healio, co-author D.P.P. Meddage said machine learning can provide additional tools in diagnosing illnesses. “Doctors are human and have limitations. If we have data indicating that a person is developing cancer, it may be difficult for doctors to detect this visually because they rely on what they can see with the naked eye.
“However, a machine learning (ML) model with reliable data can analyze small, localized changes at very early stages (difficult to notice by humans) and make informed decisions,” Meddage, of the school of engineering and information technology at the University of New South Wales in Canberra, Australia, told Healio.
The research team evaluated six machine learning classifiers for the modeling: decision tree, k-nearest neighbor, support vector machine, random forest, extreme gradient boosting (XGB) and artificial neural network. Researchers also developed a graphical user interface to diagnose CKD by embedding explainable AI (XAI).
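To make that comparison concrete, the sketch below trains the same six kinds of classifiers and reports held-out accuracy. It is a minimal illustration, not the study's protocol: the synthetic dataset, train/test split, hyperparameters and accuracy metric are all assumptions standing in for the CKD records the authors used.

```python
# Minimal sketch of a six-classifier comparison using scikit-learn and XGBoost.
# The synthetic dataset is a stand-in for tabular CKD records (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

# Stand-in data: rows = patients, columns = clinical features.
X, y = make_classification(n_samples=400, n_features=24, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=42),
    "k-nearest neighbor": KNeighborsClassifier(),
    "support vector machine": SVC(random_state=42),
    "random forest": RandomForestClassifier(random_state=42),
    "XGB": XGBClassifier(eval_metric="logloss", random_state=42),
    "artificial neural network": MLPClassifier(max_iter=1000, random_state=42),
}

for name, model in classifiers.items():
    model.fit(X_train, y_train)                          # train on the training split
    acc = accuracy_score(y_test, model.predict(X_test))  # evaluate on held-out patients
    print(f"{name}: test accuracy = {acc:.3f}")
```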
“This interface not only diagnoses CKD but also provides reasoning as to why an individual is likely to have CKD or not,” the authors wrote.
“It works using a trained machine learning model coupled with [a] post hoc explanation method,” Meddage told Healio. “That pretrained model is written to the interface and, at some point, the explanations are embedded. Once the user provides his or her input conditions, the ML model predicts the likelihood, and the embedded explanation ranks those inputs to say, ‘Okay, this factor is the main reason for your decision. Be aware of that.’”
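The snippet below is a rough illustration of that ranking step. It assumes SHAP as the post hoc explanation method, which the article does not specify, and the clinical feature names are hypothetical placeholders rather than the study's actual input variables.

```python
# Sketch of ranking one patient's inputs with a post hoc explanation (SHAP assumed).
import shap
from xgboost import XGBClassifier
from sklearn.datasets import make_classification

# Hypothetical input conditions a user might enter in the interface.
feature_names = ["age", "blood_pressure", "serum_creatinine",
                 "albumin", "hemoglobin", "blood_glucose"]

# Stand-in training data; in practice this would be the CKD dataset.
X, y = make_classification(n_samples=400, n_features=len(feature_names), random_state=0)

model = XGBClassifier(eval_metric="logloss", random_state=0).fit(X, y)  # "pretrained" model
explainer = shap.TreeExplainer(model)            # post hoc explainer built on the trained model
contributions = explainer.shap_values(X[:1])[0]  # per-feature contributions for one patient

# Rank the inputs by how strongly each pushed this prediction, as the interface does.
ranked = sorted(zip(feature_names, contributions), key=lambda p: abs(p[1]), reverse=True)
for name, value in ranked:
    print(f"{name}: {value:+.3f}")
```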
Results
Results showed that the XGB model combined with XAI delivered the best performance in CKD detection. “Similar to previous studies highlighting XGB’s precision and productivity in different medical scenarios, our research validates its usability in the context of CKD,” the authors wrote. “However, the present research goes further than just focusing on diagnostic accuracy and highlights the significance of model interpretability using XAI, filling a crucial void in the medical AI field where comprehending the model’s reasoning is equally important as the diagnostic result.
“This study shows that explainable machine learning can be used reliably for CKD diagnosis, which can lead to incorporating these technologies into everyday clinical practice in the future.”
The next step, Meddage told Healio, is physician acceptance of AI. “[T]hese AI tools are not meant to replace doctors but rather to support the decision-making process,” Meddage said. “What we emphasize is the implementation of ML in the medical domain. Despite numerous research studies, these tools are still not widely implemented. Why? Simply because there is a lack of trust. For trust to be established, explainability — understanding why the model gives a particular result — is crucial. Our aim [with this study] was to improve that aspect,” he said.