Issue: June 2014
June 01, 2014

Despite some misconceptions, clinical practice guidelines are useful in orthopedics

Historically, medicine has used an eminence-based paradigm, relying on expert opinion and clinical experience to support clinical decisions in the diagnosis and care of patients. Over time, medicine, including the field of orthopedics, has embraced evidence-based decision-making, turning to the best and highest-quality published reports to make clinical decisions.

As a result, evidence-based clinical practice guidelines (CPGs), treatment recommendations based on a systematic review of the literature and an evaluation of all available treatment options, have been gaining prominence. To date, the American Academy of Orthopaedic Surgeons (AAOS) has published 14 CPGs; five other CPGs are in development.

“CPGs are meant to inform clinicians on the best practices available based on evidence-based analysis,” Louis F. McIntyre, MD, an orthopedic surgeon in White Plains, N.Y., told Orthopedics Today.

Louis F. McIntyre, MD, said that clinical practice guidelines are meant to inform clinicians on the best practices available based on an evidence-based analysis, but noted that it is important to look at all the available evidence and the entire guideline process.

Image: Westchester Orthopedics

Evidence-based CPGs should “serve as an educational tool [that is] based on the assessment of high-level clinical and scientific information and accepted approaches to treatment,” Frederick Azar, MD, president of the AAOS and chief of staff at the Campbell Clinic in Memphis, told Orthopedics Today.

CPGs also function to keep physicians apprised of ever-changing medical research. “CPGs are necessary because no one can keep up with the literature and no one can interpret the literature themselves except the people who basically do research in it,” Sheldon Greenfield, MD, Donald Bren professor and executive co-director of the Health Policy Research Institute at the University of California, Irvine, told Orthopedics Today.

“The average physician — orthopedic surgeon, general internist, cardiac surgeon — does not read the fine print in the article in the methods section, and they do not critique articles that come out in journals.”

In this issue, Orthopedics Today talks with orthopedic leaders about how CPGs are developed and disseminated, their benefits and drawbacks, how they may influence patient care and how they may affect insurance coverage.

Eliminating bias

There are some best practices for developing high-quality CPGs, according to a 2011 Institute of Medicine (IOM) report that Greenfield helped to formulate.

“A good clinical practice guideline deals directly with conflict of interest, composition of the committee, definition of the problem, a high-quality literature synthesis, systematic review, clear reporting of the information and updating of the information,” Greenfield said.

Sheldon Greenfield

“AAOS complies with all of the IOM standards for developing guidelines. The entire process is transparent,” Azar said. “At the start, we actively seek information from clinicians, patients and payers; anyone can submit a topic.”

The CPG process is multistep to ensure that the result is rigorously developed and evidence-based. “It is all based on quality — sort of what you put in is what you get out,” Azar said. “The studies have to be out there before you get good information. I think it is important that before we decide on a topic, we need to make sure there is sufficient evidence.”

He noted that the work group formulates questions, determines the inclusion criteria, and, with the assistance of a trained medical librarian, then performs a systematic literature review. The focus of the literature review is to find the highest-quality research on the given topic.

“Within evidence-based medicine, there is a hierarchy of levels of evidence,” McIntyre said. “The highest level is level 1, or randomized clinical trial. Level 2 is a prospective cohort study. Level 3 is a case-control study; they can be either prospective or retrospective. [Level] 4 is case series and 5 is expert opinion.”

“A problem with reliance on level 1 studies is that not all level 1 studies are well designed, well implemented or even well analyzed in their conclusions,” McIntyre said.

Azar noted that all studies are evaluated according to quality and applicability criteria in the AAOS evidence-based process. Reviewers may upgrade a well-done level 2 or 3 study, and a poorly designed level 1 study may be downgraded to moderate-, low- or very low-quality evidence, depending on the flaws present. This is consistent with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) evidence appraisal system used by the Cochrane Collaboration and many other organizations. Based on the available evidence, the work group develops recommendations, which are assigned a strength rating.
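
As a rough illustration of that upgrade/downgrade logic, the sketch below expresses a GRADE-style appraisal in code: a starting quality level set by study design, lowered one step for each methodological concern and raised one step for each strength such as a large effect. The criteria and step sizes are simplified assumptions for illustration only, not the AAOS's or GRADE's actual rating rules.

```python
# Minimal sketch of a GRADE-style quality rating (illustrative only; this is
# not the AAOS's or GRADE's actual algorithm). Randomized trials start as
# high-quality evidence and observational studies as low-quality evidence;
# each concern moves the rating down one step and each strength moves it up.
from dataclasses import dataclass, field
from typing import List

LEVELS = ["very low", "low", "moderate", "high"]


@dataclass
class Study:
    randomized: bool
    concerns: List[str] = field(default_factory=list)   # e.g., "risk of bias", "imprecision"
    strengths: List[str] = field(default_factory=list)  # e.g., "large effect"


def appraise(study: Study) -> str:
    """Return an illustrative quality rating for a single study."""
    level = 3 if study.randomized else 1     # start high for RCTs, low for observational
    level -= len(study.concerns)             # downgrade one step per concern
    level += len(study.strengths)            # upgrade one step per strength
    return LEVELS[max(0, min(level, 3))]


# A poorly designed level 1 trial can drop to low-quality evidence,
# while a well-done observational study can be upgraded to moderate.
print(appraise(Study(randomized=True, concerns=["risk of bias", "imprecision"])))  # "low"
print(appraise(Study(randomized=False, strengths=["large effect"])))               # "moderate"
```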

All of this information should then be compiled in a standard format, describing the recommended action and when it should be performed, according to the IOM.

CPGs undergo peer review

Once the work group develops the CPG recommendations, they are ready for peer review, according to the AAOS website.

At minimum, there is a 30-day peer review process by the Guidelines Oversight Committee (GOC), the Evidence-Based Quality and Value Committee (EBQV) and peer reviewers from outside specialty societies, according to the AAOS.

There is a roughly 30-day public commentary period, with comments provided by the AAOS Board of Specialty Societies, Board of Councilors, Council on Research and Quality and the Board of Directors.

“It gets vetted by the entire organization and all stakeholders, content experts, specialty societies, the Board of Councilors,” Azar said. “It rises up from the evidence-based practice community to the council on research and quality. It goes through the AAOS Board of Directors before it gets disseminated.”

Once it is approved, the AAOS posts the CPG to the National Guideline Clearinghouse and publishes guideline summary articles in the Journal of the American Academy of Orthopaedic Surgeons, AAOS Now and the Journal of Bone and Joint Surgery.

How to develop good-quality CPGs

A big challenge is how to develop good-quality CPGs, according to Greenfield. “That turns out to be a huge problem,” he said.

The biggest issues are completing systematic reviews and addressing conflict of interest. “Conflict of interest is a major thing in orthopedics,” Greenfield said.

Anyone being considered for a guideline committee must declare any potential conflicts of interest, he said. If there is a conflict of interest, that person should not participate. If the committee must include members with conflicts of interest, they should be in the minority.

Another way to combat conflict of interest is to form a balanced, multidisciplinary guideline committee.

“One of the things that was recommended [in the IOM report] — and this will annoy all orthopedic surgeons in the United States — was to have non-orthopedic surgeons on the committee. In fact, this report recommended that the chair be from another specialty,” Greenfield said.

All AAOS clinical practice guidelines are developed by multidisciplinary work groups. Depending on the topic, work group members may include representatives from physical therapy, radiology, pediatrics, family medicine, dentistry, geriatrics or other appropriate organizations. Multidisciplinary groups also participate in the peer review process and provide valuable input, Azar noted.

Concerns about third-party payers

Many surgeons have expressed concern that CPGs might be used by third-party payers to make coverage decisions.

“That is inevitable,” Greenfield said. “The federal government or insurers are going to do that. That is what quality of care is all about. We pay for things that are shown to be effective. And a guideline is supposed to make recommendations about what works and what does not work.”

This has been a concern for the organizations that develop CPGs, too. “We have always been concerned about coverage decisions,” Azar said. “Are [CPGs] going to be used as coverage decisions? These guidelines state clearly that they are not intended to be used for that. In addition to the evidence, coverage determinations should include a risk-harm analysis and a cost-benefit analysis. We do not do these.”

Frederick Azar

“It would be a mistake for an insurer to base coverage decisions solely on the CPG,” Azar continued. “However, insurance companies look at the same published and accessible evidence that we look at, and they base their coverage determinations on their own reports.”

The AAOS CPG on the treatment of knee osteoarthritis contains a strong recommendation against the use of viscosupplementation.

Azar noted that this strong recommendation was based on a review of three level 1 and 11 level 2 studies that compared intra-articular hyaluronic acid injections with placebo. The AAOS calculated clinical significance using two metrics, the minimum clinically important improvement (MCII) and the minimum clinically important difference (MCID). Clinical significance means that the effect size is large enough to be important to patients. A statistically significant finding may or may not be clinically meaningful, because even a minuscule, unimportant difference will reach statistical significance if the sample size is large enough. “The AAOS adheres to the expectation that if a treatment is effective in the population, its effect should both reach statistical significance and be larger than the MCII,” Azar said.
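
To illustrate the distinction Azar describes, the short sketch below uses hypothetical numbers (a 0-100 pain scale, a 4-point between-group difference and an assumed MCII of 15 points) to show how a large trial can produce a highly statistically significant result that still falls short of a clinically important improvement. None of these figures are drawn from the studies the AAOS reviewed.

```python
# Hypothetical illustration: a statistically significant effect that falls
# short of the minimum clinically important improvement (MCII).
# All numbers below are made up for illustration; they are not taken from
# the hyaluronic acid studies the AAOS reviewed.
import math

def two_sided_p_from_z(z: float) -> float:
    """Two-sided p-value for a z statistic (normal approximation)."""
    return math.erfc(abs(z) / math.sqrt(2))

mean_difference = 4.0   # treatment-minus-placebo improvement on a 0-100 pain scale
sd = 20.0               # standard deviation of improvement in each arm
n_per_arm = 2000        # a large trial
assumed_mcii = 15.0     # assumed threshold for a clinically important improvement

standard_error = sd * math.sqrt(2.0 / n_per_arm)
z = mean_difference / standard_error
p = two_sided_p_from_z(z)

print(f"p-value: {p:.1e}")                                          # far below 0.05
print(f"clinically important: {mean_difference >= assumed_mcii}")   # False: below the MCII
```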

In these studies, although hyaluronic acid yielded statistically significant treatment effects, it did not meet MCII thresholds. “What they determined based on their review was that viscosupplementation does not work using this singular metric (MCII),” said Jack M. Bert, MD, adjunct clinical professor at the University of Minnesota School of Medicine in St. Paul and Orthopedics Today’s Business of Orthopedics Section Editor.

Jack M. Bert

Insurance carriers took notice. “By the utilization of the metrics, MCII and MCID, there have been a series of noncoverage decisions, which have dramatically affected patient care,” Bert said. To date, 14 insurance companies have issued noncoverage decisions.

“The last state to issue a noncoverage decision was Massachusetts, which occurred on April 1, 2014,” Bert said.

According to research by Raveendhara Bannuru, MD, and colleagues, however, the MCII metric may have been misapplied. The researchers found several flaws in how the MCII-based results were obtained, displayed and interpreted in this particular CPG. “The current state of research on MCII allows it to be used only as a supplementary instrument, not a basis for clinical decision making,” the authors wrote.

Furthermore, Bert noted that when Dr. Felix Angst of Switzerland, the developer of the MCII and MCID metrics, reviewed the methodology on which the AAOS based its decision, he agreed that the metrics were inappropriately applied to Recommendation #9 in the osteoarthritis CPG.

William R. Beach

“We have worked with the Academy and discussed with the Academy our feeling about the potential misuse of one of the metrics,” William R. Beach, MD, president of the Arthroscopy Association of North America, told Orthopedics Today. “The Academy is revisiting how they develop CPGs and is considering modification of their internal process.”

Azar noted that the AAOS reviewed the analysis by Bannuru and colleagues, determined that there were no errors in the calculation and application of clinical significance in the AAOS Treatment of Osteoarthritis of the Knee CPG, and disagrees with the authors' conclusions. The AAOS evidence-based guideline program will continue to integrate techniques for assessing clinical meaningfulness, as it has done since 2008, to enable practical comparisons of overall effect size when clinical studies are examined cumulatively, he said.

The best way to address this type of problem, correct errors that appear in a CPG or even include the latest data is through periodic updates. “The only real remedy that I know of for that is to keep updating the guidelines, maybe every year or two, which is kind of a big burden,” Greenfield said.

Lack of high-level data

One concern that has repeatedly been voiced regarding evidence-based medicine is that there is a dearth of high-level data in orthopedic research, according to Azar.

“At the end of the day, we learned a lot by this process in that we lack a lot of high-level evidence for what we do,” Azar said. “We know it works for our patients, but from a scientific standpoint, from the highest level methodology, you cannot always prove that what you are doing works.” An example is the AAOS Diagnosis and Treatment of Osteochondritis Dissecans CPG in which the majority of recommendations were rated limited, inconclusive or consensus because of a lack of quality research studies. However, this CPG identified important gaps in the literature that researchers are now trying to fill, Azar noted.

“What has been illustrated by the AAOS guidelines is that we do not have level 1 and level 2 literature to support a lot of the surgical things that we do,” Beach said. “And that is not something that is surprising if you believe in something called ‘clinical equipoise,’ where if you as a surgeon or physician honestly believe that one treatment is better than another, you are obligated to provide what you think is the best treatment. If the clinical guideline tells you to do something that is against your clinical equipoise, it is difficult. It puts you in the position of conflict.”

McIntyre said this paucity of data exists because it is difficult to enroll and retain patients in level 1 studies. “It takes a long time to put them together,” he said. “They are expensive.” For example, a study on the value of arthroscopic meniscectomy from Finland was published recently. “It took five centers 5 years to enroll 205 patients in this study, and 28% of the 205 dropped out because of refusal to be randomized or improvement in symptomatology,” McIntyre said.

Gauging physician adherence

It is hard to gauge physician adherence to CPGs because little research has been done on this topic in the orthopedic community. J. L. Matzon and colleagues found that adherence to the AAOS upper-extremity CPGs was inconsistent among members of the American Society for Surgery of the Hand. For instance, in patients with carpal tunnel syndrome, 32% of respondents order electrodiagnostic testing when contemplating surgery, although it is an option in the guidelines. In addition, although the guidelines suggest not immobilizing the wrist postoperatively, Matzon and colleagues showed that 30% of physicians immobilize anyway. Moreover, 11% of participants consistently prescribe vitamin C after a distal radius fracture. However, adherence is better in some areas, such as recommending nighttime splinting (98%) and corticosteroid injections (85%) for carpal tunnel syndrome.

Azar said there is some evidence that CPGs influence physicians. Recently, AAOS CPG recommendations and evidence tables have been reprinted in textbooks and incorporated into board exams, and the CPGs are encouraging more research to fill in acknowledged gaps in the evidence. In 2007, when the AAOS first published a CPG on carpal tunnel syndrome, the guideline work group had a little more than 900 abstracts to review. “Seven years later, there are more than 10,000 abstracts to be evaluated because the team has refined the questions to ask and there is more research,” Azar said. “That meant 10 times the amount of information was out there.”

“It encourages better research, which leads to better treatment, better information [and] again, lifelong learning for our members,” Azar said.

To make improvements in the future, McIntyre thinks it is important to look at the entire CPG process. “To look at evidence-based guidelines as the Holy Grail and the only way that we can determine how we take care of our patients is potentially dangerous,” McIntyre said. “Because it is process-driven, you have to analyze that process to make sure it is doing what it is purported to do.”

McIntyre hopes to see the experts in a given area of health care designing and implementing the CPG processes. He also said critical reviews must not discount lower levels of evidence.

“Just because a study is not level 1 does not mean it is not valuable,” McIntyre said. “Level 3 studies have aided us in taking care of patients for decades. There is value in case studies. … There is a lot to be learned from all levels of evidence; it all has to be examined critically.” This will permit a more robust literature analysis, which will result in better guidance for clinicians on the best patient care practices.

“[The CPG process] is a work in progress,” Beach said. “We have learned a lot about the development of CPGs. The real challenge going forward is to collect better data so that there are fewer questions that [remain] unanswered by the literature. As we create better studies, hopefully, we will have better CPGs.” – by Colleen Owens

References:
Atkins D, et al. Systems for grading the quality of evidence and the strength of recommendations: Critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res. 2004;4(1):38.
Bannuru R. Arthroscopy. 2014;doi:10.1016/j.arthro.2013.10.007.
Matzon JL. Orthopedics. 2013;doi:10.3928/01477447-20131021-22.
www.aaos.org/news/aaosnow/may13/research2.asp.
www.aaos.org/research/guidelines/Guideline_FAQ.asp.
www.iom.edu/~/media/Files/Report%20Files/2011/Clinical-Practice-Guidelines-We-Can-Trust/Clinical%20Practice%20Guidelines%202011%20Insert.pdf
For more information:
Frederick M. Azar, MD, can be reached at the Campbell Clinic, 1400 Germantown Rd., Germantown, TN 38138; email: fazar@campbellclinic.com.
William R. Beach, MD, can be reached at Tuckahoe Orthopaedics, 1501 Maple Ave., Richmond, VA 23226; email: beach@orv.com.
Jack M. Bert, MD, can be reached at Minnesota Bone and Joint Specialists, 9325 Upland Ln N #205, Maple Grove, MN 55369; email: bertx001@gmail.com.
Sheldon Greenfield, MD, can be reached at the University of California, Irvine, 100 Theory, Suite 110, Irvine, CA 92697; email: sgreenfi@uci.edu.
Louis F. McIntyre, MD, can be reached at Westchester Orthopedics, 311 North St. #102, White Plains, NY 10605; email: lfm@woapc.com.
Disclosures: Beach and Greenfield have no relevant financial disclosures. Azar receives stock or stock options from Pfizer and receives royalties, financial or material support from Elsevier. McIntyre receives stock or stock options from Tornier and research support from DePuy, a Johnson & Johnson company. Bert is on the speakers bureau/paid presentations for Sanofi-Aventis. He is an unpaid consultant to Exactech Inc., Link Orthopaedics, Smith & Nephew, Tornier and Wright Medical Technology Inc.

POINT/COUNTER

How large a role should expert opinion play in evidence-based medicine?

POINT

Clinical acumen and experience important to EBM

The term evidence-based medicine (EBM) first appeared in 1990 in a document for applicants to the internal medicine residency program at McMaster University; EBM was described as an attitude of enlightened skepticism toward the application of diagnostic, therapeutic and prognostic technologies. Practicing EBM, however, requires a clear delineation of relevant clinical questions, a thorough search of the literature, a critical appraisal of available evidence, assessment of the applicability of this evidence to the clinical situation and a balanced application of the conclusions to the clinical problem.

Mohit Bhandari

The balanced application of the evidence (i.e., the clinical decision making) is the central point of practicing evidence-based medicine and involves, according to EBM principles, integration of our clinical expertise and judgment with patients’ and societal values, and with the best available research evidence.

So, should expert opinion play a role in EBM? Of course. While EBM is sometimes perceived as a blinkered adherence to research findings from randomized trials, it more accurately involves informed and effective use of all types of evidence. Clinical acumen and experience lie at the forefront of informed decisions and the judicious use of best evidence to optimize patient care.

Mohit Bhandari, MD, PhD, FRCSC, is professor and chair in the division of orthopedic surgery and Canada Research Chair in Surgical Outcomes at McMaster University in Hamilton, Ontario.
Disclosure: Bhandari has no relevant financial disclosures.

COUNTER

Expert opinion should have a limited role

Expert opinion should play only a very small role in evidence-based medicine (EBM). When you look at the criteria that have been established using levels of evidence as the framework for EBM, expert opinion is considered level 5 evidence, which is the lowest form of evidence that we have for making clinical decisions.

Kevin B. Freedman

Our goal should be to rely predominantly on level 1 and level 2 evidence when approaching clinical treatment decisions. Level 1 and 2 evidence primarily involves randomized controlled trials or well-designed cohort studies, which minimize bias and confounding, problems that plague expert opinion. They provide much more objective evidence for treatment decisions.

We recently performed a study, to be published in the American Journal of Sports Medicine, showing how much progress the sports medicine literature has made in publishing studies with higher levels of evidence. During the past 15 years, researchers have improved the science in our field by performing quality studies with higher levels of evidence, which points toward using this information to make better clinical treatment decisions for our patients.

Historically, expert opinion has helped lead us to research questions that can advance our science. But overall, as we continue to advance into the future of clinical decision-making, we need to emphasize utilizing the highest level of evidence.

Kevin B. Freedman, MD, MSCE, is a sports medicine surgeon at the Rothman Institute in Bryn Mawr, Pa.
Disclosure: Freedman has no relevant financial disclosures.