Performance measures should depend on evidence-based practice, experts say
AAOS representatives explain process for creating guidelines as part of pay-for-performance initiative.
When it comes to performance measures, evidence-based medicine plays a fundamental role in achieving the goal of pay-for-performance initiatives: improving health care quality, according to Joshua J. Jacobs, MD.
"How we measure quality is actually a very complex topic," Jacobs said. "But to the extent that we could measure quality by adherence to evidence-based guidelines and evidence-based pay-for-performance (P4P) measures, we may in fact be improving the quality of care."
Jacobs is chair of the American Academy of Orthopaedic Surgeons (AAOS) Council on Research, Quality Assessment and Technology; part of the council's responsibility involves the P4P initiatives instituted by third-party payers.
He and Charles M. Turkelson, PhD, director of the AAOS Research and Scientific Affairs Department, described the process for developing evidence-based guidelines and eventually evidence-based performance measures at the 120th Annual Meeting of the American Orthopaedic Association.
With the recent Physician Quality Reporting Initiative from the Centers for Medicare & Medicaid Services (CMS) and a mandatory P4P program looming, evidence-based guidelines are increasingly being used, Turkelson said. As of this year, 118 professional societies have evidence-based guidelines listed in the National Guideline Clearinghouse.
"The days when an expert can go before CMS or another payer and convincingly argue against evidence-based documents are passing us by. Evidence-based medicine and evidence-based guidelines in particular have become common," Turkelson said.
Jacobs said the AAOS reinitiated its work group for evidence analysis in 2005, for the first time since 1999, largely in response to the P4P initiative.
Using new methodology, AAOS representatives created the evidence-based guidelines on the prevention of symptomatic pulmonary embolism and on the diagnosis of carpal tunnel syndrome in June. Jacobs said the AAOS expects to have three to five guidelines completed in the upcoming year.
"Evidence-based guidelines form the basis for evidence-based performance measures," Jacobs said. "That is part of the reason that we are advocating that the federal government slow down the pay-for-performance program, so that our high-quality evidence-based guidelines can catch up."
Real-world influences
Jacobs defined evidence-based practice as the integration of the best research evidence, clinical expertise and patient values.
However, real-world issues influence each of these points. For instance, few randomized, controlled trials exist in orthopedic surgery, and the literature on new technology often includes bias, Jacobs said. Positive results tend to get published and reported more than negative results.
There are also learning curves with many new technologies and treatments, and surgical skills differ, he said.
Patients may also try to sway surgeons into making decisions based on their desire to enhance sports performance or on cultural bias. "There is also a reality that if a patient wants a certain device or technology that they saw in a direct-to-consumer ad or on the Internet, they will shop around to find a surgeon who will perform the desired procedure, regardless of the available evidence," Jacobs told Orthopedics Today.
Eliminating bias
Because of these realities, and because not all guidelines that claim to be evidence-based are valid, Turkelson said surgeons need education in evidence-based medicine. "It is very easy to do evidence-based medicine badly. People often use evidence to justify the conclusions they want. This is not evidence-based medicine; this is bias."
To eliminate bias, evidence-based guidelines should be based on well-defined rules, Turkelson said. The first step involves framing the question: specifying the patient population, the interventions, the comparisons and the types of outcomes that a study should examine.
Next, Turkelson said, is to determine study relevance. In this step, investigators prepare inclusion and exclusion criteria to determine, for instance, which studies are too old or too small to be included in their evaluation. Animal studies and meeting abstracts are also rarely included.
The third step is finding the evidence. Turkelson suggested using the PubMed, Embase, CINAHL and Cochrane Library databases, as well as article bibliographies.
"What is important is that the literature search is comprehensive," he said. "This is different from the traditional review in that you're not seeking articles to support your opinion. You are seeking articles that meet certain criteria and answer very specific questions."
Turkelson also noted that high-end guidelines may contain de novo research, such as cost analyses or practice patterns. Expert opinion can also be used, but only when no other evidence is available.
Evaluating the evidence
The next step is evaluating the quality of the evidence. "In evaluating quality, we are evaluating the amount of confidence you have in a study's results," Turkelson said. "For example, we can have more confidence in results from well-designed randomized, controlled trials (Level 1 evidence) than in results from Level 5 evidence."
Next, investigators synthesize the evidence, asking both qualitative and quantitative questions, such as, "Does the treatment work?" and "How well does the treatment work?"
Finally, investigators write the recommendations, determining what should be done in whom, when, where and how often, Turkelson said.
Investigators then assign a grade to each recommendation, again based on their confidence in the body of available evidence. "We assign a Grade A recommendation when we have a lot of high-level evidence and, therefore, have the highest degree of confidence," Turkelson said. "This is the bottom line of evidence-based guidelines: the strength and the grade of a recommendation."
For more information:
- Joshua J. Jacobs, MD, can be reached at Rush University Medical Center, 1725 Harrison St., #1063, Chicago, IL 60612; 312-243-4244; e-mail: joshua.jacobs@rushortho.com. He has no financial conflicts to disclose.
- Charles M. Turkelson, PhD, director, Research and Scientific Affairs Department, can be reached at the American Academy of Orthopaedic Surgeons, 6300 N. River Road, Rosemont, IL 60018; 847-384-4326; e-mail: turkelson@aaos.org. He has no financial conflicts to disclose.
References:
- Jacobs JJ. Symposium: P4P: Performance or paperwork? What is P4P, why is it important, and the goals of the initiative?
- Turkelson CM. Symposium: P4P: Performance or paperwork? How are guidelines created, and why it's important for orthopaedists and specialty societies to be involved in the process. Both presented at the 120th Annual Meeting of the American Orthopaedic Association. June 13-16, 2007. Asheville, N.C.