Issue: June 2014

June 01, 2014

Defining the quality (outcomes) of our care

I am delighted to present a two-part Round Table discussion on defining the quality of our care, i.e., focusing on outcomes. We are privileged to have a great panel that includes Nicholas G.H. Mohtadi, MD, MSc, FRCSC, John Kuhn, MD, and John M. Tokish, MD. Value is defined as the best outcomes at the lowest cost. We, as orthopedic surgeons, will be measured in the future by quality. This Orthopedics Today Round Table discussion will define and discuss outcomes, which outcome tools to collect, how to collect them and what types of questionnaires should be used.

Richard J. Hawkins, MD
Moderator

Roundtable Participants

  • Richard J. Hawkins, MD (Moderator), Greenville, S.C.
  • John E. Kuhn, MD, Nashville, Tenn.
  • John M. Tokish, MD, Honolulu, Hawaii
  • Nicholas G.H. Mohtadi, MD, MSc, Calgary, Canada

Richard J. Hawkins, MD: Dr. Kuhn, can you talk about E.A. Codman and his contribution to outcomes?

John E. Kuhn, MD: In the early 1900s, Codman developed something he called the “end result idea.” He suggested physicians should follow their patients and see how they do after treatment. At the time, many doctors were not concerned with how their patients did and were reluctant to have their results available to the public, so it is understandable that when Codman proposed this outcomes idea, he was ridiculed and ignored. This led to his dismissal from Massachusetts General Hospital.

He tried to start his own hospital and study outcomes, but unfortunately, no one would send him patients. Despite being considered the “father of evidence-based medicine in orthopedics,” Codman died a pauper and until recently lay in an unmarked grave in Boston. Now, of course, we are interested in outcomes and it has become more of a global initiative. Codman was a man ahead of his time.

Hawkins: In the United States, we have initiatives such as pay for performance, the Affordable Care Act (ACA), Institute of Medicine and MedPAC. Dr. Kuhn, can you elaborate on what we are facing now from governmental and other agencies?

Kuhn: In every other industry, people are paid for performance. We think about CEOs of companies or athletes who clearly have incentive-based initiatives in their salaries, yet this never has been the case in medicine. The Institute of Medicine, back in the 1990s, looked at complications and preventable deaths due to health care errors and suggested that physicians should be rewarded to encourage quality and try to reduce the complications and deaths that are preventable from health care errors. That is what started this concept of pay for performance. As time went on, different government agencies contributed and defined how this should be done. Some of the agencies like MedPAC now recommend that payment for physicians should be based on their performance, i.e., related to their patients’ outcomes. We know cost remains a significant factor with health care reform. Payers are looking for value from health care providers, which is essentially the outcomes over the cost of care.

Hawkins: These initiatives we are talking about do not get to how our patients do with treatment we provide. The pay-for-performance initiative satisfies certain measures. Giving antibiotics prophylactically or appropriate anticoagulation, we might be rewarded in a pay-for-performance plan.

HCAHPS and CG-CAHPS surveys are mandated now and capture the consumer’s perception of our treatment. The hospital is being graded on “Was the room warm? Were the nurses responsive to your needs? Did they come when the bell was rung?” In our office we are asked, “Did the doctor explain things to you? Was your scheduling appropriate and efficient? Did you wait too long in the office for your appointment to see the doctor?” All those things get a mark and they are published online so patients can access them on the Internet. That is part of how we are being graded now, which may not define “outcomes” for our treatment.

Kuhn: There are a few different measures of quality. 1) The patient experience is important and, as you mentioned, data are being collected on that. 2) Process measures are measures already being collected that we are familiar with, such as giving prophylactic antibiotics appropriately. 3) Structure measures, such as having electronic medical records, will become important for collecting and comparing data as part of our outcome measures. Some of these measures relate more to the facility and equipment than they do to patient outcomes. What we treating physicians want to measure is the outcome of our treatments, which is based mostly on patient-reported outcomes (PRO).

Hawkins: Joint registries are popular worldwide and in the United States. Dr. Tokish, can you define and discuss registries for the readership?

John M. Tokish, MD: There are a lot of different ways to look at data and outcomes, and we may consider the gold standard to be the randomized, clinical trial. This may be the best science we have for certain research questions. The biggest disadvantages are it takes a long time and is costly. Alternatives to that take advantage of what we would call natural experiments, and that is where registries play a role. The concept is that if you can enroll enough patients across a wide variety of practice types and models, and if you can get an “N” that is high enough, you can sort of bleed out many of the confounders and biases that plague a less rigorously controlled experiment. The advantages are that once the system is up and running, it could be less costly. You can enroll a significantly higher number of patients. You can get to your answers much more quickly looking at the data either retrospectively or prospectively.

Registries can be exceedingly important in answering certain questions. Australia and New Zealand lead the world in many of these respects, especially with total joint registries. Total joints lend themselves well to registry-type questions because much of the treatment is standardized. For example, the metal-on-metal hip early failures identified in the past few years were a result of registry-type data. Registries can identify early problems with implant types or track widespread cost. The advantage of the registry is it is extremely broad. It has huge numbers. The disadvantage of the registry, however, is that it is not deep and registries do not get to the bottom line question of outcomes.

The largest ACL registries in the world are the Scandinavian registries; the Danish registry recently published on nearly 15,000 patients, and in Sweden it is mandated that all ACL reconstructions are entered in the national registry. Scores such as the KOOS and EQ-5D have been added for depth.

The ideal registry as we go forward is going to combine the breadth of a wide-based registry with validated, patient-based outcomes to provide depth. When we can do that, we can answer questions quickly and be responsive over the course of time to utilize comparative effectiveness research.

Hawkins: Would it be fair to say that most registries today are broad and include aspects like complications, prosthesis used and readmission rates? The Swedish ACL registry is doing better in terms of including outcomes that demonstrate how our patients are doing.

Tokish: That is correct. I think that is one of the aspects that we tried to build with our military outcomes registry. We have a large, homogeneous population with a single medical record and a single payer, in a population that is not very diverse (18 years to 49 years old and mostly men) who have to pass certain physical standards. We have an outcome measure of our own: return to duty and the ability to pass a yearly physical fitness test. If we can combine that population with validated outcome scores to get to the depth of how these patients do, then we can answer the questions of how we should treat these patients and determine the candidates for surgical vs. nonsurgical interventions.

Hawkins: Dr. Mohtadi, what do outcomes mean to you? How do you define them?

Nicholas G.H. Mohtadi, MD, MSc, FRCSC: I think in the transition from the registry to specific outcomes, one might say a registry provides a relatively low level of information, whereas a specific outcome measure gives us a higher level of information, particularly from the patient’s perspective, which is arguably what we should use to drive all of our agendas. There are a variety of different ways of measuring outcomes, but the focus should be the patient.

There are two ways to address outcomes. One is to have the patients determine what should be included in the outcome, or have the physician or surgeon determine what should be included in the outcome. Many of the existing outcomes are physician-determined or created vs. patient-derived or determined. Once we have decided on an outcome, if the patient reports it, then we call that a PRO. Ideally, we want to have something relevant to patients of a particular disease state, a particular joint problem or general health concern, and the outcome is derived from the patients and reported by the patients.

Hawkins: Are you suggesting that a PRO is probably the wave of the future?

Mohtadi: There has been a significant evolution in this process from so-called objective ways to measure outcomes, such as range of motion or an X-ray, to the patient-reported perspective. It is clear that in most of the things we do, there are correlations between objective measures and PROs, but they measure different things. If we are an insurance company or a government agency and we are trying to be responsive to the taxpayer or the patient, then we need to understand what is most important from the patient’s perspective, and therefore, measure it in that way.

Kuhn: To reflect on Nick’s points, when your patient comes into the office with an ACL tear he does not care what his KT-1000 is. He wants to know if he can go back to playing sports without his knee pivoting. Looking at the results of treatment from the patient’s perspective is more important to the patient than looking at it from the physician’s perspective.

Hawkins: If we do it from the patient’s perspective, then would we get better return rates and compliance from the people involved? When we start adding physician input, it seems data capture compliance would suffer.

Tokish: I think you are exactly right. It is far easier for patients to fill out and enter their own data, and it is also more valid. There have been a number of attempts at looking at what we would call compliance in this area. For example, the California Joint Registry is trying to capture PROs and is about 35% compliant in its initial data input, and the Norwegian and Scandinavian registries reported about 33% compliance in their initial published studies. I think we are living in an exciting time because the technology of web-based systems, where you can log in from your cell phone with an app and directly enter data at any time, may be a huge step forward.

Hawkins: Why should we collect outcomes?

Kuhn: We need to know that what we are doing is effective for our patients. In addition, government and payers want to know what they are paying for the care. Going forward it may be part of how we are compensated.

Hawkins: Let us talk about Patient Reported Outcomes Measurement Information System (PROMIS). This is a large, NIH-funded program. In the future, we in orthopedics may be more involved in that, but presently not so much. Are you familiar with the PROMIS program?

Kuhn: I am somewhat familiar with the PROMIS effort. I know that so far, orthopedic conditions have not been high on their radar. They are looking to systematically develop standard outcome measures that are patient-oriented for other medical conditions and probably will approach orthopedic conditions at some point in the future.

Hawkins: You are right. Orthopedically, there is only one foot and ankle study out of Utah that uses the PROMIS program. It has a computer-adaptive testing process that starts with a large bank of questions and pares them down to a manageable, small number that apparently work well to answer questions related to outcomes.

Tokish: One of the most exciting parts of PROMIS is that it employs the concept of computer-adaptive testing (CAT). One of the challenges with getting the data that we are talking about into these large-scale registries is survey fatigue. We have looked at our own data over more than 1 year and have found that the number of incomplete surveys is related to how many questions you ask the patient. The beauty of CAT is that it can decrease the time demands on patients. With CAT, one might get a higher compliance rate, and compliance is critical to patient-reported outcomes.

Kuhn: The other advantage of CAT is that it gets down to the patient’s complaint. When we talk about rotator cuff disease, for example, we talk about symptoms. Is it pain? Is it function? Is it weakness? These kinds of computer-adaptive systems will get down to what the patient’s complaints are, and then when the patient is re-evaluated after treatment, we can see whether the patient’s specific issues have been addressed.

Hawkins: An orthopedic surgeon might think, “Would it not just be academic centers that want to do research that should look at outcomes?” Nick, could you address why and who?

Mohtadi: I would say the two things that drive adherence to completing questionnaires, whether computer-based or on paper, are whether the questions are meaningful to the patient and whether the clinician reviews the information with the patient. If those two principles are followed, then adherence or compliance with filling in questionnaires is higher and more valuable. When it comes to doing research or regular practice and how to utilize outcomes, the key is to use instruments that are generic enough that most patients can fill them out, but important enough to the patient population of interest. So if you are a practitioner who only does shoulder surgery, then you could pick some generic shoulder questionnaires for your patients and all of the important attributes would be addressed. If you have a more general orthopedic practice, then you would need a more generic, health-related or musculoskeletal-related outcome measure for your patients to fill out. And if you are moving toward doing research, then you want to use outcomes that are more sophisticated and typically more disease-specific.

Hawkins: Is it fair to say that on the research side of things, we may use a more sophisticated scoring system and we may require more data? If we are analyzing a rotator cuff problem, then we want to know from the operative report how big was the tear and how many anchors were used. Did we cut the biceps? Of course, we would. Did we take out the AC joint? So we get into more data requirements for research purposes that obviously involve a physician.

Mohtadi: Then, of course, that is a different purpose than a PRO. All of the different ways of evaluating a patient, particularly from the research perspective, have their own relevance and importance whether they are objective measures, whether they are information from an operative report, or whether they are a patient-reported, disease-specific outcome.

Hawkins: Dr. Kuhn, do you have some thoughts on the research aspects of outcome?

Kuhn: With research, the data you collect is dependent upon the question that you are trying to answer. Identifying the outcome measure to use is easy because if you have a specific question, for example, if your research question is “Does my new technique help rotator cuff repairs heal?” the data you want to collect would be an MRI scan or an ultrasound that shows that, yes, it improves healing. The challenge is what does the doctor in practice do? What data should he or she collect on patients? What is important? There are many outcome measures for different orthopedic conditions and it is difficult to choose which score to use.

Hawkins: We are going to address how the general practicing orthopedic surgeon can think about outcomes, which ones he or she might collect and how to go about collection. An important part of our discussion is to understand how we create a scoring system, how we validate it and what kind of format for the questions. Nick, you are certainly experienced with this. Can you tell us the best way to create a scoring system that turns out to be valid?

Mohtadi: The most important concept when creating a scoring system or an outcome measure is to generate items or questions that are relevant and important to the patients. That is arguably the most critical step. Typically, when one does that, you end up with somewhere between 50 and 200 items that may be relevant to patients with that particular complaint or disease. That number of items is too many to ask on a regular basis, so you then have to reduce the number of items, and there are various techniques to do that. This is called item reduction. You then have to ensure that the items or questions are reliable. In other words, if you fill in the questionnaire one day and again 2 weeks later, are you going to get the same result? The items must also discriminate among patients. For example, if all patients who answer a particular question score very high on it, or all score very low, those are called ceiling or floor effects, and we want to eliminate questions of that nature.
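The floor/ceiling screen described here can be sketched as a simple check on response distributions. The following is an illustrative sketch only: the data, function name and the 80% cutoff are hypothetical (published work often flags items where roughly 15% to 20% of respondents sit at either extreme), and this is not the methodology of any particular instrument.

```python
def flag_floor_ceiling(responses, scale_min=1, scale_max=5, threshold=0.8):
    """Flag an item if too many respondents give an extreme answer.

    responses: integer answers to one candidate item.
    threshold: hypothetical fraction at an extreme that triggers the flag.
    """
    n = len(responses)
    at_floor = sum(r == scale_min for r in responses) / n
    at_ceiling = sum(r == scale_max for r in responses) / n
    return at_floor >= threshold or at_ceiling >= threshold

# Hypothetical item where nearly everyone answers 5 (a ceiling effect):
item_a = [5, 5, 5, 4, 5, 5, 5, 5, 5, 5]
# Hypothetical item with spread-out answers (retains discriminating power):
item_b = [1, 3, 2, 5, 4, 3, 2, 4, 1, 5]

print(flag_floor_ceiling(item_a))  # True  -> candidate for removal
print(flag_floor_ceiling(item_b))  # False -> retain
```

In practice, item reduction also weighs test-retest reliability and patient-rated importance, not just extreme-response rates.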

Once you have followed that process, you then add a scale to each question. That could be a simple binary yes/no scale. It could be a Likert scale with responses from one to five or one to seven. Or it could be a Visual Analog Scale, where patients put a slash on a 100-mm line, so the score is out of 100.

Hawkins: Would it be fair to say that if we establish a scoring system and have it score out of 100, with 100 being best, physicians would easily be able to interpret and understand it?

Mohtadi: That is what we have always tried to achieve because it makes it easier for everybody to follow. When the original Visual Analog Scales were developed, they were developed to measure pain, and pain is a bad thing, so more pain means a higher score. Most of our quality of life questionnaires, on the other hand, consider quality of life to be a good thing, and therefore, a higher score is a better score. So we have to be cognizant that some things are better represented at one end of the scale or the other, and if we are using a 100-point scale or measuring things out of 100, we have to be consistent about the convention of whether 100 is the better or the worse score, depending upon our purpose.

Hawkins: If we take, for example, a Likert score, which is on a one to five scale, i.e., unable to do, difficult to do, etc. with five levels, we have to convert these answers to a numerical system for clarity. So we try to convert those to eventually end up with a score out of 100 with 100 being the best. That seems, to me, the ideal scoring system.

Mohtadi: That is typically the way it has been done. I have chosen to measure out of 100 so we do not have to do any conversions.
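The conversion discussed above, taking a one-to-five Likert response and expressing it out of 100 with 100 as the best, is a simple linear rescaling. A minimal sketch follows; the function name and defaults are illustrative and not taken from any published scoring system:

```python
def likert_to_100(answer, levels=5, high_is_best=True):
    """Linearly rescale a 1..levels Likert answer to a 0-100 score.

    With high_is_best=True, the top response maps to 100, matching
    the convention that 100 represents the best outcome.
    """
    score = (answer - 1) / (levels - 1) * 100
    return score if high_is_best else 100 - score

print(likert_to_100(5))  # 100.0 (best response on a 5-level item)
print(likert_to_100(1))  # 0.0   (worst response)
print(likert_to_100(3))  # 50.0  (midpoint)
```

A multi-item score would then typically average the rescaled items, so the total also runs from 0 to 100 with no further conversion needed.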

Hawkins: For item generation and item reduction, this would represent a fairly sophisticated system to address for example, a disease-specific problem such as rotator cuff disease. One aspect with the American Shoulder and Elbow Surgeons’ (ASES) Score, which we will talk about for a shoulder, is that a group of us sat down and made up some questions and then decided this is what we should ask our patients. We never directly asked or tested it on patients. Yet, if you look at the score and its psychometrics, it comes out well. How would we validate a scoring system? Nick, please define validation for us.

Mohtadi: Validation refers to the properties (psychometrics) of that particular score or questionnaire. Is it reliable, in other words, reproducible from one point in time to another, assuming the patient’s disease status does not change? Is it responsive? In other words, if somebody does change, say they have had an operation and by all accounts are doing better, then a responsive questionnaire or outcome measure should reflect that; or, oppositely, it should reflect that the patient is doing worse. There are various properties of a questionnaire that would suggest that one questionnaire or another may be more or less responsive. That is important if we are trying to measure outcome over time or evaluate patients over time, and in particular if we are doing randomized trials.

The other concept of validation is whether the questionnaire is valid. In other words, does it measure what it purports to measure? That is what you alluded to in your example: the ASES put down items that seemed valid and made sense to surgeons, and in fact, when the score has been examined from the patient’s perspective, these are reasonably valid items as well. So the basic concept of validity is, “does it look the way it is supposed to?” Then we can measure it against other things, for example by comparing it to other scores of a similar nature, and thereby develop a construct to measure against. These things are all termed the psychometrics of the outcome measure.

Hawkins: Is it fair to say that as we analyze an outcome score or measure, we would want to know about the psychometrics that you just described? Is validity a yes/no answer or is there a grey zone?

Mohtadi: Typically, validity is not a yes/no answer with PROs because there is no gold standard to compare against. You cannot give it a yes/no answer or a single number that applies to all outcome measures in order to compare one to the other.

A note from the editor

Look for part 2 of this Round Table discussion in the July issue of Orthopedics Today.

  • Richard J. Hawkins, MD, can be reached at Steadman Hawkins Clinic of Carolinas, 200 Patewood Dr., Suite C 100, Greenville, SC 29615; email: rhawkins2@ghs.org.
    John E. Kuhn, MD, can be reached at Vanderbilt Univ Medical Center, Medical Center East, South Tower, 1215 21st Ave. South, Suite 3200, Nashville, TN 37232; email: j.kuhn@vanderbilt.edu.
    Nicholas G.H. Mohtadi, MD, MSc, FRCSC, can be reached at University of Calgary Sports Medicine Centre, 2500 University Dr. NW, Calgary, Alberta, T2N 1N4, Canada; email: mohtadi@ucalgary.ca.
    John M. Tokish, MD, can be reached at Tripler Army Medical Center, 1 Jarrett White Rd., 4th Floor MCHK Tripler Army, Medi, HI 96859; email: jtoke95@aol.com.
    Disclosures: Hawkins, Kuhn, Mohtadi and Tokish have no relevant financial disclosures.