March 01, 2005

How to integrate evidence-based medicine into clinical practice

Murray Fingeret, OD, is chief of the optometry section at the Department of Veterans’ Affairs Medical Center in Brooklyn and Saint Albans, N.Y., and a professor at SUNY College of Optometry. He is also a member of the Primary Care Optometry News Editorial Board. He may be contacted at St. Albans VA Hospital, Linden Blvd. and 179th St., St. Albans, NY 11425; (718) 526-1000; fax: (516) 569-3566; e-mail: murrayf@optonline.com. Dr. Fingeret has no direct financial interest in the products mentioned in this article, nor is he a paid consultant for any companies mentioned.

Today’s optometrist faces a major challenge in assimilating new information into clinical practice. Does one believe a speaker or author unconditionally because he or she said it from the podium or published it? With the growth of the Internet along with a plethora of journals and trade magazines, one is besieged with new information.

On a monthly basis, something new occurs in the field of glaucoma. Examples include the introduction of a new class of medications, the development of new drugs within the same class and the discovery of new uses for existing medications. Should one switch to the new agent? What resources should a clinician use in evaluating the information? In addition to the sales representative and advertising, what other options are available?

On the diagnostic front, new tests are being developed, new instruments or software arise that are meant to replace older versions and new philosophies of care evolve, such as when ocular hypertension should be treated. Each of these situations requires the clinician to assess its importance and determine whether it should be incorporated into practice.

Case reports used as validation

Cases and experience are often used to validate practice patterns as clinicians, authors and lecturers attempt to justify their positions. Optometry is not the only profession in which management decisions have been “validated” on the basis of clinical cases and a gut feeling; medicine has seen this same problem grow.

In response, a movement to modify how the clinician uses information and makes decisions is evolving. The concept is to integrate evidence-based medicine principles into clinical practice and use them to validate new ideas or concepts. As the name implies, evidence, drawn from published and unpublished studies along with other information, is used to guide the clinician in making management decisions.

Understanding data, process

Evidence-based medicine revolves around the concept that the clinician appraises the data and understands the investigative process or pathophysiology, depending upon the type of work being evaluated. The decisions challenging the clinician may vary, such as which is the best imaging device, perimetric instrument or drug.

Evidence could come in different forms. By reviewing literature in peer-reviewed and trade journals as well as discussing the information with colleagues, clinicians should be better able to recognize which concepts, medications or instruments best meet their needs and their patients’ needs.

Levels of proof

The use of evidence-based medicine starts with the clinician pursuing “evidence” to understand and validate a new concept, instrument or medicine. There are different levels of “proof,” with the randomized, controlled clinical trial being the strongest (at least for a therapeutic intervention) and case series or case reports the weakest. In between, in descending order of significance, are controlled, cohort, case-control and cross-sectional studies.

The randomized controlled clinical trial randomizes patients to a treatment group and is performed in a prospective fashion. The cohort study assigns patients to a group based upon a particular characteristic and then follows these individuals forward. It is done in a prospective fashion, but the different groups must be carefully matched to avoid bias. The case-control study assigns patients to different groups based upon a previous event or characteristic and then compares them retrospectively. Case reports and anecdotal evidence illustrate certain principles based upon findings present in several individuals. They are prone to bias but serve a role by flagging issues that may require further study.

FDA requirements

New instruments have been marketed without sufficient evidence. The burden of proof required for an instrument to be approved for sale by the Food and Drug Administration is not the same as that needed to establish its scientific integrity. This is different from the approval of medications by the FDA.

Instruments have been developed based upon in-house evaluations and sold with scant published studies to validate the results. Clinicians can be fooled by advertisements that look great and appear credible, conveying the image that the instrument works. Using an evidence-based medicine approach will allow clinicians to recognize which new instruments offer marked improvements in clinical practice.

Analyze study merits

The merits of studies, whether prospective or retrospective, need to be analyzed. Who are the author(s), and are they credible? In what journal is the study published? Is the journal peer-reviewed? If in a trade journal that is summarizing results rather than providing original information, are references provided so the reader can pursue this area if desired? Have all the data been made available to the public to analyze and develop conclusions? Did the company whose product is being analyzed fund the study?

While excellent studies have been performed with grants provided by the company in question, this needs to be recognized by the reader. In addition, a study needs to be analyzed as part of a greater body of work. If the results are new or different, they need to be confirmed.

Scrutinize validity

As a study is reviewed, the concept of validity needs to be scrutinized. In regard to study design with therapeutic interventions, was masking used to avoid introducing a bias? Studies can be single masked (i.e., one party, typically the patient, does not know which therapy is being given) or double masked (neither the patient nor the doctor is aware of the therapy).

Double masked is the stronger model, but it is not always possible. Certain side effects or complications of a specific drug being studied may tip off the clinician as to the arm in which the patient was enrolled. For example, a clinician could easily detect if a patient were taking pilocarpine because of its characteristic miosis.

Was the sample size large enough to provide sufficient statistical power? A study that attempts to show that one instrument is better than another but includes only 10 patients would be open to scrutiny.
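The question of adequate sample size can be checked with a back-of-the-envelope calculation. Below is a minimal sketch of the standard normal-approximation sample-size formula for comparing two group means; the intraocular pressure figures are illustrative assumptions, not drawn from any particular study.

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Patients needed per group to detect a mean difference `delta`
    between two groups with common standard deviation `sigma`,
    using the two-sided normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical example: to detect a 2 mm Hg difference in IOP
# when the standard deviation is 3 mm Hg, each arm needs:
print(n_per_group(delta=2.0, sigma=3.0))  # 36 per group
```

By the same formula, a 10-patient study could reliably detect only very large differences, which is why such small samples invite scrutiny.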

Are study populations similar at baseline? In a study of how well an eye drop works, a group that is older or otherwise different at baseline may skew the outcome. If a drug is being evaluated, was the washout period adequate to avoid bias? Because most drug studies enroll patients already taking glaucoma medication(s), it is important that the existing drug be washed out of the eye before new ones are introduced.

Were the outcome measures able to detect changes in the endpoint or differences between groups? Was the outcome measure appropriate? Was the monitoring sufficient, with adequate follow-up to show the desired effect? If a software package used in a perimeter is being analyzed for the detection of glaucomatous progression, using mean deviation (MD) may not be the best outcome measure, as cataracts also affect it. In this case, it may be difficult to ascertain whether glaucoma or cataract caused the outcome measure to worsen.

Scrutinize data analysis, presentation

The next area for the clinician to review is data analysis and presentation of results. Were patients analyzed according to initial randomization? Were results for drug studies done on an intent-to-treat population? Were results based upon a retrospective or post-hoc analysis? Did analysis that created secondary endpoints find differences when primary endpoints did not? Data can be analyzed in different ways, and results can be derived when only certain points are evaluated.

Take-home pearls

  • Analyze the merits of the study.
  • Consider study funding.
  • Evaluate the masking method.
  • Look at the sample size and study population similarities.
  • Gauge the appropriateness of the outcome measure.
  • Determine the statistical and clinical significance of the results.

Diagnostic tests are often presented using sensitivity and specificity. Sensitivity is the proportion of individuals with glaucoma in whom the test being scrutinized is positive, while specificity is the proportion of normal individuals in whom the test is negative. Sensitivity and specificity values must be interpreted in light of the study population.
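These definitions reduce to simple proportions from a two-by-two table. A minimal sketch in Python, with made-up counts for illustration:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of diseased individuals the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of normal individuals the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical results: 50 glaucoma patients (45 test positive)
# and 100 normal individuals (90 test negative).
print(sensitivity(45, 5))    # 0.9
print(specificity(90, 10))   # 0.9
```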

For example, a diagnostic test evaluated in a study population of individuals with advanced glaucoma will show excellent sensitivity results. The sensitivity results will diminish if individuals with early glaucoma are evaluated, because there is greater overlap between normal and glaucomatous individuals in these groups. Also, it is difficult to compare sensitivity and specificity between different studies because the populations may be different.

A term often used instead of sensitivity and specificity is the likelihood ratio (LR). The likelihood ratio examines the probability that a person with glaucoma would have a particular test result divided by the probability that a person without glaucoma would have that test result.
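For a dichotomous (positive/negative) test, the likelihood ratios follow directly from sensitivity and specificity. A short sketch, continuing with the same hypothetical 90%/90% figures:

```python
def positive_lr(sens, spec):
    """LR+: how much a positive result raises the odds of disease."""
    return sens / (1 - spec)

def negative_lr(sens, spec):
    """LR-: how much a negative result lowers the odds of disease."""
    return (1 - sens) / spec

# With 90% sensitivity and 90% specificity, a positive result is
# nine times more likely in a glaucoma patient than in a normal one.
print(round(positive_lr(0.90, 0.90), 2))  # 9.0
print(round(negative_lr(0.90, 0.90), 2))  # 0.11
```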

Evaluating conclusions

The next area of review is the conclusions. Are they based upon predefined primary or secondary endpoints or on differences that are significant? Were the conclusions and results applicable to patients that would be seen in clinical practice? Can the results be transferred into most practices? Are the instruments or treatments practical, current, supportable and clinically relevant? Were the results statistically and clinically significant?

The inclusion of new information into clinical practice is part of the everyday routine for optometrists. At times, new information conflicts with pre-existing opinions, requiring that decisions be made. Evidence-based principles provide one pathway for clinicians to effectively integrate new information and weigh which information is most significant and should become part of everyday clinical practice. Evidence-based medicine asks that each individual form his or her own opinion, based upon published and unpublished studies, lectures, conversations with colleagues, editorials and even columns. Each of these will play a role as an opinion is developed, with different weight given to each.