Statistics for clinicians
Clinicians are bombarded with new information daily. Early in training we learn to remember the results of landmark studies, and to recite them ad infinitum. Pharmaceutical companies give us copies of their latest positive studies. Of course, they usually neglect to mention studies with negative or equivocal results.
I must admit: biomedical statistics was one of my least favorite courses in school. "I am here to learn how to care for patients, not to be a mathematician!" I naively thought. It was a few years before I realized the error of my thinking.
Now, I encourage all students and residents to think critically when reading journal articles. "Don't leave it up to the experts to tell you what a study means; make up your own mind!" I tell them. Without a basic understanding of statistics, it is impossible to read a scientific paper critically.
Some questions to ponder:
- Was the group studied similar to my own patient population? I would be very cautious about applying the results to my own practice if the study was performed on a different population.
- Were the control and treatment groups comparable? Baseline differences in demographics, prognostic variables, therapies and other characteristics will affect the results.
- What type of statistical analysis was used? Analyses inappropriate for the data can give inaccurate results, and the use of an obscure, little-used test raises questions.
- Were the data analyzed according to the original study protocol? Retrospective analyses can yield interesting conclusions, but these are not always correct.
- What about non-responders, withdrawals and outliers? Outliers may represent individual variation, or they may be due to errors in data acquisition, interpretation or other causes. Whether non-responders and dropouts are included in or excluded from the analysis will affect the results.
- What about P values and confidence intervals? A P value < .05 is considered statistically significant. However, even with P < .01, do not forget that there is still a 1 in 100 chance of a result appearing significant when it is not. On the other hand, a non-significant P value could mean either that there was no difference between groups or simply that too few patients were studied to detect one. Confidence intervals help in judging the strength (or weakness) of a particular study; a short sketch follows this list.
- Were appropriate conclusions made based on the results? Association does not prove causation.
- Even if the results are statistically significant, are they clinically significant? At first glance, a 50% decrease in relative risk may seem like a lot, but in a population at low baseline risk the absolute benefit may be small. Think about absolute risk, not only relative risk (see the worked example after this list).
- Will the potential benefits of therapy outweigh cost, safety and other issues? This should be obvious but is too often ignored.
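To make the point about confidence intervals concrete, here is a minimal Python sketch (the trial counts are entirely hypothetical) that computes an approximate 95% Wald interval for the absolute risk difference between two groups:

```python
import math

def risk_difference_ci(events_treat, n_treat, events_ctrl, n_ctrl, z=1.96):
    """Approximate 95% Wald confidence interval for the absolute risk
    difference between two groups (normal approximation)."""
    p_t = events_treat / n_treat
    p_c = events_ctrl / n_ctrl
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical trial: 12/200 events on treatment vs 24/200 on control.
diff, (lo, hi) = risk_difference_ci(12, 200, 24, 200)
print(f"risk difference = {diff:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# An interval that excludes zero corresponds roughly to P < .05;
# a wide interval straddling zero often just means too few patients were studied.
```

The normal approximation is rough when event counts are small; this is a sketch of the idea, not a substitute for a proper statistical package.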
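Similarly, the gap between relative and absolute risk is easiest to see with numbers. The sketch below uses made-up baseline risks to show how the same 50% relative risk reduction translates into very different absolute benefits and numbers needed to treat:

```python
def risk_summary(baseline_risk, relative_risk_reduction):
    """Translate a relative risk reduction into absolute terms.
    Returns (absolute risk reduction, number needed to treat)."""
    treated_risk = baseline_risk * (1 - relative_risk_reduction)
    arr = baseline_risk - treated_risk
    nnt = 1 / arr
    return arr, nnt

# The same 50% relative risk reduction in two hypothetical populations:
for baseline in (0.20, 0.002):   # 20% vs 0.2% baseline risk
    arr, nnt = risk_summary(baseline, 0.50)
    print(f"baseline {baseline:.1%}: ARR = {arr:.2%}, NNT = {nnt:.0f}")
# baseline 20.0%: ARR = 10.00%, NNT = 10
# baseline 0.2%: ARR = 0.10%, NNT = 1000
```

Halving the risk sounds equally impressive in both populations, yet in the low-risk one you would need to treat about a thousand patients to prevent a single event.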
This is only a brief synopsis of what one might think about when reading the literature. Entire books and articles have been published on this subject. I have found the following reference useful:
Greenhalgh T. How to read a paper: Statistics for the non-statistician. BMJ. 1997;315:364-366, 422-425.