April 01, 2009

Evidence-based oncology and comparative effectiveness research


There’s been a lot of discussion about these topics lately, and about something somewhat related: the $200 million or so allocated to “Challenge Grants” as part of this year’s national spending and investment in health research. As usual with national, government-related topics, plenty of commentators have used these initiatives to concoct wildly exaggerated scenarios: either “finally giving doctors the information they need” to practice “evidence-based oncology,” or, on the other hand, “giving government the right to make decisions about your care that only your doctors should be making.”

Of course, the real world is much messier and more ambiguous than any of that. I recently read a brief article in the Journal of Oncology Practice, “Comparative Effectiveness: Its Origin, Evolution and Influence on Health Care,” that I would recommend to anyone looking for a quick primer on the topic. One commentator cynically, though perhaps accurately, points out that, practically speaking, comparative effectiveness research (CER) is attractive to politicians because it sounds like a good thing to do, and attractive to health services researchers because it opens up another potential funding source.

On that topic, in fact, there are rumors that some universities are directing their faculty to submit one or two grant applications per researcher to compete for some of these funds (the Challenge Grants generally, rather than CER specifically, in fairness). Some estimate that the funds will be especially difficult to obtain because many topics appear to have been defined with specific projects and researchers already in mind. As the JOP article points out, the data used for many of these analyses will come from existing sources, usually not randomized trials or other gold-standard evidence, so most studies will yield ambiguous or less-than-certain conclusions at best.

So what will really make a difference in the end? Some say it will take a fundamentally different infrastructure, a different way of collecting data, perhaps even a national institute akin to the U.K.’s National Institute for Health and Clinical Excellence (NICE). Others say the real issue will be using health information technology to build and enhance clinical decision support around evidence we already believe to be true. How far will $1.1 billion (this year, anyway) go? Time will tell.