February 25, 2009

Testing and teaching


Yesterday, I took my oncology “In-training examination” (ITE), an annual fellowship event. I’ve taken my share of standardized exams at this point in my career, having become board certified in both internal medicine and pediatrics (and having taken several ITEs for each, through the years). This is of course not to mention Steps 1, 2, and 3 … or the MCATs … the SATs … I could go on.

ITEs always feel a little different from the real tests for certification, mostly because I inevitably feel underprepared. I typically don’t go through a lot of preparatory self-study before ITEs, and I’m usually tired and otherwise busy in the middle of one clinical rotation or another. And so when I take an ITE, it’s a little more of a two-way interaction between myself and the test; I demonstrate what I know, and the test, through the intuitive lessons contained within its question stems and answers, teaches me a little of what I don’t know.

Maybe it was for this reason that I looked at yesterday’s ITE questions with a little extra curiosity, wondering what I’d learn as I went along. I know that I’m not supposed to share specifics of the test, so I’ll mention a few generalities about the types of lessons I learned. I wonder if it would be over-generalizing to suppose that, as an examinee, I was meant to take away some of these lessons and incorporate them into my approach to oncology practice.

I learned that with few exceptions, I should treat, treat, and treat. In a few situations, I was presented with a patient’s poor performance status and allowed to choose “supportive care” or “hospice referral,” but this was uncommon. There were, as far as I can remember, no instances in which I was asked to estimate and share a prognosis with a patient, to incorporate a patient’s values or preferences into treatment planning, to consider the costs of one choice vs. another, or to think about the overall costs of care. I felt that, for the most part, I was to treat first and ask questions later.

And I learned a little bit about new therapies. In one instance, I was asked to select a choice that (at least in my mind) alluded to a newly approved supportive care medication that had been profiled in a New England Journal of Medicine article within just the last year. In another, I was asked to recognize an appropriate laboratory safety monitoring plan for a targeted biologic drug being given to a patient who happened to be already dying of widespread disease. And in one last question, I was asked to think about how much of a pre-existing condition was too much (and how much was not enough) in deciding whether the condition was really a contraindication to prescribing another targeted biologic. In some cases I felt a little bit like I was attending a CME event or reading a drug label.

I should point out that there was a lot more to the test than this. There were some bread-and-butter principles tested, and I felt that my competencies were, for the most part, fairly assessed. But the examples above do raise the question: what do these types of lessons, conveyed and learned over the course of a fellow’s in-training exam, say about the broader themes the profession is trying to teach? And how might these lessons change in the world of comparative effectiveness research and practice that looms ahead?