Pharmacogenetics may improve warfarin dosing in outliers
Compared with fixed-dose and clinical models, a pharmacogenetic algorithm improved dosing accuracy among patients requiring the lowest and highest doses.
Incorporating genetic information into clinical algorithms may help clinicians better determine optimal doses of warfarin for patients who require doses higher and lower than the mean, according to data from the International Warfarin Pharmacogenetics Consortium.
According to the researchers, their pharmacogenetic model produces initial dose recommendations that are closer to the required therapeutic dose than those obtained from a strictly clinical algorithm or a fixed-dose approach. The results were published in The New England Journal of Medicine.
Given the known relationship between the genes CYP2C9 and VKORC1 and warfarin dose requirements, the researchers hypothesized that an algorithm using both clinical and genetic data could better estimate the appropriate warfarin dose.
To test their algorithm on a large population base, the consortium, which consists of 21 research groups from nine countries on four continents, pooled clinical and genetic data for patients treated with warfarin. Information on demographic characteristics, genotype combinations, race and use or nonuse of amiodarone, as well as initial and optimized warfarin doses, was included.
Data from 4,043 patients were used to create one dose algorithm based on clinical data only and another that added genetic information to the clinical variables. An additional 1,009 patients made up a validation cohort used to calculate, for each algorithm, the percentage of patients whose predicted dose fell within 20% of the actual stable therapeutic dose. Dose predictions from three models were assessed: the pharmacogenetic model, a clinical model with no genetic factors and a fixed-dose model of 5 mg of warfarin per day.
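For readers who want to see how this accuracy measure works, the sketch below shows one way to compute the within-20% metric. It is a minimal illustration assuming arrays of predicted and actual stable weekly doses; the dose values are hypothetical, not data from the study.

```python
import numpy as np

def within_20_percent(predicted, actual):
    """Fraction of patients whose predicted dose falls within
    20% of the actual stable therapeutic dose."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean(np.abs(predicted - actual) <= 0.20 * actual)

# Hypothetical weekly doses (mg/week), for illustration only.
actual = np.array([14.0, 35.0, 52.5, 28.0])
fixed_dose = np.full_like(actual, 35.0)  # 5 mg/day = 35 mg/week
print(f"fixed-dose accuracy: {within_20_percent(fixed_dose, actual):.0%}")
```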
Estimating optimal doses
To develop the pharmacogenetic algorithm, the researchers used an ordinary least-squares linear regression method that predicted the square root of the dose, which they deemed to be the best approach. Minimizing the squared error in the prediction of the square root of the dose effectively minimizes the mean absolute error, the researchers wrote. They also developed a clinical algorithm using this approach.
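As a rough sketch of this square-root-transform approach, one could fit an ordinary least-squares model to the square root of the stable weekly dose and square the prediction to recover mg per week. The covariates and values below are hypothetical placeholders, not the consortium's actual predictors or coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical covariates (age, CYP2C9 variant carrier, amiodarone use);
# placeholders only, not the consortium's actual predictors.
X = np.array([
    [63, 1, 0],
    [48, 0, 0],
    [71, 1, 1],
    [55, 0, 1],
])
weekly_dose = np.array([21.0, 42.0, 14.0, 35.0])  # mg/week, illustrative

# Fit OLS to the square root of the dose, then square the
# prediction to return to the original mg/week scale.
model = LinearRegression().fit(X, np.sqrt(weekly_dose))
predicted = model.predict(X) ** 2
mae = np.mean(np.abs(predicted - weekly_dose))
print(f"mean absolute error: {mae:.1f} mg/week")
```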
When the models were compared, the mean absolute error was lower for the pharmacogenetic algorithm (8.5 mg per week) than for both the clinical (9.9 mg per week) and fixed-dose (13.0 mg per week) models.
The researchers reported that the clinical algorithm tends to predict doses near the population average, which may often overestimate or underestimate an individual patient's dose. Additionally, adding genetic data to the clinical model altered the predicted dose and suggested that racial differences in dose requirements are explained by genotype.
The performance of each algorithm was analyzed according to dose: low (≤21 mg per week), intermediate (>21 mg and <49 mg per week) and high (≥49 mg per week). In the low-dose and high-dose groups, the pharmacogenetic algorithm was superior at predicting doses that fell within 20% of the actual dose compared with both the clinical (P<.001) and fixed-dose (P<.001) models. Dose prediction accuracy was similar among all three models in the intermediate-dose group.
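These strata reduce to a simple threshold rule on the stable weekly dose; the function below is an illustrative encoding of the cutoffs reported above, not code from the study.

```python
def dose_category(weekly_dose_mg: float) -> str:
    """Assign the study's weekly-dose stratum: low (<=21 mg),
    intermediate (>21 and <49 mg) or high (>=49 mg)."""
    if weekly_dose_mg <= 21:
        return "low"
    if weekly_dose_mg < 49:
        return "intermediate"
    return "high"

# Per-group accuracy is then tabulated by stratifying on the
# actual stable dose, as the investigators did.
print(dose_category(14.0), dose_category(35.0), dose_category(52.5))
```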
In the low-dose group, using the pharmacogenetic algorithm resulted in fewer dose overestimations (59.7%) compared with the clinical (74.8%) and fixed-dose models (100%). The pharmacogenetic algorithm also yielded fewer underestimations in the high-dose group (66.7%) compared with the clinical (86.2%) and fixed-dose models (100%).
The pharmacogenetic algorithm correctly predicted low doses for 54% of patients who required low doses, compared with 33% with the clinical algorithm. It also correctly predicted high doses for 26% of patients who required them, compared with 9% with the clinical algorithm. Patients who required high or low doses accounted for 46% of the entire cohort.
Future steps to more accurate dosing
On the heels of these data comes a prospective, multicenter, randomized trial from the NIH in which researchers will examine whether a gene-based model for prescribing the initial dose of warfarin improves patient outcomes. The Clarification of Optimal Anticoagulation through Genetics (COAG) trial is scheduled to begin in April 2009, and the researchers plan to enroll 1,200 patients of various backgrounds and ethnicities.
The researchers will assess two approaches to determining the initial warfarin dose in patients who are anticipated to need the drug for three months or longer. According to a press release, the dosing strategy will be similar to that developed in the International Warfarin Pharmacogenetics Consortium study. About half of the patients will be randomly assigned to have their initial dose determined by clinical information alone; the other half will have their dose determined using clinical and genetic information, specifically CYP2C9 and VKORC1 variants. Patients will be monitored for six months.
Other outcomes, such as bleeding problems, complications, quality of life and the cost of therapy, will also be reviewed.
In an accompanying editorial, Janet Woodcock, MD, director of the Center for Drug Evaluation and Research at the FDA, and Lawrence J. Lesko, PhD, FCP, director of the Office of Clinical Pharmacology and Biopharmaceutics in the Center for Drug Evaluation and Research, said the potential benefits of pharmacogenetics exist among patients whose drug responses are not average.
"Given the expected volume of genetic information and the relative paucity of randomized, controlled trials involving marketed drugs, we need clear thinking about what is required for the adoption of pharmacogenetic testing," they wrote.

- by Stacey L. Adams
For more information:
- The International Warfarin Pharmacogenetics Consortium. N Engl J Med. 2009;360:753-764.
- Woodcock J, Lesko LJ. N Engl J Med. 2009;360:811-813.