Balancing ESA and iron therapy in a prospective payment environment
This article is part of a special supplement published by Nephrology News & Issues in the February 2014 issue entitled, "Iron therapy and a quarter century of ESAs: What have we learned?"
The treatment of anemia with erythropoiesis-stimulating agents (ESAs) and iron in patients with end-stage renal disease requiring dialysis is complicated by uncertain hemoglobin (Hb) targets, the inability to adequately assess iron status, and the potential adverse side effects of both agents. In the absence of definitive evidence-based guidelines for the use of iron and ESAs, the dosing of these drugs has been driven more by economics than by pharmacology. Until the implementation of the prospective payment system by Medicare in 2011, ESA and intravenous (IV) iron administration represented the major profit center for dialysis units. According to the 2010 United States Renal Data System report, for the years 2004–2008 the costs for outpatient ESA and IV iron were $7.8 billion and $1.0 billion, respectively.
From its earliest use in dialysis units, ESA dosing was driven by Medicare reimbursement policy. Recombinant human erythropoietin (EPO) was approved for the treatment of anemia by the U.S. Food and Drug Administration in 1989. In June 1990, Medicare established a fixed reimbursement rate of $40 for each administration of EPO, regardless of the amount of drug given. It is reported that this reimbursement rate was calculated by dividing the annual Medicare amount budgeted for EPO reimbursement by the expected number of EPO administrations. The assumption was that approximately 90% of dialysis patients would receive three doses of EPO weekly. Dialysis units calculated that, given the price of EPO at the time, they could break even by administering about 3,000 units per treatment.
After several months, most dialysis units found themselves drastically decreasing the EPO dose to avoid excessively high Hb concentrations, which were rarely seen previously in dialysis patients. As a result, the average EPO dose decreased from 3,000 to about 1,000 units per treatment. Since Medicare reimbursement for EPO was fixed, dialysis units began experiencing their first EPO-driven windfall profits by giving small EPO doses frequently. Coincidentally, transfusion rates sharply declined.1
In January 1991, Medicare reimbursement policy for EPO changed from a fixed price per administration to a fixed price per 1,000 units. Dialysis facilities recognized the potential to maximize income over expenses by administering larger doses of EPO less frequently. The new reimbursement rules, combined with lucrative volume price discounts on EPO, led to a gradual increase in the amount of EPO given. Six years later, the first practice guidelines for the treatment of anemia in CKD were published.2 However, little evidence was available to support these recommendations, and they were largely opinion-based. Almost 10 years after EPO was introduced, large studies were undertaken to investigate the concept of Hb normalization in CKD and ESRD patients. These studies revealed unintended harmful effects of unrestricted EPO dose escalation.3
The findings were largely ignored until it became clear that reimbursement for EPO would be folded into a bundled composite rate and ESAs would become a cost center for dialysis units rather than a source of profit. Around that time, additional compelling evidence for decreasing ESA usage was discovered, prompting the FDA to add a black box warning of increased risk of death and eventually to revamp the package insert for ESAs altogether.4 Finally, after 25 years, the economic drivers of ESA use and the evidence base for ESA dosing in ESRD patients are concordant: “Use the least ESA dose possible to achieve a Hb level sufficient to avoid transfusions.”
The need for iron
When EPO was first introduced to correct the severe anemia seen in hemodialysis patients, iron supplementation was rarely needed, because most dialysis patients were iron overloaded from multiple transfusions. However, very soon after the Hb level was increased, iron stores were consumed and iron losses accelerated. Blood lost at an Hb level of 12 g/dL carries twice as much iron as blood lost at an Hb level of 6 g/dL. When iron deficiency ensued, parenteral iron replacement was judiciously provided in dialysis units as 1,000 mg bolus doses of iron dextran. Although iron dextran was inexpensive, a small profit was maintained by billing Medicare separately, as with EPO. When iron gluconate and iron sucrose became available, two corporate sales forces successfully touted the safety of their iron supplements over iron dextran. The problem was that these new iron suspensions contained such small molecules that, to avoid adverse reactions to labile iron, much smaller amounts had to be given in each dose. As a result, frequent small doses were given, and the era of iron maintenance therapy was born.
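The arithmetic behind the doubled iron loss can be sketched in a few lines. The constant of roughly 3.4 mg of elemental iron per gram of Hb is a commonly cited approximation; the function and numbers below are an illustration, not a clinical tool.

```python
# Rough estimate of iron lost with a given volume of blood at a given Hb.
# 3.4 mg iron per gram of Hb is an approximate physiologic constant.
IRON_PER_G_HB_MG = 3.4  # mg elemental iron per gram of hemoglobin

def iron_loss_mg(blood_loss_ml: float, hb_g_dl: float) -> float:
    """Estimate mg of iron lost in `blood_loss_ml` of blood at Hb `hb_g_dl`."""
    hb_g_per_ml = hb_g_dl / 100.0  # convert g/dL to g/mL
    return blood_loss_ml * hb_g_per_ml * IRON_PER_G_HB_MG

# Losing 100 mL of blood at Hb 12 g/dL costs twice the iron of the same
# loss at Hb 6 g/dL, which is why iron losses accelerate as Hb is corrected.
loss_at_12 = iron_loss_mg(100, 12.0)  # ~40.8 mg
loss_at_6 = iron_loss_mg(100, 6.0)   # ~20.4 mg
```

The point of the sketch is simply that iron loss scales linearly with the Hb concentration of the blood lost.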
As the ominous deadline for the inclusion of both iron and ESAs in the composite rate approached, studies appeared touting the expanded use of less expensive iron supplementation as a substitute for more expensive EPO, even in patients with ferritin levels as high as 1,200 ng/mL.4 The result was predictable. According to the Dialysis Outcomes and Practice Patterns Study (DOPPS), the percentage of patients in the United States receiving parenteral iron increased from 55% to 70% during 2011 and has remained at that level through April 2013. Median serum ferritin levels increased to 795 ng/mL, with 15% of patients having ferritin levels over 1,200 ng/mL. DOPPS has not observed similar changes in other countries, providing further evidence that the U.S. payment system is now driving the use of iron.
Although physiology-driven dosing of EPO was proposed very early after the introduction of this agent, it has been largely overlooked.5 While the original product label stipulated dose individualization, it simultaneously advocated fixed 25% dose adjustments. Furthermore, the original dose-ranging study for EPO focused only on the rate of rise of Hb, without attention to the time needed to reach steady state.
Defining iron deficiency
As far as physiologic iron supplementation is concerned, physicians have long struggled with the concept of functional iron deficiency. While the standard marker of iron storage, serum ferritin, is reliable in diagnosing absolute iron deficiency, it says little about the flow of iron through the bloodstream to the bone marrow. That information comes from transferrin saturation (TSAT), which indicates the average fraction of iron-binding sites on transferrin that are actually occupied by iron. In an otherwise healthy individual, approximately one-third of the transferrin sites are occupied (TSAT ≈ 33%). However, TSAT has repeatedly been shown to suffer from large biological and analytical variability, calling its diagnostic reliability into question.
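The TSAT calculation itself is simple: serum iron divided by total iron-binding capacity (TIBC), expressed as a percentage. The formula is standard; the sample values below are merely illustrative of a healthy individual.

```python
def tsat_percent(serum_iron_ug_dl: float, tibc_ug_dl: float) -> float:
    """Transferrin saturation: the percentage of transferrin iron-binding
    sites occupied by iron (serum iron / TIBC x 100)."""
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

# Typical healthy values: serum iron ~100 ug/dL, TIBC ~300 ug/dL
healthy_tsat = tsat_percent(100, 300)  # ~33%, i.e. one-third of sites occupied
```

Because both serum iron and TIBC fluctuate, the resulting ratio inherits the variability the text describes, which is the root of TSAT's limited reliability.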
In a healthy individual, iron release from storage is controlled by iron demand in the bone marrow, as driven by erythropoietin concentration. It is likely that dosing ESAs at superphysiologic levels in CKD patients pushes this well-synchronized mechanism to its limits and thus increases the number of red cells with less than normal Hb content.
Because the Hb levels in CKD patients are lower than normal, this phenomenon, otherwise known as microcytic or iron-deficiency anemia, is very likely to remain undiagnosed when the standard iron markers of ferritin and TSAT are used. Instead, the presence of this “covert” iron deficiency anemia should be monitored with the hematimetric indices: mean corpuscular Hb (MCH), mean corpuscular volume (MCV), and red cell distribution width (RDW).
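The monitoring pattern described above, falling MCH and MCV with rising RDW, can be expressed as a simple screening rule. The trend measure, series length, and sample values below are arbitrary illustrations, not validated thresholds.

```python
def trend(values):
    """Crude trend over a series of measurements: last minus first."""
    return values[-1] - values[0]

def flag_covert_iron_deficiency(mch, mcv, rdw):
    """Flag the pattern described in the text: MCH and MCV falling
    while RDW is simultaneously rising. Purely illustrative logic."""
    return trend(mch) < 0 and trend(mcv) < 0 and trend(rdw) > 0

# Example: three monthly values drifting toward microcytosis
mch_series = [29.5, 28.8, 27.9]  # pg
mcv_series = [90.0, 88.0, 85.5]  # fL
rdw_series = [13.5, 14.8, 16.2]  # %
suspicious = flag_covert_iron_deficiency(mch_series, mcv_series, rdw_series)
```

In practice a clinician would weigh many more variables; the sketch only shows that the indices carry a directional signal that ferritin and TSAT alone would miss.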
The dynamics of the red cell population are largely determined by the red cell lifespan, which may vary anywhere between 60 and 120 days in CKD patients. However, the concept of red cell lifespan, essential to long-term Hb variability, is notoriously elusive. The red cell lifespan has two implications for the dosing of EPO and iron: 1) any single dose adjustment should not be expected to have an “immediate” corrective effect, and conversely, 2) any single dose adjustment may have a prolonged effect over time. These two aspects of EPO and iron dosing are not easily appreciated in the clinical environment, because it is rather difficult to envision a patient’s response two to four months ahead of time.
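Both implications follow from a toy model: if every red cell lives a fixed lifespan, a step change in production shifts the circulating Hb only gradually, reaching its new steady state after one full lifespan. The lifespan and Hb values below are illustrative assumptions, not patient data.

```python
# Toy model: all red cells live exactly `lifespan` days, so after a step
# change in production the old cell population is replaced linearly.
def hb_after_step(days, lifespan=100, hb_old=10.0, hb_new=11.0):
    """Hb on a given day after a step change in red cell production,
    under the uniform-lifespan assumption stated above."""
    frac_new = min(days / lifespan, 1.0)  # fraction of cells made post-change
    return hb_old + frac_new * (hb_new - hb_old)

# The dose change barely registers at 2 weeks yet keeps acting for ~3 months,
# which is exactly the delayed-and-prolonged behavior described in the text.
trajectory = {d: round(hb_after_step(d), 2) for d in (14, 30, 60, 100)}
```

Even this crude linear sketch makes the clinical difficulty concrete: the full consequence of today's dose adjustment is not visible until two to four months later.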
Use of mathematical modeling to balance iron, EPO needs
To better manage the concurrent administration of ESAs and iron, and to achieve the optimal balance between these two agents, we propose the use of mathematical modeling to represent the physiology of erythropoiesis. This approach, also referred to as "systems biology," can help overcome the obvious shortcomings of the prevalent opinion-based, trial-and-error approach to dosing iron and ESAs. Physiologic models created from combinations of experimental and routine clinical data can be used as a preclinical tool to validate new markers and design new interventions without exposing humans to undue risk.

A block diagram of a simple physiologic model of the erythropoietin-iron axis is shown in Figure 1. In this model, the iron dose affects both iron stores (ferritin) and circulating iron (transferrin saturation). The erythropoietin dose and the iron released from storage to the bone marrow together determine the total Hb concentration. The link between bone marrow iron and erythropoietin is represented here by three hematimetric indices: MCH, MCV, and RDW.

A validated systems biology model such as this one can be used to analyze clinical scenarios of interest and then simulate appropriate interventions. For example, when MCH and MCV are decreasing while RDW is simultaneously increasing, the model predicts the occurrence of the microcytic anemia discussed above. Depending on the iron stores (ferritin) and the erythropoietin dose, potential interventions would include increasing the iron dose if the stores are depleted, or decreasing the ESA dose. To increase the level of sophistication, advanced optimization and control algorithms can be employed to derive dosing interventions from the model automatically, achieving the optimal balance between Hb level, iron stores, and iron delivery to the bone marrow while using the minimum necessary amounts of ESA and iron.
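The decision logic of the example scenario can be sketched as a toy rule. The ferritin threshold of 200 ng/mL and the function names below are illustrative assumptions, not part of any validated model.

```python
def suggest_intervention(mch_falling: bool, mcv_falling: bool,
                         rdw_rising: bool, ferritin_ng_ml: float,
                         ferritin_floor: float = 200.0) -> str:
    """Toy decision rule mirroring the scenario in the text: when the
    microcytic pattern appears, give iron if stores are depleted,
    otherwise reduce the ESA dose. Threshold is an arbitrary illustration."""
    microcytic_pattern = mch_falling and mcv_falling and rdw_rising
    if not microcytic_pattern:
        return "no change"
    if ferritin_ng_ml < ferritin_floor:
        return "increase iron dose"
    return "decrease ESA dose"

# Two patients with the same index pattern but different iron stores
# receive opposite recommendations, which is the point of joint modeling.
plan_low_ferritin = suggest_intervention(True, True, True, 120)
plan_high_ferritin = suggest_intervention(True, True, True, 800)
```

A model-based controller would replace these hard thresholds with simulated dose-response trajectories, but the branching structure, iron versus ESA as complementary levers, stays the same.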
Summary
Ever since the introduction of EPO, ESAs and iron dosing have been driven by financial incentives. When ESAs were a profit center for providers, large doses were used. With ESAs becoming a cost center, a new trend has appeared, gradually replacing their use with iron to achieve the same therapeutic effect at lower cost. This financially driven approach of treating ESAs and iron as alternatives is not consistent with human physiology where these agents act in a complementary manner.
It is likely that we are still giving unnecessarily large doses of ESAs and iron relative to our patients’ true needs. Although we have highlighted the economic drivers of this outcome, many other factors play a role. These include our lack of understanding of the complex interplay of the anemia of chronic disease, inflammation, poor nutrition, blood loss through dialysis, ESAs, and iron deficiency. We propose that physiology-driven modeling may provide some insight into the interactions between erythropoiesis and ferrokinetics. This insight can then be used to derive new, physiologically compatible dosing guidelines for ESAs and iron.

-by George R. Aronoff, MD, MS, FACP; Adam E. Gaweda, PhD, MS
References
1. Aronoff GR, Duff DR, Sloan RS, et al. The treatment of anemia with low-dose recombinant human erythropoietin. Am J Nephrol. 1990;10(Suppl 2):40-3.
2. NKF-DOQI clinical practice guidelines for the treatment of anemia of chronic renal failure. National Kidney Foundation-Dialysis Outcomes Quality Initiative. Am J Kidney Dis. 1997;30(4 Suppl 3):S192-240.
3. Besarab A, Bolton WK, Browne JK, et al. The effects of normal as compared with low hematocrit values in patients with cardiac disease who are receiving hemodialysis and epoetin. N Engl J Med. 1998;339(9):584-90.