Issue: June 25, 2013
June 01, 2013

Despite increased scrutiny, bias remains ‘concerning’ component of cancer research


The potential for bias is an inescapable reality of cancer research.

Whether it is due to an unintentional, unrecognized sway in the balance of data collection or a conscious effort on the part of investigators to manipulate endpoints, the possibility that results are distorted — and therefore cannot be reproduced — has been one of the greatest threats to the validity of medical research since rigorous scientific investigation began.

However, as the cost of cancer research has increased, so has the number of entities with vested financial interests in the outcomes. Consequently, the potential for bias in reporting has never been higher.

Robert Dreicer, MD

A study by Jagsi and colleagues, published in Cancer in 2009, revealed that nearly one-third of clinical cancer research published in high-impact journals involved a conflict of interest. The most frequent type was industry funding, reported in 17% of papers evaluated, whereas 12% of papers included a study author who was an industry employee.

“Over the last 5 to 10 years, because of the way our system is set up, there has been an increased scrutiny on conflicts of interest,” Robert Dreicer, MD, chair of solid tumor oncology and director of clinical research at Cleveland Clinic’s Taussig Cancer Institute, told HemOnc Today.

Although it might be easy to assume that financial interests are the sole cause of biased reporting, particularly when large grants and the pharmaceutical industry are in play, unintentional bias — found in the selection of patients or in the way data analyses are conducted — remains a significant concern.

Alan Hutson, MA, PhD

Source: Photo courtesy of Roswell Park Cancer Institute.

“Phase 3 trials are the most effective in eliminating bias, but even they are not perfect, regardless of who is doing the research,” Alan Hutson, MA, PhD, professor of oncology and chair of the department of biostatistics and bioinformatics at Roswell Park Cancer Institute, said in an interview. “In epidemiological or other types of studies, all you can do is minimize the bias. We just have to accept that it will be there.”

HemOnc Today spoke with several experts about the extent and nature of bias in hematology and oncology research, the role of the peer-review process in mitigating the problem, and how bias may affect both study design and clinical practice.

Kristjan Paulson, MD, FRCPC

“Bias is by nature hard to understand,” said Kristjan Paulson, MD, FRCPC, a hematologist at CancerCare Manitoba and the University of Manitoba in Canada. “Thus, it’s hard to put an exact value on how big a problem it might be. However, almost all studies that have sought to understand bias in cancer research have found something concerning.”

Emphasis on the positive

The study by Jagsi and colleagues included 1,534 original studies published in 2006 by the five top oncology journals and three top general medical journals. The investigators found randomized trials with reported conflicts of interest were more likely to have positive findings.

A study by Vera-Badillo and colleagues, published in January in Annals of Oncology, examined a related concern: the trend toward reporting only positive outcomes.

The researchers conducted their investigation to evaluate the quality with which primary outcome measures and toxicity profiles were reported in randomized phase 3 trials of breast cancer treatments. They reviewed several databases and identified 164 relevant studies conducted between 1995 and 2011.

Results indicated that 33% of the trials demonstrated bias in reporting of the primary endpoint and 67% showed bias for toxicity.

Primary endpoints were more likely to be highlighted in the concluding statements of the abstract if significant differences were observed that favored results for the experimental arm, the researchers reported.

Among 92 trials with a negative primary outcome measure, 59% used secondary endpoints to suggest that the study drug or therapy offered a benefit. Just 32% of studies contained data on the frequency of grade 3/4 toxicities, and studies that demonstrated a positive primary endpoint trended toward the under-reporting of toxicity data, Vera-Badillo and colleagues reported.


Ian F. Tannock, MD, PhD, DSc

“We looked at toxicity warnings in the latest FDA labels and found a high proportion of these toxicities were not described in any trial and were not in the initial FDA label,” researcher Ian F. Tannock, MD, PhD, DSc, professor of medical oncology at Princess Margaret Hospital and University of Toronto, told HemOnc Today. “When we looked at endpoints, we saw that surrogate endpoints like DFS and PFS were often emphasized rather than OS. We definitely saw spin and bias in terms of not reporting primary endpoints.”

Hutson said he was particularly troubled by what he called “the over-emphasis on not reporting” toxicities and adverse events.

“I have seen trials in which the researchers failed to publish adverse events because they knew it would undergo FDA review,” Hutson said. “This places a big burden on the FDA to screen out drugs that aren’t safe.”

A conscious effort to promote positive results can have dangerous consequences, Paolo Boffetta, MD, MPH, director of the Institute for Translational Epidemiology at Icahn School of Medicine at Mount Sinai, said in an interview.

Paolo Boffetta, MD, MPH

“Negative studies are important to show how things shouldn’t be done, which is just as clinically relevant as how things should be done,” Boffetta said. “Rather than focusing on just what is positive, we should be focused on promoting high-quality, true, reproducible results.”

Hutson urged clinicians to go beyond what is presented in an abstract and seek information on ClinicalTrials.gov, a searchable database that provides current information about clinical research studies around the world.

“The protocols are available for anyone to read,” he said. “You can see what the primary endpoint was at the outset and whether the researchers have reported this accurately, or whether they have oversold the study by reporting on time to events or PFS or some other key secondary endpoint.”

Pressure to publish

Paulson and Matthew Seftel, MD, MPH, FRCPC, also a hematologist at CancerCare Manitoba and the University of Manitoba, have studied the discrepancies between the information that appears on the government registry and the findings that are highlighted in peer-reviewed journals.

They evaluated publication bias at the 2006 Center for International Blood and Marrow Transplant Research/American Society for Blood and Marrow Transplantation tandem meeting.

Paulson and Seftel categorized 501 abstracts by type of research, funding status, the number of centers involved, sample size and direction of the results. Of those abstracts, 217 (43%) were later published as full manuscripts.

Results of their study, published in Blood in 2011, indicated that abstracts with positive results were more likely to be published than those with negative or unstated results (P=.001).
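The comparison behind that P value — publication rates for positive versus negative abstracts — can be illustrated with a standard two-proportion z-test. This is a generic sketch, not a reconstruction of Paulson and Seftel's actual analysis; the function name and the example counts are assumptions.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error.

    x1/n1: published / total abstracts in group 1 (e.g. positive results)
    x2/n2: published / total abstracts in group 2 (e.g. negative results)
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided tail area
    return z, p_value
```

With hypothetical counts — say, 150 of 280 positive abstracts published versus 67 of 221 negative or unstated ones — the function returns the z statistic and p-value directly; identical proportions in both groups give z = 0 and p = 1.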

“It is much more interesting to find out that a new treatment works than to find out that it doesn’t work,” Paulson said. “This kind of bias isn’t isolated to the researchers themselves. It is likely that journal reviewers and editors, pharmaceutical companies, government research funders and other parties involved in research feel the same way.”

Boffetta said the system for publication and promotion within the academic and clinical communities may be partially to blame.

“The whole system is set up with a strong incentive to publish strongly positive results,” Boffetta said. “For the individual, it can improve a CV if a study has been published in a good journal. In many places, the impact factor of the journal is a major component of how research is evaluated and serves in the advancement of career promotion, tenureship, [and] grant applications and review. A long list of publications in [The New England Journal of Medicine] or JAMA will carry more weight to someone applying for a grant or a job. And, of course, the journal can gain publicity if they publish positive results.”


The study by Paulson, Seftel and colleagues also examined the impact factors of the journals that later published results presented at the tandem meeting. The mean impact factor of journals that published studies with positive results was 6.92, and the mean impact factor of journals that published negative or unstated results was 4.3 (P=.02).

This finding challenges whether the top-tier journals actually publish the best results or just the most positive results, Seftel said.

Major journals — such as The Lancet, JAMA and NEJM — are “legendary” for the rigor of their review processes, Dreicer said.

“They re-do the statistics,” he said. “They have in-house biostatisticians to handle the numbers and place an enormous amount of emphasis to make sure statistics are straight, above board and as conservative as can be.”

Boffetta disagreed.

“My understanding is that you should be as cautious about a paper in a high-impact journal as you should about any other,” Boffetta said.

Peer-review process

Dreicer said he cannot speak for everyone, but he believes most clinicians have faith in the peer-review process.

“They respect the effort that goes into major journal reviews,” he said. “They can sense what smells fishy, whether or not the declared disclosures provide a basis for some potential bias.”

Tannock said the issue might be more personal for each clinician.

“If I get asked to review by JAMA or NEJM, I’ll do it,” he said. “But I might be more likely to pass if it is a journal that doesn’t have as high a reputation. I suspect many others feel the same way.”

Many key opinion leaders will only review for the major journals, Tannock said.

“Some of those individuals are good and unbiased, but others are there because companies like them,” he said. “It’s a mix. The general standard of oncology reporting in those journals is pretty good. People recognize when they’re reviewing for The Lancet or NEJM that it’s important to do a good job, but they’re not infallible.”

For Paulson, the issue comes back to bias within the research itself.

“The larger journals have more resources, and scrutiny of articles submitted is perhaps a little higher,” Paulson said. “That might allow them to better detect some forms of bias. However, some of the more subtle forms of bias … are harder to pick up by reviewers.”

The good news, according to Dreicer, is that reviewers generally do not have to deal with research that contains wildly incorrect or biased information.

“There are consequences to fraudulent papers and everyone knows it,” Dreicer said. “Reviewers might say, ‘We question whether the data presented supports some of your conclusions and request revisions,’ but there is no penalty to something like this because it happens regularly. In those instances, the researchers are generally willing to make the changes. On this level, at least, the peer-review process is effective.”

Conflicts of interest

The experts who spoke with HemOnc Today agree outright fraud is rare; however, associations between publication and financial interests have been found in several studies.

Cosgrove and colleagues investigated databases through November 2010 and identified 61 papers that addressed links between antidepressant use and risk for breast or ovarian cancers.

Results, published in PLoS One in 2011, indicated that 33% of studies reported positive associations between antidepressant use and cancer. The other 67% reported no link or an antiproliferative effect. The pooled OR for the link between antidepressant use and incidence of either cancer was 1.11 (95% CI, 1.03-1.20).
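A pooled OR of this kind is conventionally computed by inverse-variance weighting of each study's log odds ratio (a fixed-effect model), with each study's standard error recovered from its reported 95% CI. The sketch below illustrates only that generic calculation; it is not taken from Cosgrove and colleagues' methods, and the function name and example inputs are assumptions.

```python
import math

def pool_odds_ratios(ors, cis):
    """Fixed-effect (inverse-variance) pooling of odds ratios.

    ors: per-study odds ratios
    cis: matching (lower, upper) bounds of each study's 95% CI
    Returns (pooled OR, lower 95% bound, upper 95% bound).
    """
    log_ors, weights = [], []
    for or_, (lo, hi) in zip(ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        log_ors.append(math.log(or_))
        weights.append(1.0 / se ** 2)                    # inverse variance
    pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))
```

Pooling two hypothetical studies that each report an OR of 1.11 (95% CI, 1.03-1.20) returns the same point estimate with a narrower interval, as expected when weighting by precision.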

The researchers then used multimodal screening techniques to examine the nature of financial relationships between the investigators involved in those 61 studies and industry.

They identified 15 researchers with industry affiliations and 46 with no such affiliations.

Researchers who had industry affiliations were significantly less likely to conclude that antidepressants increased the risk for breast or ovarian cancer (0% vs. 43.5%; P=.0012), Cosgrove and colleagues reported.


A study by Bariani and colleagues, designed to test whether there was a link between the self-reported conflict-of-interest information declared by investigators and the conclusions they drew in cancer studies, reached a different conclusion.

The analysis included 150 phase 3 trials and 150 editorials culled from 1,485 papers published between January 2008 and October 2011.

The findings, published in April in the Journal of Clinical Oncology, showed positive results were reported in 54.7% of the studies.

Fifty-two percent of studies were entirely or partially funded by industry.

Conflicts of interest were reported in 68.7% of the studies and 47.3% of the editorials.

Multivariable analysis showed trial results were the only significant predictor for a positive conclusion by trial authors (OR=92.2; 95% CI, 19.7-431.6). The investigators observed no association between sponsorship and positive outcomes (OR=0.86; 95% CI, 0.3-2.5).

Boffetta put these conflicting results in context.

“I would temper the conclusions whenever there is great financial interest,” he said. “But, in fact, the truth is probably somewhere in the middle. We should not suggest that it is black and white, that all sponsored trials are true or all non-sponsored trials are false. There is a range.”

Dreicer said there is enough self-policing to keep financial interests in check.

“Conflict-of-interest disclosures and the source of funding information are increasingly prominent in most journals,” he said. “The clinical community is spending more time to mitigate it.”

Still, financial contributions from industry are undeniable, Tannock said.

“Over the past 30 years, the proportion of trials supported by academia has decreased and the proportion of trials supported by industry has increased,” Tannock said.

Continued education and training regarding the consequences of financial misconduct are essential, particularly for young researchers, Hutson said.

“If people want to be self-centered and manipulate things, it is hard to stop them,” Hutson said. “We just make sure to let them know what the checks and balances are — suspensions and the like. In academia, if you can’t apply for a grant for 5 years, you might as well just resign.”

No perfect study

Aside from the peer-review process, the clinical community has tried to preserve research integrity by ensuring study protocols are free from bias.

Reeve and colleagues described a propensity score to help adjust for differences between study groups and confounding variables in observational studies of cancer.

The approach matches cases to controls on the basis of the single variable — the propensity score — rather than multiple variables.
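Greedy 1:1 nearest-neighbor matching within a caliper is one common way to implement matching on a single score. The sketch below is a generic illustration of that idea, not the procedure Reeve and colleagues used; the function name, the default caliper and the example scores are assumptions.

```python
import bisect

def match_on_propensity(case_scores, control_scores, caliper=0.01):
    """Greedy 1:1 nearest-neighbor matching on a single scalar score.

    Each case is paired with the closest remaining control, provided the
    scores differ by no more than `caliper`; matched controls leave the pool.
    Returns a list of (case_score, control_score) pairs.
    """
    pool = sorted(control_scores)
    pairs = []
    for score in case_scores:
        if not pool:
            break
        i = bisect.bisect_left(pool, score)
        # the nearest control is one of the two neighbors of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(pool)]
        best = min(candidates, key=lambda j: abs(pool[j] - score))
        if abs(pool[best] - score) <= caliper:
            pairs.append((score, pool.pop(best)))  # 1:1 — remove matched control
    return pairs
```

Because the match is made on one scalar rather than a full covariate vector, the control pool can be kept sorted and searched with binary insertion, which keeps the matching step simple and fast.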

Their findings, published in 2008 in Health Care Financing Review, indicated the score is an efficient and useful way to create a matched case-control study from the results of a cohort analysis.

The range of scores across the sample was 0.0089 to 0.1656, according to the results.

Such propensity scores have been proposed before, Boffetta said.

“There is nothing incredibly novel about this,” he said. “They can be a more efficient way to adjust for possible sources of bias. Not surprisingly, they say that matching with the score is better than adjusting for variables.

“It seems like an easy solution, but it depends on underlying variables and misclassifications,” Boffetta added. “A propensity score seems to account for these sources of error, but — although it can be a good solution — it depends on the quality of the underlying data.”

Simply increasing awareness is another way authors, editors and readers can help understand the forms of bias to which they might be susceptible, Paulson said.

Boffetta offered a more long-term solution.

“We need to change the way research is evaluated,” Boffetta said. “One way to change this is to have some system by which a study being negative or positive is seen in the longer term. We should have a system where we can look at the study 2 years or 5 years down the line to see if the results hold up, regardless of whether they are positive or negative.”


Reproducibility is key, he said.

“If, in the long term, the result in the paper becomes a standard, this is what should be recognized rather than short-term impact of the paper,” Boffetta said.

When asked the extent to which the average clinician recognizes bias in the literature, Tannock said most might be aware of about 10% to 20% of the issues at hand.

“This is obviously not an exact figure, but the point is that the average clinician may not recognize all of these forms of bias or the forces pushing these drugs,” Tannock said.

The physicians are not necessarily to blame, Tannock said.

“Most of us are pretty busy,” he said. “We don’t have time to read or, if we do have time, we don’t have time to do the background research to validate what we read in these journals. We have to trust the system.”

It also is important to accept that bias exists on all levels, Tannock said.

“There is no such thing as a perfect study,” he said. “There is only good, better or best, but you can’t control for everything. The best we can hope for is that clinicians are aware of these issues as they read the research.” – by Rob Volansky

References:

Bariani GM. J Clin Oncol. 2013;doi:10.1200/JCO.2012.46.6706.

Cosgrove L. PLoS One. 2011;6:e18210.

Jagsi R. Cancer. 2009;115:2783-2791.

Paulson K. Blood. 2011;118:6698-6701.

Reeve BB. Health Care Financ Rev. 2008;29:69-80.

Vera-Badillo FE. Ann Oncol. 2013;24:1238-1244.

For more information:

Paolo Boffetta, MD, MPH, can be reached at The Mount Sinai Medical Center, 1 Gustave L. Levy Place, New York, NY 10029-6574; email: paolo.boffetta@mssm.edu.

Robert Dreicer, MD, can be reached at Cleveland Clinic Main Campus, 9500 Euclid Ave. R35, Cleveland, OH 44195; email: dreicer@ccf.org.

Alan Hutson, MA, PhD, can be reached at Roswell Park Cancer Institute, Elm and Carlton streets, Buffalo, NY 14263; email: alan.hutson@roswellpark.org.

Kristjan Paulson, MD, FRCPC, can be reached at 675 McDermot Ave., Winnipeg, MB, R3E0V9, Canada; email: kristjan.paulson@cancercare.mb.ca.

Ian F. Tannock, MD, PhD, DSc, can be reached at Princess Margaret Hospital and University of Toronto, 610 University Ave., Suite 5-208, Toronto ON M5G 2M9, Canada; email: ian.tannock@uhn.ca.

Matthew Seftel, MD, MPH, MRCP, FRCPC, can be reached at CancerCare Manitoba and University of Manitoba, 675 McDermot Ave., Winnipeg, MB, R3E0V9, Canada; email: matthew.seftel@cancercare.mb.ca.

Disclosure: Dreicer reports research support from Millennium and Progenics, as well as consulting roles with Dendreon, Endo Pharmaceuticals, Janssen, Lilly, Medivation and Millennium. Tannock reports research support from Sanofi, as well as contributions to his research fund in lieu of honoraria from other companies. Boffetta, Hutson, Paulson and Seftel report no relevant financial disclosures.

POINT/COUNTER

Should health care research be conducted without financial help from industry?

POINT

No, of course not.

Industry is not inherently evil. Lots of good research has come out of partnerships with these companies. They do millions of dollars of research annually with wonderful results. It would be foolish to ban them from the process.

The problem is not necessarily the funding itself. It is more in how the data are used. When the data don’t reflect the product in a positive light, the study may be delayed or never published at all.

This is problematic. Not knowing about negative data may cause another investigator to go down a useless path, which in turn puts patients at risk of exposure to useless therapies.

As clinicians, we should be looking at study design. Does it ask a critically important question? Are the funds being used for marketing a drug or doing research?

On the back end, even well-conducted studies demonstrate publication bias when industry fails to release data that are not useful to their purposes, or even detrimental to their purposes.

We should not denigrate industry involvement in research, but we shouldn’t assume that it is always done with pure intent. Trust but verify.

David H. Johnson, MD, is the Donald W. Seldin distinguished chair in internal medicine, as well as professor and chairman of internal medicine, at UT Southwestern School of Medicine. He can be reached at UT Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX 75390-9030; email: david.johnson@utsouthwestern.edu. Disclosure: Johnson reports no relevant financial disclosures.

COUNTER

It is critical that strong checks and balances are put in place.

During the pre-approval phase, regulatory agencies can — and generally do — exert the required oversight to ensure that research is of high quality and new agents meet all requirements for approval.

Things are more difficult in the post-approval phase, where underreporting of harms and selective reporting of benefits in industry-supported research are real and well-documented problems.

To promote objective reporting of research findings, we need strong peer review and transparent disclosure of all financial and non-financial conflicts of interest. We also need better education of clinicians on research methods to enable them to become more critical users of the literature.

Most importantly, however, we need robust alternative funding streams to facilitate independent health care research without support from industry.

Not only does independent research funding — awarded based on scientific merit — improve the likelihood of objective reporting of research findings, it is the only way to ensure that all important questions get asked.

Unfortunately, independent funding for health care research is a challenging proposition in times of tightening government budgets.

Lastly, we need to encourage publication of negative findings and generally put more emphasis on the body of evidence rather than any individual study.

Industry-supported health care research can be an important and valuable source of evidence, but it would be naïve to ignore its problems. To ensure balance and to improve the evidence base beyond areas of direct interest to industry, we have to maintain and strengthen industry-independent researchers and research mechanisms.

Ultimately, clinical decisions and guidelines should be based on the totality of high-quality research findings, including those obtained with and without industry support.

Tobias Gerhard, PhD, is an assistant professor at the Ernest Mario School of Pharmacy and Institute for Health, Health Care Policy and Aging Research at Rutgers University. He can be reached at 112 Paterson St., New Brunswick, NJ 08901-1293; email: tgerhard@ifh.rutgers.edu. Disclosure: Gerhard reports no relevant financial disclosures.