Issue: April 2014
March 12, 2014

Discrepancies found between trial results in journals, public registry


Inconsistencies were common between clinical trial results published in “high-impact” journals and on a public clinical trial registry, according to a Yale research team.

“Our findings raise questions about accuracy of both ClinicalTrials.gov and publications, as each source’s reported results at times disagreed with the other,” Joseph S. Ross, MD, assistant professor of general internal medicine at Yale University, and colleagues wrote in JAMA. “Further efforts are needed to ensure accuracy of public clinical trial result reporting efforts.”


The 2007 FDA Amendments Act requires that results of trials of FDA-regulated medical products be reported to the public clinical trial registry within 1 year of a trial’s completion.

Ross and colleagues identified 96 clinical trials reporting primary results on ClinicalTrials.gov that were published in 19 high-impact journals from 2010 to 2011. For each trial, they compared information about cohort characteristics, trial interventions and primary and secondary efficacy endpoints and results between the two sources, categorizing each trial as concordant, discordant or incomparable. Each time a discrepancy was found, the researchers sought to determine whether it affected the interpretation of the study.
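
To illustrate this categorization scheme, the following is a minimal Python sketch, not the researchers' actual analysis code: it compares hypothetical values reported for a single trial in a journal article and on ClinicalTrials.gov and labels each item as concordant, discordant or incomparable. The field names and values are assumptions made for the example.

# Hypothetical sketch of the concordant/discordant/incomparable
# categorization described above; field names and values are
# illustrative assumptions, not data from the study.

def classify_item(journal_value, registry_value):
    """Compare one reported item (e.g., a completion rate) across sources."""
    if journal_value is None or registry_value is None:
        # Reported in only one source, so the pair cannot be compared.
        return "incomparable"
    return "concordant" if journal_value == registry_value else "discordant"

def classify_trial(journal_report, registry_report):
    """Classify every item reported for a trial in either source."""
    items = set(journal_report) | set(registry_report)
    return {item: classify_item(journal_report.get(item),
                                registry_report.get(item))
            for item in items}

if __name__ == "__main__":
    journal = {"completion_rate": 0.91, "primary_endpoint": "HbA1c -0.8%"}
    registry = {"completion_rate": 0.89, "primary_endpoint": "HbA1c -0.8%",
                "secondary_endpoint": "weight -1.2 kg"}
    print(classify_trial(journal, registry))
    # {'completion_rate': 'discordant', 'primary_endpoint': 'concordant',
    #  'secondary_endpoint': 'incomparable'}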

According to the researchers, 73% of clinical trials were financially supported by industry. The most common medical conditions studied were cardiovascular disease, diabetes and hyperlipidemia (23%); cancer (21%); and infectious disease (20%). The trials were most frequently published in The New England Journal of Medicine (24%), The Lancet (19%) and JAMA (12%).

Information about the study cohort, intervention and efficacy endpoint was reported in both sources for 93% to 100% of the trials, but 93 of the 96 trials had at least one discrepancy between the two sources. Among trials that reported information for each cohort characteristic and intervention, discordance ranged from 2% to 22% and occurred mostly in information describing completion rates and trial interventions. In these instances, conflicting information about dosages, frequencies and the duration of the trial interventions was common.

Among 132 endpoints described in high-impact journals and on ClinicalTrials.gov, 16% were classified as discordant. In most cases, discordant reporting of results did not alter the interpretation of the study, but for six studies, it did.

Slightly more than half (52%) of primary efficacy endpoints and 16% of secondary efficacy endpoints were reported in both sources and appeared to be concordant.

According to Ross and colleagues, discrepancies that occur when a journal and public registry report the same endpoint may be explained by typographical and reporting errors and changes made during the peer-review process. However, for those instances when one source reported a result that was not included in the other, a possible explanation may be the “intentional dissemination of more favorable endpoints and results” in high-impact journals.

“Because articles published in high-impact journals are generally the highest-quality research studies and undergo more rigorous peer review, the trials in our sample likely represent best-case scenarios with respect to the quality of results reporting,” the researchers wrote.

Disclosure: See the study for a full list of financial disclosures.