Findings of the study, published ahead of print April 24 in the Journal for Healthcare Quality, reveal poor data monitoring and reporting that researchers say are hurting national efforts to study disease, guide patients toward optimal treatments, formulate rational health policies and meaningfully track how well physicians and hospitals perform.
“Our results highlight the acute need to improve the way clinical outcomes data are collected and reported,” says senior investigator Marty Makary, M.D., M.P.H., professor of surgery at the Johns Hopkins University School of Medicine. “Failure to measure and accurately track patient outcomes remains one of the greatest problems in modern health care, curtailing our ability to understand disease, evaluate treatments and make the health-care industry a value-driven marketplace.”
In addition, the failure to track patient outcomes in a systematic way is tantamount to not measuring the performance of a sector that claims one-fifth of the nation’s economy, the research team says.
Clinical registries are databases of patient outcomes developed and maintained by medical organizations and medical specialty groups.
To evaluate the quality of clinical registries, Makary and colleagues say they created “a registry of registries” to study the way the health care industry measures its performance.
“We found it’s the Wild West,” Makary says. “With a few notable exceptions, most registries are underdeveloped, underfunded and often are not based on sound scientific methodology.”
The investigators assessed 153 U.S. clinical registries containing health service and disease outcomes data. On average, a registry contained information on more than 160,000 patients treated across more than 1,600 hospitals.
Fewer than one-quarter of the registries adjusted their results for differences in disease complexity — information statistically reflective of disparities in illness severity and socio-economic status among patients treated across hospitals. Unadjusted data, the researchers say, could be misleading and should be interpreted with great caution.
Fewer than one-fifth of the registries contained independently entered data — information entered by clinicians other than those involved in the patient's care — an important principle for mitigating the well-established bias of self-reported data, the researchers say.
Although one-quarter of registries — 40 in total — were funded by taxpayers, only three shared their data publicly. Of note, 84 percent (98 of 117) of U.S.-recognized medical specialties had no national clinical registries — a significant gap in the efforts to compare the efficacy of treatments and evaluate the quality of care on a large scale.
The researchers say such failure to capture and measure patient outcomes is troubling because the insights gleaned from such information could have a direct and profound impact on scientific research and human lives.
“A robust clinical registry can tell doctors in real time which medications work well and which are harming patients, yet the infrastructure to achieve that is vastly under-supported,” says study co-author Michol Cooper, M.D., Ph.D., a surgical resident at the Johns Hopkins University School of Medicine. “The same rigorous standards we use to evaluate how well a drug does ought to apply to the way we report patient outcomes data.”
Makary and team point out that several organizations maintained exemplary registries with rich, carefully analyzed data, audited and reported in a meaningful way. For example, information in the United Network for Organ Sharing registry has led to important research and discoveries that in turn became the catalyst for the creation of new national policies on organ transplantation. Likewise, Makary says, data from the National Surgical Quality Improvement Program, maintained by the American College of Surgeons, have generated valuable insights about surgical infections, transformed practice and improved patient outcomes. The National Cardiovascular Data Registry of the American College of Cardiology has also led to improvement in the rates of inpatient mortality among participating hospitals. The detailed and accurate data in the Cystic Fibrosis Foundation Patient Registry, Makary says, have allowed for hundreds of research trials, the results of which have played pivotal roles in developing better therapies and prolonging the lifespans of people with this genetic disease.
“These organizations’ databases illustrate the power and potential of clinical registries to improve patient outcomes and inform best practices,” says study lead author Heather Lyu, a research fellow at the Johns Hopkins University School of Medicine. “And if we really want to get serious about measuring and improving performance, we need to develop criteria that will help others run similarly successful registries.”
The hallmarks of a good registry, the authors say, include:
- Data accounting for differences in patient case complexity across hospitals, allowing meaningful comparisons of outcomes
- Broad hospital participation
- Measurement of complications that matter to patients and affect their quality of life
- Independent data collection that eliminates the bias inherent in self-reporting
- Public reporting and open access to hospital performance for taxpayer-funded registries
Makary is currently working with colleagues at the Johns Hopkins University Bloomberg School of Public Health and the Brookings Institution to develop formal guidelines for establishing and maintaining useful registries.