How Helpful Are Hospital Rankings and Ratings for the Public’s Health?

Tags:
Op-Ed

The public needs better information to make sound decisions about which hospitals would be the best choices for themselves or their families. Currently, many such decisions are based on where their physicians have admitting privileges, on advice from friends with prior experience at specific hospitals, or on the general reputations of hospitals and medical centers in the community. To augment the bases for such important decisions, several elaborate hospital rankings and ratings have been developed.

The two best-known evaluations of hospitals are those published by the US Centers for Medicare and Medicaid Services (CMS) and by the weekly news magazine US News & World Report.

The CMS star rating, which ranges from 1 star (the lowest) to 5 stars (the highest), evaluates some 4,600 hospitals and is designed to provide comprehensive quality information about patient care. It ranks hospitals on 64 quality measures, including care for patients with myocardial infarction and pneumonia, postsurgical infection rates, joint replacement complications, and emergency room waiting times.1 However, the CMS star ratings based on patient experience currently do not include data on clinical quality of care or patients’ health outcomes.2

The US News & World Report ratings evaluate some 5,000 medical centers on such metrics as death rates, patient safety, and hospital reputation, the last assessed through surveys of 30,000 physicians.3

In the 2016 CMS report, only 102 (2.2%) of the 4,600 hospitals received an overall rating of 5 stars, 934 (20.3%) received 4 stars, 1,770 (38.5%) received 3 stars, 723 (15.7%) received 2 stars, and 133 (2.9%) received 1 star. An additional 937 (20.4%) received no rating because they either did not report, or did not have, the minimum amount of data required for a rating. This last scenario might occur with new or small hospitals, or with those admitting too few patients for quality measurement. The number of beds, however, did not seem to affect the ratings: 51 of the 5-star hospitals had between 1 and 99 beds, 19 had 100 to 199 beds, and 27 had 200 or more beds.1

The 2016 US News & World Report did not rate hospitals by stars but rather ranked them numerically, with the top 20 designated as its “honor roll.” The top 5 honor roll hospitals were the Mayo Clinic, the Cleveland Clinic, Massachusetts General Hospital, Johns Hopkins Hospital, and the University of California Los Angeles Hospital. The report also lists the top 3 hospitals in each of 16 specialties.3

Both ratings employ a great deal of overlapping data and information, so one would expect their results to be similar. A comparison of the CMS and US News & World Report rankings, however, shows that this is not so.

In the CMS ratings, so-called safety-net hospitals (ie, public or private institutions that serve relatively large numbers of Medicaid patients) had a median rating of 2.9 stars, lower than the 3.1-star median of non-safety-net hospitals. Teaching hospitals accounted for 80 (60%) of the 1-star hospitals, a much higher percentage than might be expected. Conversely, in the US News & World Report rankings, all 20 “honor roll” recipients, and essentially all of the hospitals top ranked by specialty, were teaching hospitals.

The question that arises is: how can two ranking systems that use so much overlapping data yield such different results?

To begin, both ranking systems have been criticized. One criticism of the US News & World Report rankings, for example, is that they rely to some extent on the number of patients readmitted to the same hospital within 30 days. Obviously, if many patients are referred from a distance, a readmission might occur at another hospital and would not be recorded by the first. Moreover, the US News & World Report rankings do not consider patients’ socioeconomic status, and many of the higher-ranked hospitals appear to have wealthier patients. Nevertheless, US News & World Report is widely accepted as credible by hospital administrators and other health care professionals. Indeed, the rankings are often used to promote the “winning” hospitals in advertisements, in informational materials, and even on bright banners hung in their corridors.

The US News & World Report rankings are hardly alone in creating controversy. The American Hospital Association (AHA) and many hospital administrators, for example, criticized the CMS system to the extent that the release of its findings was postponed from April 2016 to July 2016, largely because of pressure from the US Congress. Based on the preliminary release of the report, the AHA and hospital administrators complained to their representatives and senators about the confusing methodologies used, the potentially harmful consequences for disadvantaged communities, and the danger that such problematic rankings would confuse or misinform the public. Perhaps more problematic, the AHA and hospital administrators reported that they could not reach the same conclusions as CMS using the same data sets and methodologies.

Consequently, 60 US senators sent a letter to CMS urging a delay of the release. During the 3-month delay, CMS listened to the concerns of hospital administrators and answered their questions. After the report was released, however, many hospital administrators continued to object because they believe the CMS rankings unfairly penalize teaching hospitals and other hospitals that serve some of the poorest people in America.

There currently seems to be no ideal method for ranking hospitals on a scale from excellent to poor. The failure to consider patients’ socioeconomic status in these analyses might well be a driving reason for the large discrepancies between the CMS and US News & World Report rankings. Another reason might be that the CMS ratings currently rest largely on patients’ reports, whereas the US News & World Report system relies in part on reports made by physicians.

A recent study of the CMS 5-star ratings that depend only on patient experience, as measured by the Hospital Consumer Assessment of Healthcare Providers and Systems survey,4 highlights another potential problem. The researchers focused on Medicare patients and found that a higher CMS hospital star rating was associated with lower mortality and readmission rates. The differences appeared primarily at the extremes, with relatively comparable results among the hospitals clustered in the 2- to 4-star range.2 The patient experience measured in this rating comprises 27 items, 18 of which are considered critical aspects of care, such as communication with doctors and nurses, responsiveness of staff, cleanliness of the environment, pain management, communication about medications, and discharge information. None of these items is considered in the US News & World Report rankings.

As complex and confusing as these rating systems might be, they do provide the public with a piece of the puzzle for determining which hospital best suits a patient’s needs. The charge now is to develop a far less confusing, less controversial, better aligned, and more scientifically based rating system so that patients are armed with the more complete picture they need and deserve when making such important decisions.

References

  1. Terry K. CMS releases overall hospital quality star ratings. Medscape Med News. http://www.medscape.com/viewarticle/866763. Published July 28, 2016. Accessed September 13, 2016.
  2. Wang DE, Tsugawa Y, Figueroa JF, Jha AK. Association between the Centers for Medicare and Medicaid Services hospital star rating and patient outcomes. JAMA Intern Med. 2016;176(6):848-850.
  3. Best Hospitals: Rankings & Advice. US News & World Report. http://health.usnews.com/best-hospitals?int=98f808. Accessed September 13, 2016.
  4. CAHPS Hospital Survey. Centers for Medicare & Medicaid Services HCAHPS. http://www.hcahpsonline.org/home.aspx. Accessed September 13, 2016.

Author(s): Catherine D. DeAngelis

Volume 94, Issue 4 (pages 729–732)
DOI: 10.1111/1468-0009.12227
Published in 2016

About the Author

Catherine D. DeAngelis is Johns Hopkins University Distinguished Service Professor Emerita and professor emerita at the Johns Hopkins University Schools of Medicine (Pediatrics) and Public Health (Health Policy and Management), and editor-in-chief emerita of JAMA, where she served as the first woman editor-in-chief from 2000 to 2011. She received her MD from the University of Pittsburgh’s School of Medicine, her MPH from the Harvard Graduate School of Public Health, and her pediatric specialty training at the Johns Hopkins Hospital. She has authored or edited 12 books on pediatrics, medical education, and patient care and professionalism and has published over 250 peer-reviewed articles, chapters, and editorials. Her recent publications have focused on professionalism and integrity in medicine, conflict of interest in medicine, women in medicine, and medical education. DeAngelis is a member of the Institute of Medicine and a fellow of the American Association for the Advancement of Science and the Royal College of Physicians (United Kingdom). She currently serves on the advisory board of the US Government Accountability Office, is a member of the board of Physicians for Human Rights, and serves on the board of trustees of the University of Pittsburgh.
