The Milbank Memorial Fund is an endowed operating foundation that publishes The Milbank Quarterly, commissions projects, and convenes state health policy decision makers on issues they identify as important to population health.
September 2012 (Volume 90) | Mary Dixon-Woods, Myles Leslie, Julian Bion, and Carolyn Tarrant
Context: Performance measures are increasingly widely used in health care and have an important role in quality. However, field studies of what organizations are doing when they collect and report performance measures are rare. An opportunity for such a study was presented by a patient safety program requiring intensive care units (ICUs) in England to submit monthly data on central venous catheter bloodstream infections (CVC-BSIs).
Methods: We conducted an ethnographic study in 17 ICUs, involving approximately 855 hours of observational fieldwork and 93 face-to-face interviews, plus 29 telephone interviews.
Findings: Variability was evident within and between ICUs in how they applied the program's inclusion and exclusion criteria, the data collection systems they established, their practices in sending blood samples for analysis, their microbiological support and laboratory techniques, and their procedures for collecting and compiling data on possible infections. Those making decisions about what to report were not making decisions about the same things, nor were they making decisions in the same way. Rather than providing objective and clear criteria, the definitions used for classifying infections were seen as subjective, messy, and admitting the possibility of unfairness. Reported infection rates therefore reflected localized interpretations rather than a standardized dataset across all ICUs. This variability arose not because wily workers were deliberately concealing, obscuring, or deceiving, but because counting was as much a social practice as a technical one.
Conclusions: Rather than objective measures of incidence, differences in reported infection rates may reflect, at least to some extent, underlying social practices in data collection and reporting and variations in clinical practice. The variability we identified was largely artless rather than artful: currently dominant assumptions of gaming as responses to performance measures do not properly account for how categories and classifications operate in the pragmatic conduct of health care. These findings have important implications for assumptions about what can be achieved in infection reduction and quality improvement strategies.
Keywords: patient safety, infection control, intensive care units, qualitative research, implementation science
Volume 90, Issue 3 (pages 548–591)