In This Issue

As has occurred with many complex innovations in health care, evaluation research has been used to assess, and perhaps influence, programs to implement health information technology (HIT). The findings are not always welcomed. Despite the successful implementation of HIT in such huge organizations as Kaiser Permanente and the U.S. Veterans Health Administration, the results in other contexts have sometimes been disappointing. We begin this issue of the Quarterly with evaluators’ reflections on policymakers’ responses to an evaluation of the British National Health Service’s implementation of its “summary care record.” The article is entitled “Why National eHealth Programs Need Dead Philosophers: Wittgensteinian Reflections on Policymakers’ Reluctance to Learn from History,” by Trisha Greenhalgh, Jill Russell, Richard Ashcroft, and Wayne Parsons.

Evaluations that disappoint sponsors or advocates are frequently resisted or rejected. Greenhalgh and her colleagues discuss some of the reasons for such resistance, particularly as it played out in Britain’s national initiative to introduce eHealth programs. Taking the evaluation of the summary care record as an example, they find answers in the writings of philosophers like Ludwig Wittgenstein (1889–1951), who, it is fair to say, is rarely cited in the health policy literature. Using Wittgenstein’s ideas about language and communication, Greenhalgh and her colleagues argue that a national eHealth program “is best conceptualized not as a blueprint and implementation plan for a state-of-the-art technical system but as a series of overlapping, conflicting, and mutually misunderstood language games that combine to produce a situation of ambiguity, paradox, incompleteness, and confusion.” This view contrasts with policymakers’ rational approach to the design and implementation of a program. Greenhalgh and her colleagues conclude that “we need fewer grand plans and more learning communities.”

A large literature exists on the relationship between research and policy. Articles on this topic, many of which have been published in the Quarterly, usually focus on the challenge of getting evidence from research used in policymaking. Policymakers’ use of researchers has received much less attention and is the topic of “Galvanizers, Guides, Champions, and Shields: The Many Ways That Policymakers Use Public Health Researchers,” by Abby Haynes, James Gillespie, Gemma Derrick, Wayne Hall, Sally Redman, Simon Chapman, and Heidi Sturk. This article is based on a study of Australian public servants, health ministers, and ministerial advisers whom public health researchers identified as “research engaged.” Haynes and her colleagues found four ways—those listed in the article’s title—in which these policymakers made use of researchers, thus adding a new dimension to the topic of evidence-based policymaking.

The next article addresses yet another aspect of the research-policy nexus: the effects of a particular public policy on both research and researchers. The public policy in question is the set of regulatory requirements designed to protect the rights and welfare of human research subjects. In “Burdens on Research Imposed by Institutional Review Boards: The State of the Evidence and Its Implications for Regulatory Reform,” George Silberman and Katherine Kahn present the results of a systematic review of the research literature regarding the costs in both dollars and time imposed on research by institutional review boards (IRBs).

IRBs have been difficult to evaluate because each study they review is distinctive and because there are no widely agreed-on criteria against which performance can be judged. Much of the research that Silberman and Kahn review arose from the natural experiments that occur when several institutions’ IRBs review the same protocol for a multicenter clinical trial, revealing differences in how the IRBs treat identical research protocols. Silberman and Kahn conclude that even though IRB review is necessary for many types of research, the system imposes unneeded burdens that are difficult to quantify. Revised regulatory requirements are now working their way through the rule-making process in the U.S. Department of Health and Human Services and may address some of these problems.

Psychiatric epidemiology is a field of research with implications for both policy and practice. In “The Checkered History of American Psychiatric Epidemiology,” Allan Horwitz and Gerald Grob trace the study of the frequency and distribution of mental disorders from the first statistics on institutionalized populations, compiled in the nineteenth century, to the modern era of population-based surveys. Enormous changes have taken place not only in the research methods but also in the types of etiological factors assessed. Horwitz and Grob contend that mental health policy has been influenced more by professional and ideological concerns than by advances in knowledge about the causes of mental disorders. This will continue to be the case, they believe, until we have an adequate classification system for mental illness.

Health disparities in national populations are of interest to policymakers, health advocates, and researchers because of their implications for justice and also because these disparities signal opportunities for improvement. In the United States, the primary concern has been racial/ethnic disparities (AHRQ 2011), but the main focus in many other countries has been on socioeconomic disparities. A leading example is Scotland. In “Best Practice Guidelines for Monitoring Socioeconomic Inequalities in Health Status: Lessons from Scotland,” John Frank and Sally Haw propose seven criteria for use in assessing (1) the utility of various outcome measures and (2) analytic approaches that can be used in monitoring socioeconomic disparities in health status and guiding the development of public policy. Five of the criteria are epidemiological (e.g., that the data used are reasonably complete and accurate), and two are concerned with effective communication (e.g., that the measure can be understood by nonscientists).

Frank and Haw illustrate the utility of their criteria by applying them to three reports on socioeconomic disparities that were published by the Scottish government between 2008 and 2010. Even though these reports represent the state of the art, Frank and Haw conclude that they are of limited use for policy purposes because the health status measures they use are not very sensitive to intervention. They propose that efforts to monitor disparities should concentrate on outcomes that occur relatively early in life, that can be changed within a half decade by feasible policies, that predict individuals’ future life chances, and that are strongly patterned by socioeconomic status.

An important question regarding socioeconomic disparities in health is whether increases in income lead to improvements in health. This topic has been difficult to study because of the problem of reverse causation: declines in income may result from adverse health events. Jeff Larrimore addresses this problem using instrumental variable analysis in “Does a Higher Income Have Positive Health Effects? Using the Earned Income Tax Credit to Explore the Income-Health Gradient.” Larrimore uses state-level changes in the earned income tax credit, which provides support to low-income working people, as a source of income variation that is not affected by changes in health. The specific question he addresses is whether improvements in self-reported health status are associated with increases in the earned income tax credit. He finds little evidence of a relationship, suggesting that changes in public income support programs may be of limited use in improving short-term health outcomes.
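For readers who want a concrete sense of the instrumental variable logic, the brief sketch below is purely illustrative: a hypothetical simulation in Python, with made-up variable names (eitc, income, health), showing how a policy-driven instrument can recover a causal income effect that ordinary regression misses because of reverse causation. It is not Larrimore’s data, specification, or code.

    import numpy as np
    import statsmodels.api as sm

    # Toy illustration of two-stage least squares (2SLS) with an income instrument.
    # All names here (eitc, income, health) are hypothetical simulation variables.
    rng = np.random.default_rng(0)
    n = 5000

    eitc = rng.normal(size=n)             # instrument: policy-driven income support
    u = rng.normal(size=n)                # unobserved factor causing reverse causation
    income = 1.0 * eitc + 0.8 * u + rng.normal(size=n)
    health = 0.3 * income - 1.0 * u + rng.normal(size=n)   # true causal effect = 0.3

    # Naive OLS is biased: income and health share the unobserved factor u.
    ols = sm.OLS(health, sm.add_constant(income)).fit()

    # Stage 1: predict income from the instrument alone.
    stage1 = sm.OLS(income, sm.add_constant(eitc)).fit()
    # Stage 2: regress health on the policy-driven (predicted) part of income.
    stage2 = sm.OLS(health, sm.add_constant(stage1.fittedvalues)).fit()

    print("naive OLS slope:", round(ols.params[1], 3))     # biased toward zero
    print("2SLS slope:     ", round(stage2.params[1], 3))  # close to the true 0.3

In this simulation, the naive estimate is pulled toward zero by the shared unobserved factor, while the two-stage estimate recovers the true effect; running the second stage by hand like this gives correct point estimates but not correct standard errors, which is why dedicated instrumental variable routines are used in practice.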

This issue of the Quarterly concludes with “Economic Burden of Occupational Injury and Illness in the United States,” by J. Paul Leigh, which provides the most comprehensive picture currently available of the total costs of workplace-related injuries and illnesses. The article updates an earlier one by Leigh and John Robbins in the Quarterly (Leigh and Robbins 2004). Using data from multiple sources, Leigh estimates the number and costs of fatal and nonfatal work-related illnesses and injuries in 2007. The costs ($250 billion) are very large and, more importantly, have increased substantially in inflation-adjusted terms since 1992, when the cost was estimated at $217 billion.

Bradford H. Gray
Editor, The Milbank Quarterly

References

AHRQ (Agency for Healthcare Research and Quality). 2011. 2010 National Healthcare Disparities Report. Rockville, MD: U.S. Department of Health and Human Services.

Leigh, J.P., and J.A. Robbins. 2004. Occupational Disease and Workers’ Compensation: Coverage, Costs, and Consequences. Milbank Quarterly 82(4):689–721.

Volume 89, Issue 4 (pages 529–532)
DOI: 10.1111/j.1468-0009.2011.00641.x
Published in 2011