Department of Corrections
Every editor of every peer-reviewed, scientific journal has the same nightmare, and this past May it came true for the editors of Science, one of the world’s most prestigious publications. In brief, Science published a study in its December 12, 2014, issue by Michael LaCour, a political science graduate student at the University of California, Los Angeles, and Donald P. Green, a professor of political science at Columbia University who served as the senior author.1 The paper described how attitudes toward the marriage of same-sex couples could be changed after brief, face-to-face conversations with individuals who had a personal stake in the issue. Coming on the heels of sweeping societal changes in marriage equality rights, these findings garnered wide media attention.
The study’s credibility quickly began to unravel after 2 graduate students at the University of California, Berkeley, David Broockman, now an assistant professor at Stanford, and Joshua Kalla, were unable to reproduce its findings. They contacted Professor Green, as well as Peter Aronow, a political scientist at Yale, regarding the paper’s irregularities. Green confronted LaCour, who admitted to falsely describing some of the details of his study. Green then contacted the editors at Science and asked them to retract the paper, even though Michael LaCour disagreed with that decision. Also involved in breaking this story was the editorial staff of the website Retraction Watch, which in recent years has played a major role in the retraction of a great many falsified articles. After further scrutiny, more problems emerged. Specifically, LaCour falsely claimed that cash payments were given to the subjects as incentives to participate in the survey. Moreover, he falsely credited several foundations or institutions with funding his work and made untruthful assertions about where or how he obtained his data set and even about the survey firm he supposedly used during the study.
On May 20, 2015, the editors of Science published an online “editorial expression of concern.” This awful scenario next made its way onto the front page of the May 25 edition of the New York Times.2(pA1) Given the interconnectivity of media in the 21st century, the story was swiftly reported by just about every major media outlet imaginable. The Science editors formally retracted the paper on June 5, 2015.1
The point in repeating this story is not to chastise the good women and men who edit and publish Science. Instead, it is to bemoan the fact that if this errant researcher managed to duplicitously worm his “doctored” work onto Science’s pages, it can happen to any journal. That said, peer-reviewed journal editors must continue to develop the means to ferret out the most egregious “bad actors.” Indeed, the integrity of population health, scientific, and medical journals depends on the careful scrutiny of all submissions, just as it depends on the honor and trustworthiness of our authors.
We remain confident that more than 99.9% of our authors in The Milbank Quarterly are good citizens who are committed to the honest and thorough reporting of their studies. Nevertheless, in the fall of 2014, The Milbank Quarterly adopted a number of criteria set forth by the International Committee of Medical Journal Editors (ICMJE) to bolster the requirements for authorship. We began by strengthening the financial support reporting for every submission and requiring a detailed declaration of potential conflicts of interest. Moreover, we required that each author formally describe his or her role with respect to the conception or design of the work, contributions to the writing, and approval of the final version. Each author must also agree to be “accountable for all aspects of the work” (see Instructions to the Authors, https://www.milbank.org/the-milbank-quarterly/for-authors/author-instructions).
This last requirement represents a key factor in the Science debacle: if Donald Green had thoroughly reviewed Michael LaCour’s data, the paper might never have been submitted for publication. On May 25, 2015, Professor Green told a reporter for the New York Times that he was not more forceful in demanding to review the raw data because “it’s a very delicate situation when a senior scholar makes a move to look at a junior scholar’s data set. This is his career, and if I reach in and grab it, it may seem like I’m boxing him out.”2(pA1)
I must respectfully disagree. Professor Green should have done much more than he did. Every major peer-reviewed journal, including Science, clearly requires all authors of any paper submitted to be intimately familiar with every aspect of the paper, from the raw data to the final version. The Milbank Quarterly expects nothing less from its authors.
To their credit, the editors of Science have since worked diligently to develop a slate of scientific standards that require even more transparency, openness, and reproducibility in the data and papers they publish.3 They are hardly alone in this quest. For example, researchers in the population health fields have worked diligently to create the STROBE statement (Strengthening the Reporting of Observational Studies in Epidemiology) and the RECORD statement (REporting of Studies Using Observational Routinely Collected Health Data).4
It is clear, however, that one checklist will not fit all publications, and as a result, most journal editors of late (including at The Milbank Quarterly) have been consulting with one another to develop standards that are most applicable to the papers that each journal routinely publishes.
The most elegant statement defining standards for transparency, openness, and reproducibility of data that we have seen thus far is from the editors of Nature.5 For now, these are the guidelines we will adopt, with some additions and revisions germane to the breadth of papers we publish in The Milbank Quarterly. We do this in order to take a responsible and leading role in keeping peer-reviewed publishing, and the reporting therein, clean. The new statement follows and will take effect for the June 2016 issue and thereafter. All potential Milbank Quarterly authors submitting a paper for consideration of publication will be required to sign a form agreeing to these principles.
This may not be the last time the Quarterly runs a “Department of Corrections” (n.b., the pun is deliberate), but we decidedly hope it will be. Nor is it likely that these will be our final words on additional requirements for authors, even though we hope they might be, as well. In sum, we rely on the honesty, openness, and willingness of our authors, reviewers, and readers to help in monitoring their work and the work of all our colleagues. An open exchange of ideas, methods, and data is the lifeblood of knowledge production, and it has been the essence of The Milbank Quarterly for nearly a century. The credibility and, when appropriate, the reproducibility of the papers we publish are our greatest assets. If, indeed, you uncover something we have missed or you find awry, my email address is always at your service: (email@example.com).
Statement of Transparency, Openness and Credibility (and, Where Appropriate, Reproducibility) Regarding the Availability of Data and Methods:
A condition of publication of a paper in The Milbank Quarterly is the requirement that authors make all their materials (including, but not limited to, primary data and secondary sources, and, if applicable, open source or proprietary software, as well as pertinent information about the paper’s methodology) promptly available to readers upon request and without undue qualifications (beyond, for example, a reasonable payment to cover the out-of-pocket costs of distribution).
Any restrictions on the availability of data or information about the paper must be disclosed to the editors at the time of submission. Such restrictions must also be disclosed in the submitted manuscript and may be a factor in editorial decisions to reject or accept a manuscript. We acknowledge that there may be situations that do not permit full disclosure (eg, interviewing subjects for qualitative studies while preserving anonymity of sources) and will discuss these issues with each author on a case-by-case basis. The editors reserve the right to review the materials during the review process and at any time thereafter.
After publication, readers who encounter refusal or delay of responses from the authors for more than one month (or who receive partial or incomplete responses) or other noncompliance with these policies should contact the editor-in-chief of The Milbank Quarterly.
In those cases in which the editor-in-chief is unable to resolve a complaint, the journal may refer the matter to the author’s employing institution and/or funding institution. The journal may also initiate an escalating series of remedial actions, beginning with a “Statement of Correction” (attached online to the publication stating that readers have been unable to obtain the necessary materials to replicate the findings) up to and including full retraction and removal of the article from The Milbank Quarterly’s website (with a detailed explanation).
With that housekeeping begun, let us now turn to the contents of the March 2016 issue of The Milbank Quarterly. We begin with our Op-Ed section. Our guest contributor, John McDonough, of the Harvard T.H. Chan School of Public Health, asks whether the fate of the Affordable Care Act (ACA) has been settled; Georges Benjamin, the executive director of the American Public Health Association and another guest contributor, writes about the response of health care systems to climate change; Sara Rosenbaum analyzes still another US Supreme Court case challenging the ACA’s birth-control mandate; Jonathan Cohn explores the fine print in the 2016 presidential candidates’ health proposals; Catherine DeAngelis comments on Big Pharma’s profits and how they affect health care, health policy, and health costs; Lawrence Gostin reports on 4 new commissions on global health security issues; Joshua Sharfstein examines the thorny relationship between science and politics; Gail Wilensky looks at our propensity to spend so many health care dollars on the latest technological advances, such as robotic surgery, and to use them far more often than makes clinical or economic sense; and David Rosner considers health disparities and socioeconomic class.
Our lead article is by Paula Lantz, W. Douglas Evans, Holly Mead, Carmen Alvarez, and Lisa Stewart. They developed a nationally representative survey to study knowledge of and attitudes toward evidence-based guidelines for and against clinical preventive services, including those developed by the US Preventive Services Task Force (USPSTF). They found that 36.4% of adults reported knowing that the Affordable Care Act requires insurance companies to cover proven preventive services without cost sharing, but only 7.7% of the respondents had heard of the USPSTF. Approximately 1 in 3 (32.6%) reported trust in a government task force that would make fair guidelines for preventive services. Perhaps even more interesting, 38.2% of those interviewed believed that the government uses guidelines to ration health care. Most of the respondents endorsed the notion that research/scientific evidence and expert medical opinion are important for the creation of guidelines and that clinicians should follow guidelines based on evidence. But when presented with patient vignettes in which a physician made a guideline-based recommendation against a cancer-screening test, less than 10% believed that this recommendation alone, without further dialogue and/or the patient’s own research, was sufficient to make such a decision. Given these demonstrated low levels of knowledge and mistrust regarding guidelines, coupled with a strong preference for shared decision making, Lantz and her colleagues argue that better consumer education and decision supports for evidence-based guidelines for clinical preventive services are greatly needed.
Our next article is a study of Health Information Exchanges (HIEs) by Joshua Vest and Bita Kash. The US federal government has invested billions of dollars to encourage the adoption of information technologies to exchange health information and to enable providers to efficiently and effectively share patient information with other providers. Using semistructured interviews with 40 policymakers, community and enterprise HIE leaders, and health care executives from 19 different organizations, Vest and Kash identified several factors that influenced differing strategies to meet information-sharing needs and various health systems’ choice to participate in either publicly supported, community HIEs or enterprise HIEs. Enterprise HIEs are employed by sophisticated health systems as a strategic resource for controlling a technology network of desired trading partners. Community HIEs support obtaining patient information from the broadest set of providers, though with benefits dispersed more widely among participants, the community, and patients. Although participation is not an either/or decision, community and enterprise HIEs do compete for providers’ attention and for such finite organizational resources as time, skilled staff, and money. Both approaches face the challenges of vendor costs and less-than-interoperable technology, and both support aggregating clinical data and following patients across settings. Health policy may encourage the community exchange of health information, but the business case for enterprise HIEs appears stronger. As a result, sustaining a community HIE as a public good may necessitate ongoing public funding and supportive regulation.
Charles Phelps, Guruprasad Madhavan, Rino Rappuoli, Scott Levin, Edward Shortliffe, and Rita Colwell are the authors of a study of strategic planning in population health and public health practice. Scarce resources make strategic planning especially important in these fields, yet current priority-setting efforts in public health agencies are often narrow, at times opaque, and focused on single metrics such as cost-effectiveness. The authors propose new approaches to strategic planning (such as those demonstrated by SMART Vaccines, a decision support software system developed by the Institute of Medicine and the National Academy of Engineering) that formally incorporate multiple stakeholder views and enable flexible, transparent, and clear multicriteria decision making, surpassing even the sophisticated cost-effectiveness analyses widely recommended and currently used in public health planning. They argue that institutions of higher education can and should respond by building improved, modern strategic-planning tools and by educating their students to contribute to the betterment of population health.
David Merritt Johns, Ronald Bayer, and Amy Fairchild traced the rise and decline over the past 3 decades of the “Counseling and Testing” paradigm for HIV prevention at the US Centers for Disease Control and Prevention (CDC). In 1985, the accuracy of the new antibody test for HIV was uncertain. As a result, public health officials at the CDC, AIDS activists, and community groups all agreed that counseling should be provided both before and after testing to ensure that patients were tested voluntarily and that they understood the meaning of their results. Over the following 20 years, as the “exceptionalist” perspective that had framed HIV in its early years receded, the purpose of HIV test counseling shifted from emphasizing consent to providing information, and then to encouraging behavioral change. With this increasing emphasis on prevention, HIV test counseling faced mounting doubts about whether it “worked.” By the 1990s, divisions had also emerged within the CDC over whether test counseling could be justified on the basis of efficacy and cost. Johns and his colleagues conclude that the complexity of the response to the public health crisis, the development of allegiances to interventions designed by necessity in the absence of evidence, and uncertainty over how best to use limited federal resources all contributed to the delay in removing test counseling from the HIV prevention picture.
Amy Drahota and colleagues conducted a systematic review of the literature on community-academic partnerships (CAP). It is well known that communities, funding agencies, and institutions are increasingly involving community stakeholders as partners in research to provide firsthand knowledge and insight. Despite the greater emphasis and use of CAPs across multiple disciplines, the definitions of partnerships and methodologies vary greatly, and no systematic reviews consolidating this literature have been published. As a result of applying internationally accepted standards for conducting systematic reviews, the authors contribute to refining the methodology for future primary research on this important subject. Drahota and her colleagues describe CAP characteristics, the terms and methods used, as well as the common influences on the CAP process and outcomes. From this work, the authors identify 23 common facilitating and hindering factors influencing the CAP collaboration process, with the most common being developing or refining tangible products.
We hope that you enjoy the March 2016 issue of The Milbank Quarterly and, to lift a phrase from Alexander Pope, that “hope springs eternal” for all.
1. LaCour MJ, Green DP. When contact changes minds: an experiment on transmission of support for gay equality. Science. 2014;346(6215):1366-1369. Retraction in: Science. 2015;348(6239):1100.
2. Carey B, Belluck P. Doubts about study of gay canvassers rattle the field. New York Times. May 25, 2015:A1.
3. Nosek BA, et al. Promoting an open research culture. Science. 2015;348(6242):1422-1425.
4. Benchimol EI, et al. The REporting of Studies Using Observational Routinely Collected Health Data (RECORD) statement. PLoS Med. 2015;12(10):e1001885. doi:10.1371/journal.pmed.1001885.
5. Nature Publishing Group. Availability of data, materials and methods. Updated April 15, 2015. http://www.nature.com/authors/policies/availability.html. Accessed October 29, 2015.
Author(s): Howard Markel
Volume 94, Issue 1 (pages 5–12)
Published in 2016