Practitioner Insights: EPA’s Flawed ‘Secret Science’ Plan Puts Good Science at Risk


The EPA has proposed a far-reaching set of restrictions on its use of scientific data to support regulatory action to protect human health. The April 30 proposal would bar the Environmental Protection Agency from considering a study in making “significant regulatory decisions” unless “dose response data and models underlying pivotal regulatory science ... are publicly available in a manner sufficient for independent validation.”

The EPA proposal is flawed and misconceived. In the name of “transparency,” it will burden agency scientists with unnecessary and costly procedures that run counter to the agency’s long-standing obligation to base public health decisions on the best available science.

Rush to Judgment

Despite its dramatic consequences, the proposal contains remarkably little explanation of the goals it seeks to achieve, its costs, and its practical impacts. The types of studies and regulatory actions the proposal covers are unclear. Its relationship to the EPA’s core statutes and its effects on the quality of agency science are largely ignored. Key terms like “validation” are not defined, and others like “dose response data and models” are defined poorly. Implementing the proposal will present myriad complexities, but these challenges receive short shrift. The haste with which the proposal was cobbled together, the absence of careful analysis, the truncated review by the Office of Management and Budget, and the rushed release are unprecedented for a policy change of this magnitude.

The press release accompanying the proposal points to a “replication crisis” in the scientific community and claims Administrator Scott Pruitt is ending the “era of secret science at EPA.” But no backup for these assertions is provided. While the press release cites a “growing recognition that a significant proportion of published research may not be reproducible,” it offers no documentation and identifies no case where the EPA improperly relied on a study that couldn’t be replicated.

Litmus Test for Reliability

The unspoken premise of the proposal is that unless the EPA can guarantee full public access to a study’s underlying data, the study must be deemed unreliable and should play no role in assessing a pollutant’s or chemical’s effects on human health. This premise ignores the many ways in which the scientific community, regulators, and the public have traditionally determined the quality and relevance of study results. Study reports typically explain the protocols used to gather data; methods used for data analysis; doses or exposure concentrations at which effects were and were not observed; nature, severity, and incidence of such effects; and any unusual occurrences that could affect interpretation of the results.

This information plays an important role in the peer review process, informing the judgment of independent reviewers as to whether a study is worthy of publication in the scientific literature. Agency reviewers likewise consider these indicators of reliability in deciding how much weight a study deserves in making judgments about hazard and risk. Moreover, the EPA and independent scientists don’t consider study findings in a vacuum but assess their significance in relation to the totality of available evidence. While access to underlying data may be relevant in reviewing some studies, it’s not a meaningful factor in most cases and, until now, has never been the sole litmus test for whether a study can contribute to the scientific understanding of a pollutant or chemical’s impacts on health.

All Scientific Evidence

In its narrow focus on a single criterion for study acceptability, the proposal departs from the comprehensive, multifaceted approach that Congress has directed the EPA to follow in arriving at the “best available science” to inform decision-making. The Clean Air Act explicitly requires that air quality criteria “accurately reflect” the “latest scientific knowledge” which is “useful” in assessing “all identifiable effects on public health.” The EPA has interpreted this provision to require it to consider all available studies, whether underlying data are publicly available or not.

In American Trucking Associations v. EPA, the U.S. Court of Appeals for the District of Columbia Circuit refused to “impose a general requirement that the EPA obtain and publicize the data underlying published studies on which the Agency relies,” concluding that “the Clean Air Act imposes no such obligation” and “agree[ing] with EPA that requiring agencies to obtain and publicize the data underlying all studies on which they rely ‘would be impractical and unnecessary.’”

Other laws likewise focus on examining the totality of available evidence and thus preclude using access to underlying data as a bright-line test of a study’s reliability. The 1996 amendments to the Safe Drinking Water Act require the EPA to set standards using “the best available, peer-reviewed science and supporting studies conducted in accordance with sound and objective scientific practices.” Courts interpreting these requirements have placed heavy emphasis on peer review in determining whether the science the EPA relies on is the “best available” (City of Waukesha v. EPA).

Similarly, the Information Quality Act requires agencies to issue guidelines “ensuring and maximizing the quality, objectivity, utility, and integrity of information.” The implementing guidelines issued by the White House regulatory review office within OMB recognize that a study may be “objective,” even where the underlying results are not replicable due to “ethical, feasibility, or confidentiality constraints.” Both the OMB and EPA guidance under the act provide that external peer review is generally sufficient to create a presumption of “objectivity.”

TSCA and Weight of Evidence Analyses

The 2016 amendments to the Toxic Substances Control Act provide the latest congressional thinking on the role of science in decision-making. The amendments reaffirm the need to ground judgments about health risks of chemicals in a holistic assessment of all available evidence. Section 26(h) of the law, titled “Scientific Standards,” states that “the Administrator shall use scientific information ... in a manner consistent with the best available science” and identifies several factors affecting the reliability and relevance of scientific studies that the EPA “shall consider.”

None of these factors is in itself disqualifying. The EPA must simply consider them “as applicable” and thus must examine the reliability of a study by balancing and weighing several indicia of scientific validity on a case-by-case basis. Significantly, the availability of sufficient underlying data to “validate” or “reproduce” study results is not among the relevant factors that the law requires the EPA to consider. Section 26(h)(4) provides that the EPA should take into account “the extent of independent verification or peer review” of scientific information—but this language indicates that, even without independent verification, peer review of a study could provide sufficient assurance of its reliability to warrant its use by the agency.

Section 26(i) of TSCA also requires the agency to base its decisions on the “weight of the scientific evidence.” In its proposed rule of Jan. 19, 2017, the Obama EPA explained that the weight of the scientific evidence “process involves a number of steps starting with assembling the relevant data, evaluating that data for quality and relevance, followed by an integration of the different lines of evidence to support conclusions concerning a property of the substance.”

In its final rule of July 20, 2017, the Trump EPA affirmed this approach, defining weight of the scientific evidence as a “systematic review method, applied in a manner suited to the nature of the evidence or decision, that uses a pre-established protocol to comprehensively, objectively, transparently, and consistently identify and evaluate each stream of evidence, including strengths, limitations, and relevance of each study.”

This approach—which entails an inclusive analysis of all available studies to determine the “weight of the scientific evidence” and “best available science” for the case at hand—cannot be reconciled with the automatic rejection of studies based solely on the extent of access to underlying data.

EPA as Guarantor of Reproducibility?

In principle, no one disputes the benefits of improving access to underlying data for research on chemicals and pollutants. As the EPA proposal notes, the goals of “open science” have received support from several organizations, and leading scientific journals and research institutions have adopted practices and policies to maximize data access. These voluntary efforts, however, don’t justify the unprecedented step of requiring the EPA to guarantee access to the underlying data for every study it may use for decision-making and to forfeit the ability to consider a study if this requirement hasn’t been met.

EPA scientists working on risk and hazard assessments collect and review thousands of studies. Published reports of these studies typically don’t include all underlying data. In such cases, the EPA would have to contact the researcher, ascertain the nature and extent of underlying data, and put in place a mechanism for the public to access the data. Analyzing House legislation that would impose similar obligations on the EPA, the Congressional Budget Office and EPA staff concluded that the costs of implementation would be at least $250 million a year. This doesn’t include the impact of delaying assessments during the protracted process of obtaining access to underlying data. Moreover, rather than devoting time and effort to assuring access to underlying data, the EPA staff could follow the path of least resistance and simply drop many studies from consideration, shrinking the body of scientific evidence on which decisions are based.

Practical Constraints on Data Availability

Even with diligent effort by the EPA, there are many reasons why disclosure of data sufficient to replicate a study may be impossible. For epidemiology and other studies of human cohorts, privacy protections will often block release of individual medical records. Industry-conducted studies could contain confidential business information required to be withheld by law. In addition, companies may have intellectual property rights that would be violated if access to underlying data allowed competitors to rely on a study without replicating it. For studies based on human exposure measurements, replication may be impossible because exposure conditions have changed. Studies attempting to capture the impacts of one-time events like spills or plant explosions also will be inherently unreproducible. And for older studies predating digital technology, retrieving full study records could be difficult or impossible.

The EPA proposal duly notes these obstacles to study replication and provides that exemptions may be granted on a case-by-case basis where “compliance is impracticable.” But the proposal is silent on the process for obtaining exemptions, the information that must be submitted to justify them, and the exact criteria to be applied. More important, an exemption process will add to the considerable cost and effort required to implement the proposed rule and could result in disputes and even litigation over whether exemptions are justified. To avoid this quagmire, EPA staff could steer clear of studies that cannot readily be replicated, even at the price of limiting the “best available science” for decision-making.

Where’s the ‘Secret Science’?

Will the benefits of the proposed rule justify the damage it will inflict on the quality and timeliness of EPA science? To explain why the rule is needed, Administrator Pruitt and his allies have painted a bleak picture of the agency’s reliance on “secret science” developed behind “closed doors” and “based on data that has been withheld from the American people.” But is this the reality?

EPA science assessments generally include an exhaustive and critical review of relevant studies and a full explanation of how they are being interpreted. Extensive information about each study is typically part of the public record, even if all underlying data may not be included. With the advent of “systematic review” methods touted by the Trump EPA among others, the agency’s evaluation of the quality and relevance of individual studies will become only more transparent. EPA assessments are normally subject to public comment and independent peer review, providing a vehicle for dialogue and feedback on the treatment of individual studies. And members of the regulatory community are free at any time to replicate studies they deem flawed or to independently seek access to underlying data and reanalyze them. That this is an uncommon occurrence is evidence that concerns about reproducibility are relatively rare and don’t represent a systemic issue in the science assessments supporting EPA regulatory decisions. In short, the “problem” that the proposed rule seeks to fix is largely imaginary.

Administrator Pruitt needs to take a step back and rethink his proposed rule. The stakes for EPA science and the protection of public health are simply too high to finalize this deeply problematic and unnecessary proposal.

Bob Sussman is counsel to Safer Chemicals, Healthy Families, and was EPA deputy administrator (1993-1994) and senior counsel to the EPA administrator (2009-2013). Before that, he was a partner at the law firm of Latham & Watkins.

The opinions expressed here do not represent those of Bloomberg Environment, which welcomes other points of view.
