When “Strong Evidence” is Not Sufficient

Photo by Ivan Roma Manukrante, licensed under the CC BY-NC 4.0 license.


In the fall of 2017, a major fire engulfed and devastated Northern California’s wine country, especially Napa and Sonoma counties, charring nearly a quarter of a million acres and destroying thousands of homes. The area is generally foggy and cool, thanks to moisture-laden air blowing in from the Pacific, and its ample vegetation is usually damp. Prior to the fire, however, the usual winds reversed direction, blowing in from the arid east, causing humidity to plummet and vegetation to dry out. The search for the cause of this major fire is ongoing, but experienced fire experts believe that attributing it to a single cause would be overly simplistic; rather, they believe this devastating fire was the result of a configuration of causes. That is, the fire was the product of multiple conditions that were jointly necessary and sufficient, not of a single cause such as a lit cigarette or a spark from a transmission line (i.e., an ignition source). The fire was the outcome of a “set of causes” that included shifting and sustained winds, dry vegetation, residential houses built too close to forests, and fire barriers that residents had not maintained. Fire officials in California knew from previous experience with such large-scale fires that, given the complexity of natural environmental systems and of human interaction with those systems, the “cause” of the fire was most likely a vast array of influences, circumstances, and conditions, and that understanding them well would help in learning how to prevent fires of this size in the future.
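The distinction the fire experts are drawing, between a necessary condition and a sufficient one, can be sketched in a few lines of code. This is a deliberately simplified, hypothetical model (the condition names are illustrative, not drawn from any actual fire investigation): each condition is necessary, since removing any one of them prevents the fire, yet no single condition is sufficient on its own.

```python
# Hypothetical model: the outcome occurs only when the full
# configuration of causes holds together.
def fire_occurs(ignition, dry_vegetation, sustained_wind, fuel_near_homes):
    return ignition and dry_vegetation and sustained_wind and fuel_near_homes

# An ignition source alone is not sufficient to produce the fire...
assert fire_occurs(True, False, False, False) is False

# ...but it is necessary: the full configuration minus ignition
# yields no fire either.
assert fire_occurs(False, True, True, True) is False

# Only the complete configuration of causes produces the outcome.
assert fire_occurs(True, True, True, True) is True
```

A study that isolates one variable can show that ignition matters; it cannot, by itself, show that the other three conditions were equally indispensable.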

Ironically, international development organizations that operate in similarly complex natural, social, political, and economic environments, and whose missions are to produce large-scale positive impacts, are told by various international development research initiatives (e.g., the What Works Clearinghouse, 3ie, J-PAL, IPA, the Coalition for Evidence-Based Policy, and the Campbell Collaboration) to conduct experimental or quasi-experimental design (EQD) studies, mostly randomized controlled trials, in order to understand “what works.”

EQD studies are promoted as the best, and often the only, method for determining whether the results and impacts achieved in international development can be attributed to an international organization’s intervention(s). To make that determination, EQD studies test a limited hypothesis or question in a controlled setting designed to eliminate confounding influences. In contrast to the fire experts, international development organizations all too often employ EQDs to seek simplistic accounts of major outcomes and impacts in order to establish the causal claim that “the outcome is due to ‘our’ intervention(s).”

In international development, academics and practitioners are realizing that using EQDs to attribute a large-scale result to a singular cause is overly simplistic and not representative of what actually occurs. Rather, learning “what works” involves investigating and understanding the “configuration of causes” that produced a result, or set of results, at a particular time and in a particular context. All major fires have a similar result, immediate destruction; yet every major fire arises from a different and unique causal combination, and no one condition, such as an ignition source, is sufficient on its own. Moreover, contexts enable and constrain the causal combinations that are possible, so that each combination can be different. Consequently, any prescription that prioritizes a rigid methodology (EQD) over content (questions) greatly hinders the study and understanding of configurations of causes, and ultimately the learning of “what works.” Why?

First, an EQD is a method that tests a narrow question in a closed, controlled setting, which is quite different from the many complex questions that need to be asked in the open, emergent contexts of international development. Second, EQDs help establish strong inference about “efficacy” between a cause and an effect, or, in other words, about “what did happen” in a relatively closed, controlled experimental setting. Such findings, however, do not support similar inference about “effectiveness” in an open natural setting, because the context can add to or subtract from both the cause and the effect in ways not observed in the experimental study. Third, findings from an EQD study may demonstrate a necessary link between a cause and an effect, but they cannot demonstrate whether that necessary cause is sufficient by itself to create the same effect at a particular time and in a particular context. Fourth, since EQDs can investigate only a limited number of causal factors, they are not suited to investigating the configuration of causal mechanisms actually operating within a specific context, which determines whether an intervention is effective or not.

International development programs and interventions are increasingly being characterized as: embedded in open, complex systems; in need of local customization; diversified yet integrated; prone to producing non-linear results; and involving multiple causal paths with emergent effects, as well as causes and effects that interact and influence one another. If this characterization of international development efforts is even somewhat accurate, then prioritizing one method is shortsighted for learning “what works” in international development. Instead, this characterization suggests that international organizations should not be disadvantaged by having “one hand tied behind their back,” but rather should utilize a multitude of designs, and preferably multiple methods combined, to investigate and answer fundamental questions such as “what works” in the setting in which an intervention is being implemented.

If fire officials in California had been told to prioritize an EQD to study the cause of the fire, very little would have been learned about “what works” to prevent another major fire in Northern California. To the benefit of Northern Californians, fire experts employed a flexible set of methods of critical investigation and observation. Hopefully, international development organizations will come to recognize that EQDs do not, in and of themselves, provide “strong evidence” about “what works” in a complex developmental context. Learning about “what works” will require international development organizations to derive “strong evidence” from asking comprehensive (not limited) questions about combinations of (not singular) causes in open (not closed) settings using multiple (not single) methods, with a focus on “improving” and not always on “proving.”