Annex 11: How Mixed Methods Can Strengthen QUANT Evaluation Designs

 

Each numbered issue below is followed by bullet points describing the potential contribution of mixed methods.

Evaluation Design Issues

1. Limited construct validity: Many strong evaluations use secondary data sources and must rely on proxy variables that may not adequately capture what is being studied. Findings can therefore be misleading.

  • Exploratory qualitative studies can strengthen understanding of the key concepts being studied.
  • Focus groups and participatory rural appraisal (PRA) can provide beneficiary perspectives on concepts and constructs.

2. Decontextualizing the evaluation: Conventional impact evaluation (IE) designs ignore the effect of the local political, economic, institutional, socio-cultural, historical and natural environmental context. These factors will often mean that the same project will have different outcomes in different communities or local settings.

  • Ethnographic, key informants and other qualitative techniques can provide information on the local context. Contextual analysis can be incorporated into regression analysis through the creation of dummy variables.
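As a minimal sketch of the second point, using invented household data and hypothetical variable names, qualitative context categories (e.g. a community classification drawn from key-informant interviews) can enter a regression as dummy variables:

```python
import numpy as np
import pandas as pd

# Hypothetical household data: a treatment flag plus a qualitative
# classification of each community's context (all values simulated).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "treated": rng.integers(0, 2, 200),
    "context": rng.choice(["weak_market", "strong_market", "remote"], 200),
})
df["income"] = 10 + 3 * df["treated"] + rng.normal(0, 1, 200)

# Encode the qualitative context as dummy variables, dropping one
# category to avoid perfect collinearity with the intercept.
dummies = pd.get_dummies(df["context"], prefix="ctx", drop_first=True)
X = pd.concat([df[["treated"]], dummies], axis=1).astype(float)
X.insert(0, "intercept", 1.0)

# Ordinary least squares fit of income on treatment + context dummies.
beta, *_ = np.linalg.lstsq(X.to_numpy(), df["income"].to_numpy(), rcond=None)
coefs = dict(zip(X.columns, beta))
print(coefs["treated"])  # estimated treatment effect, near the simulated value of 3
```

The context dummies let the regression absorb level differences between community types, so the treatment estimate is not confounded with local setting.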

3. Ignores the process of project implementation – the problem of the “black box”: Most IEs use a pre-test/post-test comparison and do not study how the project is actually implemented. As a result, if a project does not achieve its objectives it is not possible to determine whether this is due to design failure or implementation failure.

  • Qualitative techniques such as participant and non-participant observation and key informants can be combined with program monitoring to provide quantitative and qualitative information on implementation and other project processes.

4. Designs are inflexible and cannot capture or adapt to changes in project design and implementation, or in the local context: IEs repeat the application of the same data collection instrument, asking the same questions and using the same definitions of inputs, outputs, outcomes and impacts. It is very difficult for these designs to adapt to the changes which frequently occur in the project setting or implementation policy.

  • Panel studies, participant observation, key informants, etc. have the flexibility to detect and observe changes in the project or its setting.

5. Hard to assess the adequacy of the sampling frame: Evaluations frequently use the client list of a government agency as the sampling frame. This is easy and cheap to use, but the evaluation frequently ignores the fact that significant numbers of eligible families or communities are left out – and these are usually the poorest or most inaccessible.

  • Small-scale, rapid studies of selected areas can be used to assess the adequacy of sampling frames.

6. No clear definition of the time-frame over which outcomes and impacts can be measured: The post-test measurement is frequently administered at a time defined by administrative, rather than theoretical, considerations. Very often the measurement is made when it is too early for impacts to have been achieved, and it may be wrongly concluded that the project did not have an impact.

  • Program theory models can be used to define the time-frame over which outcomes and short, medium and long-term impacts can be expected to occur. This can help define both when the impact evaluation should be conducted and the initial indicators that a project is on track to achieving its outcomes/impacts.

7. Difficult to identify and measure unexpected outcomes: Structured surveys can only measure the expected outcomes and effects and are not able to detect unanticipated outcomes and impacts (positive and negative).

  • Program theory models can identify preliminary indicators that can be measured early in the project and that provide evidence that the project is on track.
  • Qualitative methods such as key informants, participant observation and focus groups can also provide early indications of whether the project is on track.

Data Collection Issues

8. Reliability and validity of indicators: Many strong designs only use a limited number of indicators of outcomes and impacts, almost all of which are quantitative.

  • Mixed methods can combine multiple quantitative and qualitative indicators which, in combination, can enhance validity and capture different dimensions of what is being studied.
  • Mixed methods make extensive use of triangulation through which estimates obtained from different indicators are systematically compared and refined, and understanding is enhanced by comparing different perspectives.
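A toy illustration of the triangulation point (all data simulated, not from any real survey): two independent indicators of the same construct – here, poverty status measured via reported consumption and via an asset-based proxy – can be compared both at the aggregate level and household by household.

```python
import numpy as np

# Simulated data: 150 households, each measured with two independent
# poverty indicators that carry different amounts of measurement error.
rng = np.random.default_rng(1)
true_poor = rng.random(150) < 0.3
consumption_poor = true_poor ^ (rng.random(150) < 0.05)  # 5% misclassification
asset_poor = true_poor ^ (rng.random(150) < 0.10)        # 10% misclassification

# Triangulation: compare the two poverty-rate estimates and the
# household-level agreement between the indicators.
rate_consumption = consumption_poor.mean()
rate_assets = asset_poor.mean()
agreement = (consumption_poor == asset_poor).mean()

print(f"consumption-based rate: {rate_consumption:.2f}")
print(f"asset-based rate:       {rate_assets:.2f}")
print(f"household agreement:    {agreement:.2f}")
```

Large divergence between the two estimates, or low household-level agreement, would prompt qualitative follow-up focused on the households where the indicators disagree.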

9. Inability to identify and interview difficult-to-reach groups: Most QUANT data collection methods are not well suited to identifying and gaining the confidence of sex workers, drug users, illegal immigrants and other difficult-to-reach groups.

  • Ethnographers and other qualitative researchers have extensive experience in reaching inaccessible groups.

10. Difficult to obtain valid information on sensitive topics: Structured surveys are not well suited to collect information on sensitive topics such as domestic violence, control of household resources, and corruption.

  • Case studies, in-depth interviews, focus groups and participant observation are some of the many qualitative techniques available to study sensitive topics.

11. Lack of attention to contextual clues: Survey enumerators are trained to record what the respondent says, not to look for clues – household possessions, evidence of wealth, interaction among household members, or power relations – that could validate what is said.

  • Observation and key informants are two of the many useful techniques for capturing these contextual clues.

12. Often difficult to obtain a good comparison group match: Adequate secondary data for propensity score matching is only infrequently available, so comparison groups must often be selected on the basis of judgment and usually very rapid visits to possible comparison areas.

  • Judgmental comparison group selection can be strengthened through rapid diagnostic studies, consultations with key informants, etc.
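Where suitable covariate data do exist, the propensity score matching mentioned above can be sketched as follows (data, covariates and coefficients are all invented for illustration):

```python
import numpy as np

# Simulated units: two covariates, with selection into treatment
# depending on the first covariate (a classic selection problem).
rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 2))
treated = rng.random(n) < 1 / (1 + np.exp(-0.8 * X[:, 0]))

def propensity_scores(X, t, iters=25):
    """Logistic regression of treatment on covariates, fitted by Newton's method."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X'WX)^-1 X'(t - p)
        beta += np.linalg.solve(Xd.T @ (W[:, None] * Xd), Xd.T @ (t - p))
    return 1 / (1 + np.exp(-Xd @ beta))

ps = propensity_scores(X, treated.astype(float))

# One-to-one nearest-neighbour match of each treated unit to the
# untreated unit with the closest propensity score.
controls = np.flatnonzero(~treated)
matches = {i: controls[np.argmin(np.abs(ps[controls] - ps[i]))]
           for i in np.flatnonzero(treated)}
```

Outcomes of matched pairs can then be compared; when covariate data like `X` are unavailable, the judgmental and rapid-study approaches in the bullet above are the fallback.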

13. The vanishing control group: Control groups may become integrated into the project, or they may disappear through migration, flooding or urban renewal.

  • Panel studies and observation techniques can monitor changes in the size and composition of the comparison group, can help explain the dynamic of the changes, and can provide early-warning when corrective actions must be taken.

14. Lack of adequate baseline data: A high proportion of evaluations are commissioned late in the project and do not have access to baseline data. Many IEs collect baseline data but usually only collect QUANT information.

  • There are a range of qualitative techniques that can be used to help “reconstruct” baseline data.

Analysis and Utilization Issues

15. Long delay in producing findings and recommendations that can be used by policy makers and other stakeholders: Conventional IEs do not produce a report or recommendations until the post-test survey has been completed late in the project cycle or when the project has ended. By the time the report is produced it is often too late for the information to have any practical utility. 

  • Formative evaluation can provide periodic feedback to stakeholders throughout the life of a project. Some of this information can be generated by the planning and initial data collection phases of the impact studies, building up a constituency for the later findings of the quantitative studies.

16. Difficult to generalize to other settings and populations: This is a particular challenge for randomized control trials (RCTs) and similar designs that estimate average effects by controlling for individual and local variations.

  • Techniques such as quota sampling can use small samples to study variations in the population studied, providing a stronger basis for assessing the populations for which program replication is most and least likely to be successful.
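Quota sampling can be sketched with a hypothetical sampling frame whose strata (invented here as agro-ecological zone crossed with market access) were identified through qualitative work; drawing a fixed quota from every stratum ensures even small subgroups are represented well enough to study variation:

```python
import numpy as np
import pandas as pd

# Hypothetical population frame with qualitatively identified strata.
rng = np.random.default_rng(3)
frame = pd.DataFrame({
    "zone": rng.choice(["highland", "lowland"], 1000),
    "market_access": rng.choice(["good", "poor"], 1000),
})

# Quota sampling: the same fixed number of units from every stratum,
# regardless of how large each stratum is in the population.
quota_per_stratum = 10
sample = (frame.groupby(["zone", "market_access"])
               .sample(n=quota_per_stratum, random_state=0))
print(sample.groupby(["zone", "market_access"]).size())
```

Unlike proportional sampling, the resulting sample over-represents small strata by design, which is what makes subgroup comparisons feasible with a small total sample.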

17. Identifying and estimating the influence of unobservables: Participants who are self-selected, or who are selected by an agency interested in ensuring success, are likely to have unique characteristics that affect, and usually increase, the likelihood of success. Many of these are not captured in structured surveys, and consequently positive outcomes may be due to these pre-existing characteristics rather than to the success of the project.

  • PRA techniques, in-depth interviews, and key informants can help identify and study “unobservables” that cannot be easily addressed through formal surveys.