Annex 11: How Mixed Methods Can Strengthen QUANT Evaluation Designs
**Evaluation Design Issues**

1. **Limited construct validity.** Many strong evaluations use secondary data sources and must rely on proxy variables that may not adequately capture what is being studied. Findings can therefore be misleading.

2. **Decontextualizing the evaluation.** Conventional impact evaluation (IE) designs ignore the effects of the local political, economic, institutional, socio-cultural, historical and natural environmental context. These factors often mean that the same project will have different outcomes in different communities or local settings.

3. **Ignoring the process of project implementation (the problem of the "black box").** Most IEs use a pre-test/post-test comparison and do not study how the project is actually implemented. As a result, if a project does not achieve its objectives it is not possible to determine whether this is due to design failure or implementation failure.

4. **Inflexible designs that cannot capture or adapt to changes in project design, implementation, or the local context.** IEs repeat the application of the same data collection instrument, asking the same questions and using the same definitions of inputs, outputs, outcomes and impacts. It is very difficult for these designs to adapt to the changes that frequently occur in the project setting or in implementation policy.

5. **Hard to assess the adequacy of the sampling frame.** Evaluations frequently use the client list of a government agency as the sampling frame. This is easy and cheap to use, but such evaluations often ignore the fact that significant numbers of eligible families or communities are left out, and these are usually the poorest or most inaccessible.

6. **No clear definition of the time-frame over which outcomes and impacts can be measured.** The post-test measurement is frequently administered at a time determined by administrative rather than theoretical considerations. Very often the measurement is made too early for impacts to have been achieved, and it may wrongly be concluded that the project had no impact.

7. **Difficulty identifying and measuring unexpected outcomes.** Structured surveys can only measure expected outcomes and effects; they are not able to detect unanticipated outcomes and impacts (positive or negative).

**Data Collection Issues**

8. **Reliability and validity of indicators.** Many strong designs use only a limited number of indicators of outcomes and impacts, almost all of which are quantitative.

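Where an outcome is measured through a multi-item scale, one standard quantitative check on reliability is internal consistency, commonly summarized as Cronbach's alpha. A minimal sketch in Python, using simulated (not real) indicator data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    scale_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / scale_var)

# Simulated data: 4 noisy indicators of one underlying construct.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = latent[:, None] + rng.normal(scale=0.8, size=(200, 4))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

An alpha near 1 indicates the items move together; a low value suggests the indicators may not be measuring a single construct, which is exactly where qualitative probing of what each item actually captures can help.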
9. **Inability to identify and interview difficult-to-reach groups.** Most QUANT data collection methods are not well suited to identifying and gaining the confidence of sex workers, drug users, illegal immigrants and other difficult-to-reach groups.

10. **Difficulty obtaining valid information on sensitive topics.** Structured surveys are not well suited to collecting information on sensitive topics such as domestic violence, control of household resources, and corruption.

11. **Lack of attention to contextual clues.** Survey enumerators are trained to record what the respondent says, not to look for clues, such as household possessions, evidence of wealth, interaction among household members, or evidence of power relations, that would validate what is said.

12. **Often difficult to obtain a good comparison-group match.** Adequate secondary data for propensity score matching is rarely available, so control groups must often be selected on the basis of judgment, usually after very rapid visits to possible control areas.

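The propensity score matching mentioned above can be sketched briefly. This is an illustrative simulation only, with a hand-rolled logistic fit rather than a statistics library: treatment assignment depends on observed covariates, and each treated unit is matched to the comparison unit with the closest estimated propensity score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 2))                       # observed covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
t = rng.random(n) < p_treat                       # treatment depends on covariates
y = 2.0 * t + x[:, 0] + rng.normal(size=n)        # true effect = 2.0

# Fit a logistic model for P(treated | x) by gradient ascent: the propensity score.
X = np.column_stack([np.ones(n), x])
w = np.zeros(3)
for _ in range(2000):
    pred = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (t - pred) / n
ps = 1 / (1 + np.exp(-X @ w))

# Match each treated unit to the comparison unit with the nearest
# propensity score, then average the outcome differences.
treated, control = np.where(t)[0], np.where(~t)[0]
matches = control[np.abs(ps[treated][:, None] - ps[control]).argmin(axis=1)]
att = (y[treated] - y[matches]).mean()

naive = y[t].mean() - y[~t].mean()
print(f"naive difference: {naive:.2f}, matched estimate: {att:.2f}")
```

On data like these, the naive treated-vs-untreated difference overstates the true effect because treatment depends on the covariates, while the matched estimate should move closer to it; without adequate covariate data, no such adjustment is possible, which is the point of this item.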
13. **The vanishing control group.** Control groups may become integrated into the project, or they may disappear through migration, flooding or urban renewal.

14. **Lack of adequate baseline data.** A high proportion of evaluations are commissioned late in the project and do not have access to baseline data. Many IEs do collect baseline data, but usually only QUANT information.

**Analysis and Utilization Issues**

15. **Long delay in producing findings and recommendations that policy makers and other stakeholders can use.** Conventional IEs do not produce a report or recommendations until the post-test survey has been completed, late in the project cycle or after the project has ended. By the time the report is produced it is often too late for the information to have any practical utility.

16. **Difficulty generalizing to other settings and populations.** This is a particular challenge for randomized control trials (RCTs) and similar designs that estimate average effects by controlling for individual and local variations.

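One reason average effects travel poorly is that they can mask strong heterogeneity across contexts. A small illustration with hypothetical simulated data, where the project works only in one type of site:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
urban = rng.random(n) < 0.5                 # context variable (hypothetical)
t = rng.random(n) < 0.5                     # randomized treatment
effect = np.where(urban, 3.0, 0.0)          # project only works in urban sites
y = effect * t + rng.normal(size=n)

ate = y[t].mean() - y[~t].mean()            # pooled average effect
urban_ate = y[t & urban].mean() - y[~t & urban].mean()
rural_ate = y[t & ~urban].mean() - y[~t & ~urban].mean()
print(f"pooled: {ate:.2f}  urban: {urban_ate:.2f}  rural: {rural_ate:.2f}")
```

The pooled estimate is accurate on average yet describes neither context well; qualitative work on how the local setting shapes implementation is what makes such variation interpretable and transferable.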
17. **Identifying and estimating the influence of unobservables.** Participants who are self-selected, or who are selected by an agency interested in ensuring success, are likely to have unique characteristics that affect, and usually increase, the likelihood of success. Many of these characteristics are not captured in structured surveys, so positive outcomes may be due to pre-existing characteristics rather than to the success of the project.
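This selection problem can be made concrete with a small simulation (hypothetical data): an unobserved trait drives both enrolment and outcomes, so the naive treated-vs-untreated comparison overstates the project's effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
motivation = rng.normal(size=n)                  # unobserved characteristic
# Self-selection: more motivated people are more likely to enrol.
t = rng.random(n) < 1 / (1 + np.exp(-2 * motivation))
y = 1.0 * t + 1.5 * motivation + rng.normal(size=n)   # true effect = 1.0

naive = y[t].mean() - y[~t].mean()
print(f"true effect: 1.00, naive estimate: {naive:.2f}")
```

Because motivation never appears in the data an evaluator would observe, no amount of adjustment on measured covariates removes the bias; in-depth interviews and observation are one of the few ways to detect that such a characteristic is at work.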