Decolonizing Evaluation: 4 Takeaways from a Donor Panel

Over the past few years, a growing chorus of voices calling for the decolonization of aid has emerged, prompting a rethinking of everything from the way programs are designed to the way they are delivered. Evaluation—the process of critically and systematically assessing the design, implementation, improvement, or outcomes of a program—is part and parcel of this broader decolonization conversation.

When it comes to evaluating a project or program, it is worth reflecting on several questions. What constitutes “effectiveness,” how is that determined, and according to whom? Whose values, priorities, and worldviews are shaping the evaluation?

Historically, donors and international non-governmental organizations (INGOs)—in other words, outsiders—have determined what is evaluated, when it is evaluated, by whom, and with what methodologies, with little meaningful input from the people those programs are intended to reach. This needs to change. But what does change look like?

Decolonizing evaluation requires us to focus on who is doing the work and how that work is done. First, it means leveling the playing field between funders, evaluators, implementers, and communities. Second, it means identifying and addressing power imbalances across the evaluation system, from the design and implementation of an evaluation to the dissemination and utilization of its findings.

This is challenging because it forces us to reconsider not only the way we do evaluations (e.g., the methods used) but also the way we think about them (e.g., the purpose of the evaluation).

On September 22, three funder representatives sat down with InterAction’s Evaluation and Program Effectiveness Community of Practice (EPE CoP) to talk about how they are grappling with these challenges.

Subarna Mathes from the Ford Foundation, Colleen Brady from USAID, and David Burt from the Start Fund talked about what it means to decolonize or shift power in evaluation practice and what their organizations are doing about it. Here are four takeaways from that conversation:

  1. Move beyond tokenism: Shifting the power in evaluation requires more than tokenistic participatory approaches; it requires meaningfully engaging local stakeholders throughout the evaluation process, and even before it begins. “Often, the first place our minds go for participatory approaches is how we can bring local communities into data collection or data analysis processes; for example, hiring local staff as enumerators or field agents,” said Colleen. She added that “participatory approaches in evaluation need to start with participatory approaches in implementation,” before the evaluation even takes place.

    Subarna echoed these thoughts, noting that “if you don’t think about ways to diffuse power in the [program] design and in the distribution of who gets resources, then the evaluation is a little late in the game.” By integrating partners’ voices in the program design process, an organization can take steps to decolonize not just the evaluation but the program itself. This takes time and intentionality but generates a stronger evaluation with greater buy-in.
  2. Evaluate for all stakeholders’ learning: When it comes to evaluations, there should be greater emphasis on learning and adaptation as opposed to compliance and accountability. Ultimately, the main goal of evaluations is to produce useful knowledge. But for whom should the knowledge be useful, and whose “usefulness” is prioritized when designing an evaluation?

    Evaluation cannot be just about learning on the funder side. It is critical to learn from evaluations in ways that benefit both funders and communities. Allocate time and funding to getting data back to communities. Close feedback loops by sharing evaluation results with all stakeholders, thereby ensuring that learning encourages continual improvement and ownership of results at all levels. As one participant noted on an interactive MURAL board during the panel, “Evaluation needs to create as much or more value for participants so that it’s relational and additive, not extractive.”
  3. Don’t impose evaluation methods or approaches: Within the sector, there has historically been a preference for, or reliance on, certain evaluation methods and approaches. Funding is often contingent on reporting certain metrics or evaluating topics that are of interest to funders. As a result, measurement and evaluation frameworks are imposed on organizations and shaped by power dynamics. David pointed out that “the fear of not getting future funding is often enough to put organizations off trying new things or changing their methodologies,” even if the methods or metrics don’t make much sense.

    The danger of sticking with a funder’s preferred method or approach, regardless of context or circumstances, is that it can mean missing out on key knowledge and learning. For example, imposing a particular method or metric without accounting for context or the viewpoints of local communities can produce misleading conclusions, meaning an evaluation’s findings may not accurately reflect the experiences of the people served. Instead, funders should be open to working with partners, evaluators, and communities to determine the appropriate methods and approaches in each context. Evaluation should be a co-creation between all stakeholders.
  4. Unburden local partners: Panelists identified several ways that funders can reduce burdens on their partners. One is using the local language. In practice, this could include issuing requests for proposals (RFPs) in, and accepting evaluations written in, languages other than English. Requiring English creates a barrier for non-English speakers and for those for whom English is not their primary language; rather than focusing on the work that matters, partners end up spending time translating documents. Using the local language also improves accessibility for local communities, ensuring they can review, confirm, and share the findings in their own language.

    A second practical step is not to impose burdensome requirements, whether that means responding to lengthy RFPs, conducting extensive data collection efforts, or completing reporting that is simply a check-box exercise rather than utilization-focused. Subarna explained how Ford has taken steps to simplify its RFP process for evaluators, including by removing page limits on submissions and by requiring neither a detailed budget nor a work plan. Instead, Ford takes a high-level approach to start a conversation with the evaluator(s) before a decision is made. In terms of data collection, funders can deemphasize the collection of large volumes of data that will never be used or that are only tangentially related to the program at hand. If it isn’t central to the program, partners shouldn’t spend valuable time collecting data on it.

    Third, funders should clearly communicate their expectations from the beginning, starting with the RFP process. Many evaluators have been on the receiving end of opaque RFP processes that demand a significant commitment of financial resources, staff, and time, without much clarity about what the funder is actually looking for. Funders can help evaluators by clearly outlining exactly what they want, when, and how. For instance, within an RFP, state your objectives and evaluation questions, identify the budget, and explain what you want in an evaluation partner and the criteria you will use to select one. Provide a timeline for the review and selection process. And, importantly, ask for feedback from all applicants to improve processes in the future.

Want to hear more about what these funders had to say about decolonizing evaluation and how their organizations are doing it? The full event recording is available here.

Want to join the Evaluation & Program Effectiveness Community of Practice for our next discussion? InterAction Member organization staff can sign up here.


About the panelists

Subarna Mathes is the Senior Strategy and Evaluation Officer on the Strategy and Learning team at the Ford Foundation.

Colleen Brady is a Senior Monitoring, Evaluation, and Learning Specialist who supports USAID’s Local, Faith and Transformative Partnerships Hub as a contractor through ZemiTek, LLC.

David Burt is the Monitoring, Evaluation, Accountability, and Learning Manager for the Start Fund, a rapid humanitarian response fund managed by members of the Start Network.