How To: A Local Ownership Approach to Evaluation In Practice

So your organization is interested in promoting a local ownership approach to evaluation. Where do you begin? Local Ownership in Evaluation: Moving from Participant Inclusion to Ownership in Evaluation Decision Making provided a rationale and identified necessary conditions to undertake a local ownership approach to evaluation. But what does it take to put this approach into practice? Who should be involved, and when? And how should they be involved? These are some of the key questions we will address in this “how to.”

Local ownership in evaluation: an overview

In principle, ensuring that those on the receiving end of international assistance have a say in how that assistance is evaluated makes sense. But what exactly is a local ownership approach to evaluation? Why should organizations and evaluators use this approach?

What is a local ownership approach to evaluation?

To ensure local ownership in evaluation, program participants (see “Who is a participant?” below) need to be involved in evaluation decision making, not just serve as data sources. This means being involved in key decision moments:

  • Determining that an evaluation ought to take place;
  • Deciding an evaluation’s purpose;
  • Developing evaluation questions;
  • Determining what will be judged as success (and what will not);
  • Analyzing and interpreting data;
  • Validating findings;
  • Developing recommendations; and
  • Disseminating findings in such a way so they can inform decision making by donors, implementers, partner institutions and communities.

These are decision points in which evaluation funders and commissioners, as well as evaluators themselves, are typically involved. In a local ownership approach, program participants also have a voice in these.

Who is a participant?

In this "how to," the term "program participants" or simply "participants" refers to those directly affected by an intervention. This term is used instead of "beneficiaries," which many feel casts those meant to benefit from international assistance as passive recipients. Other organizations use terms such as constituents, customers or clients. "Participants," while still an imperfect term, is an attempt to move closer toward the idea of program partners.

In many cases, participants will be individuals or communities. For programs that do not involve direct service delivery, but rather seek to help organizations strengthen their capacities, participants might include local partners - whether from civil society, the private sector or government.

How is local ownership in evaluation different from participatory evaluation?

While participatory evaluation promotes local ownership in evaluation, locally owned evaluations may or may not use a participatory evaluation approach; depending on the context, one may not be necessary or appropriate. Participatory evaluation includes participants as co-evaluators, involving them in every aspect of an evaluation. While in some cases participants might welcome this level of involvement, in others it might create too great a burden, limiting involvement to those with sufficient time to dedicate to the endeavor. By allowing participants to focus their involvement on key evaluation decision-making moments, a wider range of participants can take part.

It is also important to distinguish participatory evaluation from evaluations that use participatory data collection methods. Participatory evaluation makes participants co-evaluators and involves them as equal decision-makers in all evaluation phases. While an evaluation may use participatory methods to collect data, the use of such methods alone does not make it a participatory evaluation.

Why take a local ownership approach to evaluation?

Experience has shown that a local ownership approach to evaluation can:

  • Increase the chances that evaluation results will be used and used quickly, in part because it puts information into the hands of some of the key users: program participants or local institutions meant to benefit from international assistance;

  • Provide a fuller and more accurate picture of an intervention’s effects, as long as diverse voices are successfully included;

  • Enrich learning for all involved in the program and its evaluation, including donors, implementers, and those meant to benefit from an intervention;

  • Strengthen participants’ capacity, especially their evaluative thinking skills;

  • Improve communication and understanding among stakeholders involved in an intervention; and

  • Make organizations’ approaches to evaluation consistent with their approaches to programming, which often emphasize participants’ inclusion in design and implementation.

Preparing your organization for a local ownership approach to evaluation

Once you or your organization have decided to take a local ownership approach to evaluation, there are a few things you can do to try to ensure that this approach becomes ingrained in your organization.

  • Raise awareness about the value of participant voice and a local ownership approach to evaluation among staff members, including leadership, program staff and all others involved in program evaluation. This includes everyone from those who determine evaluation’s importance in the organization and evaluation budgets to those who work most directly with community members.

  • Conduct an assessment to determine where your organization is in terms of promoting local ownership in evaluation: How are program participants typically involved in the evaluation process? At which stages of an evaluation is their input solicited? Do they participate in evaluation decision making, and if so, in what ways? 

  • Develop or revise existing evaluation policies, standards and/or guidelines to more fully include participants in evaluation decision making. These standards should reference inclusion of participants in evaluation decision making in each evaluation stage, or call for justification explaining why participants were not included.

Examples of standards/guidelines that call for participant inclusion (at least to some extent) are:

Save the Children Evaluation Handbook (2012)

World Vision Learning through Evaluation with Accountability & Planning (LEAP) (2nd Edition) (2007)

OECD Development Assistance Committee Quality Standards for Development Evaluation (2010)

DFID Ethics principles for research and evaluation (2011)

  • Make sure evaluation consultants are aware of, understand and adhere to these evaluation policies, standards and/or guidelines. One way to do this is to include them in evaluation terms of reference and evaluator contracts.

  • Request participant input on the evaluation terms of reference to ensure that the evaluation also responds to participants’ information needs. If you decide to do so, keep in mind that you will need to allocate sufficient time for this process. These terms of reference for an evaluation commissioned by World Vision Albania were prepared with partners’ input.

  • Train M&E staff and others involved in the evaluation process in the skills they will need to promote a local ownership approach to evaluation (e.g., facilitation, listening, conflict resolution).

  • Pilot a local ownership approach with a few evaluations. Look for evaluations that have flexible funding arrangements and flexible timelines. Projects that plan for evaluation from the beginning and that take a participatory approach may be the best candidates. Use these pilots to learn what is required to successfully embrace a local ownership approach, and to help highlight to others within your organization the benefits of doing so.

TIP: Do not be afraid to push for a local ownership approach to evaluation. Donors or partners may not have thought of using such an approach themselves, but may respond positively if it is suggested.

Remember that adopting this approach may need to be a gradual process. As staff strengthen their skills, as trust grows between an organization and the people who participate in the programs it supports, and as program participants strengthen their skills to participate in evaluation decision making, then program participants can take on greater roles.

Applying a local ownership approach to evaluation in the context of a specific program

Applying a local ownership approach to a specific program evaluation requires a number of steps:

Determine whether a local ownership approach is feasible, ethical and appropriate. Here is a list of things you should consider in making this decision.

1. Participants’ input will INFLUENCE decision making. As one organization put it: “Don’t ask if you’re not willing to listen.”

2. Participants will GAIN from being involved. Participants are busy people. The benefit of involvement must outweigh the costs. Benefits could include:

  • Knowledge that their input has resulted in program decisions that will help improve their lives, the lives of people they seek to assist, or the lives of others in their communities;
  • Strengthened relationships with program and community decision makers or international and national assistance providers that they can later leverage to influence other decision-making processes; and
  • Enhanced capacities to analyze program processes and results, and to use evaluation findings to improve practices.

3. Participants WANT to be involved. There are some things participants may not want to do or may not have time to do, and that is fine. The important thing is to discuss this and to make sure everyone understands who will be doing what, as well as the implications for the evaluation decision-making process. It is also important to ensure that participants do not feel that they must participate. To avoid this, make sure participants understand that their choice to be involved or not will in no way affect their involvement in, or the benefits they receive from, an intervention.

4. Participants CAN be involved. Evaluators and participants need to have access to each other. Sometimes geography, security, cultural restrictions or language can make access difficult. However, these are not excuses for exclusion, as long as inclusion does not pose a risk to participants (see below). Nonetheless, ensuring access - including for those who are traditionally excluded from decision-making processes - requires effort and negotiation during the evaluation design:

  • Is the evaluation adequately budgeted, and does it allow sufficient time for inclusion?
  • Does the evaluator have access to the transportation required to reach participants? If not, can technology help provide that access?
  • Do security conditions allow the evaluator and participants to meet? (see below)
  • Can the evaluation be designed to respond to cultural restrictions? For example, can those participating in evaluation decision making do so in groups divided by gender? Does the evaluation team contain both men and women, so that men evaluators can meet with men, and women evaluators can meet with women?
  • Does the evaluator or evaluation team speak all relevant languages and/or have access to reliable translation services?

5. The evaluation team has the SKILLS to facilitate participation in the evaluation decision-making process. Bringing participants in as evaluation decision makers requires skills that not all evaluators have: cultural sensitivity, evaluation capacity building, facilitation, negotiation, mediation, and conflict resolution. To meaningfully include participants in evaluation decision making and avoid doing harm in the process, an evaluation team must include members with these skills.

6. Participants will NOT be put at RISK. Risk can stem from the environment in which the evaluation is taking place, power structures and dynamics within a community, or the sensitivity of either the program topic or the evaluation findings.

  • Where programs and their evaluations are taking place in insecure locations because of ongoing violent conflict or violent crime (including gender-based violence), then finding places and times to meet that are safe for all involved is critical. Where these cannot be found, technology may help provide access, although potential technology surveillance must also be considered.
  • Involving participants in evaluation decision-making processes can upset existing power structures and dynamics, potentially putting participants at risk. Understanding these structures and dynamics, and taking them into account in the design of an evaluation decision-making process is needed to minimize risk for those involved.
  • Some programs may be sensitive (for example, addressing issues of gender-based violence or corruption), and some evaluations may unearth sensitive findings (such as corruption within a program or biased targeting). In these cases, participants might put themselves at risk by being involved in evaluation decision making, and letting evaluation decision making remain in the hands of outsiders may be preferable. 

7. Involving participants in evaluation decision making will NOT RAISE EXPECTATIONS that cannot be met. Programs using a local ownership approach to evaluation have ideally employed a participatory approach in their program design and implementation. If so, participants will already expect to be involved in decision-making processes and will know how their involvement influences program design and implementation. When a local ownership approach to evaluation is applied to a program that has not been participatory in its approach, it risks raising false expectations that participants will be included in other decision-making processes in the future. This might constitute a healthy push toward greater and broader inclusion. However, where that is not possible, evaluators must clearly explain to participants the limits to their inclusion and the rationale for those limits.

Identify program participants. To the extent possible, participants included in evaluation decision-making should be representative of all population segments involved in or affected by a program, including groups traditionally marginalized in decision-making processes (remember, for programs not involving direct service delivery, participants may be local partner organizations). Hopefully, these population segments have been identified through program design and implementation processes. If not or to validate this information, the evaluation team can conduct community mapping exercises aimed at identifying these groups, ensuring broad enough participation in the exercises so that no one is left off the map.

Identify who should be involved in the evaluation. In most cases, given the number of participants involved in a program, the implementer and/or the evaluator will need to select a smaller, representative sample to be part of the evaluation decision-making process. Those selected should be in a position to use the evaluation findings, and have the time and interest to serve in this role. There are a number of important things to keep in mind as you decide who should be involved:

1. Make sure that the selection process is transparent and is viewed as fair by program participants. Some evaluators have organized democratic votes, while others have asked program participants to nominate people to include, but maintained ultimate decision-making power in order to minimize bias and ensure adequate representation.

2. Be aware of any potentially negative power dynamics, and be careful not to reinforce them.

3. All participants should be represented, including traditionally marginalized groups.

4. Those selected should be perceived as legitimate representatives of participant groups. Official or traditional leaders may not always be viewed as legitimate.

5. Everyone has biases that can affect the evaluation process. The important thing is to understand what these biases are, and to involve people with differing perspectives to balance these biases.

6. Those selected should be able to step out of their roles as program participants to objectively examine the program and its effects on participants as a whole.

The evaluators and program participants who will potentially be involved in the evaluation decision making process need to determine together which program participants can most usefully contribute to the evaluation in the role of informants (because of the information and perspective they offer) vs. as members of the evaluation decision-making team. In some cases, participants in the evaluation decision making process have worn two hats:

1. They step back from their personal roles to provide input to evaluation decision making; and

2. As program participants, they also are included as informants for the evaluation, providing their insights as part of the data collection process.

TIP: Keep in mind that this may be the first time that participants are involved in an evaluation in this way. They may be skeptical at first. If so, recognize that it will take some time for participants to trust that their involvement is really desired and valued. It may also take time to build their confidence to engage in the process.


To think through who is involved, the following resources might be of use:

Participatory Program Evaluation Manual (by Judi Aubel, a joint publication of the Child Survival Technical Support Project and Catholic Relief Services)

Rapid Rural Appraisal and Participatory Rural Appraisal: A manual for CRS field workers and partners

UNDP Institutional and Context Analysis Guidance Note (for guidance on conducting a stakeholder analysis in order to understand the power dynamics among those who are participating in or affected by a program)

Identify how participants should be involved. Ideally, in a local ownership approach to evaluation, participants share decision making with others as equal partners. However, achieving this might take time. The table below summarizes some of the key steps in the “ladder of participation” presented in Local Ownership in Evaluation: Moving from Participant Inclusion to Ownership in Evaluation Decision Making and illustrates participants’ involvement at various stages of the evaluation process. Where available, examples and useful resources are provided.

TIP: Leave jargon behind. Use plain language, even when talking about evaluation.

Participant Involvement at Each Evaluation Stage

Rung 8: Participants share decision making with others as equal partners

(NOTE: In some cases, evaluators might maintain final decision-making power, if that is necessary to ensure the perception of an evaluation’s credibility.)

  • Evaluation design: Together, participants and evaluators develop the program’s theory of change; identify outcomes; determine evaluation questions; select relevant indicators; identify criteria for success; design data collection approaches; and decide data sources and samples.
  • Data collection and analysis: Participants help develop appropriate measures (see this example from CARE) and are involved in data analysis.
  • Findings and recommendations: Participants are involved in framing evaluation findings and developing evaluation recommendations.
  • Disseminating and using results: Participants are involved in disseminating evaluation findings and are responsible for acting on some of the evaluation recommendations.

Rung 5: Participants consulted and informed

  • Evaluation design: Participants provide input into the evaluation design.
  • Data collection and analysis: Participants provide data and may also be involved in data collection; they assist with data analysis, and data is shared with them for their input.
  • Findings and recommendations: Participants provide feedback on/validate the evaluation findings and recommendations.
  • Disseminating and using results: Participants provide input on how evaluation results are disseminated and used.

Rung 4: Participants informed

  • Evaluation design: Information about the purpose and nature of the evaluation is provided to participants.
  • Data collection and analysis: The data collected and evaluator-conducted analysis are shared with participants.
  • Findings and recommendations: Participants are presented with the finalized evaluation findings and recommendations.

Which stages participants are involved in and the nature of their involvement will depend largely on the time and budget available for the evaluation, and on the preferences of participants themselves. If resources are scarce, implementers and evaluators should think carefully about where participants’ involvement will be most meaningful and useful for all involved.

Transparency is critical both in the selection of participants and in the determination of how they will be involved. Setting clear expectations about how decisions will be made and by whom can help reduce misunderstandings and conflict.


Useful resources include:

Feinstein International Center’s Participatory Impact Assessment approach (2014)

World Vision’s Building Consensus Tool (2010)

Budgeting time and resources for locally owned evaluations

Including participants in evaluation decision making requires additional time and resources. Potential time and cost implications to consider include:

  • Meetings with participants to explain what evaluation is and what it entails;
  • The process for selecting participants for inclusion in evaluation decision-making processes;

  • Participants’ inclusion in an evaluation advisory group or team (e.g., travel, communications, etc.);

  • Meeting(s) with participants to design the evaluation and/or shape the evaluation terms of reference;

  • Meeting(s) with participants to share and analyze data;

  • Meeting(s) with participants to validate findings and develop recommendations;

  • Cost of translators, if the evaluator or participants don’t have the necessary language skills;

  • Costs associated with making the evaluation process accessible for all participants, including people with disabilities;

  • Time associated with building trust among all involved in the evaluation decision-making process and ensuring that there is a shared understanding of the evaluation process and results, as well as of each other’s inputs;

  • Dissemination of evaluation results in different formats that are appropriate for different audiences (e.g. report, summaries, radio bulletins, information shared on information boards, etc.). This includes the broader group of participants, beyond those who were involved in the evaluation decision-making process. This might be done by the organization itself, or through representatives, and might involve travel, material and translation costs.

TIP: Think carefully about how evaluation resources should be spent. One option is to invest more in inclusive approaches and less in the polish of a final report.


One good overview of budgeting for evaluations in general is the W.K. Kellogg Foundation Evaluation Handbook, which provides a framework for thinking about evaluation as a relevant and useful program tool. It was written primarily for project directors who have direct responsibility for the ongoing evaluation of W.K. Kellogg Foundation-funded projects.

Case studies

  • World Vision Albania: Experimenting with a New Approach to Evaluation: In early 2013, World Vision Albania commissioned a formative evaluation of its Korca Area Development Program. In keeping with the program's emphasis on partnership, the Korca ADP evaluation "was designed to include full participation by local partners at every step of the evaluation process." The goal was to include local partners as co-equals in the evaluation, in large part to ensure that the evaluation supported and furthered ongoing learning in the program. 
  • EnCompass LLC: Building Local Ownership of Evaluation: In 2013, the John D. and Catherine T. MacArthur Foundation funded a portfolio of grants in Nigeria to increase government accountability for maternal health. As part of a series of learning-focused evaluative activities, the Foundation commissioned EnCompass LLC to refine the grant portfolio theory of change, conduct a baseline study and midline evaluation, and build the capacity of grantee organizations (all civil society organizations) to monitor grant implementation and results. EnCompass LLC is committed to utilization-focused evaluation and to creating ownership of evaluation findings, conclusions and recommendations. It therefore used a participatory, collaborative approach to the evaluation.


Resources

Preparing your organization for a local ownership approach to evaluation

Checklists for evaluation commissioners and practitioners (from Beneficiary Feedback in Evaluation, a UK Department for International Development Working Paper prepared by Leslie Groves)

Evaluation Terms of Reference: Korca Area Development Program, World Vision Albania 

World Vision International’s Development Guides

Applying a local ownership approach to evaluation in the context of a specific program

Participatory Program Evaluation Manual (by Judi Aubel, a joint publication of the Child Survival Technical Support Project and Catholic Relief Services)

Rapid Rural Appraisal and Participatory Rural Appraisal: A manual for CRS field workers and partners

UNDP Institutional and Context Analysis Guidance Note

CARE: To be well at heart: women’s perceptions of psychosocial wellbeing in three conflict affected countries (by Martha Bragin, Karuna Onta, Taaka Janepher, Generose Nzeyimana & Tonka Eibs)

Feinstein International Center’s Participatory Impact Assessment approach (2014)

World Vision’s Building Consensus Tool (2010)

Budgeting time and resources for locally owned evaluations

W.K. Kellogg Foundation Evaluation Handbook

Embracing a locally owned approach to evaluation demands commitment, patience and flexibility. Managed well, it can lead to better development outcomes. Managed poorly, it can lead to conflict and make people less likely to take part in any future participatory process. With that in mind, make the decision to use a locally owned approach itself a locally owned decision. When planning an evaluation, consult with communities, program implementers, funders and the evaluation team to ensure that the approach will be appropriate and welcome. Think of participants’ involvement not as mere participation, but as a partnership. This means being aware of and seeking to meet their expectations and needs for the evaluation, as well as your own and those of your donor.