Monitoring & Evaluation: Practitioner Tools


Drawing on the experience and expertise of its members and other experts, InterAction's Evaluation and Program Effectiveness Working Group (EPEWG) regularly produces resources meant to benefit the wider community. These resources - addressing a wide variety of topics and ranging from webinars to guidance notes - are targeted at practitioners seeking to improve monitoring, evaluation, accountability and learning practices within their organizations.


Agency Level Measurement

Since 2012, the EPEWG has worked on the issue of measuring agency-level results. This work has involved a series of meetings, webinars and, most recently, the commissioning of a white paper bringing together learning from a wide variety of organizational experiences.

Key Documents

  • Measuring International NGO Agency-Level Results (May 2016)

    • Measuring International NGO Agency-Level Results (white paper): This study was commissioned by 11 major NGOs as part of InterAction’s Evaluation and Program Effectiveness Working Group (EPEWG). Written by Carlisle Levine, Tosca Bruno van Vijfeijken and Sherine Jayawickrama, it examines whether building agency-level measurement systems is a worthwhile endeavor, under what conditions such systems deliver benefits, and what challenges they present. The paper describes the motivations for creating such systems, the expectations and assumptions associated with them, and the nature of the systems themselves, including three brief cases as examples. It then analyzes what it takes to build and maintain these systems, how they are used, and their key challenges, benefits, risks, trade-offs, and costs. Based on that analysis, the paper offers a series of recommendations to help NGOs decide whether agency-level measurement makes sense for them and, if so, how to develop systems that meet their needs.

    • So, What Does It All Add Up To? Measuring International NGO Results At The Agency Level (executive brief): This brief, intended for executive leaders, is a summary of the white paper, "Measuring International NGO Agency-Level Results." The brief provides insights for executive leaders considering establishing agency-level measurement systems and covers:

      • What agency-level measurement looks like;

      • How agency-level measurement systems are used; and

      • Factors that affect the success of agency-level measurement systems.

  • In 2012, in preparation for an InterAction Forum session, four organizations prepared a document summarizing their approaches to agency-level results measurement. The document was updated in June 2015 to expand the number of organizations covered.

  • Key framing points: This document, prepared for InterAction's 2014 Forum, was meant to help organizations think through their options when developing agency-level measurement systems. It covers motivations, organizational context, assumptions, approaches, and challenges/risks.


Impact Evaluation

Key Documents

  • With financial support from the Rockefeller Foundation, InterAction developed a four-part series of guidance notes on impact evaluation, each accompanied by two webinars related to the note's contents. To access translated versions of the guidance notes, webinar recordings and presentation slides, please visit InterAction's page on the impact evaluation guidance note and webinar series. Most of the guidance notes are available in French, Spanish and Arabic, as well as English.

    • Introduction to Impact Evaluation: This guidance note by Patricia Rogers, Professor of Public Sector Evaluation at RMIT University, provides an overview of impact evaluation, explaining why impact evaluation should be done, when and by whom. It describes different methods, approaches and designs that can be used for the different aspects of impact evaluation: clarifying values for the evaluation, developing a theory of how the intervention is understood to work, measuring or describing impacts and other important variables, explaining why impacts have occurred, synthesizing results, and reporting and supporting use.
    • Linking Monitoring and Evaluation to Impact Evaluation: This guidance note, by Burt Perrin, illustrates the relationship between routine M&E and impact evaluation, indicating how both monitoring and evaluation activities can support meaningful and valid impact evaluation, and even make it possible. The note also provides guidance and ideas about the various steps involved and approaches that can be used to maximize the contribution of routine M&E to impact evaluation.
    • Introduction to Mixed Methods in Impact Evaluation: This guidance note, by Michael Bamberger, begins by explaining what a mixed methods (MM) impact evaluation design is and what distinguishes this approach from quantitative or qualitative impact evaluation designs. It notes that a mixed methods approach can strengthen the reliability of data and the validity of findings and recommendations, and can broaden and deepen our understanding of the processes through which program outcomes and impacts are achieved and how these are affected by the context in which the program is implemented. The guidance note also highlights the potential applications and benefits of a mixed methods approach for NGOs.
    • Use of Impact Evaluation Results: This guidance note, by David Bonbright, Chief Executive of Keystone Accountability, highlights three themes crucial for the effective utilization of evaluation results. Theme one states that use does not happen by accident: impact evaluations are more likely to be used when uses have been anticipated and planned from the earliest stages of the evaluation, or the planning stages of the work being evaluated. Theme two concerns the operations and systems an organization requires to use impact evaluations well. Theme three suggests that findings from impact evaluations will not be used well unless and until we reform organizational culture. The note sets out directions and principles to guide the effort of eliminating disincentives and creating incentives for adopting evaluation findings, with some guiding illustrations from current practice.



Learning and Knowledge Management



Local Ownership in Evaluation


Open Data
