Impact Evaluation at Evaluation Week

When:
Monday, 7 November 2016 | 9:30am

On 7 November, prior to the official opening of Evaluation Week, IDEV organised an Evaluation Capacity Workshop on impact evaluation. Impact evaluation (IE) is an assessment of how an intervention affects outcomes, whether those effects are intended or unintended. IE serves the twin objectives of lesson learning and accountability. A well-designed IE can answer questions about programme design: which components work and which do not, thereby providing information for redesign and for the design of future programmes. In essence, IE can show why and how a programme works, not just whether it works. By identifying whether a particular development intervention is working, IE also serves the accountability function. In the case of the High-5s, the Bank would like to know how they work, which components are working and which are not, and why.

Maria Aguirre of the Catholic University of America was invited to give a keynote speech on ‘Why do we need impact evaluation for international development, why are we not doing more of it, and what can we do to have more IEs?’ In her presentation she answered the questions posed in the title by emphasising the importance of impact evaluation for lesson learning, accountability, and establishing the development effectiveness of any intervention. Before any IE can take place, a culture of evaluation must be established within an organisation, and deliberate efforts must be made to build the human capacity of evaluators. Noting that there are many approaches to IE, she described one approach that she and her colleagues have used extensively: a participatory approach in which all stakeholders are involved in the IE process, since one cannot ignore interpersonal relationships, i.e. the people involved in a programme. Such a participatory approach can reduce the cost of IE, which has often proved an expensive undertaking, and it gives participants a sense of ownership of the IE process. In this context she gave examples of two experiments in which she introduced a participatory methodology to measure the level of participation using what she called a participatory index; randomised controlled trials (RCTs) were the principal mode of investigation. She concluded that an institution has to deliberately build an evaluation culture and, in doing so, scale up the value of IE.
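As an illustrative aside (not drawn from the workshop materials), the basic logic of an RCT-based impact estimate is a comparison of average outcomes between randomly assigned treatment and control groups. The sketch below uses hypothetical data and function names purely to show that logic; it is not a description of the methodology presented at the workshop.

```python
import random
import statistics

def ate_difference_in_means(treated_outcomes, control_outcomes):
    """Estimate the average treatment effect (ATE) as the difference
    in mean outcomes between the treatment and control groups."""
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

# Hypothetical example: an outcome measure (e.g., a household welfare index)
# for households randomly assigned to the programme (treated) or not (control).
random.seed(1)
control = [random.gauss(50, 10) for _ in range(500)]
treated = [random.gauss(55, 10) for _ in range(500)]  # assumed true effect of ~5

print(round(ate_difference_in_means(treated, control), 2))
```

Because assignment is random, the control group serves as the counterfactual, so the difference in means can be attributed to the intervention rather than to pre-existing differences between the groups.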

Mr. Shimeles from the Research Department of the Bank argued that while there is a need for IEs in international development, the Research Department has not done much in this area because of staff and cost constraints. IEs are nevertheless essential for establishing causality between Bank operations and outcomes. Another panellist noted that UNICEF has been using IE for years and has always tried to include a Management response in every IE report; this approach has fostered a positive Management attitude towards IE reports within the organisation. He also confirmed that IEs can be expensive and that their costs have to be weighed against their benefits. Another panellist (from the African Evaluation Association) argued that IE is of no use unless it sheds light on how change has taken place: impact evaluation should not be seen merely as a technique but as a means of analysing how development has occurred. Ms. Aguirre also noted that the impact of an intervention is not always predictable, as many factors can operate simultaneously alongside an intervention, particularly in poor countries or where complex programmes like the High-5s are being implemented. A much more integrated approach is therefore required, one that takes into account the limitations of IE, the complexity of the programmes, and their spillover effects; the evaluator must be able to innovate with new tools of analysis. Finally, she said that statistical rigour is not the only requirement in doing and using IE; it is also important to understand its limitations and to try to overcome them through enhanced evaluation activity.

Discussions with the audience focussed on data requirements for IE, particularly baseline data; the construction of a theory of change; the construction of appropriate counterfactuals and the involvement of stakeholders; and the choice of which programmes to subject to IE. It was suggested, for example, that humanitarian interventions need not be subjected to IE.

The action points that emerged from the discussions are that evaluation needs to be institutionalised within an organisation; that it needs to be strengthened in order to yield quality findings; and that it should draw lessons from nations that have successfully developed. Given the expected resistance to evaluation, a culture of evaluation must be established, possibly through legal means and/or by involving the population to create a demand for evaluation. The high cost of IE precludes its use for all interventions. As a result, an institution like IDEV must be selective, focussing on pilot programmes that are candidates for scaling up, interventions that do not provide solid evidence of impact, and a selection of interventions across the Bank’s portfolio.
