Carrying out evaluations that make a difference

Guest blog by: Martha McGuire

Why do we do evaluation? For accountability? Yes. But more importantly, to improve programs so that they help people improve their lives. During the 2015 International Year of Evaluation, a group of evaluators from around the world came together to learn more about how evaluations can make a difference[1]. They put out a call for stories about evaluations that make a difference and short-listed those that met the criteria. Out of 64 submissions, seven stories were selected because they demonstrated evidence that evaluation led to positive changes.

What do we mean by making a difference?

Many of the submissions were examples of high-quality, well-designed evaluations. Often the evaluations were used by stakeholders, but these are only first steps. Evaluations that make a difference must lead to positive changes in:

  • People’s lives
  • The environment
  • Social betterment[2]

How can evaluations make a difference?

Both evaluators and evaluation users play an important role in conducting evaluations that make a difference. How the evaluation is undertaken matters. Eight key factors emerged[3]; each is discussed below.

So how does this change how we conduct evaluations?

We need to incorporate these factors into the way we design and carry out evaluations. Many evaluators have said, “But I have no control over users and commissioners. You can’t hold me accountable for conducting an evaluation that makes a difference.” There will always be things outside of our control, but we can exert influence. That means working with evaluation commissioners and users to ensure that they understand the important role they play.

  1. Focus on evaluation impact

Right from the start, in designing the evaluation, the evaluator must work with the intention of improving people’s lives. Improving lives is why a program exists, and it is why the evaluation must contribute to improving the program so that it is better able to provide services that work for its recipients. This is true whether it is a program on climate change, a convention to protect property in times of armed conflict or a program to promote education for girls. That intention should shape the questions that the evaluation will address.

This can be challenging. For example, if we are exploring the effectiveness of a climate change program, we need to ensure that we look at the extent to which it is contributing to positive change in the climate. But how do we do that when the program is one small drop in the bucket, operating alongside many other programs in a world with so many factors pushing in the opposite direction? We can look at whether a difference has been made at a local level. For example, by looking at reduced air pollution levels or greenhouse gas emissions, we can explore whether the program is headed in the right direction.

A theory of change can help us focus on evaluation impact. Theories of change depict why a program is expected to achieve its intended results and show the factors that might support or interfere with those results. They outline the anticipated results chain. So if the evaluation finds that people are changing their behaviours in order to reduce greenhouse gas emissions, it can point to this positive direction, but also to the further steps that need to be taken in order to have an impact on the environment.

  2. Give voice to the voiceless

Yes, it is important to hear the voices of all stakeholders, but it is harder for some groups to be heard. The beneficiaries of programs are often overlooked. Even if an effort is made to allow their input, the way in which information is obtained can interfere. Beneficiaries on a steering committee can be overwhelmed by service providers and not feel comfortable speaking up. Language can be a barrier. Lack of childcare can mean that a low-income single mother cannot attend a focus group. Some ways to ensure that people are able to give input:

  • Using different data-gathering methods such as photovoice, storytelling and community-based participatory statistics
  • Providing childcare, a meal and the cost of transportation
  • Using professional interpreters who are not from the community
  • Ensuring that there is a sufficient number of participants on a steering committee so that they can support each other
  • Not mixing different populations in focus groups, for example separating men and women

When beneficiary voices are heard, the effect can be transformative, with the evaluation carrying their words to decision-makers.

  3. Provide credible evidence

A high-quality, well-designed, well-implemented evaluation is a good start to obtaining credible evidence. However, what is seen as credible by one group may not be credible to another. The way evidence is presented can affect credibility: a well-written report that is easy to read helps to build it. For one person, hard data is the only believable evidence; for another, stories provide real-world examples. Trust in the evaluator can also affect the credibility of the information.

Consider the evaluation of a program for expectant mothers living in remote villages in Papua New Guinea, where 733 women die for every 100,000 live births; by comparison, Australia’s rate is only 6.8. Made possible by cellular phones, the Childbirth Emergency Phone program linked these remote villages to labour wards in major cities. The evaluation provided hard evidence of women and babies whose lives had been saved, supported by positive stories. With this credible evidence, the future of the program was secured.

  4. Use an approach that supports positive thinking and action

Increasingly, evaluations are being conducted using an ‘appreciative’ approach, which is intended to discover, understand and foster transformative change by building on the positive in a program. Most programs are doing some things right. An appreciative approach validates the positive, providing a platform from which change can occur. Most programs can also benefit from some changes. These changes are presented in the context of what the stakeholders want the program to look like, which is much more positive than dissecting what is wrong with a program. Evaluations using an appreciative approach seek out the best of ‘what is’ to help ignite the collective imagination of ‘what might be’.

  5. Ensure users and intended beneficiaries are engaged through a participatory approach

Together, evaluators and evaluation users determine who the stakeholders are and how to engage them. Evaluators often keep their distance from the program in order to maintain objectivity. The danger is that the evaluation may not meet the needs of the program; the evaluator may not even fully understand the program and its context. Engaging stakeholders allows them to gain insights as the evaluation progresses and to begin to see things from a new perspective. Having beneficiaries engage through sharing stories helps people form relationships, strengthen networks and set up informal knowledge-transfer mechanisms. Active engagement by all stakeholders helps to develop an understanding of the evaluation process and increases the likelihood that the findings will be seen as credible.

  6. Embed evaluation within the program

In some evaluations, it is difficult to know where the program ends and the evaluation begins. In a sanitation program in Kenya, the evaluation played a key role in achieving a dramatic reduction in open defecation practices, as villages competed with one another to achieve open defecation-free status. Similarly, the evaluation of a hand hygiene program produced changes in behaviours even before the evaluation was completed: the low compliance with hand hygiene standards among health care providers shocked some groups of caregivers and motivated them to do better. As a result, monitoring compliance became an integral part of the program.

  7. Really care about the evaluation

Too often, evaluations are done simply because they are required. Reports from such evaluations are submitted to the funder and sit on the shelf, unused. When program people want the evaluation and are interested in the findings, the recommendations are more likely to be used. As Michael Quinn Patton points out, “The personal factor is the presence of an individual or group of people who personally care about the evaluation and the findings it generates.”[4] Without interested users, it is highly unlikely that the evaluation will be used, and use is an important step towards an evaluation creating positive change.

But how can I, as an evaluator, control what the evaluation users do? Of course you can’t control them, but you can influence them by using a participatory process that engages the users in planning the evaluation, in interpreting the findings and in developing recommendations. You can also advise potential evaluation users of how important their caring is if the evaluation is to be more than a waste of time.

  8. Champion the evaluation with decision-makers

Championing the evaluation with decision-makers within the organization and with funders is a role played by users, but the evaluator can influence this. It is great if the person championing the evaluation is in a position of influence, such as sitting on the board of directors or being part of a funding organization. If this is not the case, program staff and managers can work to influence the board, and the evaluator can support their efforts by giving presentations to decision-makers. This may be something that is determined at the beginning of an evaluation. Evaluators can ask, “Who will make the decisions regarding implementation of the evaluation recommendations? What is the best way of engaging them in the evaluation process?”

Moving Forward

This is just the beginning of gaining an understanding of how to conduct evaluations that make a difference. More evidence needs to be gathered in order to test these factors. Two questions:

  1. Do you have experience with evaluations that have made a difference that you are willing to share?
  2. What else is needed to create enabling environments where high quality credible evaluations influence decisions at all levels?

References:

[1] Rochelle Zorzi (North America), Burt Perrin (France), Pablo Rodriquez-Bilella (Argentina), Scott Bayley (Australia), Serge Eric Yakeu (Cameroon), Soma De Silva (Sri Lanka)

[2] Zorzi, Rochelle, Burt Perrin, Pablo Rodriquez-Bilella, Scott Bayley, Serge Eric Yakeu and Soma De Silva (2016) Evaluations that Make a Difference. https://evaluationstories.wordpress.com/

[3] Ibid.

[4] Patton, Michael Quinn (2011) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. Guilford Press, p. 56.


Martha McGuire is an evaluation consultant with Peel Region Evaluation Platform. She has 30 years’ experience as an evaluator with her most recent work focusing on international development. Martha also teaches monitoring and evaluation at Ryerson University in their Non-profit Management Certificate Program.

We would love to hear your thoughts on this. Email us at hello@peelevaluates.ca!
