Begin with the end in mind: linking program design and evaluation

Evaluation is really about seeking change. When it comes to program evaluation, the plan we create is intended to outline the different types of changes we want to see. If the desired outcomes are planned upfront, it is easier to do program evaluation.

This was the advice offered by Charlotte Young in PREP’s workshop, Linking Program Design and Evaluation. Stated simply, we decide what we want to accomplish in tandem with how we will know we have achieved it.

Humans are messy, full of feelings and perceptions, so it’s sometimes challenging to objectively measure changes. Thought of from this perspective, evaluation is the ultimate exercise in creativity and optimism; we have to be able to anticipate a desired future state. It asks that we think in concrete terms about something that hasn’t happened yet.

While we’re thinking about program design questions such as, “Who is the program intended to serve?” or, “What setting would make the most sense for what we are offering?”, we should have specific and clearly articulated outcomes in place so we can gear program activities towards them.

Early in the workshop, Charlotte had participants work in teams to place specific program design and evaluation activities within a continuum. Each group debated the merits of which step should come before the other. The learning? Some program and evaluation planning should happen simultaneously.

This continuum exercise also suggested another insight: just as we pilot our programs, so too should we pilot our evaluations, and for the same reason, so we can modify our plan based on what we’ve learned.

The primary benefit of planning for evaluation during program planning is that it can shorten the time it takes to change things that aren’t working. This kind of responsiveness can improve our impacts and save us money in the long run.

Embedding outcomes into the program plan will allow us to see a direct line from our intervention (or program) to the results we want to achieve. Charlotte advised that outcomes don’t need to be sophisticated either. We should consider what measures would constitute research that is “good enough” to provide a solid basis for decision-making.

Workshop participants spent the majority of their time crafting outcomes and indicators for a fictional new program. These activities, however, needed to be considered separately.

To begin, the group took a deep dive into understanding what makes a good outcome. Before crafting their own, the group evaluated a sampling of outcome statements from a range of programs. Were these outcome statements following SMART principles in that they were Specific, Measurable, Achievable, Results-focused and Time-bound? Charlotte added an extra T to this acronym to communicate that outcomes should also be Transparent, meaning that they would be easily understood by participants.

Here are some pro-tips from Charlotte on creating outcome statements:

  • Keep to one outcome per statement.
  • Focus on the desired change, not the process or task that will be used to accomplish it.
  • Start with a verb and consider whether it is observable. How can we gauge ‘appreciation’, for example? It might be preferable to aim for a more observable verb such as ‘remember’, ‘calculate’ or ‘report’.
  • Remember to include the extent of the desired change, considering “how well” or “how often”, for instance.
  • If assigning a desired percentage change, consider what might make a reasonable benchmark. If 70% of participants reported they were better able to manage household finances after a money management course, is that a reasonable expectation? It’s up to you to use your best estimation and draw on research or previous evaluations, if applicable.

Indicators are our way of measuring whether an outcome has been achieved. These also require time and consideration. When it comes to data gathering, Charlotte advised that we should keep in mind who will use the results, as well as how, when and where the data will be collected. Good evaluation practice says that we should use multiple approaches and, where possible, embed the data collection into something we’re already doing in the program.

Which measurement tools are best? Let the program design and desired outcomes be the guide. Some tools will make more sense in different settings. We can choose from the families of available tools which include surveys, interviews, document review and observation.

Some participants were surprised to hear that we should be allocating 10 to 20% of our overall program budget to evaluation. This underscores the importance of evaluation and the fact that it requires resources to do well.

The participants shared their appreciation for the team-oriented ‘learn by doing’ structure of the day. It reinforced Charlotte’s recommendation that evaluations be designed collaboratively, with those responsible for program planning and for evaluation coming together to add their unique perspectives on the work. Even if our programs are well established, a collaborative approach to developing a plan for evaluation can be a fruitful and engaging experience.

We would love to hear your thoughts on this. Email us at hello@peelevaluates.ca!
