
Deciding what to monitor and evaluate

Monitoring and evaluation explained
  • Performance monitoring tracks programme/policy implementation against a set of indicators. These indicators should measure the key outputs and results of the initiative and capture how it evolves throughout the implementation process.
  • Evaluation makes judgements about the extent to which the programme or policy meets its objectives. These judgements are also based on a series of indicators. Evaluations not only present data on outputs and results but also assess whether those results can be considered ‘good’ performance. Evaluations also provide insights that can be used to refine and improve programmes or policies and make them more effective.
Defining what is to be monitored and/or evaluated

Step 1. Defining the scope of the evaluation

The first step in developing a monitoring framework or an evaluation approach is defining what is to be covered. This requires clarity about the scope of the subject of monitoring or evaluation. This can concern aspects such as:

  • the exact activities covered
  • the evaluation period covered
  • the geographical scope of the evaluation

Example:

All activities of the mentoring programme implemented in region X. The evaluation will cover the full period since 2012, the year in which changes to the programme were introduced.


Step 2. Developing the programme theory or intervention logic

The second step is developing the programme theory, or ‘intervention logic’. This explains how the programme is expected to work. It clarifies the logical chain from inputs and activities to outputs, results and impacts, and articulates why a certain activity is expected to lead to certain changes. It also identifies the intermediate changes that are needed to achieve the expected results and impacts.

Step 3. Formulating evaluation questions

Based on the programme description, scope definition and intervention logic, it is important to formulate a set of evaluation questions. These should capture the aspects of the programme on which decisions need to be made. Which questions are most relevant depends very much on the stage of programme design and delivery, and on the nature of the decisions that the evaluation is expected to inform:

  • In some instances it may be more relevant to ask questions about the implementation process, for instance if the aim is to refine an ongoing programme and improve its implementation.
  • In others, more emphasis will be put on results and impacts, for instance if the purpose of the evaluation is to know whether a pilot programme has the desired impact before deciding to establish it more widely.

Examples:

Have early leavers been reached and engaged in the programme as intended?

What effect does the mentoring programme have on participants’ motivation to learn?

What impact does the mentoring programme have on the rate of early leavers in our municipality?
