An intervention logic shows the relationship between programme theory (how and why a programme is expected to lead to change) and the use of indicators for evaluation.
There are different types of indicators used for monitoring and evaluation:
- Context: context indicators do not directly concern the policy/programme being implemented but they help to explain the bigger picture. They are used to contextualise the performance of a programme/policy.
- Inputs: what is invested into the implementation of the programme/policy. Inputs can be financial resources or human resources.
- Process: the activities put in place.
- Outputs: these reflect what the programme directly produced, e.g. the numbers of participants, products or services delivered.
- Results: the direct achievements of the programme/policy. These should capture the direct change that the programme/policy led to. They should be related to what the programme aims to achieve.
- Impacts: the longer term effects of a policy or a programme. Impacts can go beyond the individuals directly involved and look at effects at the level of organisations or society.
- Structural: structural indicators capture what is being done at the level of policy or system to tackle a certain issue and how it is done. They are typically yes/no statements, or they use other categories to capture systemic/policy features.
To get a full understanding of a programme/policy, one should combine a broad set of indicators. A single indicator never tells the full story. All of the above types of indicators should therefore be reflected in the monitoring or evaluation framework.
Indicators can capture quantitative as well as qualitative information. Quantitative indicators focus on absolute numbers, percentages, ratios and other measures. Qualitative indicators capture types or categories, express breadth, describe relationships, etc.
The result and impact indicators should be logically linked to what the programme aims to achieve. They should be chosen carefully and there should not be too many. Stakeholders can tend to focus programme delivery on those indicators that are being measured, which can have adverse effects on the quality of the service/programme. That is why:
- the indicators should be chosen wisely and capture what really matters
- they should be accompanied by a narrative that will translate the data collected into an explanation. This can be supported by an intervention logic.
Examples of quantitative indicators:
- Context: unemployment levels can be a useful contextual indicator to understand early leaving from education and training
- Inputs: programme budget, number of staff
- Process: number and type of activities implemented
- Output: number of persons trained, number of persons who received coaching
Outputs can also be formulated as a share of a certain population rather than an absolute number
- Result: number and share of those who received support and who have reintegrated education/training, number and share of those who received support and who have improved their skills
- Impact: number and share of those who received support and who completed education/training, change in early leaving rate at local level
- Structural indicator: whether the majority of education and training institutions have a plan in place to reduce early leaving
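The "number and share" formulations above can be illustrated with a minimal calculation. The figures and variable names below are hypothetical, used only to show how a share indicator is derived from an output indicator:

```python
# Hypothetical monitoring figures for a re-engagement programme
# (all numbers are illustrative, not real data).
supported = 250     # output: persons who received support
reintegrated = 180  # result: of those, persons back in education/training
completed = 95      # impact: of those, persons who completed a qualification

# Result indicator: share of supported persons who reintegrated
reintegration_share = reintegrated / supported
# Impact indicator: share of supported persons who completed
completion_share = completed / supported

print(f"Reintegrated: {reintegrated} ({reintegration_share:.0%})")
print(f"Completed: {completed} ({completion_share:.0%})")
```

Reporting both the absolute number and the share keeps the indicator interpretable even when the size of the supported population changes between monitoring periods.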
Examples of qualitative indicators:
- relationships between stakeholders involved in programme delivery
- types of changes that learners associate with participation in the programme
- breadth and types of changes in learners’ behaviours observed by practitioners (e.g. teachers/trainers)
The result and impact indicators should be designed to capture the difference a programme/policy makes to the problem of early leaving.
They should reflect what a programme aims to achieve. In other words they should be able to measure the performance of the programme/policy against its objectives. These indicators should reflect all the levels in the logical chain as described in programme theory and intervention logic.
The ultimate objective of programmes/policies covered by this toolkit is to decrease the rate of early leaving by:
- improving retention and graduation rates
- increasing qualification attainment amongst early leavers who return to education or training
This however is not achieved directly by most programmes/policies. As explained in other parts of the toolkit, early leaving is a multi-faceted phenomenon which reflects a range of challenges and difficulties. Programmes/policies tackle these underpinning challenges and factors and lead to related changes at individual or institutional level. It is these intermediary results that are associated with a change in the early leaving rate.
There is a certain time lag between the moment when a person takes part in a policy/programme and the moment when they eventually graduate. This can take several years. Monitoring and evaluation information is often needed to make decisions about programme/policy funding before this period elapses. It is hence very important to design indicators that also capture intermediary results.
Direct and indirect effects of programmes/policies
The links between intermediary results and impacts should be based on evidence. For instance, Cedefop research found that inadequate orientation of students is one of the factors leading to early leaving. A programme that helps young people make informed choices about their education and training pathways should therefore ultimately contribute to tackling early leaving.
How to define what a programme/policy is expected to change and why?
Every programme/policy is underpinned by certain assumptions about the nature of the problem and how a given intervention will change the status quo. These assumptions can be more or less explicit in programme documentation. Often it is necessary to talk to those who designed the programme/policy in order to understand them. It is by clarifying these assumptions that evaluators develop the logical reasoning that captures: the problem, the nature of the activities put in place, and the rationale that explains how these activities will change the problem. A good evaluation should not only measure whether the expected changes happened. It should also explain which of the initial assumptions proved correct and which did not.
Do evaluations only measure what is expected from the programme?
Evaluations are typically guided by the programme intervention logic, which explains what is expected to happen. However, programmes can have unexpected effects too (positive or negative). As these are unexpected, evaluators often become aware of them during the qualitative enquiry, by talking to stakeholders or beneficiaries. In some cases the unexpected effects may be so important that they need to be integrated among the key indicators for the evaluation.
A key evaluator’s tool to develop a monitoring and evaluation indicator framework is the programme/policy intervention logic. The intervention logic breaks down the programme rationale into:
- inputs: resources invested
- process/activities: what the programme does
- outputs: what is directly produced/delivered and who takes part
- results: what concrete changes can be identified at the level of individuals (learners or practitioners) or institutions
- impacts: to what extent the programme decreases early leaving
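The five levels above can be sketched as a simple ordered structure pairing each level with its indicators. The indicator wording below is illustrative only, drawn from the examples given earlier:

```python
# A minimal representation of an intervention logic as an ordered mapping
# from each level to example indicators (indicator wording is illustrative).
intervention_logic = {
    "inputs": ["programme budget", "number of staff"],
    "process": ["number and type of activities implemented"],
    "outputs": ["number of persons trained"],
    "results": ["share of supported persons who reintegrated education/training"],
    "impacts": ["change in early leaving rate at local level"],
}

# The levels form a causal chain; a framework that leaves a level without
# indicators leaves a gap in that chain.
for level, indicators in intervention_logic.items():
    assert indicators, f"no indicator defined for level: {level}"
    print(f"{level}: {', '.join(indicators)}")
```

Structuring the framework this way makes it easy to check that every level of the logical chain is covered by at least one indicator before data collection starts.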
If contextual factors influence the programme implementation or its chances of success, then these should be also clarified. Discussing the intervention logic can clarify any assumptions about the context which are considered as necessary for the programme to succeed.
Monitoring and evaluation indicators should be defined for each aspect of the intervention logic. If indicators only capture outputs the evaluation does not say anything about the real change that can be attributed to the programme. If they only focus on the ultimate results and impacts, then it is not clear how concretely the programme made (or failed to make) a difference. If the change in ultimate impacts is small and no intermediate results are measured, it is not possible to see what aspects of the intervention logic and programme rationale are failing. This makes it difficult to recommend adjustments.
Cedefop research which underpins this toolkit shows that only a small number of evaluations pay sufficient attention to indicators for intermediary results. Many measure outputs and some measure impacts, without assessing what led to these impacts.
Note: an intervention logic is not static. It should be the starting point of a thought process on how to evaluate a programme. Throughout the process it can be fine-tuned to clarify the causal links between inputs, outputs, results and impacts.
Examples of contextual features (not directly linked to the policy/programme) which can affect programme performance are:
- level of unemployment
- staff turnover
- relationship between key stakeholder organisations
- teacher/trainer-student relations
- changes in political priorities
Check our examples of intervention logic for: