Do you want to evaluate whether a policy or measure you oversee, targeted at preventing or reducing early leaving from (vocational) education and training, is performing well?

Cedefop has developed an evaluation plan for the monitoring and evaluation of specific policies and measures, intended for policy-makers and other stakeholders who are not experts in evaluation. You can use the plan when developing your monitoring and evaluation approach. The steps below present a set of tasks that need to be completed. They do not define who completes them – the evaluation steering committee, the evaluator or the client commissioning the evaluation; this responsibility will differ from case to case.

The information in this section is based on the Cedefop study ‘Leaving education early: putting vocational education and training (VET) centre stage’ and the research conducted for the development of this toolkit.


Key question: What does the policy/measure consist of?

Describe the policy/measure. Be as specific as possible about the elements of the policy/measure.

Example:
Mentoring activities for learners at risk of early leaving.

Key question: How is it different from other policies/measures?

Compare the policy/measure with other similar policies/measures to identify what distinguishes it.

Key question: What period of time should be covered?

Define the period of time that will be covered by the evaluation (from - to).

Example:
Period of time: from 2012 (when changes to the mentoring programme were introduced) to 2016.

Key question: What is the geographical scope of the evaluation?

Define the geographical scope of the evaluation.

Example:
Geographical scope: region X.

To read more about how to define what is to be evaluated go to section ‘Deciding what to monitor and evaluate’ of the toolkit >

Key question: What are the policy/measure's objectives and how is it expected to achieve them?

Review the policy/measure documentation (e.g. guidelines, manuals) and interview those who designed the policy/measure.

Example:
See Cedefop example for programme theory/intervention logics >

Key question: What is the logical chain of changes that is expected to lead to the objectives?

Use a visual presentation showing the causal chain from inputs and activities to outputs, results and impacts.

Example:
See an example of visual presentation of programme theory/intervention logics >

Key question: What (negative or positive) results were not expected but could plausibly be observed?

  • Interview people who are knowledgeable about the policy/measure's performance (e.g. those in charge of designing and implementing the measure, including VET practitioners involved in its implementation).
  • Risk assessment – think about what could go wrong.

Example:

  • Participating students feel labelled and this leads to low motivation.
  • Seasonal job openings attract students before programme completion.

Key question: What are the questions the evaluation should answer?

  • Discuss with members of the evaluation team. If the team considers that additional expertise is needed, a steering group can be established to take the decisions about the evaluation.
  • The questions should be clear and open-ended, and their number should be reasonable given the complexity and budget of the policy/measure.

Key question: What information is needed for the future actions that will be taken on the basis of this evaluation?

Examples:

  • Have early leavers been reached and engaged in the measure as intended?
  • What effect does the mentoring programme have on participants’ motivation to learn?
  • What impact does the mentoring programme have on the rate of early leavers in our municipality?

Key question: What will be considered as good performance?

  • If the policy/measure has targets, these can be used.
  • If there is baseline data, the judgement criteria can be defined in relation to the baseline.
  • Otherwise involve the steering group/key stakeholders in defining these criteria.

Examples:

  • Increase the number of schools which have action plans to tackle early leaving.
  • Increase the number of schools which have action plans to tackle early leaving by 20%.
  • Increase the share of schools which have action plans to tackle early leaving up to 50% of schools.
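
As an illustration only, the short calculation below uses hypothetical figures (not taken from any real measure) to show how such a target can be turned into a judgement once follow-up data are available:

# Hypothetical figures, for illustration only
baseline_schools_with_plan = 40    # schools with an action plan before the measure
followup_schools_with_plan = 52    # schools with an action plan at the end of the period
target_increase = 0.20             # judgement criterion: at least a 20% increase on the baseline

observed_increase = (followup_schools_with_plan - baseline_schools_with_plan) / baseline_schools_with_plan
print(f"Observed increase: {observed_increase:.0%}")                                # 30%
print("Criterion met" if observed_increase >= target_increase else "Criterion not met")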

To read more about how to determine what constitutes good performance go to section ‘Deciding if our programme or policy is good enough’ >

Key question: Based on the intervention logic, unintended effects and judgement criteria identified in Steps 1 and 2, what needs to be measured or described as a priority?

Unpack the intervention logic, unintended effects and judgement criteria into statements that can form the basis for defining indicators.

Key question: Which indicators will enable you to make these judgements?

Make sure the indicators cover inputs, outputs, results and impacts, as well as the context.

To read more about how to define indicators go to section ‘Choosing relevant indicators’ >

For further reading, you may consult the Better Education website >

Key question: What data will be used to measure each indicator?

  • Reflect on the nature of the indicator and the data you would ideally want to have. If such data cannot be obtained, think about alternative sources.

Examples:

  • Data on the ‘number of mentors mobilised’ is available from the measure’s reporting data.
  • Data on ‘absenteeism’ is available in schools’ administrative systems.
  • Data on how the measure ‘increased students’ self-confidence and motivation’ is not available but can be collected.
  • It is not possible to access administrative data on enrolment in further education/training. Therefore, to measure the indicator ‘share of participants who move on to further education/training’, use young people’s self-reports of their current activities and future career plans, gathered six months after the programme has ended.

Key question: What data collection tools are needed? (interview questionnaires, survey questionnaires, observation templates, etc.).

Key question: What sample is needed to collect the data for each method? This is important for both qualitative and quantitative approaches.

  • Develop the methodology reflecting on:
    • the capacity you have and the resources available;
    • the extent to which secondary data can be used;
    • what primary data needs to be collected.

Example:
A survey will be conducted over the phone six months after the programme ends. Former participants will be asked about their current activities (if studying, and at which level and programme) and future career plans (if they plan to enrol in further education and training, find a job, or other).
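
A minimal sketch of how the sample size for such a follow-up survey could be estimated, assuming a simple random sample and a proportion-type indicator (the population size, confidence level and margin of error below are illustrative assumptions):

import math

# Illustrative assumptions: about 600 former participants can be contacted,
# 95% confidence level and a margin of error of +/-5 percentage points
population = 600
z = 1.96            # z-score for 95% confidence
p = 0.5             # assumed proportion; 0.5 gives the most conservative (largest) sample
margin = 0.05

n_infinite = (z ** 2) * p * (1 - p) / margin ** 2               # sample size for a very large population
n_needed = n_infinite / (1 + (n_infinite - 1) / population)     # finite population correction

print(math.ceil(n_needed))   # about 235 completed interviews needed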

Key question: How will patterns in the data be identified? This concerns both qualitative and quantitative data.

Decide which data analysis techniques are most suitable given the evaluation questions.

Key question: How will the data be presented?

In most cases you will use descriptive statistics for quantitative data and content analysis for qualitative data. The complexity of the analysis depends on the questions to be answered and the indicators to be used, but also on the breadth and depth of the data.
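
For the quantitative side, the sketch below shows what basic descriptive statistics could look like in Python with the pandas library; the file name and column names are hypothetical:

import pandas as pd

# Hypothetical file with one row per participant; file and column names are assumptions
df = pd.read_csv("participants.csv")

# Descriptive statistics for the quantitative indicators (count, mean, std, quartiles)
print(df[["absence_days", "motivation_score"]].describe())

# Simple cross-tabulation, e.g. share of completers by school
print(pd.crosstab(df["school"], df["completed_programme"], normalize="index"))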

Key question: What is the observed change in the key indicators over time, before and after the evaluated measure was put in place?

Compare the measurement before and after the policy/measure has been put in place to identify trends over time.

Example:
The number of schools which have action plans to tackle early leaving increased by 30% after the requirement to have an action plan was introduced.
Newly adopted action plans are similar to the previously existing ones in that…
New action plans include some new features…
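
As an illustration only, the sketch below uses hypothetical yearly figures (consistent with the example above, not real data) to show how such a trend can be calculated:

# Hypothetical yearly counts of schools with an action plan (requirement introduced in 2014)
schools_with_plan = {2012: 38, 2013: 40, 2014: 44, 2015: 49, 2016: 52}

baseline = schools_with_plan[2013]   # last year before the requirement
latest = schools_with_plan[2016]
change = (latest - baseline) / baseline

print(f"Change since the requirement was introduced: {change:.0%}")   # 30%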

To read more about how to measure change go to section ‘Assessing whether our programme or policy makes a difference’ >

Key question: To what extent is the change identified a result of the policy/measure?

  • Compare the results with those of a control group, if feasible.
  • Carry out a qualitative contribution analysis.

Example:
To evaluate the effectiveness of the Dutch programme Medical Advice for Sick-reported Students (MASS), policy-makers used a quasi-experimental design. Seven of the 21 schools for pre-vocational secondary education had been applying the MASS programme, and all seven were asked to participate in the study as intervention schools. From the remaining schools, policy-makers chose seven to participate as control schools, selected so that their characteristics matched those of the intervention schools as closely as possible in terms of urbanisation, fields of education and school size.
Read the MASS evaluation 2016 study (in English) >
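
As an illustration of the kind of comparison such a design allows, the sketch below applies a simple difference-in-differences calculation to hypothetical figures; these are not the actual MASS data:

# Hypothetical average number of sick-reported days per student (not the actual MASS data)
intervention = {"before": 9.2, "after": 6.1}   # schools applying the programme
control = {"before": 9.0, "after": 8.4}        # matched comparison schools

change_intervention = intervention["after"] - intervention["before"]   # -3.1 days
change_control = control["after"] - control["before"]                  # -0.6 days

# Difference-in-differences: the change attributable to the programme,
# assuming both groups would otherwise have followed the same trend
effect = change_intervention - change_control
print(f"Estimated effect: {effect:.1f} days per student")              # -2.5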

Key question: What would have happened anyway in the absence of the policy/measure?

To read more about how to verify the causal mechanism go to section ‘Assessing whether our programme or policy makes a difference’ >

Key question: Are the costs reasonable compared to the outputs and results achieved?

Calculate the cost per output and, where possible, the cost per result.
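
As an illustration only, the sketch below calculates cost per output and cost per result from hypothetical budget and monitoring figures:

# Hypothetical budget and monitoring figures for a mentoring measure
total_cost = 250_000            # total spending over the evaluation period
participants_mentored = 400     # output: learners who received mentoring
stayers = 310                   # result: participants still in education/training a year later

print(f"Cost per participant mentored: {total_cost / participants_mentored:,.0f}")   # 625
print(f"Cost per participant retained: {total_cost / stayers:,.0f}")                 # 806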

Key question: How do the costs compare to other, similar, policies/measures?

Make comparisons with similar policies/measures.

Key question: How do the costs compare for activities within the policy/measure?

Make comparisons across different components of the policy/measure.

Example:
The evaluation of the UK measure ‘The Youth Contract for 16-17 year olds not in education, employment or training’ included a cost-benefit analysis. It subtracted the estimated direct and indirect costs of the programme from the estimated long-term benefits of participating in it: the impact of additional qualifications gained through participation on increased lifetime earnings, improved health and reduced criminal activity.
Read the 2014 Youth Contract evaluation report (in English) >
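
As an illustration of the basic logic of such a cost-benefit calculation, the sketch below uses hypothetical, simplified figures; they are not the actual Youth Contract estimates:

# Hypothetical, simplified figures (not the actual Youth Contract estimates)
direct_costs = 1_200_000          # delivery costs of the programme
indirect_costs = 300_000          # e.g. staff time in schools and support services

# Estimated long-term benefits of the additional qualifications gained by participants
lifetime_earnings_gain = 2_100_000
health_savings = 250_000
reduced_crime_savings = 150_000

total_benefits = lifetime_earnings_gain + health_savings + reduced_crime_savings
total_costs = direct_costs + indirect_costs

print(f"Estimated net benefit: {total_benefits - total_costs:,}")   # 1,000,000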

To read more about this go to section ‘Deciding if our programme or policy is good enough’ >