Do you want to evaluate whether a measure you oversee, aimed at preventing or reducing early leaving from (vocational) education and training, is performing well?

Cedefop has developed an evaluation plan for the monitoring and evaluation of specific policies and measures, designed for learning providers who are not experts in the field of evaluation. You can use the plan when developing your own monitoring and evaluation approach. The steps below present a set of tasks to be completed.

The information in this section is based on the Cedefop study ‘Leaving education early: putting vocational education and training (VET) centre stage’ and the research conducted for the development of this toolkit.

Key question: Which of the activities you have introduced should be evaluated?

Prepare a full list of activities to be evaluated.

Example:
All activities aimed at boosting students’ motivation.

Key question: What period of time should be covered by the evaluation?

Define the period of time that will be covered by the evaluation.

Example:
From 2014 (when these activities were introduced) to 2016.

To read more about how to define what is to be evaluated, go to section ‘Deciding what to monitor and evaluate’ >

Key question: What was the reason for introducing these activities and what were you expecting to achieve?

For each of the activities clearly state why it was introduced and what it was expected to achieve.

Example:
See example for programme theory/intervention logics >

Key question: What other positive or negative changes could be happening which were not initially anticipated?

Discuss with colleagues to identify whether any unintended effects should be captured.

Example:
The selection of a student to participate in a support measure can be perceived negatively by the learner and have a negative impact on their motivation. It may be worth covering this aspect in the evaluation.

Key question: What questions do you want the evaluation to answer?

  • You can do this simply by translating the expected results and impacts into questions.
  • Alternatively, formulate a few simple questions.

Examples:

  • Have students been engaged in the motivational activities as intended?
  • What effect do the undertaken activities have on participants’ motivation to learn?
  • What impact do the undertaken activities have on enrolment rates in upper secondary education?

Key question: What will be considered as good performance?

  • Discuss with key colleagues how you will judge whether the identified results are positive.
  • Define these criteria before the evaluation; otherwise, the interpretation of the results is likely to be too optimistic.

Examples:
A measure would be considered as performing well if there is:

  • an increase in the number of students enrolled in upper secondary programmes;
  • an increase of 10% in the share of students enrolled in upper secondary programmes;
  • an increase in the share of students enrolled in upper secondary programmes to at least 90%.
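
To make the criteria concrete, here is a minimal sketch, assuming illustrative enrolment figures, of how they could be checked; the 10% criterion is read here as a relative increase in the share.

```python
# Minimal sketch: checking performance criteria against enrolment figures.
# All figures are illustrative assumptions, not real data.

enrolled_before = 120  # students enrolled in upper secondary programmes before the measure
enrolled_after = 138   # students enrolled after the measure
cohort_size = 150      # size of the target cohort (assumed constant)

share_before = enrolled_before / cohort_size
share_after = enrolled_after / cohort_size

# Criterion 1: any increase in the number of students enrolled
print("Number of enrolled students increased:", enrolled_after > enrolled_before)

# Criterion 2: share of enrolled students up by at least 10% (relative)
print("Share increased by at least 10%:",
      (share_after - share_before) / share_before >= 0.10)

# Criterion 3: share of enrolled students reaches at least 90%
print("Share is at least 90%:", share_after >= 0.90)
```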

To read more about how to determine what constitutes good performance, go to section ‘Deciding if our programme or policy is good enough’ >

Key question: What do you need to measure to answer the evaluation questions identified in Step 1, Task 1.4?

Based on the expected results and impacts of activities put in place, define the main indicators.

To read more about how to define indicators, go to section ‘Choosing relevant indicators’ >

For further reading, you may consult the Better Education website >

Key question: What data will be used to measure each indicator?

  • Reflect on the data you would ideally want to have for each indicator. If it is not possible to obtain these data, think about alternative data sources.

Examples:

  • Data on the number of practitioners mobilised is available from the measure’s reporting data.
  • Data on ‘absenteeism’ is available in the administrative system of our school.
  • Data on how the measure ‘increased students’ self-confidence and motivation’ is not available but can be collected.
  • It is not possible to access administrative data on enrolment in further education/training. Therefore, to measure the indicator ‘share of participants who move on to further education/training’, use young people’s self-reports on their current activities and future career plans, gathered six months after the programme has ended.

Key question: What tools are needed for data collection? (interview questionnaires, survey questionnaires, observation templates, etc.)

Key question: What sample is needed to collect the data for each method? (e.g. how many students will be interviewed). This is important for both qualitative and quantitative approaches.

  • Develop the methodology reflecting on:
    • how many survey responses or interviews you need to draw valid conclusions (a rough sample-size sketch follows the example below)
    • the capacity you have and the resources available (for instance, what human resources are available to conduct the interviews?)
    • what data is already available (e.g. school’s administrative data)
    • what new data needs to be collected
  • You are most likely to:
    • interview or survey learners and staff
    • organise a focus group with learners or staff
    • use your own administrative data about students’ trajectories

Example:
A short survey will be conducted over the phone six months after the programme ends. Former participants will be asked about their current activities (if studying, and at which level and programme) and future career plans (if they plan to enrol in further education and training, find a job, or other).
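
If you are unsure how many survey responses are enough, a standard sample-size formula for estimating a proportion can give a rough target. The sketch below is illustrative: the population size, confidence level and margin of error are assumptions to adjust to your own context.

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Rough sample size for estimating a proportion at 95% confidence.

    Uses n0 = z^2 * p * (1 - p) / e^2 with a finite-population
    correction; p = 0.5 is the most conservative assumption.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Illustrative: surveying former participants from a pool of 200
print(sample_size(200))  # about 132 responses for a +/-5% margin
```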

Key question: How will the data be analysed and presented?

You are most likely to summarise quantitative data using totals, percentages and averages, and to identify illustrative examples through analysis of the qualitative data.
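
As a sketch, assuming hypothetical survey responses and column names, the quantitative summary could look like this:

```python
# Minimal sketch summarising hypothetical survey responses with
# totals, percentages and an average. Column names and values are
# illustrative assumptions.
import pandas as pd

responses = pd.DataFrame({
    "current_activity": ["studying", "working", "studying", "other", "studying"],
    "motivation_score": [4, 3, 5, 2, 4],  # self-rated motivation to learn, 1-5
})

total = len(responses)
share_by_activity = responses["current_activity"].value_counts() / total * 100

print("Total responses:", total)
print("Share by current activity (%):")
print(share_by_activity.round(1))
print("Average motivation score:", responses["motivation_score"].mean())
```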

Key question: What changes do you observe in the key indicators over time, before and after the evaluated measure was put in place?

  • Compare the indicators before the activities were implemented and after.
  • Qualify (describe qualitatively) and quantify (express in numbers) the change.

Example:

The number of students enrolled in upper secondary programmes has increased by 15%. Enrolment has increased in VET/general programmes, in particular in the following programmes… The self-reported motivations of students to enrol in upper secondary education were… They consider that participating in the activities contributed to their decision to continue studying because…
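
A minimal sketch of quantifying such a change is below; the yearly figures are assumptions, chosen so that the 2014-2016 change reproduces the 15% increase in the example.

```python
import pandas as pd

# Illustrative yearly values of one key indicator (assumed figures)
enrolled = pd.Series({2013: 118, 2014: 120, 2015: 130, 2016: 138},
                     name="students enrolled in upper secondary programmes")

print(enrolled.pct_change().mul(100).round(1))  # year-on-year change in %

# Change over the evaluated period (2014, when the activities started, to 2016)
overall = (enrolled[2016] - enrolled[2014]) / enrolled[2014] * 100
print(f"Change since the activities started: {overall:.1f}%")
```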

To read more about how to measure change, go to section ‘Assessing whether our programme or policy makes a difference’ >

Key question: Are the costs reasonable compared to the outputs and results achieved?

  • Assess the financial and human resources that were mobilised to put in place the activities.
  • To judge whether these resources are appropriate given the results obtained, arrange a discussion with colleagues. Alternatively, you could run a survey asking them to judge whether the investment is worthwhile.
  • Evaluators can also make a comparison with a similar measure. This requires making sure the costs are calculated using the same approach.

Example:
You can compare the total costs of the programme with the number of young people who have been reached and the share of those who have seen positive results.
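
A minimal sketch of such a comparison follows; all figures are illustrative assumptions.

```python
# Minimal sketch comparing programme costs with reach and results.
# All figures are illustrative assumptions.

total_cost = 25_000.0    # total cost of the programme over the evaluated period
participants = 80        # young people reached
positive_results = 60    # e.g. participants who stayed in, or returned to, education

print(f"Cost per participant reached: {total_cost / participants:.2f}")
print(f"Cost per positive result: {total_cost / positive_results:.2f}")
```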

To read more about how to compare costs with outputs and results, go to section ‘Deciding if our programme or policy is good enough’ >