Evaluation plan for providers and practitioners

Are you in charge of a measure targeted at preventing or reducing early leaving from (vocational) education and training in your school or training centre?

If yes, you can use the Cedefop evaluation plan to help you develop your monitoring and evaluation approach. The steps below set out the tasks you need to carry out.

The information in this section is based on the Cedefop study ‘Leaving education early: putting vocational education and training (VET) centre stage’ and the research conducted for the development of this toolkit. Would you like to know more about this toolkit? Go to About the project >

You can find further information on evaluation in the Evaluate section of the toolkit.

Further reading: the Rainbow Framework, developed by an international collaboration to improve evaluation practice and theory by sharing information about evaluation options and approaches.

Step 1. Define what is to be evaluated

Task 1: Define the scope of the evaluation

Key question 1: Which of the activities you have introduced should be evaluated?

  • Prepare a full list of activities to be evaluated.

Example: Activities to be evaluated: activities aimed at boosting students’ motivation.

Key question 2: What period of time should be covered?

  • Define the period of time that will be covered by the evaluation.

Example: Period of time: from 2014 (when these activities were introduced) to 2016.


Links to relevant sections of the toolkit

Would you like to read more about how to define what is to be evaluated?

Please go to section ‘Deciding what to monitor and evaluate’ >


Task 2: Define what activities were intended to achieve (intervention logic)

Key question 1: What was the reason for introducing these activities and what were you expecting to achieve?

  • For each of the activities clearly state why it was introduced and what it was expected to result in.

Example: See an example of programme theory/intervention logic >


Task 3: Identify unintended results which should be evaluated (optional)

Key question 1: What other positive or negative changes could be happening which were not initially anticipated?

  • Discuss with colleagues to identify whether any unintended effects should be captured.


Example: The selection of a student to participate in a support measure can be perceived negatively by the learner and can harm their motivation. It may be worth covering this aspect in the evaluation.


Task 4: Formulate the key evaluation questions

Key question 1: What questions do you want the evaluation to answer?

  • This can be done by simply translating the expected results and impacts into questions.
  • Or you can formulate a few simple questions.


Example questions:

Have students been engaged in the motivational activities as intended?

What effect do activities undertaken have on participants’ motivation to learn?

What impact do the activities undertaken have on enrolment rates in upper secondary education?

Step 2. Determine what constitutes good performance

Task 1: Establish the criteria that will be used to judge if a measure has performed well, or not


Key question 1: What will be considered as successful performance?

  • Discuss with key colleagues how you will judge whether the results identified are positive.
  • Define these criteria before the evaluation; otherwise the interpretation of the results is likely to be too optimistic.



Example criteria:

Increase the number of students who enrol in upper secondary programmes.

Increase the share of students who enrol in upper secondary programmes by 10%.

Increase the share of students who enrol in upper secondary programmes up to 90%.
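Whichever wording you choose, it helps to make the criterion computable before the results arrive. A minimal sketch of testing the three criterion styles above, assuming illustrative enrolment shares (all figures here are invented for the example):

```python
# Hypothetical check of the three example success criteria against results.
share_before, share_after = 75.0, 86.0   # % of students enrolling (assumed figures)

increased = share_after > share_before                 # criterion 1: any increase
increased_by_10pts = share_after >= share_before + 10  # criterion 2: +10 percentage points
reached_90 = share_after >= 90.0                       # criterion 3: share reaches 90%

print(increased, increased_by_10pts, reached_90)
```

Note that "increase by 10%" is ambiguous (percentage points versus a relative increase); agreeing on one reading in advance is part of defining the criterion.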


Links to relevant sections of the toolkit

Would you like to read more about how to determine what constitutes good performance?

Go to section ‘Deciding if our programme or policy is good enough’ >

Step 3. Define indicators, collect and analyse data

Task 1: Define the indicators that will be used

Key question 1: What will you measure through the evaluation?

  • Based on the expected results and impacts of activities put in place, define the main indicators.



See indicator examples for mentoring and coaching measures >

See indicator examples for school-level action plans >

See indicator examples for re-engaging measures >


Links to relevant sections of the toolkit

Would you like to read more about how to define indicators?

Go to section ‘Choosing relevant indicators’ >

For further reading, you may consult the BetterEvaluation website >


Task 2: Clarify the sources of information for each indicator

Key question 1: What data will be used for each indicator?

  • Reflect on the data you would ideally want to have for each indicator. If it is not possible to get these data, think about alternative data.


Example:

  • Data on the number of practitioners mobilised is available from the measure’s reporting data.
  • Data on ‘absenteeism’ is available in our school’s administrative system.
  • Data on how the measure ‘increased students’ self-confidence and motivation’ is not available but can be collected.
  • Regarding the indicator ‘share of participants who move on to further education/training’ it is not possible to access administrative data on enrolment in further education/training. Self-reports of young people on their current activities and future career plans, six months after the programme, will be used instead.

Task 3: Develop the data collection methodology

Key question 1: What tools are needed for data collection? (interview questionnaires, survey questionnaires, observation templates, etc.)

Key question 2: What sample is needed to collect the data through a given method (e.g. how many students will be interviewed)? This is important for both qualitative and quantitative approaches.

  • Develop the methodology reflecting on:
    • the capacity you have and the resources available (for instance, what human resources are available to conduct the interviews?)
    • what data is already available (e.g. school’s administrative data)
    • what new data needs to be collected
  • You are most likely to:
    • interview/survey learners and staff
    • organise a focus group with learners or staff
    • use your own administrative data about students’ trajectories

Example: A short survey will be conducted over the phone six months after the programme. Former participants will be asked about their current activities (if studying, and which level and programme) and future career plans (if they plan to enrol in further education and training, find a job, or other).


Task 4: Analyse the data

Key question 1: How will the data be analysed and presented?

  • You are most likely to summarise quantitative data by using totals, percentages and averages, and identify examples through the analysis of the qualitative data.
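As an illustration of that kind of summary, here is a minimal sketch with hypothetical survey responses (all field names and figures are assumptions for the example, not data from the toolkit):

```python
# Summarising quantitative survey data with a total, a percentage and an average.
responses = [  # one record per former participant (hypothetical data)
    {"in_education": True,  "motivation_score": 4},
    {"in_education": True,  "motivation_score": 5},
    {"in_education": False, "motivation_score": 2},
    {"in_education": True,  "motivation_score": 3},
]

total = len(responses)
in_education = sum(r["in_education"] for r in responses)
share = 100 * in_education / total
avg_motivation = sum(r["motivation_score"] for r in responses) / total

print(f"Respondents: {total}")
print(f"In further education/training: {in_education} ({share:.0f}%)")
print(f"Average motivation score: {avg_motivation:.1f}")
```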

Examples: See examples of evaluation reports. (Please note that these evaluations were not developed at provider level, so your evaluation is unlikely to be as complex.)


Task 5: Assess change – qualitatively or quantitatively

Key question 1: How have the key indicators evolved over time, before and after the intervention?

  • Compare the situation before the activities were implemented and after.
  • Describe the change qualitatively and quantify it in numbers.

Example: The number of students who enrol in upper secondary programmes has increased by 15%. Enrolment has increased in VET/general programmes, in particular in the following programmes… The self-reported motivations of students to enrol in upper secondary education were… They consider that participating in the activities contributed to their decision to continue studying because…
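A before/after comparison like the one in the example comes down to a few lines of arithmetic. A sketch with assumed enrolment figures (the counts below are illustrative, not real data):

```python
# Hypothetical before/after comparison of an enrolment indicator.
enrolled_before, cohort_before = 120, 160   # baseline year (assumed figures)
enrolled_after,  cohort_after  = 138, 160   # after the intervention (assumed figures)

share_before = 100 * enrolled_before / cohort_before
share_after = 100 * enrolled_after / cohort_after
change_in_number = 100 * (enrolled_after - enrolled_before) / enrolled_before

print(f"Enrolment share: {share_before:.0f}% -> {share_after:.0f}%")
print(f"Number of students enrolled changed by {change_in_number:.0f}%")
```

With these assumed figures the number enrolled rises by 15%, matching the style of the worked example above; the qualitative side (students' self-reported motivations) still needs to be described in words.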


Links to relevant sections of the toolkit

Would you like to read more about how to measure change?

Please go to section ‘Assessing whether our programme or policy makes a difference’ >

Step 4. Compare costs with outputs and results

Task 1: Consider whether the initiative is an effective use of the resources involved

Key question 1: Are the costs reasonable compared to the outputs and results achieved?

  • Assess the financial and human resources that were mobilised to put in place the activities.
  • To judge whether these resources are appropriate given the results obtained, arrange a discussion with colleagues. Alternatively you could run a survey asking them to judge whether the investment is worthwhile.
  • Evaluators can also make a comparison with a similar measure. This requires making sure the costs are calculated using the same approach.

Example: For instance, you can compare the total costs of the programme with the number of young persons who have been reached and the share of those who have seen positive results.
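A minimal sketch of that comparison, with assumed cost and participation figures (every number here is an assumption for illustration):

```python
# Hypothetical cost-effectiveness comparison: total cost versus
# participants reached and participants with a positive result.
total_cost = 20000.0      # assumed programme cost in EUR
participants = 50         # young people reached (assumed)
positive_results = 40     # e.g. re-enrolled or improved attendance (assumed)

cost_per_participant = total_cost / participants
cost_per_positive = total_cost / positive_results

print(f"Cost per participant reached: {cost_per_participant:.0f} EUR")
print(f"Cost per positive result: {cost_per_positive:.0f} EUR")
```

Comparing these unit costs with those of a similar measure only works if both measures count costs the same way, as noted above.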


Links to relevant sections of the toolkit

Would you like to read more about how to compare costs with outputs and results?

Go to section ‘Deciding if our programme or policy is good enough’ >