
Evaluation plan for policy makers

Are you in charge of a policy or measure targeted at preventing or reducing early leaving from (vocational) education and training?

If yes, you will want to know whether this policy or measure is performing well.

Cedefop has developed an evaluation plan for the monitoring and evaluation of specific policies and measures, aimed at policy makers and other stakeholders who are not experts in evaluation. You can use it when developing your own monitoring and evaluation approach. The steps below present a set of tasks that need to be carried out. They do not define who carries them out, whether the evaluation steering committee, the evaluator or the client commissioning the evaluation: this will differ from case to case.

 

The information in this section is based on the Cedefop study ‘Leaving education early: putting vocational education and training (VET) centre stage’ and the research conducted for the development of this toolkit. Would you like to know more about this toolkit? Go to About the project >

You can find further information on evaluation in the Evaluation section of the toolkit.

Further reading: Rainbow Framework developed by BetterEvaluation.org, an international collaboration to improve evaluation practice and theory by sharing information about evaluation options and approaches.

Step 1. Define what is to be evaluated

Task 1: Specify the policy/measure to evaluate and define the scope of the evaluation

Key question 1: What does the policy/measure consist of?

  • Develop a description of the policy/measure.

Example:

Activities to be evaluated: Mentoring activities for learners at risk of early leaving.

Key question 2: How is it different from other policies/measures?

  • Compare it with other, similar policies/measures.

Key question 3: What period of time should be covered?

  • Define the period of time that will be covered by the evaluation (from - to).

Example:

Period of time: from 2012 (when changes to the mentoring programme were introduced) to 2016.

Key question 4: What is the geographical scope of the evaluation?

  • Define the geographical scope of the evaluation.

Example:

Geographical scope: region X.

Links to relevant sections of the toolkit

Would you like to read more about how to define what is to be evaluated?

Go to section ‘Deciding what to monitor and evaluate’ >


Task 2: Define the programme theory or intervention logic

Key question 1: What are the objectives of the policy/measure and how is it expected to achieve them?

  • Review policy/measure documentation and carry out interviews with those who designed the policy/measure.

Example: See the Cedefop example of programme theory/intervention logics >

Key question 2: What is the logical chain of changes that is expected to lead to the achievement of objectives?

  • Use a visual presentation showing the logical chain from inputs and activities to outputs, results and impacts.

Example: See an example of a visual presentation of programme theory/intervention logics >


Task 3: Identify unintended results which should be evaluated (optional)

Key question 1: What positive or negative results were not expected but could plausibly be observed?

  • Interviews with people who are knowledgeable about the performance of the policy/measure (i.e. those in charge of designing and implementing the measure; this can include VET practitioners involved in its implementation).
  • Risk assessment – think about what could go wrong.

Example:

  • Results are worse than expected due to difficulties in implementation, e.g. insufficient resources at provider level or low commitment of institutional leaders, teachers or trainers.
  • Participating students feel labelled, and this leads to low motivation.
  • Seasonal job openings attract students before programme completion.


Task 4: Formulate the key evaluation questions

Key question 1: What are the questions the evaluation should answer?

  • Discuss the questions among the members of the evaluation team. If additional expertise is needed, a steering group can be established to take decisions about the evaluation.
  • The questions should be clear and open, and their number should be reasonable given the complexity and budget of the policy/measure.

Key question 2: What information is needed for the decisions that will be made on the basis of this evaluation?

Examples:

Have early leavers been reached and engaged in the measure as intended?

What effect does the mentoring programme have on participants’ motivation to learn?

What impact does the mentoring programme have on the rate of early leavers in our municipality?

Step 2. Determine what constitutes good performance

Task 1: Establish the criteria that will be used to judge whether a measure has performed well or not

Key question 1: What will be considered as successful performance?

  • If the policy/measure has targets, these can be used.
  • If there is baseline data, the judgement criteria can be defined in relation to the baseline.
  • Otherwise involve the steering group/key stakeholders in defining these criteria.

Examples:

Increase the number of schools which have action plans to tackle early leaving.

Increase the number of schools which have action plans to tackle early leaving by 20%.

Increase the share of schools which have action plans to tackle early leaving up to 50% of schools.
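The three formulations above set increasingly precise bars for success. A minimal sketch, using made-up baseline and follow-up figures (not taken from any real measure), of how each criterion would be checked:

```python
# Illustrative figures only (not from any real measure)
schools_total = 200
with_plan_before = 80    # schools with an action plan at baseline
with_plan_after = 104    # schools with an action plan at follow-up

# Criterion A: any increase in the number of schools with an action plan
increase = with_plan_after - with_plan_before
print("A. Number increased:", increase > 0)

# Criterion B: increase of at least 20% compared with the baseline
relative_increase = increase / with_plan_before
print(f"B. Increased by at least 20%: {relative_increase >= 0.20} ({relative_increase:.0%})")

# Criterion C: at least 50% of all schools now have an action plan
share_after = with_plan_after / schools_total
print(f"C. Share reached 50%: {share_after >= 0.50} ({share_after:.0%})")
```

The more precisely the criterion is formulated, the easier it is to make and defend the judgement at the end of the evaluation.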

Links to relevant sections of the toolkit

Would you like to read more about how to determine what constitutes good performance?

Go to section ‘Deciding if our programme or policy is good enough’ >

Step 3. Define indicators, collect and analyse data

Task 1: Define the indicators that will be used

Key question 1: Based on the intervention logic, unintended effects and the judgement criteria – what needs to be measured/described as a priority?

  • Unpick the intervention logic, unintended effects and judgement criteria into statements that can form the basis for defining indicators.

Key question 2: What will be the indicators that will enable you to make the judgements?

  • Make sure the indicators cover inputs, outputs, results and impacts, as well as context.

Examples:

See indicator examples for mentoring and coaching measures >

See indicator examples for school-level action plans >

See indicator examples for re-engaging measures >
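As a complement to the examples above, a minimal sketch (with hypothetical field names and entries, not the toolkit's own template) of how each indicator can be specified so that its type, data source, baseline and target are recorded in one place:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    """One row of a simple indicator framework (illustrative structure)."""
    name: str
    level: str                   # input, output, result, impact or context
    data_source: str             # where the data to populate it will come from
    baseline: Optional[float] = None
    target: Optional[float] = None

# Hypothetical entries for a mentoring measure
indicators = [
    Indicator("Number of mentors mobilised", "input",
              "measure's reporting data"),
    Indicator("Share of participants moving on to further education/training (%)",
              "result", "follow-up survey of participants",
              baseline=40.0, target=60.0),
    Indicator("Rate of early leavers in the municipality (%)", "impact",
              "municipal education statistics"),
]

for ind in indicators:
    print(f"{ind.level:>7} | {ind.name} | source: {ind.data_source}")
```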

 

Links to relevant sections of the toolkit

Would you like to read more about how to define indicators?

Go to section ‘Choosing relevant indicators’ >

For further reading, you may consult the BetterEvaluation website >


Task 2: Clarify the sources of information for each indicator

Key question 1: What data will be used to populate a given indicator?

  • Reflect on the nature of the indicator and what data you would ideally want to have. If this is not possible, think about alternative sources of data.

Examples:

  • Data on the ‘number of mentors mobilised’ is available from the measure’s reporting data.
  • Data on ‘absenteeism’ is available on schools’ administrative systems.
  • Data on how the measure ‘increased students’ self-confidence and motivation’ is not available but can be collected.
  • Regarding the indicator ‘share of participants who move on to further education/training’, it is not possible to access administrative data on enrolment in further education/training. Self-reports by young people on their current activities and future career plans, six months after the programme, will be used instead.

Task 3: Develop the data collection methodology to populate the indicators

Key question 1: What data collection tools are needed? (interview questionnaires, survey questionnaires, observation templates, etc.).

Key question 2: What sample is needed to collect the data through a given method? This is important for both qualitative and quantitative approaches.

  • Develop the methodology reflecting on:
    • the capacity you have and the resources available;
    • the extent to which secondary data can be used;
    • what primary data needs to be collected.

Example: A survey will be conducted over the phone six months after the programme. Former participants will be asked about their current activities (if studying, and which level and programme) and future career plans (if they plan to enrol in further education and training, find a job, or other).
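For key question 2, a minimal sketch of how a required sample size can be approximated, assuming a simple random sample and an indicator expressed as a proportion (for instance, the share of former participants in education six months later); the figures are illustrative:

```python
import math

def required_sample_size(population: int,
                         margin_of_error: float = 0.05,
                         confidence_z: float = 1.96,
                         expected_share: float = 0.5) -> int:
    """Approximate sample size for estimating a proportion.

    Standard formula n0 = z^2 * p * (1 - p) / e^2 with a finite
    population correction; expected_share = 0.5 gives the most
    conservative (largest) sample.
    """
    n0 = (confidence_z ** 2) * expected_share * (1 - expected_share) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Example: 600 former participants, 5% margin of error, 95% confidence
needed = required_sample_size(600)
print("respondents needed:", needed)

# Plan for non-response: with an expected 60% response rate, roughly
# needed / 0.6 people would have to be contacted.
print("people to contact:", math.ceil(needed / 0.6))
```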


Task 4: Analyse the data

Key question 1: How will patterns in the data be identified? This concerns both qualitative and quantitative data.

  • Decide which data analysis techniques are most suitable given the evaluation questions.

Key question 2: How will the data be presented?

  • In most cases you will use descriptive statistics for quantitative data and content analysis for qualitative data. The complexity of the analysis depends on the questions to be answered, the indicators to be used but also the breadth and depth of the data.
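A minimal sketch of the descriptive statistics mentioned above, applied to made-up follow-up survey answers on what former participants are doing six months after the programme:

```python
from collections import Counter
from statistics import mean, median

# Made-up survey records: current activity and a 1-5 motivation score
responses = [
    {"activity": "education", "motivation": 4},
    {"activity": "education", "motivation": 5},
    {"activity": "employment", "motivation": 3},
    {"activity": "neither", "motivation": 2},
    {"activity": "education", "motivation": 4},
    {"activity": "employment", "motivation": 4},
]

# Frequency of each current activity
activity_counts = Counter(r["activity"] for r in responses)
total = len(responses)
for activity, count in activity_counts.most_common():
    print(f"{activity}: {count} ({count / total:.0%})")

# Central tendency of the motivation scores
scores = [r["motivation"] for r in responses]
print("mean motivation:", round(mean(scores), 1), "| median:", median(scores))
```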

Examples: See examples of evaluation reports:


Task 5: Assess change – qualitatively or quantitatively

Key question 1: How have key indicators evolved over time, before and after the intervention?

  • Compare the measurement before and after the policy/measure has been put in place to identify trends over time.
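A minimal sketch of such a before/after comparison, using made-up yearly figures for the share of early leavers in a region (the measure is assumed to have started in 2012):

```python
from statistics import mean

# Made-up share of early leavers (%) in region X, by year
early_leaving_rate = {
    2010: 13.8, 2011: 13.5,                                       # before the measure
    2012: 13.2, 2013: 12.6, 2014: 12.1, 2015: 11.8, 2016: 11.4,  # after
}

before = [v for year, v in early_leaving_rate.items() if year < 2012]
after = [v for year, v in early_leaving_rate.items() if year >= 2012]

print("average before the measure:", round(mean(before), 1), "%")
print("average after the measure :", round(mean(after), 1), "%")
print("change from 2012 to 2016  :",
      round(early_leaving_rate[2016] - early_leaving_rate[2012], 1),
      "percentage points")
```

A falling trend on its own does not prove that the measure caused the change; that question is addressed in step 4.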

Example:

The number of schools which have action plans to tackle early leaving has increased by 30%.

Newly adopted action plans are similar to the previously existing ones in that…

New action plans include some new features…

Links to relevant sections of the toolkit

Would you like to read more about how to measure change?

Go to section ‘Assessing whether our programme or policy makes a difference’ >

Step 4. Understand causes of results and impacts

Task 1: Verify attribution, i.e. whether the change is due to the policy/measure being evaluated

Key question 1: To what extent is the change identified a result of the policy/measure?

Example:

The effectiveness study on the Dutch measure Medical Advice for Sick-reported Students (MASS) used a quasi-experimental design. Seven of the 21 schools for pre-vocational secondary education had been applying the intervention programme, and all were asked to participate in the study (intervention schools). From the remaining schools, seven were asked to participate as control schools; these were selected to match the intervention schools as closely as possible in terms of urbanisation, fields of education and school size.

Read 2016 study (in English) >

Key question 2: What would have happened anyway in the absence of the policy/measure?
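A minimal sketch of the reasoning behind such a quasi-experimental comparison, with made-up figures that are not taken from the MASS study: the change observed in the control schools serves as an estimate of what would have happened anyway, and the difference between the two changes is attributed to the measure (a simple difference-in-differences):

```python
# Made-up average absenteeism (days per term) before and after the measure
intervention = {"before": 9.0, "after": 6.5}   # schools applying the measure
control = {"before": 8.8, "after": 8.0}        # matched comparison schools

change_intervention = intervention["after"] - intervention["before"]
change_control = control["after"] - control["before"]

# The change in the control schools approximates what would have happened
# anyway; the remaining difference is attributed to the measure.
estimated_effect = change_intervention - change_control

print("change in intervention schools:", round(change_intervention, 1))
print("change in control schools     :", round(change_control, 1))
print("estimated effect of the measure:", round(estimated_effect, 1))
```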

 

Links to relevant sections of the toolkit

Would you like to read more about how to verify attribution?

Go to section ‘Assessing whether our programme or policy makes a difference’ >

Step 5. Compare costs with outputs and results

Task 1: Consider whether the initiative is an effective use of the resources involved

Key question 1: Are the costs reasonable compared to the outputs and results achieved?

  • Calculate cost per output and, where possible, cost per result (see the sketch below).

Key question 2: How do the costs compare to other, similar, policies/measures?

  • Make comparisons with similar policies/measures.

Key question 3: How do the costs compare for activities within the policy/measure?

  • Make comparisons across different components of the policy/measure.
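A minimal sketch, with made-up budget and output figures, of the unit-cost calculations described above: cost per output, cost per result, and a comparison across components of the measure:

```python
# Made-up figures for two components of a measure against early leaving
components = {
    "mentoring":     {"cost": 120_000, "participants": 300, "completers": 210},
    "summer school": {"cost": 80_000, "participants": 160, "completers": 96},
}

for name, c in components.items():
    cost_per_output = c["cost"] / c["participants"]   # cost per participant
    cost_per_result = c["cost"] / c["completers"]     # cost per completer
    print(f"{name:13}  cost per participant: {cost_per_output:6.0f}  "
          f"cost per completer: {cost_per_result:6.0f}")

# The same unit costs can be compared with those of similar measures,
# provided that outputs and results are defined in the same way.
```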

Example:

The evaluation of the UK measure ‘The Youth Contract for 16-17 year olds not in education, employment or training’ included a cost-benefit analysis. It subtracted the estimated direct and indirect costs of the programme from the estimated long-term benefits of participating in it.

It looks at the impact of the additional qualifications gained through participation in the programme on increased lifetime earnings, improved health and reduced criminal activity.

 

Read 2014 evaluation report (in English) >

 

Links to relevant sections of the toolkit

Would you like to read more about this?

Go to section ‘Deciding if our programme or policy is good enough’ >
