As generative AI reshapes societies, European education and training policy faces a central challenge: how to harness AI’s potential while maintaining the human core of learning - critical thinking, ethical judgement, hands-on skills and social interaction. Across Europe, VET and education policymakers are experimenting with ways to safeguard human learning in AI classrooms, ensuring that technology complements, rather than replaces, human interaction.
1. Generative AI: Opportunities and Risks for Human-Centred Learning
Known limitations
Generative AI systems are powerful but imperfect: they can produce inaccurate information (hallucinations), embed bias, lack transparency, and perform unevenly across languages and educational contexts. Within European education and training policy, these limitations pose direct risks to learning quality and equity, particularly in VET, where accuracy, safety, and practical relevance of content are essential. The European policy context, shaped by institutions such as the European Union and the European Commission, places strong emphasis on inclusion, multilingualism, and evidence-based education. Misaligned or biased AI outputs risk widening achievement gaps, particularly for learners in smaller linguistic communities or disadvantaged regions.

A misinformed learner could internalise flawed techniques or unsafe procedures, while overreliance on AI-generated feedback may weaken critical thinking, professional judgement, and pedagogical autonomy if educators are not adequately trained to interpret, question, and validate AI outputs. These risks highlight why AI must be integrated thoughtfully, with educators empowered to maintain trust, quality, and inclusion in VET learning.
Misuse
Generative AI can be misused for academic dishonesty, automated plagiarism, impersonation, or the creation of fabricated credentials, which directly threatens the integrity of European qualifications frameworks and cross-border recognition of skills. In a policy area that relies on trust, comparability, and mobility, misuse could erode confidence in assessment systems and micro-credentials. Without clear guidance and safeguards, learners may shortcut learning, while malicious actors could exploit AI to generate fraudulent training materials or certification documents.
Society-wide disruptions
Generative AI may accelerate labour market changes that outpace existing education and training systems. European policy seeks to align skills development with economic and social needs, yet rapid automation of cognitive tasks could render some curricula obsolete and intensify skills mismatches. This creates risks of unemployment, polarization between high- and low-skilled workers, and pressure on lifelong learning systems to reskill large population segments. Proactively embedding AI literacy and human-centred skills in VET can reduce these risks and support inclusion.
Existential risks
Although more speculative, existential risks associated with advanced AI, such as loss of human control over critical systems or large-scale societal destabilisation, are relevant to education policy because education shapes future developers, policymakers, and citizens. European education and training frameworks promote human-centred values, democratic participation, and ethical reasoning. Failing to embed these principles in AI-related education could reduce society’s capacity to steer technological development responsibly, increasing long-term systemic risks.
Intellectual property
Generative AI raises complex intellectual property (IP) issues for European education systems, including the use of copyrighted materials in training data and the ownership of AI-generated outputs created by students and educators. These challenges intersect with European copyright law and data protection regimes such as the General Data Protection Regulation (GDPR). Uncertainty around IP may discourage the creation and sharing of open educational resources, or expose institutions to legal liability, thereby slowing innovation in digital learning.
Misinformation/disinformation
Generative AI can rapidly produce persuasive but false or misleading content, threatening learning quality and the development of informed citizenship, both central objectives of European education policy. Students exposed to AI-generated misinformation may struggle to distinguish credible sources, weakening media literacy and democratic resilience. In a multilingual European context, AI-generated disinformation can be scaled across languages, amplifying its impact and complicating monitoring and response efforts. Deepfakes are one tangible challenge: the World Intellectual Property Organization (WIPO) defines deepfakes as videos or images that synthesise media by superimposing human features onto another body, or by manipulating sounds, to generate realistic videos.
2. How Countries Are Keeping Learning Human in the AI Era
Across Europe, Member States are developing policies that foreground human-centred learning as AI technologies enter classrooms and training environments:
Finland: Human-Centred AI Literacy and Guidance
In Finland, the national education authorities (the Finnish National Agency for Education and the Ministry of Education and Culture) have published comprehensive legislation and recommendations on AI in education that apply to early childhood, general education and VET alike. These emphasise pedagogical justification, transparency about AI use, critical assessment of outputs, and training for staff and learners to interpret AI results responsibly. AI should support, and never replace, the development of human competences and ethical judgement. Finland also embeds AI literacy into curricula so students learn to recognise and interrogate misinformation and deepfakes from an early age, building on decades of media literacy tradition to resist manipulative or false content.
Nordic debates and guidance also foresee instructional strategies such as teachers disclosing when and how they use AI in their own work, explaining to students the errors and biases its use can introduce, and setting assignments that deliberately require human cognitive work (e.g. handwritten essays or reflective analysis) to ensure depth of learning.
Lithuania, Belgium, and Italy: VET Educator Professional Development
Erasmus+ projects such as EDU-AI have supported the training of VET educators across Lithuania, Belgium and Italy, with a strong focus on integrating AI into vocational pedagogy rather than treating it as a purely technical add-on (ART-inn, EDU-AI). These initiatives emphasise ethical use, critical evaluation, and classroom practices that enhance human interaction, ensuring that AI complements - rather than replaces - the relational and practice-based nature of VET. Hands-on training and transnational collaboration strengthen peer networks and support educators in applying professional judgement, aligning digital innovation with the core principles of vocational teaching and learning.
Greece: Teacher Training and Pilots in Responsible Use
In October 2025, Greece launched the pioneering “AI in Schools” pilot programme to integrate generative AI into public secondary education. The initiative introduces ChatGPT Edu - a secure, ad-free version of OpenAI’s model - positioning Greece among the first European countries to formally embed AI in the curriculum.
It is supported by structured teacher training that promotes the use of AI to enhance creativity, collaboration, and critical thinking, rather than rote learning. The programme also emphasises ethical use and GDPR-compliant deployment, equipping teachers to guide students in responsible AI use.
Teachers are voluntarily participating in specialised training to safely apply these tools in both teaching and administrative tasks. In parallel, the University of Patras has launched Europe’s first university-certified AI training programmes for educators, offering accredited online courses with ECTS credits. Since 2023, tens of thousands of teachers have already participated, with a target of training up to 200,000.
EU Open Resources and MOOCs
Europe-wide MOOCs such as the Artificial Intelligence in Vocational Education and Training course support educators and learners across Member States by addressing AI’s impact on work, future skills, and its ethical implications for teaching and learning (Digital Skills and Jobs Platform). These open resources highlight that AI in VET is a pedagogical and societal issue, requiring educators and learners to combine digital knowledge with critical thinking, ethical awareness, and professional judgement.
3. Addressing the Most Important Risk: Misinformation and Disinformation
Misinformation and disinformation are the most critical risks in the AI era for European education. At EU level, the recently adopted EU AI Act establishes binding requirements for transparency, risk management, and human oversight for AI systems, including those used in education. The challenge now is translating this regulation into educational policies, curricula, and practices. Keeping learning human depends on coordinated action by stakeholders with complementary responsibilities:
- National education policymakers: Embed AI literacy into VET and general education, covering misinformation detection, source evaluation, and ethical reasoning. Finland’s national guidance demonstrates how to translate high-level principles into pedagogical practice, and Spain has integrated AI competences into curricula and teacher upskilling.
- Educational institutions and VET providers: Develop internal guidelines on AI use, disclosure, and assessment. Encourage project-based learning, oral exams, and critical-source assignments to foreground human reasoning. Professional development is essential for educators to model responsible use.
- Technology developers: Design AI systems that are transparent, auditable, and safe. Features such as labelling AI-generated content, providing provenance information, and offering user-facing explanations empower learners to engage critically with outputs.
Through such measures, Member States can ensure that the EU AI Act becomes a living instrument in classrooms and workshops, not merely a legal text, promoting digital inclusion and wellbeing for both educators and learners.
Conclusion
Keeping learning human in the age of generative AI requires coherent action across policy levels and stakeholders. The EU AI Act provides a regulatory backbone, while Member States such as Finland, Spain and Greece, along with Erasmus+ initiatives, illustrate how national strategies can translate this framework into education-specific guidance. Combined with institutional leadership and responsible technology design, Europe can integrate generative AI into education and training without sacrificing the human capacities - critical thinking, ethical judgement, creativity, and social interaction - that lie at the heart of learning.
Explore the Cedefop VET Toolkit for tackling early leaving to apply these insights in inclusive VET settings.
