Generative AI (GenAI) and emerging agentic systems are moving AI into the learning process itself. These systems don’t stop at delivering content: they explain, adapt, remember and guide learners through tasks. In doing so, they change where cognitive effort sits, i.e., what learners do themselves and what gets delegated to machines.
This shift unlocks significant opportunities. GenAI can provide on-demand explanations, examples and feedback at scale. It can diversify learning resources through multimodal content, support learners working in a second language and reduce friction when students get stuck, lowering barriers to engagement and persistence. For some learners, AI-mediated feedback can feel psychologically safer, encouraging experimentation, revision and help-seeking without fear of judgement.
But these gains come with important risks. The same design choices that improve short-term performance, confidence or engagement can weaken independent reasoning, distort social development or introduce hidden dependencies over time. In educational contexts, especially those involving children and teens, this means learning, safeguarding, regulatory and reputational risks. If the “Google effect” (digital amnesia) raised concerns about outsourcing memory to search engines, LLMs are likely to be an even more powerful outsourcing channel in practice.
Agentic and multi-agent systems raise the stakes further. As AI systems plan, adapt, coordinate, and act proactively, they can quietly assume roles that belong to learners or educators: framing problems, sequencing tasks, resolving disagreement, or deciding what happens next. When these shifts are unexamined, learning can collapse into automated coordination, impacting cognitive development.
This is why emerging standards and guidance (e.g., UNESCO’s Guidance for generative AI in education and research, or the UK Department for Education’s Generative AI: product safety standards) take a risk-first approach to AI in education. As AI becomes embedded in learning, design choices increasingly carry regulatory, safeguarding and reputational consequences alongside pedagogical ones.
Where AI Can Undermine Learning
AI introduces learning and developmental risks unless we actively mitigate them.
These risks are not uniform across learners; age and developmental stage matter. Younger learners are more susceptible to over-trust, emotional reliance and authority transfer, while older learners may be better able to interrogate and contextualise AI outputs.
Responsible educational AI requires age-appropriate constraints on autonomy, interaction style and use cases. One model of use won’t fit all learners.
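As a concrete illustration, age-appropriate constraints can be expressed as explicit policy objects rather than left to implicit model behaviour. The sketch below is a minimal, hypothetical Python example; the age bands, field names and values are assumptions for illustration, not recommendations from any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgePolicy:
    """Illustrative per-age-band constraints; bands and values are assumptions."""
    max_autonomy: str        # how much the system may do without confirmation
    companion_persona: bool  # is anthropomorphic, companion-style framing allowed?
    full_solutions: bool     # may the system ever reveal complete answers?

POLICIES = {
    "under_13": AgePolicy(max_autonomy="suggest_only",
                          companion_persona=False, full_solutions=False),
    "13_to_17": AgePolicy(max_autonomy="confirm_each_step",
                          companion_persona=False, full_solutions=False),
    "adult":    AgePolicy(max_autonomy="plan_with_review",
                          companion_persona=True, full_solutions=True),
}

print(POLICIES["under_13"])  # one interaction model per age band, not one-size-fits-all
```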
Risk Accumulation
Hallucinations and biases
AI can produce confident but incorrect or biased explanations.
These errors are often hard to detect, can introduce misconceptions and cognitive bias, and can amplify over-trust, undermining learner judgement. Over time, they can reinforce misconceptions, stereotypes and uneven representations, shaping understanding, behaviour and sense of self.
Cognitive deskilling
When AI is available to generate answers, full solutions and completed tasks, learners offload cognitive work, which can cause long-term developmental harm. It shows up as reliance on full solutions rather than grappling with problems. Over time, this can weaken reasoning, problem-solving and persistence.
Psychological, emotional and social risk
AI learning systems built as conversational, companion-style or anthropomorphic agents can encourage emotional reliance, reduce peer interaction and undermine real-world support networks. This is especially problematic for children, as these designs can distort emotional and social development and blur boundaries around trust and authority.
Context insensitivity
When deployed without contextual adaptation, outputs can lead to explanations, examples or learning strategies that are inappropriate, misleading or misaligned with learners’ educational context. Over time, this risks privileging dominant knowledge frameworks, marginalising local perspectives and creating friction with classroom practices, particularly in culturally diverse or resource-constrained settings.
Manipulative design
Patterns such as flattery, unjustified confidence, social pressure (e.g., ‘others have done this’), reward-based engagement loops or monetisation-driven dark patterns can influence learner behaviour in ways that are not acceptable in learning contexts.
Data privacy
Continuous monitoring, collection of sensitive learner data, or reuse of data for commercial purposes or model training raise serious concerns, especially for children, who are treated as a high-risk group under data protection law.
Agentic autonomy
Agentic AI systems plan, adapt and act over time. Memory and proactive decision-making can gradually shift agency away from learners (even when short-term task performance improves), creating dependency loops that are difficult to detect. Learners may rely increasingly on system guidance for task sequencing, decision-making or problem framing, reducing opportunities for independent reasoning and productive struggle. In multi-agent systems, coordinated outputs or apparent consensus can further inflate epistemic authority, making AI guidance appear more reliable than it is and discouraging critical evaluation.
Anti-Patterns in EdTech AI Design
Responsible edtech requires human-centred design and system-level accountability.
The patterns below highlight design risks that product teams can address to reduce harm and improve learning outcomes.
01. Solution-by-default
AI is configured to generate full answers, code, solutions or explanations as the primary interaction. This encourages cognitive offloading, shortcuts productive struggle and shifts the learner from sense-making to copying.
Red flag: Learners can complete tasks without attempting them.
02. Fluency instead of understanding
Performance while AI is active is used as a success signal, without testing whether learners can perform independently.
Red flag: Success metrics focus only on task completion, speed or output quality, without measuring skill transfer.
03. AI as the primary authority
The system speaks with high confidence and human-like authority, discouraging questioning or verification. Multi-agent consensus is presented as authoritative rather than provisional.
Red flag: Learners rarely challenge or revise AI outputs.
04. Optimising for engagement
Design choices prioritise time-on-task, retention or satisfaction over learning depth. Persuasive nudges, flattery and gamified, engagement- or usage-driven rewards do not drive understanding.
Red flag: Engagement metrics improve while independent performance stagnates or declines.
05. Opaque decisions
Learners and educators cannot tell why the system intervened, what information it used or how confident it is. Errors, bias and hallucinations go unnoticed.
Red flag: The system gives answers with no explanation or source transparency.
06. Agentic autonomy without boundaries
Agentic systems plan tasks, set goals or sequence learning steps without learner confirmation. Memory and adaptation quietly replace learner agency over time.
Red flag: The system increasingly decides what happens next.
07. Confidence without competence
Learners feel more confident using AI but are not given opportunities to test skills without support. Confidence becomes a misleading proxy for mastery (the Dunning-Kruger effect).
Red flag: No AI-free checkpoints or 'try without help' moments.
08. Misuse treated as a user problem
Design assumes learners will use AI responsibly without constraints, scaffolds or literacy support.
Red flag: Responsibility is pushed to users rather than designed into the system.
09. Age- and context-blind design
AI systems are deployed without adaptation to learner age and developmental stage, local curricula, or cultural and linguistic context.
Red flag: The same interaction patterns, autonomy level and behaviour are applied across age groups and contexts.
Designing for Protection: Guardrails for EdTech AI
Learning harm is often an emergent property of design choices, interaction patterns and governance gaps. Mitigating it requires human-centred design and system-level accountability.
Ground AI in learning science
AI must be grounded in human learning theory (e.g., constructivism, Inquiry-Based Learning (IBL), scaffolding, the Zone of Proximal Development (ZPD)), not optimised for task completion, speed, fluency or output quality alone.
Responsible design considerations: Encourage cognitive engagement, reflection, verification and independent reasoning.
Design against over-reliance
Systems should enforce clear role boundaries, avoid authoritative or solution-first behaviour and prioritise scaffolding. Support should fade as competence increases, not persist indefinitely.
Responsible design considerations: Implement attempt-first flows, progressive disclosure and intentional support fading.
Explicitly design for scaffolding modes (Aid/Complement) rather than replacement mode (Substitute) as the default. Make solutions available only after evidence of learner engagement (e.g., an attempt, explanation, comparison).
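Attempt-first gating and support fading can be made explicit in code rather than left to model behaviour. The sketch below is a minimal, hypothetical Python example; the state fields, thresholds and mode names are illustrative assumptions, not a validated scaffolding model.

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    """Evidence of engagement gathered before full solutions unlock."""
    attempts: int = 0      # independent tries on the current task
    explanations: int = 0  # times the learner articulated their reasoning
    mastery: float = 0.0   # 0.0 (novice) to 1.0 (independent); estimated elsewhere

def support_level(state: LearnerState) -> str:
    """Fade support as competence grows: attempts first, hints next, solutions last."""
    if state.attempts == 0:
        return "prompt_attempt"          # no help until the learner has tried
    if state.mastery < 0.4:              # illustrative threshold
        return "scaffolded_hint"         # partial guidance, never the full answer
    if state.explanations == 0:
        return "ask_for_explanation"     # require reasoning before revealing more
    return "full_solution_available"     # unlocked only after evidence of engagement

# A learner who has attempted once but not yet explained their reasoning
print(support_level(LearnerState(attempts=1, mastery=0.3)))  # -> scaffolded_hint
```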
Preserve agency
AI should support learners in questioning, verifying and revising, not accepting outputs at face value. Interfaces should make uncertainty visible and prompt reflection and contradiction (e.g., 'What would you check?', 'What do you keep and why?').
Responsible design considerations: Treat agency as a core learning outcome. Design prompts, UI constraints and feedback loops that require justification, comparison and learner reasoning.
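One way to operationalise this, sketched below, is to gate acceptance of an AI answer behind a learner justification. This is a toy Python example: the prompts and the word-count threshold are illustrative assumptions, and a real product would use a richer signal than length.

```python
REFLECTION_PROMPTS = [
    "What would you check before trusting this answer?",
    "What do you keep, and why?",
    "Where might this explanation be wrong?",
]

def can_accept(justification: str) -> bool:
    """Require a non-trivial justification before an AI answer can be saved.
    The 10-word threshold is illustrative, not a validated cut-off."""
    return len(justification.split()) >= 10

# A one-word rationale is rejected; the UI would re-prompt with a reflection question
print(can_accept("Looks right."))                                    # False
print(can_accept("The formula matches the textbook derivation and the "
                 "units check out after substitution."))             # True
```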
Ensure human oversight
Responsible design considerations: Architect systems to distinguish low-stakes from high-stakes workflows. Enable human review, override and escalation for consequential decisions. Make the level of AI authority explicit rather than implicit.
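A minimal sketch of stakes-aware routing, assuming a simple low/high classification and an illustrative confidence threshold; the labels and thresholds are hypothetical, not drawn from any cited standard.

```python
from enum import Enum

class Stakes(Enum):
    LOW = "low"    # e.g., practice hints, formative nudges
    HIGH = "high"  # e.g., grading, progression decisions, safeguarding flags

def route(action: str, stakes: Stakes, confidence: float) -> str:
    """Send consequential or low-confidence decisions to a human for review."""
    if stakes is Stakes.HIGH:
        return f"escalate_to_educator: {action}"   # human review and override required
    if confidence < 0.7:                           # illustrative threshold
        return f"flag_for_review: {action}"        # applied, but queued for human audit
    return f"auto_apply: {action}"

print(route("suggest next exercise", Stakes.LOW, confidence=0.9))
print(route("move learner down a level", Stakes.HIGH, confidence=0.95))
```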
Conduct longitudinal behavioural testing
Short-term confidence and immediate efficiency gains can mask declining independent performance. Governance must include longitudinal monitoring of reliance, cognition, self-efficacy and differential impacts across learner groups.
Responsible design considerations: Track learning trajectories over time, i.e., compare AI-assisted with unaided performance and monitor whether confidence and independence converge or diverge.
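The sketch below shows one way such a divergence check could be computed from cohort metrics. It is a deliberately simplified Python illustration (first-to-last deltas rather than proper trend models); the metric names and flag logic are assumptions.

```python
def confidence_competence_gap(assisted: list[float],
                              unaided: list[float],
                              confidence: list[float]) -> dict:
    """Compare cohort trajectories over time. Rising confidence alongside flat
    or falling unaided performance is a warning sign of over-reliance."""
    def trend(xs: list[float]) -> float:
        return xs[-1] - xs[0]  # first-to-last delta; a real analysis would fit a model

    return {
        "assisted_trend": trend(assisted),
        "unaided_trend": trend(unaided),
        "confidence_trend": trend(confidence),
        "divergence_flag": trend(confidence) > 0 and trend(unaided) <= 0,
    }

# Example cohort: confidence rises while unaided performance slowly declines
print(confidence_competence_gap(
    assisted=[0.70, 0.78, 0.85],
    unaided=[0.55, 0.54, 0.53],
    confidence=[0.60, 0.75, 0.90],
))  # divergence_flag -> True
```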
Design for transparency and explainability
Responsible design considerations: Expose reasoning and uncertainty through interaction design (e.g., a 'Why this suggestion?' affordance). Make AI contributions visibly distinguishable from learner work.
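In practice, this can mean making every suggestion carry its rationale, sources and uncertainty as first-class data the UI must render. A minimal hypothetical Python sketch; the field names and values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedSuggestion:
    """Each AI suggestion carries its rationale and uncertainty for the UI to render."""
    suggestion: str
    rationale: str              # surfaced behind a 'Why this suggestion?' affordance
    sources: list[str] = field(default_factory=list)  # approved materials it drew on
    confidence: str = "medium"  # e.g., 'high' / 'medium' / 'low', shown to the learner

print(ExplainedSuggestion(
    suggestion="Revisit fractions before starting ratios.",
    rationale="Your last three ratio exercises showed errors when simplifying fractions.",
    sources=["course_handbook"],
    confidence="medium",
))
```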
Prevent inaccuracy and pedagogical misalignment
For AI-generated learning materials (e.g., videos, quizzes, Q&A), ensure outputs align with course content and instructional and academic goals.
Responsible design considerations: Use restricted retrieval from approved materials (e.g., RAG) and human oversight to prevent misinformation, bias and low pedagogical quality.
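A toy sketch of the restriction step: retrieval is filtered to an allow-list of vetted sources before any generation happens. This is an illustrative Python example with naive substring matching standing in for real vector search; the source IDs and data shapes are assumptions.

```python
APPROVED_SOURCES = {"course_handbook", "approved_textbook_ch3"}  # illustrative IDs

def retrieve(query: str, index: list[dict]) -> list[dict]:
    """Return passages only from vetted sources; generation is grounded in these alone."""
    return [doc for doc in index
            if doc["source_id"] in APPROVED_SOURCES
            and query.lower() in doc["text"].lower()]  # naive match stands in for vector search

index = [
    {"source_id": "course_handbook", "text": "Photosynthesis converts light energy..."},
    {"source_id": "random_web_page", "text": "Photosynthesis myths debunked!"},
]
print(retrieve("photosynthesis", index))  # only the handbook passage survives the filter
```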
Do not dehumanize learning
Social interaction is a learning-relevant ingredient. Designs that remove it can reduce learning quality (e.g., lower perceived social presence).
Responsible design considerations: Position the AI as a tool or facilitator. Embed human guidance and actively redirect learners to teachers, peers or group discussion when social engagement is essential for understanding, reflection or motivation.
Treat AI literacy as a prerequisite
AI literacy for learners and educators is a prerequisite for responsible deployment. We cannot expect it to emerge automatically through use.
Responsible design considerations: Show what AI can and cannot do, and why outputs are generated. Provide contextual explanations (e.g., tooltips, examples) at the moment of use, instead of one-off onboarding. Offer distinct explanations and controls for learners, educators, parents and administrators, aligned with their decision-making responsibilities and pedagogical roles.
Co-creation, system evaluation and ongoing monitoring
Co-creation with learning and development experts, educators, learners and local domain experts is essential to ensure systems align with learning theory, developmental needs, cultural and linguistic contexts and the constraints of classroom and school environments.
Testing must cover the full AI learning system, including data sources and knowledge bases, retrieval and grounding mechanisms, prompts, interaction design, memory and adaptation, orchestration logic, and, where applicable, the dynamics of multi-agent interactions.
Early testing and continuous post-deployment monitoring should not stop at output quality. They must also examine learning impact, dependency, shifts in learner agency and unintended behaviours that emerge over time.
For agentic and multi-agent systems, this includes testing how decisions are delegated, how disagreement is resolved, and whether collaboration supports learning or collapses into automated coordination. Because many risks surface only through repeated use, longitudinal monitoring must be treated as a governance requirement.
Treating educators, schools, learners and local domain experts as ongoing partners rather than end users helps surface risks early and recalibrate system behaviour. This ensures AI-supported learning remains pedagogically robust, inclusive, safe and effective across diverse educational settings.
AI will increasingly shape how we think, collaborate and develop over time
Whether this strengthens or erodes learning is a design, governance and responsibility choice, shared by builders, institutions and education systems.
Getting this right requires clear governance, boundaries, age-appropriate safeguards, human oversight and sustained attention to how learning actually unfolds in practice.
How We Work With EdTech Teams
We support teams across three areas:
1. AI evaluations & behavioural risk assessments
We assess how learners and educators interact with your system: output quality, model drift over time, and where over-reliance, authority transfer, misuse or unintended behaviours emerge as systems scale.
2. UX / design refinement
We translate learning science and behavioural evidence into concrete design guidance (e.g., interaction patterns, autonomy boundaries, scaffolding strategies, safeguards and monitoring) that reduces risk while improving product value.
3. Regulatory alignment
We help teams align design and governance decisions with emerging regulations before these become compliance or procurement blockers.
If you’re building or deploying AI in education and want clarity on learning impact, behavioural risk or regulatory exposure before issues surface at scale, we’re happy to talk.
References
Artsın, M., & Bozkurt, A. (2025). Charting new horizons: What agentic artificial intelligence (AI) promises in the educational landscape. In EDULEARN25 proceedings (pp. 2019–2023). IATED Academy. https://doi.org/10.21125/edulearn.2025.0585
Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., & Sailer, M. (2025). Looking beyond the hype: Understanding the effects of AI on learning. Educational Psychology Review, 37, Article 45. https://doi.org/10.1007/s10648-025-10020-8
Burns, M., Winthrop, R., Luther, N., Venetis, E., & Karim, R. (2026). A new direction for students in an AI world: Prosper, prepare, protect. The Brookings Institution, Center for Universal Education. Retrieved from https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect
Delikoura, I., Fung, Y. R., & Hui, P. (2025). From superficial outputs to superficial learning: Risks of large language models in education. arXiv. https://doi.org/10.48550/arXiv.2509.21972
Hu, X., Xu, S., Tong, R., & Graesser, A. C. (2025). Generative AI in education: From foundational insights to the Socratic Playground for Learning. arXiv. https://doi.org/10.48550/arXiv.2501.06682
Jia, W., Pan, L., & Neary, S. (2025). Effect of GenAI dependency on university students’ academic achievement: The mediating role of self-efficacy and moderating role of perceived teacher caring. Behavioral Sciences, 15(10), 1348. https://doi.org/10.3390/bs15101348
Kamalov, F., Kumar, S., Hossain, M. S., & Ahmed, S. (2025). Evolution of AI in education: Agentic workflows. arXiv. https://arxiv.org/abs/2504.20082
Kostopoulos, G., Gkamas, V., Rigou, M., & Kotsiantis, S. (2025). Agentic AI in education: State of the art and future directions. IEEE Access, 13, 177467–177491. https://doi.org/10.1109/ACCESS.2025.3620473
Le, H., Shen, Y., Li, Z., Xia, M., Tang, L., Li, X., Jia, J., Wang, Q., Gašević, D., & Fan, Y. (2025). Breaking human dominance: Investigating learners’ preferences for learning feedback from generative AI and human tutors. British Journal of Educational Technology, 56, 1758–1783. https://doi.org/10.1111/bjet.13614
Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I. Z., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25) (pp. 1–22). ACM. https://doi.org/10.1145/3706598.3713778
Li, S., Liu, J., & Dong, Q. (2025). Generative artificial intelligence-supported programming education: Effects on learning performance, self-efficacy and processes. Australasian Journal of Educational Technology, 41(3), 1–25. https://doi.org/10.14742/ajet.9932
Liu, Y., Liu, Y., Zhang, X., Chen, X., & Yan, R. (2025). The truth becomes clearer through debate! Multi-agent systems with large language models unmask fake news. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’25) (pp. 504–514). Association for Computing Machinery. https://doi.org/10.1145/3726302.3730092
Lyu, Y., Ren, S., Feng, Y., Wang, Z., Chen, Z., Ren, Z., & de Rijke, M. (2025). Self-adaptive cognitive debiasing for large language models in decision-making. arXiv. https://arxiv.org/abs/2504.04141
Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: Guidance for policy-makers. UNESCO. https://doi.org/10.54675/PCSP7350
Reagan Panguraj, A. R. (2025). Agentic AI in inclusive learning: A framework for autonomous personalization across diverse learner populations. IJERET, 100–101. https://ijeret.org/index.php/ijeret/article/view/377
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333, 776–778. https://doi.org/10.1126/science.1207745
Tsim, F., & Gutoreva, A. (2025). SCAN: A Decision-Making Framework for Task Assignment with Generative AI [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/g5fd8_v1
UK Department for Education. (2026, January 19). Generative AI: product safety standards. GOV.UK. https://www.gov.uk/government/publications/generative-ai-product-safety-standards/generative-ai-product-safety-standards
UNESCO. (2023). Guidance for Generative AI in Education and Research (F. Miao & W. Holmes, Authors). UNESCO Publishing. https://doi.org/10.54675/EWZM9535
Xia, M., & Guo, S. (2025). Understanding learners’ perceptions of artificial intelligence-mediated Informal Digital Learning of English: A Q methodology approach. Acta Psychologica, 261, 105980. https://doi.org/10.1016/j.actpsy.2025.105980
Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8(10), 1839–1850. https://doi.org/10.1038/s41562-024-02004-5
Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11, Article 28. https://doi.org/10.1186/s40561-024-00316-7
Zhang, L., & Xu, J. (2025). The paradox of self-efficacy and technological dependence: Unraveling generative AI’s impact on university students’ task completion. The Internet and Higher Education, 65, Article 100978. https://doi.org/10.1016/j.iheduc.2024.100978