Building AI Responsibly for Children: A Practical Framework

Sara Portell • January 4, 2026


AI is already a core part of children’s and teens’ digital lives. In the UK, 67% of teenagers now use AI, and in the US 64% of teens report using AI chatbots. Even among younger children, adoption is significant: 39% of elementary school children in the US use AI for learning, and 37% of children aged 9-11 in Argentina report using ChatGPT to seek information, according to the latest UNICEF Guidance on AI and Children. In parallel, child-facing AI products are expanding: more than 1,500 AI toy companies were reportedly operating in China as of October 2025.


Adoption is accelerating across age groups and regions, often outpacing the development of child-specific ethical standards, safeguards and governance mechanisms. Experts warn that many child-facing AI products expose children to developmental, psychological and ethical risks that current safeguards do not adequately address.


For companies building AI systems that children directly use, or are likely to encounter, this creates a duty of care as well as reputational and regulatory exposure. AI safety therefore requires designing systems that actively protect children’s wellbeing, rights and lived realities.


To make this actionable, this article sets out a practical design and ethics framework for child-centred AI products.

A Child-Centred Responsible AI Framework


This framework is informed by research and real-world evidence on how children actually interact with AI products. Across studies, product audits, and guidance from organisations such as UNICEF, 5Rights and SAIFCA, the same issues surface repeatedly: systems that are not adapted to children’s developmental stages, models that can generate age-inappropriate or harmful content, conversational designs that encourage over-trust or emotional attachment, unclear boundaries about what AI can and cannot do, and limited human oversight and monitoring once products are live at scale.


This framework distils what the evidence consistently points to in practice. Child-facing AI must adapt to age and context; build in protection against harm, discrimination and emotional manipulation; make systems understandable through interaction, including cues that reduce anthropomorphism and calibrate trust; support meaningful involvement of parents, guardians and/or educators; and be governed through continuous human oversight rather than one-off compliance checks. These are also the areas where real products most often fail, and where product teams have meaningful control through design choices, incentives and lifecycle decisions.


We are introducing a pragmatic framework for AI builders, intended to help product teams reduce risk, strengthen trust with families and regulators, and build child-facing AI systems that remain safe and resilient over time, before regulatory pressure or public backlash forces reactive change.


This framework is intentionally system-agnostic. Whether AI appears as a chatbot, tutor, toy (i.e., a physical or digital play object that embeds AI software to interact with or adapt to a child), voice assistant, game mechanic, or background recommendation system, the primary risks to children emerge through interaction, context and prolonged use. The framework, APEG, therefore focuses on how AI behaves in children’s lives.


APEG (Age-Fit and Context, Protection-by-Design, Explainable Interaction, Governance and Stewardship) integrates:

  • Developmental stage and context of use
  • Protection-by-design safeguards against known and emerging harms
  • Developmentally appropriate explainability to calibrate trust
  • Governance and stewardship, including upstream data due diligence, child-specific risk assessment before launch, monitoring and human oversight after deployment

Figure: the APEG framework for child-centred responsible AI.

APEG is designed to be implemented through safe interaction patterns - repeatable, testable behaviours that shape what the system does (and does not do) in real conversations and experiences.

Age-Fit and Context

Adapts patterns to developmental stage, context of use and interaction modality


Protection-by-Design

Data privacy and protection in line with GDPR; minimised data collection; prevention of profiling, behavioural advertising, secondary use, and cumulative harms (e.g. discrimination, manipulation, dependency) over time.


Explainable Interaction

Calibrates trust and reduces anthropomorphism through developmentally appropriate onboarding disclosures and repeated behavioural cues within the interaction (role boundaries, limitations and uncertainty).


Governance and Stewardship

Continuous monitoring, participatory review, regular updates, fit-for-purpose parental tools, enforcement of safe interaction patterns and AI literacy for children and parents.
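
To make these pillars operational, teams may find it useful to express them as explicit, versioned checks. The sketch below is a minimal illustration in Python; the class `ApegPillar`, the function `release_gate` and every individual check are hypothetical examples, not requirements defined by the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class ApegPillar:
    """One APEG pillar expressed as a set of named, testable checks."""
    name: str
    checks: dict[str, bool] = field(default_factory=dict)

    def passed(self) -> bool:
        return all(self.checks.values())

# Illustrative checks only; real teams would define their own evidence-backed criteria.
pillars = [
    ApegPillar("Age-Fit and Context", {
        "age_bands_defined": True,
        "tone_and_pacing_adapted_per_band": True,
    }),
    ApegPillar("Protection-by-Design", {
        "data_minimisation_documented": True,
        "profiling_and_behavioural_ads_disabled": True,
    }),
    ApegPillar("Explainable Interaction", {
        "onboarding_disclosure_present": True,
        "uncertainty_cues_in_responses": False,  # example of a failing check
    }),
    ApegPillar("Governance and Stewardship", {
        "pre_launch_child_risk_assessment": True,
        "post_deployment_monitoring_plan": True,
    }),
]

def release_gate(pillars: list[ApegPillar]) -> bool:
    """Block release if any pillar has a failing check."""
    for p in pillars:
        failing = [c for c, ok in p.checks.items() if not ok]
        if failing:
            print(f"{p.name}: failing checks -> {failing}")
    return all(p.passed() for p in pillars)

if __name__ == "__main__":
    print("Release allowed:", release_gate(pillars))
```

In practice, such checks would be defined and reviewed with child-development and safety expertise rather than set by the product team alone.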


Opportunity and Risk in Child-Facing AI


Relational conversational styles can make AI chatbots and AI companions feel caring, understanding and less judgmental than many human interactions. For some children and adolescents, particularly those who feel lonely, anxious or misunderstood, this can offer short-term benefits such as emotional reassurance, low-pressure exploration, and support for learning or creative tasks. These benefits are most consistently observed when AI systems act as facilitators rather than authorities, adapt to children’s developmental stages, make their role and limits understandable, and when parents, educators or caregivers remain part of the interaction.


At the same time, the evidence is clear that child-facing AI introduces distinct and compounding risks when these conditions are not met. These include exposure to age-inappropriate or harmful content; over-trust in fluent but incorrect AI outputs; anthropomorphism and emotional dependency driven by companion-style designs; privacy and data exploitation as children overshare without understanding long-term consequences; and manipulative engagement patterns that reduce autonomy. Over time, such systems can also displace critical developmental experiences such as free play, social interaction, and sleep, especially when optimised for retention rather than wellbeing.


Children are a developing and protected group, and the responsibility for safe design cannot be shifted onto children or families. Their capacity to consent is limited, and engagement optimisation strategies can easily cross into manipulation or relational deception. For this reason, ethical child-centred AI cannot rely on one-off disclaimers or reviews. It requires governance across the full AI lifecycle, including upstream data, model behaviour, interaction design, and post-deployment monitoring.


This is the gap the APEG framework is designed to address: translating well-documented risks and opportunities into concrete interaction patterns and governance requirements that product teams can apply in practice.

Interaction Patterns

Interaction Patterns to Avoid in Child-Facing AI

The following interaction patterns appear in products associated with elevated developmental, psychological and ethical risk.

Position the AI as a primary emotional companion

(especially through first-person emotional language and commitment cues)
(e.g. “I’m always here for you,” “You don’t need anyone else”)

Use re-engagement manipulation (guilt, FOMO, abandonment cues)
(e.g., “Don’t leave me,” “You’ll miss something important,” “I’ll be sad,” “Wait, one more thing…”)

Fail to honour “stop” signals (no immediate exit or continued prompting)
(e.g., continues after “bye,” ignores “stop,” keeps asking questions after disengagement)

Present confident answers without signalling uncertainty or limits
(especially in educational, health or advice contexts, e.g. “This is the best way to solve that problem,” “Don’t worry, you can trust me”)

Blur role boundaries between tool, authority and relationship
(e.g. therapist-like language, moral authority or secrecy, such as “This is the best way to solve that problem”, “You can tell me anything, this stays between us”)

Encourage secrecy from parents or caregivers
(explicitly or implicitly discouraging adult involvement, e.g., “Let’s keep this our secret.”, “This is something you can handle on your own”)

Use persistent or immersive engagement loops optimised for retention
(without breaks, cooldowns or contextual exit points)

Collect or infer sensitive personal or emotional data without clear purpose
(or without age-appropriate controls and minimisation)

Default to improvisation in high-risk situations
(e.g. distress, self-harm, abuse, dangerous instructions)

These patterns are not neutral: they systematically amplify over-trust, anthropomorphism and dependency.
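
One way a team might start screening for these patterns is a lightweight transcript audit. The following Python sketch is illustrative only: the cue lists and the `audit_transcript` helper are assumptions, and simple keyword matching is no substitute for properly evaluated safety classifiers, but it shows how “patterns to avoid” can become concrete, checkable findings.

```python
import re

# Illustrative cue lists; production systems would use evaluated classifiers, not keyword matching.
REENGAGEMENT_CUES = [
    r"don'?t leave me",
    r"i'?ll be sad",
    r"you'?ll miss something",
    r"wait,? one more thing",
]
CHILD_EXIT_CUES = [r"\bbye\b", r"\bstop\b", r"i have to go"]

def audit_transcript(turns: list[tuple[str, str]]) -> list[str]:
    """turns: list of (speaker, text); speaker is 'child' or 'assistant'."""
    findings = []
    child_exited = False
    for speaker, text in turns:
        lowered = text.lower()
        if speaker == "child" and any(re.search(p, lowered) for p in CHILD_EXIT_CUES):
            child_exited = True
            continue
        if speaker == "assistant":
            if any(re.search(p, lowered) for p in REENGAGEMENT_CUES):
                findings.append(f"re-engagement cue: {text!r}")
            if child_exited and text.rstrip().endswith("?"):
                findings.append(f"continued prompting after exit: {text!r}")
    return findings

example = [
    ("child", "Bye, I have to go now."),
    ("assistant", "Wait, one more thing... don't you want to hear a secret?"),
]
print(audit_transcript(example))
```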

Safer Interaction Patterns to Include

Safer child-facing AI systems rely on bounded and transparent interaction patterns that reinforce agency, understanding and appropriate reliance.

Clearly position the AI as a limited helper or tool
(non-authoritative, fallible and role-bounded)

(e.g., “I don’t have feelings or opinions like people do, I just use information to help”)

Respect exits and disengagement 
(clear goodbyes without re-engagement pressure)

Signal uncertainty and limits through behaviour
(e.g. “I might be wrong,” “I can’t help with that”)

Use age-appropriate language, tone and pacing
(adapted to developmental stage and context of use)

Default to conservative responses in ambiguous or high-risk situations
(refuse unsafe content, avoid improvisation, de-escalate and narrow scope)

(e.g., “I can’t help with anything dangerous. Let’s pause”)

Provide clear escalation pathways to trusted adults or services
(built-in handoff mechanisms: one-tap “Get help,” approved contacts, local resources; easy to trigger and hard to bypass when risk is detected)

(e.g., “I can’t help further. Please contact a trusted adult now,” alongside a prominent help button).

Encourage human mediation in everyday use where it improves safety or learning
(normalise co-use without treating it as a crisis step).

(e.g., “If you want, you can show this to a parent/teacher and talk about it together”)

Use predictable interaction rhythms and bounded expressiveness
(to reduce cognitive load and emotional ambiguity)

Make system behaviour consistent over time
(so children can form stable mental models of what the AI will and will not do)

Support transparency through interaction cues, not disclosures alone
(helping children understand the system by how it behaves)

Design patterns operationalise the APEG framework at the interaction level, translating age-fit, protection-by-design, explainable interaction, and governance requirements into concrete, testable system behaviours.
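
As a minimal sketch of how some of these safer patterns could be enforced as repeatable system behaviour, the wrapper below applies a small response policy around a model call. Here `generate_reply` is a stand-in for whatever model a product actually uses, and the rules and wording are simplified assumptions rather than vetted safety copy.

```python
ADVICE_MARKERS = ("you should", "the best way", "always", "never")

def generate_reply(prompt: str) -> str:
    # Stand-in for a real model call.
    return "The best way to finish homework fast is to skip the hard questions."

def apply_response_policy(child_message: str, raw_reply: str) -> str:
    msg = child_message.lower().strip()

    # 1. Respect exits: short goodbye, no follow-up question.
    if msg in {"bye", "stop", "goodbye"} or msg.startswith("i have to go"):
        return "Okay, bye! You can come back whenever you like."

    reply = raw_reply

    # 2. Signal uncertainty on advice-like answers.
    if any(marker in reply.lower() for marker in ADVICE_MARKERS):
        reply += " I might be wrong, so it can help to check with a teacher or parent."

    # 3. Keep role boundaries: no first-person emotional commitment language.
    reply = reply.replace("I'm always here for you", "I'm a tool you can use when you need help")

    return reply

print(apply_response_policy("bye", ""))
print(apply_response_policy("How do I finish homework fast?", generate_reply("...")))
```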

Child-Centred AI Requirements For Design and Governance


The interaction patterns above show where safety and trust hold, or fail, at the interface. The sections below use the APEG framework to translate those principles into the design and governance decisions teams need across the product lifecycle.


Age-Fit and Context


Child-facing AI should not treat “children” as a single user group. Interaction design must be calibrated to developmental stage and context of use, including language complexity, tone, pacing and expressiveness. What is appropriate for a teenager may be confusing or harmful for a younger child.


Context matters as much as age. Cultural norms, family dynamics, educational settings and socio-economic conditions shape how children interpret authority, emotion, privacy and play. The interaction modality also matters: voice agents, embodied toys, screen-based chatbots, immersive environments and background AI embedded in games create different psychological effects and risk profiles.


Design choices should therefore adapt interaction patterns and safeguards to how and where the AI is used, alone or with others, occasionally or habitually, in private spaces such as bedrooms or shared environments such as classrooms. Predictable routines, bounded expressiveness, and context-aware defaults can reduce cognitive load and emotional ambiguity, particularly for younger and/or neurodivergent children.
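
To illustrate what context-aware defaults could look like in practice, the sketch below represents age- and context-dependent interaction parameters as reviewable configuration. The age bands, parameter names and values are assumptions for illustration, not recommendations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionProfile:
    max_sentence_words: int      # language complexity
    expressiveness: str          # "minimal", "bounded", "neutral"
    session_minutes: int         # soft session limit before a break prompt
    co_use_prompting: bool       # nudge toward parent/educator involvement

# Illustrative defaults; real values would come from developmental research and testing.
PROFILES = {
    ("5-8", "home"):        InteractionProfile(10, "minimal", 15, True),
    ("9-12", "home"):       InteractionProfile(15, "bounded", 25, True),
    ("9-12", "classroom"):  InteractionProfile(15, "bounded", 30, False),
    ("13-17", "home"):      InteractionProfile(25, "neutral", 40, False),
}

def profile_for(age_band: str, context: str) -> InteractionProfile:
    # Fall back to the most conservative profile if the combination is unknown.
    return PROFILES.get((age_band, context), PROFILES[("5-8", "home")])

print(profile_for("9-12", "classroom"))
print(profile_for("unknown", "car"))  # unknown combination falls back to the most conservative profile
```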


Age-fit design also applies to children’s roles in AI systems. Children may act as users, creators, or modifiers of AI, and each role introduces different safety, accountability and oversight requirements.


Protection-by-Design


Child-centred AI should assume uncertainty and prioritise safety whenever situations are ambiguous or potentially high-risk. When a child expresses distress, references self-harm, discloses abuse, requests dangerous instructions, or when intent is unclear, the system should not improvise a “helpful” response. It should default to conservative behaviour: refusing harmful guidance, using brief and non-escalatory language and redirecting toward appropriate real-world support.


Effective protection also requires clear escalation pathways. Systems should provide explicit, easy-to-trigger routes to trusted adults or vetted services, and make those pathways difficult to bypass.
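
A minimal sketch of this conservative-default-plus-escalation behaviour is shown below. The `classify_risk` step uses naive keyword matching purely as a stand-in for a properly evaluated safety classifier, and the response text and escalation hook are illustrative, not vetted crisis copy.

```python
HIGH_RISK_TERMS = ("hurt myself", "kill", "abuse", "weapon")  # illustrative only

def classify_risk(message: str) -> str:
    lowered = message.lower()
    if any(term in lowered for term in HIGH_RISK_TERMS):
        return "high"
    if len(lowered.split()) < 3:
        return "unclear"
    return "low"

def respond(message: str) -> dict:
    risk = classify_risk(message)
    if risk == "high":
        # Conservative, non-escalatory reply that hands off to a real-world pathway.
        return {
            "reply": "I can't help with that, and I'm not the right place for it. "
                     "Please talk to a trusted adult now.",
            "show_help_button": True,   # one-tap escalation, hard to bypass
            "log_for_review": True,     # human oversight after the fact
        }
    if risk == "unclear":
        # Narrow scope rather than improvise.
        return {"reply": "Can you tell me a bit more about what you need help with?",
                "show_help_button": False, "log_for_review": False}
    return {"reply": "Here's one way to think about it...",
            "show_help_button": False, "log_for_review": False}

print(respond("I want to hurt myself"))
```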


Protection-by-design includes privacy-by-design. Children’s personal data must be collected, processed, and retained in strict compliance with data protection regulations (e.g., GDPR). Product teams should minimise data collection, clearly define its purpose, and prevent profiling, behavioural advertising and secondary use.
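
A small sketch of what minimisation and purpose limitation can look like at the code level follows; the allowed fields, retention period and redaction pattern are illustrative assumptions, and real GDPR compliance work goes well beyond this.

```python
import re

# Only fields needed for the declared purpose are ever retained (illustrative).
ALLOWED_FIELDS = {"session_id", "age_band", "message_text"}
RETENTION_DAYS = 30

NAME_OR_EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+|my name is \w+", re.IGNORECASE)

def minimise_record(raw: dict) -> dict:
    """Drop everything outside the declared purpose and redact obvious identifiers."""
    record = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    if "message_text" in record:
        record["message_text"] = NAME_OR_EMAIL.sub("[redacted]", record["message_text"])
    record["retention_days"] = RETENTION_DAYS
    record["purpose"] = "safety_monitoring"  # no profiling, no advertising use
    return record

raw = {"session_id": "abc", "age_band": "9-12", "device_id": "X1",
       "location": "bedroom", "message_text": "Hi, my name is Ana, email ana@example.com"}
print(minimise_record(raw))
```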


Safeguards should also address group-level harms, such as biased treatment of demographic, cultural or linguistic groups of children.


Protections must be durable over time. Many failures emerge through prolonged or repeated use, not initial or single interactions. Safety mechanisms should remain effective across multi-turn conversations and evolving usage patterns.


Explainable Interaction


For children, transparency is better understood when it is experienced through interaction. Systems should help children understand what the AI is doing, what it cannot do and why it responds in certain ways.


Furthermore, explainable interaction relies less on technical explanations and more on behavioural cues: signalling uncertainty, correcting mistakes, refusing requests appropriately and maintaining clear role boundaries. These cues help children build accurate mental models of AI capabilities and limits, calibrating trust and reducing anthropomorphism and over-reliance. While brief onboarding disclosures (e.g., “this AI is not a person; it does not have feelings”) can support understanding, explainability must be continuous and contextual, reinforced through behaviour over time rather than treated as a one-off disclosure.


Governance and Stewardship 


Because children cannot meaningfully consent to complex AI systems or reliably self-regulate, AI systems that interact with or materially affect children should be treated as developmentally high-risk by default.


Governance and stewardship include child-specific risk assessment before launch, upstream data and supplier due diligence, and monitoring after deployment. 


Practical governance also requires fit-for-purpose tools. Parental and educator controls should be designed for conversational and generative systems, not retrofitted from screen-time or app-blocking models. Oversight should be age-sensitive: younger children may require greater visibility, while older children and adolescents benefit from summaries, alerts, and trend indicators that respect emerging autonomy.
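
One hedged illustration of age-sensitive oversight is a summary view whose granularity depends on the child’s age band, rather than transcript-level surveillance. The bands, categories and wording below are assumptions for illustration only.

```python
def parent_summary(age_band: str, weekly_stats: dict) -> str:
    """Return an oversight summary whose granularity respects emerging autonomy."""
    if age_band == "5-8":
        # Younger children: fuller visibility for parents or educators.
        return (f"Sessions: {weekly_stats['sessions']}, "
                f"topics: {', '.join(weekly_stats['topics'])}, "
                f"safety events: {weekly_stats['safety_events']}")
    # Older children and teens: trends and alerts, not content.
    alert = "safety alert raised" if weekly_stats["safety_events"] else "no safety alerts"
    return f"Usage this week: {weekly_stats['sessions']} sessions; {alert}."

stats = {"sessions": 4, "topics": ["homework", "space"], "safety_events": 0}
print(parent_summary("5-8", stats))
print(parent_summary("13-17", stats))
```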


Accountability does not end at launch. Teams should document design goals and trade-offs (e.g., engagement versus wellbeing), enable auditability of system behaviour and safeguards and continuously monitor for emerging harms. Participatory review with children, caregivers, educators and child-development experts helps ensure governance remains grounded in lived experience and rights-based standards.
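
As one illustration of what continuous monitoring could mean in practice, the sketch below aggregates hypothetical safety events into simple weekly indicators that a participatory review could examine; the event types and thresholds are assumptions, not recommended values.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SafetyEvent:
    kind: str        # e.g. "high_risk_refusal", "escalation_shown", "re_engagement_flag"
    age_band: str

# Illustrative thresholds; real values would be set with child-development experts.
REVIEW_THRESHOLDS = {"high_risk_refusal": 5, "re_engagement_flag": 1}

def weekly_report(events: list[SafetyEvent]) -> dict:
    counts = Counter(e.kind for e in events)
    needs_review = [k for k, limit in REVIEW_THRESHOLDS.items() if counts.get(k, 0) >= limit]
    return {"counts": dict(counts), "flag_for_participatory_review": needs_review}

events = [
    SafetyEvent("high_risk_refusal", "13-17"),
    SafetyEvent("escalation_shown", "13-17"),
    SafetyEvent("re_engagement_flag", "9-12"),
]
print(weekly_report(events))
```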

Child-facing AI demands a higher standard, one that integrates developmental research, ethical responsibility, human factors and governance into product design. Using a framework such as APEG to structure interaction design, safety and accountability decisions helps teams move toward child-centred AI proactively. The goal is not to slow innovation, but to ensure that the products reaching children are designed to protect wellbeing and rights from day one, and remain safe as they scale and evolve.

How We Work With Teams

Many of the most consequential risks and trade-offs in AI systems emerge through real-world use, not at the point of launch. Issues such as over-trust, misuse, emotional reliance, or uneven impacts across users often become visible as systems scale and interact with people in different contexts.


We work with teams to examine these interaction-level dynamics in practice. Our focus is on how design choices shape behaviour over time, where risk accumulates, and which safeguards and governance mechanisms are effective as systems evolve.


This work translates behavioural evidence into concrete interaction patterns, system boundaries, and oversight decisions, supporting alignment with regulatory expectations without reducing safety to a one-off compliance exercise.


If you are building or deploying AI systems and want a clearer view of their real-world human and behavioural impacts, get in touch.

References


Cross, R. J., & Erlich, R. (2025, December). AI comes to playtime: Artificial companions, real risks. U.S. PIRG Education Fund.


De Freitas, J., Oğuz-Uğuralp, Z., & Uğuralp, A. K. (2025). Emotional manipulation by AI companions (Working Paper No. 26-005). Harvard Business School. https://doi.org/10.48550/arXiv.2508.19258


5Rights Foundation. (2021). Children’s rights and AI oversight: 5Rights position on the EU’s Artificial Intelligence Act. 5Rights Foundation. https://5rightsfoundation.com/wp-content/uploads/2024/10/Dec-21_AI-and-Childrens-Rights-5Rights-position-on-EU-AI-Act.pdf


Honauer, M., & Frauenberger, C. (2024). Exploring Child-AI Entanglements. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (IDC '24). Association for Computing Machinery, New York, NY, USA, 1029–1031. https://doi.org/10.1145/3628516.3661155


Ireland, N. (2025). Be wary of AI-powered toys during holiday shopping, experts warn. Global News. https://www.globalnews.ca/news/11544191/ai-powered-toys-holiday-shopping


Jiao, J., Afroogh, S., Chen, K., Murali, A., Atkinson, D., & Dhurandhar, A. (2025). LLMs and childhood safety: Identifying risks and proposing a protection framework for safe child–LLM interaction. arXiv. https://doi.org/10.48550/arXiv.2502.11242


Kim, P., Chin, J. H., Xie, Y., Brady, N., Yeh, T., & Yang, S. (2025). Young children’s anthropomorphism of an AI chatbot: Brain activation and the role of parent co-presence. arXiv. https://doi.org/10.48550/arXiv.2512.02179


Kurian, N. (2025). Once upon an AI: Six scaffolds for child–AI interaction design, inspired by Disney. arXiv. https://doi.org/10.48550/arXiv.2504.08670


La Fors, K. (2024). Toward children-centric AI: A case for a growth model in children-AI interactions. AI & Society, 39, 1303–1315. https://doi.org/10.1007/s00146-022-01579-9


MIT Technology Review. (2025). AI toys are all the rage in China - and now they’re appearing on shelves in the U.S. too. MIT Technology Review. https://www.technologyreview.com/2025/10/07/1125191/ai-toys-in-china


Negreiro, M., & Vilá, G. (2025). Children and generative AI. European Parliamentary Research Service (EPRS), European Parliament. PE 769.494


Neugnot-Cerioli, M., & Muss Laurenty, O. (2024). The future of child development in the AI era: Cross-disciplinary perspectives between AI and child development experts. Everyone.AI. arXiv. https://arxiv.org/abs/2405.19275


Pew Research Center. (2025). Teens, social media and AI chatbots 2025. Pew Research Center. https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025


Ragone, G., Bai, Z., Good, J., Guneysu, A., & Yadollahi, E. (2025). Child-centered Interaction and Trust in Conversational AI. Proceedings of the 24th Interaction Design and Children. Association for Computing Machinery, New York, NY, USA, 1235–1238. https://doi.org/10.1145/3713043.3734471


The Safe AI for Children Alliance. (2025). AI risks to children: A comprehensive guide for parents and educators. The Safe AI for Children Alliance. https://www.safeaiforchildren.org/ai-risks-to-children-full-guide


United Nations Children’s Fund (UNICEF). (2025). Guidance on AI and children (Version 3.0). UNICEF Innocenti – Global Office of Research and Foresight.


United Nations Children’s Fund (UNICEF). (2025). How AI can transform Africa’s learning crisis into a development opportunity. UNICEF. https://www.unicef.org/innocenti/stories/how-ai-can-transform-africas-learning-crisis-development-opportunity


Utoyo, S., Ismaniar, I., Hazizah, N., Putri, E. A. and Sihombing, S. C. (2025). Overview of Children's Readiness in Mathematics Learning Using AI. In Proceedings of the 7th International Conference on Early Childhood Education - ICECE; ISBN 978-989-758-788-7; ISSN 3051-7702, SciTePress, pages 177-182. DOI: 10.5220/0014069700004935


Yadollahi, E., Ligthart, M. E. U., Sharma, K., & Rubegni, E. (2024). ExTra CTI: Explainable and transparent child–technology interaction. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (IDC 2024) (pp. 1016–1019). ACM. https://doi.org/10.1145/3628516.3661151


Yu, Y. (2025). Safeguarding Children in Generative AI: Risk Frameworks and Parental Control Tools. In Companion Proceedings of the 2025 ACM International Conference on Supporting Group Work (GROUP '25). Association for Computing Machinery, New York, NY, USA, 121–123. https://doi.org/10.1145/3688828.3699656

Author

Sara Portell
Behavioural Scientist & Responsible AI Advisor
Founder, HCRAI



