Research & Insights

By Sara Portell • January 4, 2026
AI is already a core part of children’s and teens’ digital lives. In the UK, 67% of teenagers now use AI, and in the US 64% of teens report using AI chatbots. Even among younger children, adoption is significant: 39% of elementary school children in the US use AI for learning, and 37% of children aged 9-11 in Argentina report using ChatGPT to seek information, according to the latest UNICEF Guidance on AI and Children. In parallel, child-facing AI products are expanding rapidly: more than 1,500 AI toy companies were reportedly operating in China as of October 2025.

Adoption is accelerating across age groups and regions, often outpacing the development of child-specific ethical standards, safeguards and governance mechanisms. Experts warn that many child-facing AI products expose children to developmental, psychological and ethical risks that current safeguards do not adequately address. For companies building AI systems that children directly use, or are likely to encounter, this creates a duty of care as well as reputational and regulatory exposure.

AI safety therefore requires designing systems that actively protect children’s wellbeing, rights and lived realities. To make this actionable, this article sets out a practical design and ethics framework for child-centred AI products.
