Research & Insights

By Yasmina El Fassi
•
February 19, 2026
What do Woebot, Wysa and Youper have in common? These are all AI agents that use therapeutic techniques to help users improve mental well-being, guide meditation and even help with managing anxiety. In this article, AI mental-health agents are goal-directed conversational systems that sit with you in a chat or voice interface to support specific wellbeing tasks; for example, walking through CBT-style exercises, practicing coping strategies, or checking in on mood over time. In the broader AI literature, these would be considered agents because they are built around particular goals and workflows, whereas “agentic” AI usually refers to more autonomous systems that can independently plan multi-step actions, call tools, and adapt their behaviour with relatively little human steering.

By Sara Portell
•
February 6, 2026
Design systems were built to scale consistency, efficiency and quality in user-centric applications: reusable components, shared patterns and practices, and a common language across design and engineering, promoting collaboration. They improve velocity because teams stop solving the same interface problems repeatedly, providing measurable ROI. AI introduces both immense opportunities and complex (technical, legal and social) challenges, and it is reshaping the operating conditions traditional design systems were built for. User-facing outputs are adaptive and can vary by input, model behaviour can shift over time, and responses that sound credible can still be wrong. These systems can also reproduce or amplify bias, creating unequal outcomes across users. In high-confidence, relational interactions, they can shape user judgment and behaviour. These shifts raise the bar for accountability, transparency and governance across the full product lifecycle. The challenge is not only consistency and quality. It is ensuring consistency and quality safely, fairly and responsibly as both system behaviour and human behaviour evolve. At the same time, AI-powered copilots and no-code tools are increasingly used in the design process to support ideation, prototyping and delivery, but their adoption also raises concerns about transparency, bias, privacy, and the need to preserve human judgment and oversight. Fast, polished design outputs often look complete even when the underlying logic is incomplete or flawed. As a result, familiar UX failures (misalignment with real user needs, hidden edge cases and context breakdowns) become harder to detect and more costly to correct later. Design systems can take on a bigger operational role in AI-enabled product development by codifying user-centric foundations, rules and infrastructure that guide consistent, safe, ethical and scalable human-AI experiences.

By Sara Portell
•
January 21, 2026
Generative AI (GenAI) and emerging agentic systems are moving AI into the learning process itself. These systems don’t stop at delivering content. They explain, adapt, remember and guide learners through tasks. In doing so, they change where cognitive effort sits, i.e., what learners do themselves and what gets delegated to machines. This shift unlocks significant opportunities. GenAI can provide on-demand explanations, examples and feedback at scale. It can diversify learning resources through multimodal content, support learners working in a second language and reduce friction when students get stuck, lowering barriers to engagement and persistence. For some learners, AI-mediated feedback can feel psychologically safer, encouraging experimentation (trial and error), revision and requests for help without fear of judgement. But these gains come with important risks. The same design choices that improve short-term performance, confidence or engagement can weaken independent reasoning, distort social development or introduce hidden dependencies over time. In educational contexts, especially those involving children and teens, we are talking about learning, safeguarding, regulatory and reputational risks. If the “Google effect” (digital amnesia) raised concerns about outsourcing memory to search engines, LLMs can be even more powerful in practice.

By Sara Portell
•
January 4, 2026
AI is already a core part of children’s and teens’ digital lives. In the UK, 67% of teenagers now use AI, and in the US 64% of teens report using AI chatbots. Even among younger children, adoption is significant: 39% of elementary school children in the US use AI for learning, and 37% of children aged 9-11 in Argentina report using ChatGPT to seek information, as stated in the latest UNICEF Guidance on AI and Children. In parallel, child-facing AI products are expanding: more than 1,500 AI toy companies were reportedly operating in China as of October 2025. Adoption is accelerating across age groups and regions, often surpassing the development of child-specific ethical standards, safeguards and governance mechanisms.
