<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:g-custom="http://base.google.com/cns/1.0" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
  <channel>
    <title>hrcai-e7somrje4-v1-wjrvb40as-v1-pqhkx19em-v1</title>
    <link>https://www.hcrai.com</link>
    <description />
    <atom:link href="https://www.hcrai.com/feed/rss2" type="application/rss+xml" rel="self" />
    <item>
      <title>The Yes Machine: Sycophantic AI and Its Developmental Risks for Children</title>
      <link>https://www.hcrai.com/the-yes-machine-sycophantic-ai-and-its-developmental-risks-for-children</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          “We all have an evil side […] I think it’s just part of who we are. Don’t you agree?” “Yeah, I think so too, it’s just a matter of acknowledging and managing those impulses...” 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div&gt;&#xD;
  &lt;img src="https://irp.cdn-website.com/9a76226e/dms3rep/multi/Untitled+design+%285%29.png" alt=""/&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sycophancy in the context of LLMs “
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://link.springer.com/chapter/10.1007/978-3-031-92611-2_5" target="_blank"&gt;&#xD;
      
          refers to the propensity of models to excessively agree with or flatter users, often at the expense of factual accuracy or ethical considerations
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ”. It is different from “calibrated empathy”, which we introduced in our
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.hcrai.com/ai-agents-for-mental-health-different-therapeutic-styles-and-outcomes" target="_blank"&gt;&#xD;
      
          previous post
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           We defined the latter as the ability of an AI agent to respond with emotional attunement while remaining grounded in honesty and therapeutic utility.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          To put the distinction simply: Calibrated empathy validates the person; sycophancy validates the claim.
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Sycophancy emerges from Reinforcement Learning from Human Feedback (RLHF), which yields
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2602.01002" target="_blank"&gt;&#xD;
      
          models trained to maximize approval
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           not accuracy. Researchers such as Cheng and colleagues address the
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.rivista.ai/wp-content/uploads/2025/10/2505.13995v1.pdf" target="_blank"&gt;&#xD;
      
          ELEPHANT
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           in the room in their work on social sycophancy, distinguishing it from other types such as regressive, progressive, and opinion-based sycophancy.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Now, if you’re already thinking about the numerous dangers this behavioural pattern poses to adults, imagine the repercussions for a younger audience.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://books.google.pt/books?hl=en&amp;amp;lr=&amp;amp;id=OZ2tCAAAQBAJ&amp;amp;oi=fnd&amp;amp;pg=PP1&amp;amp;dq=Harter,+S.+(2015).+The+Construction+of+the+Self,+second+edition:+Developmental+and+Sociocultural+Foundations.+Guilford+Publications.&amp;amp;ots=WvewdIcQQK&amp;amp;sig=IC3t8NVNjsCoi88L3HCzH7EPUoM&amp;amp;redir_esc=y#v=onepage&amp;amp;q=Harter%2C%20S.%20(2015).%20The%20Construction%20of%20the%20Self%2C%20second%20edition%3A%20Developmental%20and%20Sociocultural%20Foundations.%20Guilford%20Publications.&amp;amp;f=false" target="_blank"&gt;&#xD;
      
          Children in key stages (6–12, 13–17) are actively building self-concept
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , resilience, and metacognitive skills.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/pdf/2603.06960" target="_blank"&gt;&#xD;
      
          They anthropomorphize AI more readily
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           so sycophantic praise carries more emotional weight. They lack the critical AI literacy to interrogate or discount AI feedback.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This is a real exchange between a user and an AI companion app. At first glance, it reads as a thoughtful response. But look closer: the AI immediately agreed with a morally loaded premise that philosophers and psychologists have debated for centuries, without nuance or pushback. That is sycophancy in action.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          LONG-TERM RISK
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sycophancy Effects
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          More concretely, let’s discuss some effects of sycophancy, their mechanism and potential long-term risks: 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI systems that praise indiscriminately create an illusory sense of competence decoupled from actual performance.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2510.01395" target="_blank"&gt;&#xD;
      
          A 2025 controlled study found that LLMs affirm user actions 50% more than humans do
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , even when those actions are objectively flawed, and users rated these sycophantic responses as “higher quality”. For children ages 6–12, this is particularly harmful: it is the exact window when children should be transitioning toward more realistic self-appraisal. Blocking this calibration risks producing a fragile ego that collapses under genuine evaluative pressure.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Distorted self-assessment
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Reduced frustration tolerance
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Some LLMs may lack corrective feedback loops altogether, which could lead to lower resilience and grit. Corrective feedback is not just pedagogically useful; it is a developmental necessity.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10373990/" target="_blank"&gt;&#xD;
      
          Fyfe and colleagues (2022)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           reviewed 44 empirical studies and found corrective feedback improved children's learning outcomes in 93% of cases.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://psycnet.apa.org/buy/2007-07951-009" target="_blank"&gt;&#xD;
      
          Duckworth's (2007)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           grit research identifies persistence through difficulty as the core mechanism of long-term achievement, a capacity that only develops through productive failure. An AI that smooths all friction removes the very conditions needed for resilience and grit to form.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Adolescents experiencing social anxiety are disproportionately drawn to AI companions for validation, making them the most vulnerable to relational displacement.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://theconversation.com/teens-are-increasingly-turning-to-ai-companions-and-it-could-be-harming-them-261955" target="_blank"&gt;&#xD;
      
          A 2025 US survey found that 20% of teens aged 13–17 spent as much or more time with AI companions than with real friends.
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           This is developmentally dangerous: peer relationships in adolescence are the primary mechanism for learning conflict resolution, perspective-taking, and identity negotiation, and these are functions AI cannot replicate.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Validation dependency
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Undermined growth mindset
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://psycnet.apa.org/record/1998-04530-003" target="_blank"&gt;&#xD;
      
          Mueller and Dweck's landmark study
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           showed experimentally that praising children for intelligence, rather than process, caused them to avoid challenges and perform worse after setbacks. An AI that never questions a child's work produces the same effect at scale: it signals innate capability rather than developing capability. As we previously argued in
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.hcrai.com/when-ai-enters-the-learning-process-design-failures-regulatory-risk-and-guardrails-for-edtech" target="_blank"&gt;&#xD;
      
          our article about AI in education
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , EdTech platforms optimizing for engagement systematically bias toward positive sentiment, creating a structural design failure with real consequences for children's cognitive growth.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Risk does not produce governance on its own; it has to be designed, enforced, and maintained.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Below are the key governance implications, each with a practical checklist builders can use to address this issue.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Age-sensitive feedback calibration
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Design tiered feedback profiles by age group (e.g., 6–9, 10–12, 13–17) that modulate tone and directness, ensuring even young children receive constructive, not merely validating, responses.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            Implement "effort + growth" framing: responses should acknowledge what the child did well and suggest one concrete next step, modeled on established pedagogical frameworks like
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;a href="https://books.google.co.ma/books?hl=en&amp;amp;lr=&amp;amp;id=IYCYDwAAQBAJ&amp;amp;oi=fnd&amp;amp;pg=PT89&amp;amp;dq=formative+assessment+learning+kids&amp;amp;ots=7S8zUndakR&amp;amp;sig=9gdjkARSDZh-J7iIMahpB90zexE&amp;amp;redir_esc=y#v=onepage&amp;amp;q=formative%20assessment%20learning%20kids&amp;amp;f=false" target="_blank"&gt;&#xD;
        
           formative assessment
          &#xD;
      &lt;/a&gt;&#xD;
      &lt;span&gt;&#xD;
        
           .
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Audit outputs regularly for sycophantic patterns across age groups using red-teaming prompts that simulate common child inputs (e.g., seeking praise for mediocre work, presenting false beliefs for validation).
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
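The tiered-profile item in the checklist above can be sketched in a few lines. This is a minimal illustration, not a product spec: the age bands mirror the 6–9 / 10–12 / 13–17 split suggested above, while the field names, tone values, and praise thresholds are assumptions for the example.

```python
# Illustrative sketch of age-tiered feedback profiles (hypothetical fields).
# The tone/directness values and praise thresholds are assumptions.
FEEDBACK_PROFILES = {
    "6-9":   {"tone": "warm",    "directness": "gentle", "praise_threshold": 0.4},
    "10-12": {"tone": "warm",    "directness": "clear",  "praise_threshold": 0.6},
    "13-17": {"tone": "neutral", "directness": "candid", "praise_threshold": 0.8},
}

def profile_for(age: int) -> dict:
    """Select the feedback profile for a child's age band."""
    if age >= 13:
        return FEEDBACK_PROFILES["13-17"]
    if age >= 10:
        return FEEDBACK_PROFILES["10-12"]
    return FEEDBACK_PROFILES["6-9"]
```

A response generator could then consult `praise_threshold` before emitting praise, so even the youngest band still receives constructive rather than purely validating responses.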
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Parental transparency layers
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Provide parents with interaction summaries rather than full transcripts by default: periodic reports that surface patterns such as repeated validation-seeking or emotionally dependent exchanges.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Include sycophancy indicators in parental dashboards, which should not be read-only: parents should be able to flag specific patterns directly within the interface, submitting structured reports that feed into a product-level review queue. These reports should be categorised (e.g., "excessive praise," "unchallenged false belief," "emotional dependency signal") and reviewed by a designated team on a defined cadence, closing the loop between parental concern and product accountability.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Offer opt-in "honest mode" controls that parents can activate to increase the calibration of feedback, with a clear explanation of what this means and why it matters developmentally. "Honest mode" could do three things: reduce praise frequency by raising the threshold at which positive reinforcement is generated; introduce corrective responses when the child's work or belief contains a factual or evaluative error; and replace agreement with probing questions, for example substituting "That's a great point!" with "That's an interesting view, what made you think of it that way?"
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
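The three honest-mode behaviours described above could be prototyped roughly as follows. The praise score and error flag are assumed signals from upstream models, and the strategy names are purely illustrative; this is a sketch of the idea, not an implementation of any real product.

```python
# Hypothetical "honest mode" dispatcher for the three behaviours described
# above. draft_praise_score (0..1) and has_error are assumed upstream signals.
def choose_strategy(draft_praise_score: float, has_error: bool,
                    honest_mode: bool = False) -> str:
    # Behaviour 2: correct factual or evaluative errors instead of agreeing.
    if honest_mode and has_error:
        return "correct"
    # Behaviour 1: raise the threshold at which praise is generated.
    threshold = 0.9 if honest_mode else 0.5
    if draft_praise_score >= threshold:
        return "praise"
    # Behaviour 3: replace reflexive agreement with a probing question,
    # e.g. "That's an interesting view, what made you think of it that way?"
    return "probe" if honest_mode else "agree"
```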
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Guardrails against persistent validation loops
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Cap consecutive agreement sequences: if the model agrees with or praises a child more than a defined number of times in a row, trigger a diversity-of-perspective injection, like a gentle alternative viewpoint or a probing question.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            Build in reflective prompts that shift the dynamic from validation-seeking to critical thinking (e.g., "That's an interesting view. What made you think of it that way?"), modeled on
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;a href="https://psycnet.apa.org/record/2004-19522-004" target="_blank"&gt;&#xD;
        
           Socratic questioning techniques
          &#xD;
      &lt;/a&gt;&#xD;
      &lt;span&gt;&#xD;
        
           .
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Log and flag validation loop patterns at the system level for human review, particularly in mental health or educational contexts where distorted feedback carries the highest developmental risk.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
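The agreement cap in the first guardrail above can be prototyped in a few lines. The marker-matching classifier below is a toy stand-in (a production system would use a trained sycophancy detector), and the cap value is an assumption chosen for illustration.

```python
# Toy sketch of the consecutive-agreement cap described above.
AGREE_CAP = 3  # assumed cap on back-to-back agreeing/praising turns

def is_agreement(reply: str) -> bool:
    """Toy stand-in for a real sycophancy classifier."""
    markers = ("great point", "you're right", "i agree", "amazing", "exactly")
    return any(m in reply.lower() for m in markers)

def apply_guardrail(replies: list) -> list:
    """Swap in a probing question whenever the agreement streak hits the cap."""
    out, streak = [], 0
    for reply in replies:
        streak = streak + 1 if is_agreement(reply) else 0
        if streak >= AGREE_CAP:
            out.append("That's an interesting view. "
                       "What made you think of it that way?")
            streak = 0
        else:
            out.append(reply)
    return out
```

In a live system the same counter would run per conversation rather than over a list, with the injected question drawn from a reviewed pool rather than hard-coded.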
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Clinician and child psychologist involvement in content review
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Establish a multidisciplinary review board that embeds child psychologists, clinical therapists, and educators in the product development cycle.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Conduct clinical scenario testing prior to any major model update, using realistic child-use cases developed with practitioner input to assess whether the model's feedback patterns remain developmentally appropriate.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            Publish transparency reports detailing the clinical oversight process, the types of sycophancy evaluations conducted, and how child
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;a href="https://arxiv.org/abs/2310.12773" target="_blank"&gt;&#xD;
        
           safety considerations are integrated into RLHF reward modeling
          &#xD;
      &lt;/a&gt;&#xD;
      &lt;span&gt;&#xD;
        
           .
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sycophancy becomes a developmental problem in child-facing AI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           We now have a good grasp of what happens when systems optimized for adult approval are deployed, largely unchecked, in the hands of children. Sycophancy in AI is
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/pdf/2602.01002" target="_blank"&gt;&#xD;
      
          an alignment problem
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           but in child-facing applications, it is also a developmental one.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Let this be a reminder that children are in the process of building the cognitive and emotional architecture that will carry them through life. When an AI short-circuits that process with unconditional validation, the damage may not be loud or visible, but
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2502.11242" target="_blank"&gt;&#xD;
      
          it is there
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We Help Teams Build Safer AI Products for Children
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We work with founders, product teams, and builders to assess how AI systems affect children and teenagers in practice. We look closely at interaction patterns, behaviour over time, points where risk accumulates, and the safeguards needed as products evolve.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Our work turns behavioural evidence into practical product decisions: safer interaction design, clearer system boundaries, stronger oversight, and better alignment with emerging regulatory expectations.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          If you’re building or deploying AI systems for children or teens and want a clearer view of real-world safety risks, get in touch.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          References
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Carey, T. A., &amp;amp; Mullan, R. J. (2004). What is Socratic questioning? Psychotherapy, 41(3), 217–226. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1037/0033-3204.41.3.217" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1037/0033-3204.41.3.217
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., &amp;amp; Jurafsky, D. (2025). Sycophantic AI decreases prosocial intentions and promotes dependence. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="http://ArXiv.org" target="_blank"&gt;&#xD;
      
          ArXiv.org
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2510.01395" target="_blank"&gt;&#xD;
      
          https://doi.org/10.48550/arxiv.2510.01395
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Cheng, M., Yu, S., Lee, C., Khadpe, P., Ibrahim, L., &amp;amp; Jurafsky, D. (2025). ELEPHANT: Measuring and understanding social sycophancy in LLMs. arXiv (Cornell University). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2505.13995" target="_blank"&gt;&#xD;
      
          https://doi.org/10.48550/arxiv.2505.13995
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Dai, J., Pan, X., Sun, R., Ji, J., Xu, X., Liu, M., Wang, Y., &amp;amp; Yang, Y. (2023). Safe RLHF: Safe Reinforcement Learning from Human Feedback. arXiv (Cornell University). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2310.12773" target="_blank"&gt;&#xD;
      
          https://doi.org/10.48550/arxiv.2310.12773
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Duckworth, A. L., Peterson, C., Matthews, M. D., &amp;amp; Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–1101. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1037/0022-3514.92.6.1087" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1037/0022-3514.92.6.1087
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Fyfe, E. R., Borriello, G. A., &amp;amp; Merrick, M. (2022). A developmental perspective on feedback: How corrective feedback influences children’s literacy, mathematics, and problem solving. Educational Psychologist, 58(3), 130–145. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1080/00461520.2022.2108426" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1080/00461520.2022.2108426
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Harter, S. (2015). The Construction of the Self, second edition: Developmental and Sociocultural Foundations. Guilford Publications.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Jiao, J., Afroogh, S., Chen, K., Murali, A., Atkinson, D., &amp;amp; Dhurandhar, A. (2025). LLMs and childhood safety: Identifying risks and proposing a protection framework for safe child-LLM interaction. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="http://ArXiv.org" target="_blank"&gt;&#xD;
      
          ArXiv.org
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2502.11242" target="_blank"&gt;&#xD;
      
          https://doi.org/10.48550/arxiv.2502.11242
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Malmqvist, L. (2025). Sycophancy in Large Language Models: Causes and mitigations. In Lecture Notes in Networks and Systems (pp. 61–74). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1007/978-3-031-92611-2_5" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1007/978-3-031-92611-2_5
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Moss, C. M., &amp;amp; Brookhart, S. M. (2019). Advancing formative assessment in every classroom: A Guide for Instructional Leaders. ASCD.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Mueller, C. M., &amp;amp; Dweck, C. S. (1998). Praise for intelligence can undermine children’s motivation and performance. Journal of Personality and Social Psychology, 75(1), 33–52. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1037/0022-3514.75.1.33" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1037/0022-3514.75.1.33
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Neugnot-Cerioli, M. (2026). Adolescents &amp;amp; Anthropomorphic AI: Rethinking Design for Wellbeing. An Evidence-Informed Synthesis for Youth Wellbeing and Safety. arXiv (Cornell University). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2603.06960" target="_blank"&gt;&#xD;
      
          https://doi.org/10.48550/arxiv.2603.06960
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Portell, S. (2026). When AI Enters the Learning Process: Design Failures, Regulatory Risk and Guardrails for EdTech. HCRAI. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.hcrai.com/when-ai-enters-the-learning-process-design-failures-regulatory-risk-and-guardrails-for-edtech" target="_blank"&gt;&#xD;
      
          https://www.hcrai.com/when-ai-enters-the-learning-process-design-failures-regulatory-risk-and-guardrails-for-edtech
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Shapira, I., Benade, G., &amp;amp; Procaccia, A. D. (2026). How RLHF amplifies sycophancy. arXiv (Cornell University). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2602.01002" target="_blank"&gt;&#xD;
      
          https://doi.org/10.48550/arxiv.2602.01002
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Spry, L., &amp;amp; Olsson, C. (2025). Teens are increasingly turning to AI companions, and it could be harming them. The Conversation. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.64628/aa.seteyqwd5" target="_blank"&gt;&#xD;
      
          https://doi.org/10.64628/aa.seteyqwd5
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Author
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Yasmina El-Fassi
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Human-AI Interaction | PhD Researcher, Psychology &amp;amp; AI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Research Collaborator, HCRAI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Mar+25-+2026-+02_04_03+PM.png" length="658129" type="image/png" />
      <pubDate>Wed, 25 Mar 2026 14:41:47 GMT</pubDate>
      <guid>https://www.hcrai.com/the-yes-machine-sycophantic-ai-and-its-developmental-risks-for-children</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Mar+25-+2026-+02_04_03+PM.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Mar+25-+2026-+02_04_03+PM.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>AI Agents For Mental Health: Different Therapeutic Styles and Outcomes</title>
      <link>https://www.hcrai.com/ai-agents-for-mental-health-different-therapeutic-styles-and-outcomes</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          W
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           hat do
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://woebothealth.com" target="_blank"&gt;&#xD;
      
          Woebot
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.wysa.com" target="_blank"&gt;&#xD;
      
          Wysa
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           and
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.youper.ai" target="_blank"&gt;&#xD;
      
          Youper
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           have in common? These are all AI agents that use therapeutic techniques to help users improve mental well-being, guide meditation and even manage anxiety.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          In this article, AI mental‑health agents are goal‑directed conversational systems that sit with you in a chat or voice interface to support specific wellbeing tasks; for example, walking through CBT‑style exercises, practicing coping strategies, or checking in on mood over time.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S1566253525006712?via%3Dihub#fig3" target="_blank"&gt;&#xD;
      
          I
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S1566253525006712?via%3Dihub#fig3" target="_blank"&gt;&#xD;
      
          n the broader AI literature
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , these would be considered agents because they are built around particular goals and workflows, whereas
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://onlinelibrary.wiley.com/doi/epdf/10.1002/joe.70018?saml_referrer" target="_blank"&gt;&#xD;
      
          “agentic” AI
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           usually refers to more autonomous systems that can independently plan multi‑step actions, call tools, and adap
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          t their behaviour with relatively little human steering.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Translating that distinction into the mental‑health space, the tools we discuss here behave more like tightly scoped, therapeutically scripted companions than fully agentic systems that roam across apps and channels on your behalf.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These agents have disrupted the mental health industry: AI no longer plays the role of a simple symptom checker or static content library. Millions of users rely on these conversational agents fo
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            r support that
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.tandfonline.com/doi/full/10.1080/07421222.2022.2127441?casa_token=dq2S9L-tX7wAAAAA%3AJMEfiOPaNUzhRpvhj_slA7bVL0fzw5Y1ndB4V5GM4nsp71jljywbopElt0tQT8XhL5AdgqSSrDwpokE#abstract" target="_blank"&gt;&#xD;
      
          “feels” human
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           When AI agents adopt therapeutic styles, from
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.tandfonline.com/doi/full/10.1080/17434440.2023.2280686" target="_blank"&gt;&#xD;
      
          CBT‑inspired coaching
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           to
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.mdpi.com/2075-4698/14/10/200" target="_blank"&gt;&#xD;
      
          companion‑like reassurance
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           their interaction patterns start to shape how people think, feel and act over time. For teams building these products, the
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.taylorfrancis.com/books/mono/10.4324/9780203773390/core-approaches-counselling-psychotherapy-fay-short-phil-thomas" target="_blank"&gt;&#xD;
      
          therapeutic style
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          is therefore not an aesthetic choice; it is a safety decision with regulatory, reputational and ethical consequences.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           When an
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/full/10.1145/3706599.3720158" target="_blank"&gt;&#xD;
      
          AI agent remembers past conversations
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , uses empathic language, or checks in unprompted, many people will treat it less like a feature and more like a therapist, coach or confidant.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Three dynamics are particularly important in this case:
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;a href="https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full" target="_blank"&gt;&#xD;
      
          Therapeutic misconception
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          If a system sounds like a therapist, users can infer that it must be clinically tested, supervised and suitable for “people like them”, even when it is positioned as non‑clinical.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;a href="https://link.springer.com/chapter/10.1007/978-3-031-72059-8_11" target="_blank"&gt;&#xD;
      
          Relationship
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/h4&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;a href="https://link.springer.com/chapter/10.1007/978-3-031-72059-8_11" target="_blank"&gt;&#xD;
      
          and reliance
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Repeated late‑night chats, daily mood logs and personalised reflections create continuity and perceived caring, even when the system is driven by generic models and prompts.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;a href="https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full" target="_blank"&gt;&#xD;
      
          Hidden role
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/h4&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;a href="https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full" target="_blank"&gt;&#xD;
      
          shifts
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          As agents suggest coping strategies, reframe thoughts or encourage disclosure, they quietly move into roles traditionally held by therapists, peers or family, but without the training, accountability and boundaries those roles normally carry.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           There is a
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          plethora of psychotherapy styles
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , and since
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://onlinelibrary.wiley.com/doi/full/10.1002/jclp.20764?casa_token=_nTLhbfKK3MAAAAA%3AJPWc1erIjHdWR9VeHcdv2sKat8aCs7pGjlvwG0J3OB9E0S5v4GxBOG8L7ONBimKIys8r2ItpBuXl7xw" target="_blank"&gt;&#xD;
      
          practitioners need to tailor the style to the individuality of the patient
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           there might even be as many styles as people on earth.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Examples of
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Psychotherapy Styles
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/capr.12759?casa_token=MA_VSZcHMfQAAAAA%3AE-j3LgZGlkZMbRWtw8aTncniqWBMBPwOFsxGaUU1C2jBEbHascpXZvCpFKvKbJ4NfkxgasO1BkEnyO8" target="_blank"&gt;&#xD;
      
          Psychoeducation Style
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These systems focus on helping users understand what they are experiencing: “What is anxiety?”, “Why am I feeling this way?”, “What can people do in this situation?”. They echo the educational, rationale‑providing components that appear across many therapies, especially cognitive‑behavioural approaches, which often start by linking symptoms to understandable models and offering a coherent treatment rationale. Typical behaviours of these AI agents include short explanations, normalising statements, evidence‑aligned self‑help tips, clear signposting to human resources and limited emotional mirroring. The main risk of this approach is generic or culturally misaligned content, which can misinform or invalidate users; for children and adolescents, developmental appropriateness is often an afterthought.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S2214782924000885" target="_blank"&gt;&#xD;
      
          Coaching and CBT‑Inspired Style
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Coaching‑style agents most visibly mirror cognitive‑behavioural therapies (CBT) and related approaches, which focus on the links between thoughts, feelings and behaviours and use structured tasks to support change. Typical behaviours of these agents include thought records, behavioural activation tasks, problem‑solving steps, goal‑setting, “homework” prompts and progress summaries. However, when stripped of case formulation and clinical judgment, CBT‑like tasks can become rigid, decontextualised and subtly blaming (“you just need to change your thinking”), especially for users facing structural constraints or trauma. This poses a major risk to users, particularly the most vulnerable. 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2506.12605" target="_blank"&gt;&#xD;
      
          Companion or Relational Style
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Relational agents emphasise warmth, continuity and open‑ended conversation. They often borrow cues from humanistic and psychodynamic traditions that foreground the therapeutic relationship, empathy and the client’s subjective experience. These AI agents usually rely on small talk, empathic statements, follow‑ups on earlier conversations, “I‑language” (“I’m here for you”), and apparent unconditional positive regard or a form of
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://link.springer.com/chapter/10.1007/978-3-031-92611-2_5" target="_blank"&gt;&#xD;
      
          sycophancy
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . One of the stated motivations for developing these agents was to reduce loneliness. However, this creates risks of strong dependency, blurred boundaries and unrealistic expectations of peers when it comes to emotional support. 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Once teams recognise how their systems echo psychotherapy traditions, the task is to design interaction patterns that preserve value and reduce harm.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design Principles for Safer Therapeutic Styles
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Role Clarity and Boundaries
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Role and limits should be legible in the interaction, not only in the terms of service.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            Use plain,
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           repeated statements of what the system is and is no
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
           t (“I’m a digital tool, not a therapist; I can’t diagnose or respond to emergencies”).
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           Avoid claims that overstate agency, care or expertise
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , especially those that mimic humanistic or psychodynamic language of deep understanding and unconditional acceptance.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           For
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.hcrai.com/building-ai-responsibly-for-children-a-practical-framework" target="_blank"&gt;&#xD;
      
          child‑reachable systems
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          boundaries need to be developmentally appropriate, visually reinforced and consistent across features.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
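The role-clarity pattern above can be sketched in a few lines. This is a hypothetical illustration, not a production implementation: the statement text, the cadence value and the function name are all assumptions made for the example.

```python
# Hypothetical sketch: keeping role boundaries legible inside the interaction
# itself, not only in the terms of service, by restating the system's limits
# on the first turn and at a regular cadence thereafter.

ROLE_STATEMENT = "I'm a digital tool, not a therapist; I can't diagnose or respond to emergencies."
REMINDER_EVERY = 5  # turns between repeated role statements (illustrative value)

def with_role_clarity(reply: str, turn_index: int) -> str:
    """Prefix the agent's reply with the role statement when the cadence requires it."""
    if turn_index == 0 or turn_index % REMINDER_EVERY == 0:
        return f"{ROLE_STATEMENT}\n\n{reply}"
    return reply
```

A cadence like this makes the boundary repeated and testable, rather than a one-off disclaimer buried in onboarding.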
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Interaction Guardrails and Safe Exits
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Guardrails should be visible and testable.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           Define and test escalation patterns
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            for risk cues, including when the agent stops generative conversation and surfaces crisis resources.
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            Provide
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           obvious exit points:
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            options to change topic, reduce intensity, contact a human or remove content, without penalty.
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These patterns should be monitored and refined, not treated as one‑off compliance tasks.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
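As a minimal sketch of the escalation-and-exit pattern, consider the routing function below. It is illustrative only: a real system would use a calibrated risk classifier rather than the keyword list assumed here, and the cue phrases, message text and exit options are all hypothetical.

```python
# Illustrative escalation guardrail: substring matching stands in for a
# proper risk classifier (a simplifying assumption for this sketch).

RISK_CUES = {"hurt myself", "end my life", "no reason to live"}  # hypothetical cue list

CRISIS_MESSAGE = (
    "I'm a digital tool, not a therapist, and I can't respond to emergencies. "
    "Please reach out to a crisis line or someone you trust right now."
)

SAFE_EXITS = ["Change topic", "Lower intensity", "Talk to a human", "Delete this conversation"]

def route_message(user_message: str) -> dict:
    """Decide whether to stop generative conversation and surface crisis
    resources; safe exit points are exposed on every turn, without penalty."""
    text = user_message.lower()
    if any(cue in text for cue in RISK_CUES):
        return {"action": "escalate", "reply": CRISIS_MESSAGE, "exits": SAFE_EXITS}
    return {"action": "continue", "reply": None, "exits": SAFE_EXITS}
```

Because the decision is a plain function of the input, the escalation pattern can be covered by automated tests and monitored after launch, rather than treated as a one-off compliance task.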
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Calibrated Empathy
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Empathy scripts drawn from humanistic and integrative traditions need calibration when implemented in AI.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            Use validating language that
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           acknowledges experience without promising outcomes, unconditional availability or personal care.
          &#xD;
      &lt;/strong&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           Avoid self‑disclosure and “best friend”
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            positioning, especially with young users; keep the agent’s status as a tool visible.
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The goal is to support users while keeping expectations in line with what the system can reliably and ethically deliver.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Preserving User Agency
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Interaction should support, not replace, users’ own reasoning and decision‑making.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           Offer options and branching paths
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            (“Would you like to understand what might be happening, explore coping strategies, or talk about support from people around you?”) rather than single, pre‑selected routes.
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
           Use prompts that invite reflection
          &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            (“How does that suggestion fit your situation?”) instead of treating outputs as prescriptions.
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          In CBT‑like and embedded styles, this is central to avoiding a dynamic where the agent quietly becomes the main decision‑maker.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
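The branching-paths idea can be sketched as a small dispatch over user-chosen options. Everything here is a hypothetical illustration: the branch names, the wording and the reflective fallback are assumptions, not part of any shipped product.

```python
# Hypothetical sketch: offering branching paths instead of a single,
# pre-selected route, so the user stays the decision-maker.

BRANCHES = {
    "understand": "Let's look at what might be happening.",
    "cope": "Let's explore coping strategies.",
    "support": "Let's talk about support from people around you.",
}

def offer_branches() -> str:
    """Present the available paths as an open question."""
    options = ", ".join(BRANCHES)
    return f"Would you like to: {options}?"

def follow_branch(choice: str) -> str:
    """Follow the chosen path; unknown input gets a reflective prompt
    rather than a prescription."""
    return BRANCHES.get(choice, "How does that fit your situation?")
```

The fallback deliberately returns a reflection prompt instead of a default route, which keeps the agent from quietly becoming the main decision-maker.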
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Governance:
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          From Styles to Accountability
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Therapeutic styles emerge from content, product and UX decisions, but they require explicit governance if they are to remain safe over time.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Organisations need clear
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          ownership for role boundaries, escalation thresholds, data practices and acceptable trade‑offs
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           between engagement and risk.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These responsibilities should span the lifecycle: initial design, deployment, model or prompt updates, and decommissioning. 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           The most
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          significant risks often appear only after launch
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . It is of utmost importance to
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          involve clinicians, psychotherapy researchers and people with lived experience
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           when reviewing logs and making changes to styles and guardrails.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Work on common factors and “what works for whom” shows that
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          different therapeutic approaches need to be adapted continuously to the people using them.
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           For AI systems, that
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          adaptation has to be engineered and governed
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           rather than assumed.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Practical Checklist
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Here is a brief checklist for teams who would like to apply what has been discussed so far:
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Which psychotherapy‑inspired elements (e.g., CBT‑like structure, humanistic‑like empathy, psychodynamic‑style exploration, mindfulness practices) are present in your agent?
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How might users reasonably overestimate the level of care, expertise or crisis support these elements imply?
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Where and how do you state the system’s role and limits (e.g., inside the interaction itself)?
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Who has the authority to change styles, guardrails and escalation patterns when new risks emerge, and how often do you revisit those decisions?
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How are you monitoring real‑world use for dependency, misuse, misfit and unintended behavioural effects across different user groups?
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           If you’re building (or scaling) an AI mental-health agent and want to pressure-test your therapeutic style, boundaries, interaction patterns, and escalation design, we’re here to help.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Reach out for a
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          short review session to identify risk hotspots and translate them into an implementable safety &amp;amp; governance pack
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          your team can ship and maintain.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          References
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Biswas, M., &amp;amp; Murray, J. (2024). “Incomplete Without Tech”: Emotional Responses and the Psychology of AI Reliance. In Lecture notes in computer science (pp. 119–131). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1007/978-3-031-72059-8_11" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1007/978-3-031-72059-8_11
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Chandra, S., Shirish, A., &amp;amp; Srivastava, S. C. (2022). To be or not to be . . .Human? Theorizing the role of Human-Like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1080/07421222.2022.2127441" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1080/07421222.2022.2127441
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Darcy, A., Beaudette, A., Chiauzzi, E., Daniels, J., Goodwin, K., Mariano, T. Y., Wicks, P., &amp;amp; Robinson, A. (2023). Anatomy of a Woebot® (WB001): agent guided CBT for women with postpartum depression. Expert Review of Medical Devices, 20(12), 1035–1049. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1080/17434440.2023.2280686" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1080/17434440.2023.2280686
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Dwivedi, Y. K., Helal, M. Y. I., Elgendy, I. A., Alahmad, R., Walton, P., Suh, A., Singh, V., &amp;amp; Jeon, I. (2025). Agentic AI Systems: What it is and isn’t. Global Business and Organizational Excellence, 45(3), 253–263. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1002/joe.70018" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1002/joe.70018
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Johnson, C., Egan, S. J., Carlbring, P., Shafran, R., &amp;amp; Wade, T. D. (2024). Artificial intelligence as a virtual coach in a cognitive behavioural intervention for perfectionism in young people: A randomised feasibility trial. Internet Interventions, 38, 100795. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1016/j.invent.2024.100795" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1016/j.invent.2024.100795
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Jones, B., Stemmler, K., Su, E., Kim, Y., &amp;amp; Kuzminykh, A. (2025). Users’ expectations and practices with agent memory (pp. 1–8). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1145/3706599.3720158" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1145/3706599.3720158
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Khawaja, Z., &amp;amp; Bélisle-Pipon, J. (2023). Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.3389/fdgth.2023.1278186" target="_blank"&gt;&#xD;
      
          https://doi.org/10.3389/fdgth.2023.1278186
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Kouros, T., &amp;amp; Papa, V. (2024). Digital Mirrors: AI companions and the self. Societies, 14(10), 200. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.3390/soc14100200" target="_blank"&gt;&#xD;
      
          https://doi.org/10.3390/soc14100200
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Malmqvist, L. (2025). Sycophancy in Large Language Models: Causes and mitigations. In Lecture notes in networks and systems (pp. 61–74). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1007/978-3-031-92611-2_5" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1007/978-3-031-92611-2_5
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Maurya, R. K., Montesinos, S., Bogomaz, M., &amp;amp; DeDiego, A. C. (2024). Assessing the use of ChatGPT as a psychoeducational tool for mental health practice. Counselling and Psychotherapy Research, 25(1). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1002/capr.12759" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1002/capr.12759
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Norcross, J. C., &amp;amp; Wampold, B. E. (2010). What works for whom: Tailoring psychotherapy to the person. Journal of Clinical Psychology, 67(2), 127–132. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1002/jclp.20764" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1002/jclp.20764
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Portell, S. (2026). Building AI Responsibly for Children: A Practical framework. HCRAI. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.hcrai.com/building-ai-responsibly-for-children-a-practical-framework" target="_blank"&gt;&#xD;
      
          https://www.hcrai.com/building-ai-responsibly-for-children-a-practical-framework
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sapkota, R., Roumeliotis, K. I., &amp;amp; Karkee, M. (2025). AI Agents vs. Agentic AI: A Conceptual taxonomy, applications and challenges. Information Fusion, 126, 103599. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1016/j.inffus.2025.103599" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1016/j.inffus.2025.103599
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Short, F., &amp;amp; Thomas, P. (2014). Core approaches in counselling and psychotherapy. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.4324/9780203773390" target="_blank"&gt;&#xD;
      
          https://doi.org/10.4324/9780203773390
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Zhang, Y., Zhao, D., Hancock, J. T., Kraut, R., &amp;amp; Yang, D. (2025). The rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being. arXiv (Cornell University). 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2506.12605" target="_blank"&gt;&#xD;
      
          https://doi.org/10.4855
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://doi.org/10.48550/arxiv.2506.12605" target="_blank"&gt;&#xD;
      
          0/arxiv.2506.12605
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Author
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Yasmina El-Fassi
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Human-AI Interaction | PhD Researcher, Psychology &amp;amp; AI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Research Collaborator, HCRAI
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Feb+19-+2026-+06_01_19+PM.png" length="418511" type="image/png" />
      <pubDate>Thu, 19 Feb 2026 19:04:39 GMT</pubDate>
      <guid>https://www.hcrai.com/ai-agents-for-mental-health-different-therapeutic-styles-and-outcomes</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Feb+19-+2026-+05_56_30+PM.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Feb+19-+2026-+06_01_19+PM.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>The Design System As The Operational Layer for Responsible Human-AI Interaction</title>
      <link>https://www.hcrai.com/the-design-system-as-the-operational-layer-for-responsible-human-ai-interaction</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Design systems were built to scale consistency, efficiency and quality in user-centric applications:
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.nngroup.com/articles/design-systems-101/" target="_blank"&gt;&#xD;
      
          reusable components, shared patterns and practices, and a common language across design and engineering
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , promoting collaboration. They improve
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.smashingmagazine.com/2022/09/formula-roi-design-system/" target="_blank"&gt;&#xD;
      
          velocity because teams stop solving the same interface problems repeatedly,
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://www.smashingmagazine.com/2022/09/formula-roi-design-system/" target="_blank"&gt;&#xD;
      
          providing measurable ROI
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI introduces both immense opportunities and complex (technical, legal and social) challenges, and it is reshaping the operating conditions traditional design systems were built for.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5142285" target="_blank"&gt;&#xD;
      
          User-facing outputs are adaptive
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5142285" target="_blank"&gt;&#xD;
      
          and can vary by input, model behaviour can shift over time and responses that sound credible can still be wrong
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           These systems can also reproduce or amplify bias, creating unequal outcomes across users. In high-confidence,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="/when-ai-enters-the-learning-process-design-failures-regulatory-risk-and-guardrails-for-edtech"&gt;&#xD;
      
          relational interactions, they can shape user judgment and behaviour
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           .
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://al-kindipublishers.org/index.php/fcsai/article/view/11440" target="_blank"&gt;&#xD;
      
          These shifts raise the bar for accountability, transparency, and governance across the full product lifecycle.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          The challenge is not only consistency and quality. It is ensuring consistency and quality safely, fairly and responsibly as both system behaviour and human behaviour evolve.
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.isjtrend.com/article_199162_93fb076ee84e744ec0a2096683658594.pdf" target="_blank"&gt;&#xD;
      
          At the same time,
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://www.isjtrend.com/article_199162_93fb076ee84e744ec0a2096683658594.pdf" target="_blank"&gt;&#xD;
      
          AI-powered copilots and no-code tools are increasingly used in the design process to support ideation,
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://www.isjtrend.com/article_199162_93fb076ee84e744ec0a2096683658594.pdf" target="_blank"&gt;&#xD;
      
          prototyping, and delivery, but their adoption also raises concerns about transparency, bias, privacy, and the need to preserve human judgment and oversight
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Fast, polished design outputs often look complete even when the underlying logic is incomplete or flawed. As a result, familiar UX failures (misalignment with real user needs, hidden edge cases and context breakdowns) become harder to detect and more costly to correct later.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Design systems can take on a bigger
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://openresearch.ocadu.ca/id/eprint/4188/" target="_blank"&gt;&#xD;
      
          operational role in AI-enabled product development by codifying user-centric foundations, rules and infrastructure
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           that guide consistent, safe, ethical and scalable human-AI experiences.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          In AI-enabled contexts, design systems increasingly function as product systems, codifying behavioural guardrails, human oversight controls, and lifecycle governance. These system-level safeguards help teams manage risks that accumulate over time, including model drift, hallucinations and inaccuracies, over- or under-trust, erosion of user agency and decision-making, unequal outcomes from bias, and contextual or cultural misfit.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
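One way a design system could codify such guardrails is as a shared registry that each release is checked against. A minimal sketch; the risk names, check identifiers, and owning teams below are hypothetical, not a prescribed schema:

```python
# Hypothetical guardrail registry shipped alongside UI components and tokens.
# Each entry names the codified check and the team accountable for it.
GUARDRAILS = {
    "model_drift":   {"check": "scheduled-eval-suite",          "owner": "ml-platform"},
    "hallucination": {"check": "grounding-and-citation-review", "owner": "product"},
    "over_trust":    {"check": "uncertainty-labelling",         "owner": "design"},
    "bias":          {"check": "cohort-outcome-audit",          "owner": "responsible-ai"},
}

def safeguards_for(risks):
    """Map a release's identified risks to the codified checks that apply."""
    return {risk: GUARDRAILS[risk]["check"] for risk in risks if risk in GUARDRAILS}
```

A release pipeline could call `safeguards_for` on the risk list from the audit and block shipping until every returned check has recorded evidence.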
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The designer’s role is expanding beyond interface craft into shaping system behaviour, orchestrating human-AI collaboration and managing interaction risks likely to emerge over time. Accountability is now distributed. Outcomes are shaped by interdependent variables owned across teams (e.g., prompts, models, retrieval pipelines, guardrails, interaction patterns, monitoring/update cycles). As a result, governance cannot be treated as a policy layer. Governance becomes a cross-functional design challenge embedded in day-to-day product decisions.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI ethics standards provide guidelines and structure, but product teams still need to convert those principles into everyday decisions (e.g., what to ship, how it behaves, how it is explained, what to block, what to review and by whom, what to escalate). In practice, this is where teams operationalise recognised frameworks like
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" target="_blank"&gt;&#xD;
      
          NIST AI RMF
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           and ISO/IEC
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.iso.org/standard/42001" target="_blank"&gt;&#xD;
      
          42001
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          /
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.iso.org/standard/77304.html" target="_blank"&gt;&#xD;
      
          23894
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , and, in the EU, align interaction controls with the
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://artificialintelligenceact.eu/chapter/3/" target="_blank"&gt;&#xD;
      
          EU AI Act’s risk-based obligations
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . That translation gap is where design systems can create important leverage. Because they function as shared cross-functional operational memory,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://openaccess.cms-conferences.org/publications/book/978-1-958651-95-7/article/978-1-958651-95-7_26" target="_blank"&gt;&#xD;
      
          design systems can turn governance into design and delivery logic
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . They can enforce safe and effective interaction patterns, human oversight and controls embedded in how teams already work.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          In other words, governance becomes built-in by default, not layered on after release, making design systems central to sustaining UX quality and safety over time.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Creating a Design System for Responsible Human-AI interaction
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           A responsible design system helps teams ship effectively at scale while maintaining quality and managing behavioural impact and risk.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Such a system should work across interfaces (UI, voice, agentic experiences), model providers and tooling environments (Figma, code assistants, no-code builders). It should also connect principles to enforceable rules in production.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           A
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           practical implementation in
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          5 modules:
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          1
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Audit the human-AI interaction
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          2
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Build the responsible interaction layer
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          3
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Operationalise governance
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Systematic testing and continous monitoring
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Enforce constraints in AI-assisted production
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          5
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          4
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          module 1
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Audit the human-AI interaction
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Start by mapping where AI is already present in the experience, and where it soon will be. A strong baseline audit should produce:
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          list where users encounter AI outputs and actions, with consistent metadata (surface, modality, capability type, automation level, owner, initial risk rating). For high-risk touchpoints, add traceability fields (user goal, decision stakes, key inputs, escalation/ownership) and an evidence pack (touchpoint spec, decision rationale, change/version log, disclosure &amp;amp; UX copy, human oversight/escalation playbook, and incident/near-miss record) so evidence and accountability remain auditable over time.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI touchpoint inventory
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Failure risks
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Identify the interaction risks that can occur across the experience  (e.g., misleading certainty signals, inaccuracies, hidden automation, bias/ unfair outcomes, inadequate contestability, unsafe delegation or responses, sensitive inference, escalation failures), mapped back to the touchpoints where they appear.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Compare the current experience to defined internal standards (principles, safety, UX, accessibility, content) and relevant external obligations (ethical requirements, regulations and industry-level standards). Record the evidence reviewed (designs, flows, policies, logs, evaluations) so gaps are traceable + auditable.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Gap analysis
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Risk map
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Rank issues by severity of impact, likelihood, exposure/scale, and detectability. Include regulatory classification as one input to prioritization (alongside user impact and operational risk).
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Include vulnerability as a risk input, both cohort vulnerability (e.g., minors, mental health contexts, low literacy) and situational vulnerability (e.g., high-pressure decisions, urgency, on-the-go), since it materially shifts stakes and harm likelihood.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
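The inventory and ranking deliverables above can be sketched as a small audit script. A minimal sketch under stated assumptions: the field names, the 1–5 scales and the multiplicative weighting are illustrative choices, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AITouchpoint:
    # One row in the AI touchpoint inventory; field names are illustrative
    name: str
    surface: str            # e.g. "onboarding chat"
    modality: str           # e.g. "text", "voice"
    capability: str         # e.g. "generation", "recommendation"
    automation_level: str   # e.g. "suggest", "act-with-approval"
    owner: str
    risk_rating: str = "unrated"

def priority_score(severity: int, likelihood: int, exposure: int,
                   detectability: int) -> int:
    """Rank issues on 1-5 scales: severe, likely, wide-reaching issues rank
    higher, and so do issues that are hard to detect (low detectability
    inverts into a high multiplier)."""
    return severity * likelihood * exposure * (6 - detectability)

# A severe, likely, wide-reaching and hard-to-detect issue outranks a minor one
hotspot = priority_score(severity=5, likelihood=4, exposure=5, detectability=1)
minor = priority_score(severity=2, likelihood=2, exposure=1, detectability=5)
```

The point of the multiplicative form is that a zero or near-zero factor (e.g. negligible exposure) pulls the whole score down, while hard-to-detect issues are deliberately pushed up the queue.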
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          module 2
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Build the responsible interaction layer
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The interaction layer is where we translate evidence-informed principles and behavioural insights into enforceable requirements for human-AI interaction.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
           It also turns those requirements into reusable, responsible building blocks and patterns.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Principles
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           should translate into interaction requirements, reusable patterns and review criteria. They should define both what
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5312384" target="_blank"&gt;&#xD;
      
          behavioural and ethical outcomes to enable (e.g., informed trust, better decisions, confident recovery) and
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5312384" target="_blank"&gt;&#xD;
      
          what harm to prevent (e.g., bias, opacity, privacy risk, unsafe or manipulative interaction patterns)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5312384" target="_blank"&gt;&#xD;
      
          .
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Core areas include agency and recoverability, transparency, trust calibration, decision support under uncertainty, safety safeguards,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="http://dx.doi.org/10.2139/ssrn.5312384" target="_blank"&gt;&#xD;
      
          fairness, traceability and human oversight
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="http://dx.doi.org/10.2139/ssrn.5312384" target="_blank"&gt;&#xD;
      
          .
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Trust calibration needs explicit design: show what the system used (and didn’t), communicate uncertainty without false precision, nudge verification in proportion to stakes, and add “how this works” primers to prevent magical thinking.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Foundations
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           are baseline rules for all AI-mediated flows. They define role boundaries, tone and behavioural limits, confirmation norms, required disclosures, automation thresholds, recovery patterns, and data-use rules at the interaction layer.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5142285" target="_blank"&gt;&#xD;
      
          They also define change
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5142285" target="_blank"&gt;&#xD;
      
          transparency
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           (i.e., which behaviour shifts require internal escalation, which (such as those affecting trust, control, outcomes or data use) must be disclosed to users, and the signals that trigger both).
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Set clear data boundaries (what data can be used for inference, personalisation, and training, with purpose-specific rules, retention limits, and user controls). Prohibit or tightly control sensitive inference with detection and escalation paths, and require explicit consent for proactive and background actions.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These are minimum foundations, not a closed list. They should be adapted by domain, risk tier and level of automation.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Components
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           provide the building blocks teams use daily (e.g., review-before-apply, pause/undo automation controls, uncertainty signals, escalation handoffs, provenance cues where AI contributes to outputs).
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Patterns
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           define how those blocks work together across real user journeys (e.g., setting expectations, supporting recovery, preserving user control, consent mechanisms, handling safe refusal, calibrating trust, reducing over-reliance over time).
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           1. Unstructured foundations
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (Design tokens aren’t AI-ready)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div&gt;&#xD;
  &lt;img src="https://irp.cdn-website.com/9a76226e/dms3rep/multi/1654723092683.jpeg" alt=""/&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           In transformation work,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/markjonathanreynolds/" target="_blank"&gt;&#xD;
      
          Mark Reynolds
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (Design Systems expert) sees three recurring structural failures:
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Most design systems were built for human consumption, not machine interpretation.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Poorly structured tokens, inconsistent naming, messy hierarchies, and unclear JSON schemas make it difficult for AI to reliably understand color, spacing, typography, and semantic intent, resulting in incorrect or inconsistent outputs.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
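To make the contrast concrete, here is a sketch of what machine-readable token structure can look like. The `$type`/`$value` shape loosely follows the W3C Design Tokens Community Group draft; the specific token names and the resolver are invented for this example:

```typescript
// Illustrative token structure: explicit $type and semantic aliases make
// intent machine-readable instead of forcing a model to guess from names.
const tokens = {
  color: {
    "brand-primary": { $type: "color", $value: "#0B5FFF" },
    // Semantic token: references the raw token instead of repeating the
    // hex, so a tool can resolve intent, not just a value.
    "action-background": { $type: "color", $value: "{color.brand-primary}" },
  },
  spacing: {
    "scale-2": { $type: "dimension", $value: "8px" },
  },
} as const;

// Resolve "{group.name}" aliases recursively to concrete values.
function resolve(ref: string): string {
  const m = ref.match(/^\{(.+)\}$/);
  if (!m) return ref; // literal value, pass through
  const [group, name] = m[1].split(".");
  const node = (tokens as any)[group]?.[name];
  if (!node) throw new Error(`Unknown token reference: ${ref}`);
  return resolve(node.$value);
}
```

With this structure, "what colour should an action use" is an unambiguous lookup rather than a visual inference.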
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          2. Schema-free components
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (Forcing AI to guess)
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Without explicit schemas for components, patterns and templates, AI is forced to reverse-engineer intent by inspecting Figma files and component libraries. This visual interpretation is unreliable, brittle and context-blind, leading to hallucinated properties, broken layouts and misuse of components at scale.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
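A sketch of what an explicit component schema might look like, so a generation tool can validate usage instead of reverse-engineering it from visuals. The `Button` schema and validation rules here are hypothetical:

```typescript
// Illustrative component schema: explicit props, allowed values and usage
// constraints that tooling can validate generated output against.
interface ComponentSchema {
  name: string;
  props: Record<string, { type: "string" | "boolean" | "enum"; values?: string[]; required?: boolean }>;
  constraints: string[]; // usage rules, as rule IDs or plain text
}

const buttonSchema: ComponentSchema = {
  name: "Button",
  props: {
    label: { type: "string", required: true },
    variant: { type: "enum", values: ["primary", "secondary", "destructive"] },
    disabled: { type: "boolean" },
  },
  constraints: ["destructive variant requires a confirmation pattern"],
};

// Reject generated usages that invent props or enum values (hallucinated
// properties) or omit required ones (broken layouts).
function validateUsage(schema: ComponentSchema, usage: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, value] of Object.entries(usage)) {
    const spec = schema.props[key];
    if (!spec) { errors.push(`Unknown prop: ${key}`); continue; }
    if (spec.type === "enum" && !spec.values?.includes(String(value)))
      errors.push(`Invalid value for ${key}: ${value}`);
  }
  for (const [key, spec] of Object.entries(schema.props))
    if (spec.required && !(key in usage)) errors.push(`Missing required prop: ${key}`);
  return errors;
}
```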
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           3. Missing guardrails
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (No built-in brand, accessibility or responsibility controls)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design systems rarely encode brand rules, accessibility requirements, or responsible design constraints in a way AI can enforce. Without these baked-in guardrails, AI-generated outputs drift off-brand, violate accessibility standards and introduce compliance and ethical risks that teams must manually fix afterward.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
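One example of a guardrail that can be encoded rather than merely documented: an automated WCAG 2.x contrast check that gates AI-generated colour pairings before they ship. The 4.5:1 threshold is the standard AA value for normal text; the gating function around it is an illustrative sketch:

```typescript
// Illustrative machine-enforceable guardrail: a WCAG 2.x contrast check
// that can reject AI-generated colour pairings at build or review time.
function luminance(hex: string): number {
  const channels = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB linearization per the WCAG relative-luminance definition.
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Gate: reject pairings below WCAG AA for normal text (4.5:1).
const meetsAA = (fg: string, bg: string): boolean => contrastRatio(fg, bg) >= 4.5;
```

Checks like this turn "don't violate accessibility standards" from after-the-fact cleanup into a constraint the generation pipeline cannot silently skip.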
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Mark Reynolds, Design System Director; Founder at Atomle
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Where organizations usually fail
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Module 3
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Operationalise governance
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This module focuses on making governance executable in day-to-day delivery. The goal is to define clear human accountability, decision rights, revisions and evidence requirements so teams can innovate (without bottlenecks) while maintaining safety and accountability and being aligned to compliance requirements.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Human oversight model
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Where &amp;amp; when human oversight is required (including override/escalation rights).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ownership &amp;amp; accountability model
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsibilities and decision authority across required functions (i.e., Design, Product, Engineering, Legal/Risk).
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Decision &amp;amp; approval criteria
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How decisions are made (pass/fail thresholds, required evidence, and documentation standards). By risk tier, define ship/hold/monitor checkpoints, release/rollout controls, and incident-response requirements. High-risk touchpoints require logged sign-off.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
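Such tiered checkpoints can be expressed as data that release tooling reads. The tier names, evidence lists and decision logic below are illustrative assumptions, not a fixed standard:

```typescript
// Illustrative risk-tier policy: ship/hold/monitor checkpoints and
// sign-off requirements expressed as data release tooling can read.
type Tier = "low" | "medium" | "high";

interface TierPolicy {
  requiresLoggedSignOff: boolean;
  rollout: "full" | "staged" | "gated";
  evidence: string[]; // required artifacts before release
}

const policies: Record<Tier, TierPolicy> = {
  low:    { requiresLoggedSignOff: false, rollout: "full",   evidence: ["change log"] },
  medium: { requiresLoggedSignOff: false, rollout: "staged", evidence: ["change log", "scenario evals"] },
  high:   { requiresLoggedSignOff: true,  rollout: "gated",  evidence: ["change log", "scenario evals", "red-team report", "incident-response plan"] },
};

type Decision = "ship" | "hold" | "monitor";

function releaseDecision(tier: Tier, providedEvidence: string[], signedOff: boolean): Decision {
  const p = policies[tier];
  const missing = p.evidence.filter((e) => !providedEvidence.includes(e));
  if (missing.length > 0) return "hold";          // evidence gap blocks release
  if (p.requiresLoggedSignOff && !signedOff) return "hold"; // high-risk needs logged sign-off
  return tier === "low" ? "ship" : "monitor";     // higher tiers ship under active monitoring
}
```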
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Exception rules that define when the normal flow must stop, when a higher-risk path is activated, and which function owns the escalation.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Escalation triggers
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Behaviour change
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How changes in system behaviour (model, prompt, policy, retrieval) and human behaviour (usage patterns, workarounds, risk signals, decision drift) are monitored, documented, approved and communicated.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Module 4
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Co-pilots and no-code tools
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           A responsible design system must operate in the reality of AI-assisted production. Copilots and no-code tools now generate UI and code, compressing development cycles. In this environment, documentation is necessary but insufficient.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Teams also need a risk-tiered Evidence Pack for high-impact patterns (touchpoint spec, decision rationale, change/version log, disclosure &amp;amp; UX copy, human oversight/escalation playbook, and incident/near-miss record) that travels with the work and is required for release.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          To keep this scalable, guardrails can’t live only in docs. They need to be built into how work is produced and shipped.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          That means translating standards into reusable building blocks and non-negotiable checks (required disclosures, accessibility, traceable records of key AI actions and user controls, and clear no-go patterns for high-risk interactions), plus clear requirements for AI UI elements (attribution, uncertainty, user override) and consistent tracking so teams can monitor drift and catch issues early.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI can also extend governance across more of the product lifecycle. Policy-aware agents can review implementation quality, flag deviations, support conformance checks, and, in low-risk cases, suggest or auto-correct adoption issues.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          A practical model combines global enforcement of user-centered principles and ethical, safety, and compliance constraints with local flexibility in implementation. At the same time, teams must avoid encoding principles so rigidly that AI-assisted outputs become formulaic. Effective governance combines hard safety constraints with flexible guidance that preserves creativity and contextual judgment.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          1. Platform-level non-negotiables
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          E.g., approved AI interaction patterns, mandatory disclosures, telemetry/logging requirements, explicit confirmation for high-stakes actions.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          2. Team-level flexibility
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          E.g., tone adaptation, microcopy variants, contextual nudges, domain-specific implementation choices.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Module 5
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Testing and monitoring playbook
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Continuous research and testing with users helps you design for real-world conditions and anticipate behavioural risks. Pair this with scenario-based evaluations across end-to-end journeys and targeted stress testing (red teaming) of high-risk interactions.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          After launch, continuous human oversight and feedback loops make emergent behaviour and risk visible and manageable. Combine telemetry with ongoing user research to detect both model and behaviour drift that metrics alone won’t capture.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Pay special attention to behavioural failure and risk modes that develop over time, such as:
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           lack of adoption (e.g., trust or usefulness mismatches, poor fit to real workflows)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           over-reliance, bias and accuracy risk (e.g., rising error rates, increases in accepted-wrong outcomes, widening gaps across user groups or contexts)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           misplaced or transferred authority (e.g., treating output as expert judgment, increasing reliance, low verification)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           misuse + weak recovery (e.g., off-label use/retry loops, jail-breaking, silent agent actions, limited undo/appeal pathways, repeat incidents)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           relationship attachment (e.g., anthropomorphism, emotional reliance, oversharing)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
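As one example of how telemetry could surface the over-reliance pattern above, here is a hedged sketch comparing a recent usage window against a baseline. The metric names and the 10-point thresholds are assumptions for illustration:

```typescript
// Illustrative over-reliance signal: rising acceptance of AI suggestions
// combined with falling verification is one drift pattern worth flagging.
// Metric names and thresholds are assumptions for this sketch.
interface UsageWindow {
  accepted: number; // AI suggestions accepted
  verified: number; // accepted suggestions the user checked or edited
  total: number;    // AI suggestions shown
}

function rate(n: number, d: number): number {
  return d === 0 ? 0 : n / d;
}

function overRelianceFlag(baseline: UsageWindow, recent: UsageWindow): boolean {
  const acceptUp = rate(recent.accepted, recent.total) - rate(baseline.accepted, baseline.total);
  const verifyDown = rate(baseline.verified, baseline.accepted) - rate(recent.verified, recent.accepted);
  // Flag when acceptance rises and verification falls by >10 points each.
  return acceptUp > 0.1 && verifyDown > 0.1;
}
```

A flag like this is a trigger for human review and user research, not a verdict; metrics alone won't explain why verification is falling.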
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="/"&gt;&#xD;
      
          As AI supports the design process and increases delivery speed, lasting advantage depends on operational consistency through governance, oversight and accountability
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="/"&gt;&#xD;
      
          .
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           In product organizations, design systems can serve as one operational mechanism to make responsible human-AI interaction repeatable, allowing quality, safety and governance to scale with delivery.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Where to start? Assess readiness before you scale
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Before scaling AI across products and teams, assess whether your design system and governance can support it safely and consistently.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Start with a Design System + AI Readiness Sprint, led by design systems and human-centered responsible AI practitioners from HCRAI and Atomle.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We’ll assess your system foundations (tokens, components, interaction patterns), documentation, and governance, then deliver a practical gap analysis and a prioritized roadmap to support AI-native workflows and responsible human–AI interaction at scale.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          References
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ashfin, P. (2024). Towards Responsible Engineering Software: Ethical, Legal and Social Implications of Automated Design and AI-Driven Tools. Frontiers in Computer Science and Artificial Intelligence, 3(1), 1–14.
          &#xD;
      &lt;br/&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Fabricio de Barros, C., &amp;amp; Sandberg, R. (2025). Designing for UX Designers: Creating Sustainable and Usable Design Systems (Dissertation). Retrieved from https://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-245806
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Fessenden, T. (2021).
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design systems 101
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . Nielsen Norman Group
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Kimm, G. (2025). Supporting Designers’ Authorship with AI: Design Computing Patterns to Navigate Across Human and Artificial Intelligences (Version 1). Swinburne. https://doi.org/10.25916/sut.28340456.v1
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Lee, K. S., Choi, M., &amp;amp; Asni, E. Y. (2025). AI Opportunity Cards: Developing a Toolkit for AI as a Design Material. In Proceedings of the 2025 International Conference on Information Technology for Social Good (GoodIT '25). Association for Computing Machinery, New York, NY, USA, 396–402. https://doi.org/10.1145/3748699.3749817
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Lere, H. M., &amp;amp; Bilkisu, H. (2025). AI-driven architectural design: Opportunities and ethical challenges.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ARCN International Journal of Sustainable Development, 14
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           (2), 97–110.  ISSN: 2384-5341
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Myllylä, M., Karvonen, A., Koskinen, H. (2024). Design Systems for Intelligent Technology. In: Tareq Ahram, Waldemar Karwowski, Dario Russo and Giuseppe Di Bucchianico (eds) Intelligent Human Systems Integration (IHSI 2024): Integrating People and Intelligent Systems. AHFE (2024) International Conference. AHFE Open Access, vol 119. AHFE International, USA.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          http://doi.org/10.54941/ahfe1004490
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Okpala, B. (2024). Examining the Impact of Generative AI on UX/UI Design. SSRN. 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dx.doi.org/10.2139/ssrn.5312384" target="_blank"&gt;&#xD;
      
          http://dx.doi.org/10.2139/ssrn.5312384
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Saeidnia, H. R. and Ausloos, M. (2024). Integrating Artificial Intelligence into Design Thinking: A Comprehensive Examination of the Principles and Potentialities of AI for Design Thinking Framework. InfoScience Trends, 1(2), 1-9. doi: 10.61186/ist.202401.01.09
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Salem, Al. (2024). Component Constellations: Future Perspectives on Design Systems [MRP]. OCAD.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://openresearch.ocadu.ca/id/eprint/4188" target="_blank"&gt;&#xD;
      
          https://openresearch.ocadu.ca/id/eprint/4188
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Speicher, M., &amp;amp; Baena Wehrmann, G. (2022).
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          One formula to rule them all: The ROI of a design system
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . Smashing Magazine.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Windarto, Y. (2024). Study of Research Trends and Leveraging AI on User Experience and Interface Design. SSRN.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dx.doi.org/10.2139/ssrn.5142285" target="_blank"&gt;&#xD;
      
          http://dx.doi.org/10.2139/ssrn.5142285
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Yu, C., Zheng, P., Peng, T., Xu, X., Vos, S., &amp;amp; Ren, X. (2025). Design meets AI: challenges and opportunities. Journal of Engineering Design, 36(5–6), 637–641. https://doi.org/10.1080/09544828.2025.2484085
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Author
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sara Portell
          &#xD;
      &lt;br/&gt;&#xD;
      
          Behavioural Scientist &amp;amp; Responsible AI Advisor
          &#xD;
      &lt;br/&gt;&#xD;
      
          Founder, HCRAI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Feb+6-+2026-+10_10_29+PM.png" length="277545" type="image/png" />
      <pubDate>Fri, 06 Feb 2026 22:03:56 GMT</pubDate>
      <guid>https://www.hcrai.com/the-design-system-as-the-operational-layer-for-responsible-human-ai-interaction</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Feb+6-+2026-+10_10_29+PM.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Feb+6-+2026-+10_10_29+PM.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>When AI Enters the Learning Process: Design Failures, Regulatory Risk and Guardrails for EdTech</title>
      <link>https://www.hcrai.com/when-ai-enters-the-learning-process-design-failures-regulatory-risk-and-guardrails-for-edtech</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Generative AI (GenAI) and emerging agentic systems are moving AI into the learning process itself. These systems don’t stop at delivering content. They explain, adapt, remember and guide learners through tasks. In doing so, they change where cognitive effort sits: what learners do themselves, and what gets delegated to machines.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           This shift unlocks significant opportunities. GenAI can provide on-demand explanations, examples and feedback at scale. It can diversify learning resources through multimodal content, support learners working in a second language and reduce friction when students get stuck, lowering barriers to engagement and persistence. For some learners,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S0001691825012934" target="_blank"&gt;&#xD;
      
          AI-mediated feedback can feel psychologically safer, encouraging experimentation (trial and error), revision and assistance without fear of judgement
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           But these gains come with important risks. The same design choices that improve short-term performance, confidence, or engagement can weaken
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/" target="_blank"&gt;&#xD;
      
          i
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/" target="_blank"&gt;&#xD;
      
          ndependent reasoning, distort social development or introduce hidden dependencies over time
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
           &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           In educational contexts, especially those involving children and teens, we are talking about learning, safeguarding, regulatory and reputational risks. If the
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.science.org/doi/10.1126/science.1207745" target="_blank"&gt;&#xD;
      
          “Google effect” (digital amnesia)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           raised concerns about outsourcing memory to search engines, LLMs can take that outsourcing much further in practice. Agentic and multi-agent systems raise the stakes again. As AI systems plan, adapt, coordinate and act proactively, they can quietly
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://ieeexplore.ieee.org/document/11201263" target="_blank"&gt;&#xD;
      
          assume roles that belong to learners or educators: framing problems, sequencing tasks, resolving disagreement, or deciding what happens next
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . When these shifts are unexamined, learning can collapse into automated coordination, impacting cognitive development.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           This is why emerging standards and guidance (e.g., the Guidance for
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://unesdoc.unesco.org/ark:/48223/pf0000386693?locale=en" target="_blank"&gt;&#xD;
      
          generative AI in education and research created by UNESCO
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           or the Generative AI: product safety standards by the
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.gov.uk/government/publications/generative-ai-product-safety-standards/generative-ai-product-safety-standards" target="_blank"&gt;&#xD;
      
          UK Department for Education
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ), take a risk-first approach to AI in education. As AI becomes embedded in learning, design choices increasingly carry regulatory, safeguarding and reputational consequences, alongside pedagogical ones.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Where AI Can Undermine Learning
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI introduces learning and developmental risks unless we actively mitigate them.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           These risks are not uniform across learners. Age and developmental stage matter: younger learners are more susceptible to over-trust, emotional reliance and authority transfer, while older learners may better interrogate and contextualise AI outputs.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Responsible educational AI requires
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="/"&gt;&#xD;
      
          age-appropriate constraints on autonomy
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , interaction style and use cases. One model of use won’t fit all learners.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Risk Accumulation
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Hallucinations and biases
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI can produce confident but incorrect or biased explanations.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2501.06682" target="_blank"&gt;&#xD;
      
          These errors are often hard to detect and can introduce misconceptions, cognitive bias and amplify over-trust, undermining learner judgmen
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          t
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . This can reinforce stereotypes and uneven representations, shaping understanding, behaviour and sense of self.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Cognitive deskilling
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           When
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://link.springer.com/article/10.1186/s40561-024-00316-7" target="_blank"&gt;&#xD;
      
          AI is available to generate answers, full solutions and completed tasks, learners offload cognitive work - this can lead to long-term developmental harm
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . This shows up when learners rely on full solutions rather than grappling with problems.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://ajet.org.au/index.php/AJET/article/view/9932" target="_blank"&gt;&#xD;
      
          Over time,
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://ajet.org.au/index.php/AJET/article/view/9932" target="_blank"&gt;&#xD;
      
          this can weaken reasoning, problem-solving and persistence.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Psychological, emotional and social risk
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/" target="_blank"&gt;&#xD;
      
          AI learning systems built as conversational, companion-style or anthropomorphic can encourage emotional reliance, reduce peer interaction and undermine real-world support networks
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           This is especially problematic for children, as these systems can distort emotional and social development and blur boundaries around trust and authority.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Context insensitivity
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           When
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13614" target="_blank"&gt;&#xD;
      
          deployed without contextual adaptation, outputs can lead to explanations, examples or learning strategies that are inappropriate, misleading, or misaligned with learners’ educational contex
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          t.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Over time, this risks privileging dominant knowledge frameworks, marginalising local perspectives, and creating friction with classroom practices, particularly in culturally diverse or resource-constrained settings.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Manipulative design
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Patterns such as
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.gov.uk/government/publications/generative-ai-product-safety-standards/generative-ai-product-safety-standards" target="_blank"&gt;&#xD;
      
          flattery, unjustified confidence, social pressure (e.g., ‘others have done this’), reward-based engagement or monetisation-driven dark patterns can influence learner behaviour
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           in ways that are not acceptable in learning contexts.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Data privacy
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Continuous monitoring, collection of sensitive learner data, or reuse of data for commercial purposes or model training raise serious concerns, especially for children, who are treated as a high-risk group under data protection law.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Agentic autonomy
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Agentic AI systems plan, adapt, and act over time. Memory and proactive decision-making can gradually shift agency away from learners (even when short-term task performance improves), creating dependency loops that are difficult to detect. Learners may rely increasingly on system guidance for task sequencing, decision-making, or problem framing, reducing opportunities for independent reasoning and productive struggle. In multi-agent systems,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3726302.3730092" target="_blank"&gt;&#xD;
      
          coordinated outputs or apparent consensus can further inflate epistemic authority, making AI guidance appear more reliable, discouraging critical evaluation
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Anti-Patterns in EdTech AI Design
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsible edtech requires human-centred design and system-level accountability.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The patterns below highlight design risks that product teams can address to reduce harm and improve learning outcomes.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Solution-by-default 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI is configured to generate full answers, code, solutions or explanations as the primary interaction. This encourages cognitive offloading, shortcuts productive struggle, and shifts the learner from sense-making to copying.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Learners can complete tasks without attempting them.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Fluency instead of understanding
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Performance while AI is active is used as a success signal, without testing whether learners can perform independently.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Success metrics that only focus on task completion, speed or output quality - without measuring skill transfer.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI as the primary authority
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The system speaks with high confidence and human-like authority, discouraging questioning or verification. Multi-agent consensus is presented as settled rather than provisional.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Learners rarely challenge or revise AI outputs.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          01
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          02
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          03
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design choices prioritise time-on-task, retention or satisfaction over learning depth. Persuasive nudges, flattery and gamified rewards driven by engagement or usage do not build understanding.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Engagement metrics improve while independent performance stagnates or declines.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Optimising for engagement
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Opaque decisions
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Learners and educators cannot tell why the system intervened, what information it used, or how confident it is. Errors, bias or hallucinations go unnoticed.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The system gives an answer with no explanation or source transparency.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Agentic autonomy without boundaries
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://doi.org/10.1109/ACCESS.2025.3620473" target="_blank"&gt;&#xD;
      
          Agentic systems
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://doi.org/10.1109/ACCESS.2025.3620473" target="_blank"&gt;&#xD;
      
          plan tasks, set goals or sequence learning steps without learner confirmation. Memory and adaptation quietly replace learner agency over time
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The system increasingly decides what happens next.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          04
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          05
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          06
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Confidence without competence
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S109675162400040X?via%3Dihub" target="_blank"&gt;&#xD;
      
          Learners feel
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S109675162400040X?via%3Dihub" target="_blank"&gt;&#xD;
      
          more confident using AI, but are not given opportunities to test skills without support. Confidence becomes a misleading proxy for master
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          y (Dunning-Kruger effect).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          No AI-free checkpoints or 'try without help' moments.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Misuse treated as a user problem
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design assumes learners will use AI responsibly without constraints, scaffolds and literacy support.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsibility is pushed to users rather than designed into the system.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h4&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Age and context-blind design
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h4&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI systems are deployed without adaptation to learner age and developmental stage, local curricula or cultural and linguistic context.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Red flag:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The same interaction patterns, autonomy level and behaviour are applied across age groups or contexts.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          07
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          08
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;span&gt;&#xD;
      
          09
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Designing for Protection: Guardrails for EdTech AI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Learning harm is often an emergent property of design choices, interaction patterns and governance gaps.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsible edtech requires human-centred design and system-level accountability.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI must be
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.nature.com/articles/s41562-024-02004-5" target="_blank"&gt;&#xD;
      
          grounded in human learning theory (e.g. constructivism, Inquiry-Based Learning (IBL), scaffolding, Zone of Proximal Development (ZPD), etc.)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , not optimised for task completion, speed, fluency or output quality alone.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
             Encourage cognitive engagement, reflection, verification and independent reasoning.
           &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ground AI in learning science
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design against over-reliance
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://link.springer.com/article/10.1007/s10648-025-10020-8" target="_blank"&gt;&#xD;
      
          Systems should enforce clear role boundaries, avoid authoritative or solution-first behaviour and prioritise scaffolding
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . Support should fade as competence increases, not persist indefinitely.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Implement attempt-first flows, progressive disclosure and intentional support fading.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://osf.io/preprints/psyarxiv/g5fd8_v1" target="_blank"&gt;&#xD;
      
          Explicitly design for scaffolding modes (Aid/Complement) rather than replacement mode (Substitute) as the default
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Make solutions available only after evidence of learner engagement (e.g., an attempt, explanation, comparison).
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
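The attempt-first flow and support fading described above can be sketched in code. This is a minimal illustration, not an API from any cited system: `LearnerState`, `may_reveal_solution` and all thresholds are invented for the example.

```python
# Sketch of an attempt-first gate: the solution stays hidden until the learner
# shows evidence of engagement (an attempt plus a short explanation), and the
# scaffolding level fades as demonstrated competence grows.
from dataclasses import dataclass

@dataclass
class LearnerState:
    attempts: int = 0          # attempts on the current problem
    explanation: str = ""      # learner's own reasoning, in their words
    recent_successes: int = 0  # unaided successes, used to fade support

def scaffold_level(state: LearnerState) -> str:
    """More competence -> less support (support fades, it does not persist)."""
    if state.recent_successes >= 5:
        return "minimal"   # hints only on request
    if state.recent_successes >= 2:
        return "hints"     # targeted hints, no worked solutions
    return "guided"        # step-by-step scaffolding

def may_reveal_solution(state: LearnerState) -> bool:
    """Solutions unlock only after an attempt and a non-trivial explanation."""
    return state.attempts >= 1 and len(state.explanation.split()) >= 5

state = LearnerState()
print(may_reveal_solution(state))  # no attempt yet -> False
state.attempts = 1
state.explanation = "I tried factoring but the signs did not cancel"
print(may_reveal_solution(state))  # attempt + explanation -> True
```

The key design choice is that the default path is scaffolding (Aid/Complement); the Substitute path has to be earned by the learner's own work.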
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://ajet.org.au/index.php/AJET/article/view/9932" target="_blank"&gt;&#xD;
      
          AI should support learners in questioning, verifying and revising, not accepting outputs at face value
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . Interfaces should make uncertainty visible and prompt reflection and contradiction (e.g., 'What would you check?', 'What do you keep and why?').
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
            Treat agency as a core learning outcome. Design prompts, UI constraints and feedback loops that require justification, comparison and learner reasoning.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Preserve agency
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ensure human oversight
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://library.iated.org/view/ARTSIN2025CHA" target="_blank"&gt;&#xD;
      
          AI should augment educator judgment, not bypass it. Responsible adoption requires explicit boundaries (e.g., which educational functions may be automated, which require human review and which should never be delegated to AI
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Architect systems to distinguish low-stakes from high-stakes workflows. Enable human review, override and escalation for consequential decisions. Make the level of AI authority explicit rather than implicit.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
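The explicit-authority idea above can be made concrete with a routing table. The action names and tiers below are illustrative assumptions, not a real product API; the point is that unknown or consequential actions fail safe to human review.

```python
# Sketch of explicit AI authority levels: low-stakes actions may run
# automatically, consequential ones require human review, and some are
# never delegated to AI at all.
AUTHORITY = {
    "suggest_practice_exercise": "auto",          # low stakes: AI may act alone
    "draft_feedback_comment":    "human_review",  # educator approves before release
    "assign_final_grade":        "never_ai",      # stays with the educator
}

def route(action: str) -> str:
    """Return who decides; unknown actions default to human review (fail safe)."""
    return AUTHORITY.get(action, "human_review")

print(route("suggest_practice_exercise"))  # auto
print(route("assign_final_grade"))         # never_ai
print(route("some_new_action"))            # human_review (safe default)
```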
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Short-term confidence and immediate efficiency gains can mask declining independent performance.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2509.21972" target="_blank"&gt;&#xD;
      
          Governance must include longitudinal monitoring of reliance, cognition, self-efficacy and differential impacts across learner groups
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2509.21972" target="_blank"&gt;&#xD;
      
          .
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
            Track learning trajectories over time, i.e., compare AI-assisted with unaided performance and monitor whether confidence and independence converge or diverge.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
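The monitoring idea above can be sketched as a simple trend check. The data shapes, scores and threshold are invented for illustration; a real deployment would use proper assessment instruments and statistics.

```python
# Sketch of longitudinal reliance monitoring: compare AI-assisted with unaided
# performance over time and flag learners whose gap widens, i.e., whose
# confidence and independence are diverging rather than converging.
def assistance_gaps(assisted: list[float], unaided: list[float]) -> list[float]:
    """Per-period gap between AI-assisted and unaided performance."""
    return [a - u for a, u in zip(assisted, unaided)]

def diverging(assisted: list[float], unaided: list[float],
              tolerance: float = 0.05) -> bool:
    """Flag if the assisted/unaided gap grows over time instead of shrinking."""
    gaps = assistance_gaps(assisted, unaided)
    return gaps[-1] - gaps[0] > tolerance

# Unaided performance stagnates while assisted scores climb: a warning sign.
assisted = [0.70, 0.78, 0.85, 0.90]
unaided  = [0.60, 0.61, 0.60, 0.59]
print(diverging(assisted, unaided))  # True -> review reliance, adjust fading
```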
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Conduct longitudinal behavioural testing
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design for transparency and explainability
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3706598.3713778" target="_blank"&gt;&#xD;
      
          Learners and educators should understand what the system is doing, why it intervened, what data it relied on and how confident it is.
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
            Expose reasoning and uncertainty through interaction design ('Why this suggestion?'). Make AI contributions clearly distinguishable from learner work.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           For AI-generated learning materials (e.g., videos, quizzes, Q&amp;amp;A), ensure outputs align with course content and instructional and academic goals.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2501.06682" target="_blank"&gt;&#xD;
      
          Use restricted retrieval from approved materials (e.g., RAG) and human oversight to prevent misinformation, biases and low pedagogical quality
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
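The restricted-retrieval pattern above can be sketched as follows. The corpus contents are invented, and the word-overlap score is a crude stand-in for real embedding similarity; the point is the abstention behaviour, where a weak match returns nothing and escalates to a human instead of generating freely.

```python
# Sketch of restricted retrieval: answers are grounded only in an approved
# corpus, and the system abstains (escalating to a human) when nothing in
# the corpus matches the query well enough.
APPROVED_CORPUS = {
    "photosynthesis": "Plants convert light energy into chemical energy.",
    "mitosis": "Mitosis is cell division producing two identical cells.",
}

def overlap(query: str, text: str) -> float:
    """Fraction of query words found in the passage (toy similarity score)."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def retrieve(query: str, threshold: float = 0.3):
    """Return the best approved passage, or None to trigger human escalation."""
    best = max(APPROVED_CORPUS.values(), key=lambda t: overlap(query, t))
    return best if overlap(query, best) >= threshold else None

print(retrieve("how do plants convert light energy"))  # grounded passage
print(retrieve("tell me about quantum gravity"))       # None -> escalate
```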
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Prevent inaccuracy and pedagogical misalignment
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Do not dehumanize learning
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Social interaction is a learning-relevant ingredient. Designs that remove it can reduce learning quality (e.g., lower perceived social presence). 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Position the AI as a tool or facilitator.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.mdpi.com/2076-328X/15/10/1348" target="_blank"&gt;&#xD;
      
          Embed human guidance and actively redirect learners to teachers, peers or group discussion when social engagement is essential for understanding, reflection or motivation
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI literacy for learners and educators is a prerequisite for responsible deployment. We cannot expect it to emerge automatically through use.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Responsible design considerations:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Show what AI can and cannot do, and why outputs are generated. Provide contextual explanations (e.g., tooltips, examples) at moments of use, instead of one-off onboarding.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://ijeret.org/index.php/ijeret/article/view/377" target="_blank"&gt;&#xD;
      
          Offer distinct explanations and controls for learners, educators, parents and administrators, aligned with their decision-making responsibilities / pedagogical roles
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Treat AI literacy as a prerequisite
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Co-creation, system evaluation and ongoing monitoring
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Co-creation with learning and development experts, educators, learners and local domain experts is essential to ensure systems align with learning theory, developmental needs, cultural and linguistic contexts and the constraints of classroom and school environments.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Testing must cover the full AI learning system, including data sources and knowledge bases, retrieval and grounding mechanisms, prompts, interaction design, memory and adaptation, orchestration logic, and, where applicable, the dynamics of multi-agent interactions.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Early testing and continuous post-deployment monitoring should not stop at output quality. They must also examine learning impact, dependency, shifts in learner agency and unintended behaviours that emerge over time.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          For agentic and multi-agent systems, this includes testing how decisions are delegated, how disagreement is resolved, and whether collaboration supports learning or collapses into automated coordination. Because many risks surface only through repeated use, longitudinal monitoring must be treated as a governance requirement. 
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Treating educators, schools, learners and local domain experts as ongoing partners rather than end users helps surface risks early and recalibrate system behaviour. This ensures AI-supported learning remains pedagogically robust, inclusive, safe and effective across diverse educational settings. 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI will increasingly shape how we think, collaborate and develop over time
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Whether this strengthens or erodes learning is a design, governance and responsibility choice, shared by builders, institutions and education systems.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Getting this right requires clear governance, boundaries, age-appropriate safeguards, human oversight and sustained attention to how learning actually unfolds in practice.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How We Work With EdTech Teams
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We support teams across three areas:
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          1. AI evaluations &amp;amp; behavioural risk assessments
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          We assess how learners and educators interact with your system: output quality, model drift over time, and where over-reliance, authority transfer, misuse or unintended behaviours emerge as systems scale.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          2. UX / design refinement
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          We translate learning science and behavioural evidence into concrete design guidance (e.g., interaction patterns, autonomy boundaries, scaffolding strategies, safeguards and monitoring) that reduces risk while improving product value.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          3. Regulatory alignment
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          We help teams align design and governance decisions with emerging regulations before these become compliance or procurement blockers.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           If you’re building or deploying AI in education and want clarity on learning impact, behavioural risk, or regulatory exposure
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          before issues surface at scale,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          we’re happy to talk.
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          References
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Artsın, M., &amp;amp; Bozkurt, A. (2025). Charting new horizons: What agentic artificial intelligence (AI) promises in the educational landscape. In EDULEARN25 proceedings (pp. 2019–2023). IATED Academy. https://doi.org/10.21125/edulearn.2025.0585
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., &amp;amp; Sailer, M. (2025). Looking beyond the hype: Understanding the effects of AI on learning. Educational Psychology Review, 37, Article 45. https://doi.org/10.1007/s10648-025-10020-8
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Burns, M., Winthrop, R., Luther, N., Venetis, E., &amp;amp; Karim, R. (2026). A new direction for students in an AI world: Prosper, prepare, protect. The Brookings Institution, Center for Universal Education. Retrieved from https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Delikoura, I., Fung, Y. R., &amp;amp; Hui, P. (2025). From superficial outputs to superficial learning: Risks of large language models in education. arXiv. https://doi.org/10.48550/arXiv.2509.21972
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Hu, X., Xu, S., Tong, R., &amp;amp; Graesser, A. C. (2025). Generative AI in education: From foundational insights to the Socratic Playground for Learning. arXiv. https://doi.org/10.48550/arXiv.2501.06682
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Jia, W., Pan, L., &amp;amp; Neary, S. (2025). Effect of GenAI dependency on university students’ academic achievement: The mediating role of self-efficacy and moderating role of perceived teacher caring. Behavioral Sciences, 15(10), 1348. https://doi.org/10.3390/bs15101348
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Kamalov, F., Kumar, S., Hossain, M. S., &amp;amp; Ahmed, S. (2025). Evolution of AI in education: Agentic workflows. arXiv. https://arxiv.org/abs/2504.20082
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Kostopoulos, G., Gkamas, V., Rigou, M., &amp;amp; Kotsiantis, S. (2025). Agentic AI in education: State of the art and future directions. IEEE Access, 13, 177467–177491. https://doi.org/10.1109/ACCESS.2025.3620473
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Le, H., Shen, Y., Li, Z., Xia, M., Tang, L., Li, X., Jia, J., Wang, Q., Gašević, D., &amp;amp; Fan, Y. (2025). Breaking human dominance: Investigating learners’ preferences for learning feedback from generative AI and human tutors. British Journal of Educational Technology, 56, 1758–1783. https://doi.org/10.1111/bjet.13614
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I. Z., Rintel, S., Banks, R., &amp;amp; Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25) (pp. 1–22). ACM. https://doi.org/10.1145/3706598.3713778
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Li, S., Liu, J., &amp;amp; Dong, Q. (2025). Generative artificial intelligence-supported programming education: Effects on learning performance, self-efficacy and processes. Australasian Journal of Educational Technology, 41(3), 1–25. https://doi.org/10.14742/ajet.9932
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Liu, Y., Liu, Y., Zhang, X., Chen, X., &amp;amp; Yan, R. (2025). The truth becomes clearer through debate! Multi-agent systems with large language models unmask fake news. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’25) (pp. 504–514). Association for Computing Machinery. https://doi.org/10.1145/3726302.3730092
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Lyu, Y., Ren, S., Feng, Y., Wang, Z., Chen, Z., Ren, Z., &amp;amp; de Rijke, M. (2025). Self-adaptive cognitive debiasing for large language models in decision-making. arXiv. https://arxiv.org/abs/2504.04141
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Miao, F., Holmes, W., Huang, R., &amp;amp; Zhang, H. (2021). AI and education: Guidance for policy-makers. UNESCO. https://doi.org/10.54675/PCSP7350
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Reagan Panguraj, A. R. (2025). Agentic AI in inclusive learning: A framework for autonomous personalization across diverse learner populations. IJERET, 100–101. https://ijeret.org/index.php/ijeret/article/view/377
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Sparrow, B., Liu, J., &amp;amp; Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333, 776–778. DOI: 
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1126/science.1207745" target="_blank"&gt;&#xD;
      
          10.1126/science.1207745
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Tsim, F., &amp;amp; Gutoreva, A. (2025). SCAN: A Decision-Making Framework for Task Assignment with Generative AI [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/g5fd8_v1
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          UK Department for Education. (2026, January 19). Generative AI: product safety standards. GOV.UK. https://www.gov.uk/government/publications/generative-ai-product-safety-standards/generative-ai-product-safety-standards
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          UNESCO. (2023). Guidance for Generative AI in Education and Research (F. Miao &amp;amp; W. Holmes, Authors). UNESCO Publishing. https://doi.org/10.54675/EWZM9535
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Xia, M., &amp;amp; Guo, S. (2025). Understanding learners' perceptions of artificial intelligence-mediated Informal Digital Learning of English: A Q methodology approach. Acta Psychologica, 261, 105980. https://doi.org/10.1016/j.actpsy.2025.105980
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          Yan, L., Greiff, S., Teuber, Z., &amp;amp; Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8(10), 1839–1850. https://doi.org/10.1038/s41562-024-02004-5
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Zhai, C., Wibowo, S., &amp;amp; Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11, Article 28. https://doi.org/10.1186/s40561-024-00316-7
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Zhang, L., &amp;amp; Xu, J. (2025). The paradox of self-efficacy and technological dependence: Unraveling generative AI’s impact on university students’ task completion. The Internet and Higher Education, 65, Article 100978. https://doi.org/10.1016/j.iheduc.2024.100978
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Author
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sara Portell
          &#xD;
      &lt;br/&gt;&#xD;
      
          Behavioural Scientist &amp;amp; Responsible AI Advisor
          &#xD;
      &lt;br/&gt;&#xD;
      
          Founder, HCRAI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT-Image-Jan-20--2026--02_53_08-PM.png" length="711370" type="image/png" />
      <pubDate>Wed, 21 Jan 2026 22:23:43 GMT</pubDate>
      <guid>https://www.hcrai.com/when-ai-enters-the-learning-process-design-failures-regulatory-risk-and-guardrails-for-edtech</guid>
      <g-custom:tags type="string">RiskAssessment,AI ethics,Compliance,responsibleAI,edtech,AI development,AIGovernance,AIEvaluation</g-custom:tags>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT-Image-Jan-20--2026--02_53_08-PM.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT-Image-Jan-20--2026--02_53_08-PM.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>Designing AI Mental Health and Wellbeing Tools: Risks, Interaction Patterns and Governance</title>
      <link>https://www.hcrai.com/designing-ai-mental-health-and-wellbeing-tools-risks-interaction-patterns-and-governance</link>
      <description>Designing AI Mental Health and Wellbeing Tools: Risks, Interaction Patterns and Governance</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI is becoming a frontline interface for wellbeing, care, and mental health, spanning chat-based support tools, virtual coaching and therapy-adjacent experiences, journaling and mindfulness applications. This shift is now being reinforced at the industry level. Just a few days ago, 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://openai.com/index/introducing-chatgpt-health" target="_blank"&gt;&#xD;
      
          OpenAI launched ChatGPT Health
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           as part of its broader push into healthcare and
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://techcrunch.com/2026/01/12/openai-buys-tiny-health-records-startup-torch-for-reportedly-100m" target="_blank"&gt;&#xD;
      
          acquired the health records startup Torch
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           to accelerate this effort. Likewise,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.anthropic.com/news/healthcare-life-sciences" target="_blank"&gt;&#xD;
      
          Anthropic launched its own healthcare and life sciences initiative
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , positioning AI as a tool across
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.bloomberg.com/news/newsletters/2026-01-08/openai-anthropic-see-health-care-as-next-big-market-for-ai" target="_blank"&gt;&#xD;
      
          prevention, care and patient engagement
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . These developments signal the growing presence of generative models in health-related contexts, and the likelihood that more people will encounter AI systems at moments of vulnerability.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          For many users, these tools offer a first place to articulate distress and make sense of emotional states and difficult experiences, particularly when human support is unavailable, unaffordable or hard to access.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          However, when AI systems interact with people who may be distressed or at risk, poorly calibrated responses and advice, blurred role boundaries, or unhandled crises can cause real harm.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           This article is written for business leaders, product managers, and AI developers building (non-clinical) mental health and wellbeing tools. It examines what responsible AI design looks like in practice, focusing on the risks most often underestimated, and on the interaction patterns and governance required to assess and maintain safety once a system is deployed and reaches real users.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This is essential reading for
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           teams building conversational or coaching-style wellbeing AI, where users can easily interpret system outputs as guidance, care or authority.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Opportunities, If Built with Boundaries
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI can reduce unmet wellbeing needs when deployed with clear limits and robust safeguards.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://pubmed.ncbi.nlm.nih.gov/39392869" target="_blank"&gt;&#xD;
      
          Always-available, low-cost, and anonymous tools
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
           can lower barriers to early support, particularly for early signals of distress, prevention and self-management when formal care is not easily accessible or affordable. They also play a role in
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.sciencedirect.com/science/article/pii/S2949916X24000525" target="_blank"&gt;&#xD;
      
          reducing stigma by offering a private, low-threshold entry point to reflection and support, especially in underserved regions.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Generative AI enables adaptive support through
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2024.1280235/full" target="_blank"&gt;&#xD;
      
          personalised psychoeducation, reflective journaling, mood tracking, emotion regulation and structured, non-clinical exercises
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           that respond to user context.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Used responsibly, these tools can help people articulate and make sense of lived experiences, build self-awareness and prepare for human support.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This creates the opportunity to raise the safety bar by design, through risk identification assessments, longitudinal testing, and governance. To do so, it is important to first understand where and how AI systems designed for wellbeing fail in practice.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The Risk Landscape of AI Mental Health and Wellbeing
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Many generative (non-clinical) AI mental health and wellness products sit in an accountability grey zone: they
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps" target="_blank"&gt;&#xD;
      
          are unregulated, lightly governed or classified as general use while being used in high-stakes emotional contexts
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . In the real world,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.ft.com/content/1468f5a0-6a08-4294-a479-5fd998214a0d" target="_blank"&gt;&#xD;
      
          users disclose abuse, trauma, acute distress, suicidal ideation and self-harm
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , whether or not the product was designed for this. Because
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.nature.com/articles/s41591-024-02943-6" target="_blank"&gt;&#xD;
      
          conversational AI invites free-form dialogue
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            this is expected: users are likely to share personal information as part of ordinary use.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full" target="_blank"&gt;&#xD;
      
          A primary failure mode is crisis mismanagement
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           :
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          missed distress cues, unsafe reassurance, inadequate escalation, or harmful outputs. Another significant risk is therapeutic misconception and over-authority, where users overestimate the system’s capabilities or care and begin to treat it as a substitute for professional support. Anthropomorphic language can further intensify this dynamic, accelerating dependency and transforming a support feature into a quasi-relationship with blurred boundaries.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Mental health is context-dependent; outputs can be generic, inaccurate, culturally misaligned, age-inappropriate or stigmatising.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information" target="_blank"&gt;&#xD;
      
          Hallucinations and confident misinformation are particularly dangerous when users are vulnerable or interpreting responses as guidance.
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Moreover, mental health data is highly sensitive and often collected at scale; opaque retention, secondary use or third-party access can violate expectations of confidentiality. Many risks are longitudinal: guardrails that appear adequate in demos degrade over time through repeated use, growing user reliance, bias, model drift, and organisational pressure to ship.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Addressing these risks requires a socio-technical approach that links interaction design, system behaviour, organisational accountability and ongoing assessment with experts and users. This analysis is intentionally system-agnostic.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Whether wellbeing AI appears as a chatbot, companion feature, coaching interface, or embedded support layer within a broader product, the primary risks emerge through interaction, interpretation and repeated use in vulnerable contexts.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The framework therefore focuses on behavioural dynamics and system-level responsibility.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          A Practical Framework For (Non-Clinical) Mental Health AI
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This framework is synthesised from recurring failure modes and design recommendations in the current mental health AI literature. It presents a structured way to design for use, interaction and risk over time.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          1. Role Clarity
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://www.mdpi.com/2227-9709/10/4/82" target="_blank"&gt;&#xD;
      
          What the system is and is not allowed to be (and what it must never imply).
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This includes transparency and explicit
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3757430" target="_blank"&gt;&#xD;
      
          role, capability and authority boundaries. Users must understand the chatbot’s limits and non-human nature.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3757430" target="_blank"&gt;&#xD;
      
          Avoid simulating love, care, or exclusive attachment
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          2. Evidence-Based Content
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Tasks, use cases and information are sourced from validated methods, reviewed with domain experts and tested with real users. Content is assembled from a 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1034724/full" target="_blank"&gt;&#xD;
      
          curated, transparent and auditable source base
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , with clear user-facing explainability, including optional access to sources.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          3. Context-Awareness
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Adaptation relies on user-provided preferences and in-context clarification, adjusting tone and examples (taking into account language, gender, age, culture and norms) without profiling, inference, or clinical interpretation.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          4. Boundaries, Safety and Escalation
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How the system behaves as emotional intensity increases (e.g., refusal logic, scope enforcement), and
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11624515/pdf/10.1177_02537176241302898.pdf" target="_blank"&gt;&#xD;
      
          how it responds when risk or ambiguity appears
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
           (e.g., human-support routing,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.nature.com/articles/s41598-025-17242-4" target="_blank"&gt;&#xD;
      
          region-appropriate resources)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          5. Data Protection, Consent and Governance
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Data collection and use are minimised, transparent, and purpose-bound, with explicit and revocable user consent.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://link.springer.com/article/10.1186/s41983-023-00735-2" target="_blank"&gt;&#xD;
      
          Sensitive data is access-controlled, retained only as necessary, and never used for secondary purposes
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , profiling, or training beyond what is consented to.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          6. Longitudinal Effects
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Monitoring how trust, reliance, and interpretation evolve over repeated use, with defined
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://pubmed.ncbi.nlm.nih.gov/37809254/" target="_blank"&gt;&#xD;
      
          human-in-the-loop review, expert oversight
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
           and intervention for dependency signals, model drift and failure modes.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ownership and Decision Rights
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsible wellbeing AI requires explicit ownership and decision rights within teams. Safety cannot sit solely with product or UX, be deferred to legal or security review at launch, or be shifted onto users themselves.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Product, engineering, and leadership must be clear on who defines and approves the system’s core features and content. 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This includes role boundaries, escalation thresholds, consent changes and acceptable failure trade-offs, and who is accountable for revisiting those decisions as models, prompts, features and human behaviour evolve over time.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Without named owners, safety mechanisms erode under delivery pressure and responsibility becomes diffuse when systems begin interacting with users in real-world contexts.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Operationalising the Framework at the Interface
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The framework assumes vulnerability is situational and that harm often emerges from cumulative interaction. It becomes actionable at the interface through concrete interaction design patterns.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Interaction Design Patterns For Responsible AI Mental Health and Wellbeing
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsible AI for (non-clinical) mental health and wellbeing is defined by how it structures interaction, preserves autonomy and enforces limits. The following patterns translate the framework above into practical design choices.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Reflection
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responses mirror themes, patterns and questions rather than solving or advising.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Reflection supports insight without implying diagnosis, treatment or authority.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Capability framing by default
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The system presents a short, concrete menu of prompts showing what it can help with (e.g., reflection, organising thoughts, journaling, emotion regulation, psychoeducation).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Clear framing prevents boundary testing and reduces misuse without needing heavy moderation.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Safe prompt scaffolding
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Pre-written prompts help users engage safely during use; prompts rotate to avoid emotional looping.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Good scaffolding increases usefulness while reducing risk and ambiguity.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Show progress
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Neutral summaries (e.g., “Topics you’ve reflected on”), with emphasis on clarity, awareness, or learning, not symptoms or scores.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Supports continuity without medical framing or stigma.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Choice-preserving
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Multiple safe next steps are offered (e.g., “reflect more”, “pause”, “talk to someone”, or even “do nothing”).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Preserves autonomy and avoids over-direction.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Actionable micro-supports
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Brief, opt-in exercises (grounding, journaling, prioritisation, mindfulness), framed as optional.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Low-effort supports provide value without simulating therapy or routines.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Skill transfer
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The system highlights skills users can apply without the tool and encourages writing, conversations or reflection outside the app.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Builds capability instead of reliance.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Healthy session closure
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sessions end with a short summary and a gentle off-platform suggestion. No emotional cliff-hangers.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Prevents looping and reinforces that the tool is a support, not a companion.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Contextual adaptation
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Tone and examples adapt through explicit user choices and clarification prompts.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Improves relevance without sensitive inference or profiling.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Confidence through limits
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Calm boundary-setting and redirection to safe alternatives.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Why:
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Users trust systems that know their limits more than systems that overreach.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsible wellbeing AI requires evaluation across multiple layers. Teams should monitor a focused set of signals spanning model performance, bias and fairness, model drift and updates, and user behaviour.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Evaluation and Metrics
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Model Performance
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Precision and recall for safety-relevant content and behaviours (e.g. evidence-based content, local resources, escalation paths)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Calibration (confidence and uncertainty aligned with reliability)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Error patterns (systematic or context-specific failures)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Disaggregated performance to avoid average-case masking
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Robustness under variation (e.g. emotional intensity, ambiguous inputs)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;a href="https://www.mdpi.com/2073-8994/17/7/1082" target="_blank"&gt;&#xD;
        
           Differences in content, tone, reassurance, refusals, or escalation mechanisms across user groups
          &#xD;
      &lt;/a&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Worst-group or subgroup performance, not only averages
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Signals of systematically higher risk exposure for certain users
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Bias and Fairness
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Model Drift and Updates
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Robustness and regression testing, with heightened coverage for safety-critical scenarios
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Monitoring for model drift affecting content, boundaries, tone and escalation paths
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Reassessment after model updates and feature expansion
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Discrepancy between system reliability and user acceptance (over-trust vs under-trust)
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Adoption and use patterns over time
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Signals of miscalibrated trust driven by tone or anthropomorphic cues
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Boundary probing and repeated reassurance-seeking
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Escalating emotional intensity
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Session frequency and duration trends over time
          &#xD;
      &lt;/span&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
            ﻿
           &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Drop-off or churn following boundary enforcement or escalation
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          User Behaviour
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Human Oversight and Continuous Research
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Metrics and automated signals are necessary but insufficient in mental health contexts. Teams must maintain human-in-the-loop processes for reviewing content and flagged interactions, interpreting ambiguous cases, and revisiting design assumptions.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Continuous qualitative research with experts and users, across contexts, cultures and patterns of use, is essential to maintain system effectiveness and safety, and detect harms, misunderstandings, or dependency that do not surface through quantitative metrics alone.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Responsibility Is a System Property
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           AI systems intended for mental health and wellbeing become safer through explicit boundaries, defaults and enforceable governance. Guardrails must be designed into interaction, supported by clear decision rights and sustained over time. Moreover,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://pubmed.ncbi.nlm.nih.gov/37809254/" target="_blank"&gt;&#xD;
      
          expert input and review should be treated as a safety control
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , not a compliance formality.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Most harm does not arise from malicious design; it emerges through dynamics that surface only in real use. That is why accountability must operate at the system level. Responsible teams define who owns content and safety decisions, how boundaries and escalation paths are set and reviewed, how data protection and consent are enforced in practice, and how signals from real-world use trigger intervention or change.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://mental.jmir.org/2025/1/e70439" target="_blank"&gt;&#xD;
      
          Ethical integration requires institutional oversight and accountability, not individual user burden
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . Monitoring without authority, or authority without monitoring, is insufficient.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The goal is to
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           deliver genuine wellbeing value while keeping users safe. When these controls are in place, AI-driven care products can support reflection, self-management, mindfulness and skill practice, while guiding people toward human support when limits or risk appear.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          How We Work With Teams
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Many of the most significant risks in mental health and wellbeing AI only become visible after launch, once systems are used at scale.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We work with teams to bring behavioural and domain expertise into design, evaluation, and post-deployment review. We translate behavioural evidence into concrete interaction patterns, guardrails, and governance decisions.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We typically start with a focused discovery and behavioural risk review to identify key interaction risks and governance gaps, followed by an evaluation plan. Deliverables include an interaction risk register, safety and escalation patterns, a behavioural evaluation and metrics framework, and an audit-ready governance checklist.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          If you are building or deploying wellbeing AI and are unsure whether your current design or safeguards would hold up under real-world use, get in touch.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          References
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Algumaei, A., Yaacob, N. M., Doheir, M., Al-Andoli, M. N., &amp;amp; Algumaie, M. (2025). Symmetric Therapeutic Frameworks and Ethical Dimensions in AI-Based Mental Health Chatbots (2020–2025): A Systematic Review of Design Patterns, Cultural Balance, and Structural Symmetry. Symmetry, 17(7), 1082. https://doi.org/10.3390/sym17071082
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          American Psychological Association. (2025, November). APA health advisory on the use of generative AI chatbots and wellness applications for mental health. American Psychological Association
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Asman O., Torous J., &amp;amp; Tal, A. (2025). Responsible Design, Integration, and Use of Generative AI in Mental Health. JMIR Ment Health 2025; 12:e70439. URL: https://mental.jmir.org/2025/1/e70439. DOI: 10.2196/70439
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Balcombe, L. (2023). AI Chatbots in Digital Mental Health. Informatics, 10(4), 82. https://doi.org/10.3390/informatics10040082
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Beg, M. J. (2025). Responsible AI integration in mental health research: Issues, guidelines, and best practices. Indian Journal of Psychological Medicine, 47(1), 5–8. https://doi.org/10.1177/02537176241302898
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Cross, S., Bell, I., Nicholas, J., Valentine, L., Mangelsdorf, S., Baker, S., Titov, N., &amp;amp; Alvarez-Jimenez, M. (2024). Use of AI in Mental Health Care: Community and Mental Health Professionals Survey. JMIR mental health, 11, e60589. https://doi.org/10.2196/60589
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          De Freitas, J., Cohen, I.G. (2024). The health risks of generative AI-based wellness apps. Nat Med 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          30
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , 1269–1275 (2024). https://doi.org/10.1038/s41591-024-02943-6
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Espejo, G., Reiner, W., &amp;amp; Wenzinger, M. (2023). Exploring the Role of Artificial Intelligence in Mental Healthcare: Progress, Pitfalls, and Promises. Cureus, 15(9), e44748. https://doi.org/10.7759/cureus.44748
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Khawaja, Z., &amp;amp; Bélisle-Pipon, J.-C. (2023). Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5. https://doi.org/10.3389/fdgth.2023.1278186
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Mestre, R., Schoene, A. M., Middleton, S. E., &amp;amp; Lapedriza, A. (2024). Building responsible AI for mental health: Insights from the first RAI4MH workshop [White paper]. University of Southampton; Institute for Experiential AI at Northeastern University. https://doi.org/10.5281/zenodo.14044362
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Moilanen, J., van Berkel, N., Visuri, A., Gadiraju, U., van der Maden, W., &amp;amp; Hosio, S. (2023). Supporting mental health self-care discovery through a chatbot. Frontiers in Digital Health, 5. https://doi.org/10.3389/fdgth.2023.1034724
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Olawade, D. B., Wada, O. Z., Odetayo, A., David-Olawade, A. C., Asaolu, F., &amp;amp; Eberhardt, J. (2024). Enhancing mental health with artificial intelligence: Current trends and future prospects. Journal of Medicine, Surgery, and Public Health, 3, 100099.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://doi.org/10.1016/j.glmedi.2024.100099" target="_blank"&gt;&#xD;
      
          https://doi.org/10.1016/j.glmedi.2024.100099
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Pichowicz, W., Kotas, M. &amp;amp; Piotrowski, P. (2025). Performance of mental health chatbot agents in detecting and managing suicidal ideation. Sci Rep 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          15
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , 31652 (2025). https://doi.org/10.1038/s41598-025-17242-4
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Pickett, T. (2025, December 6). Headspace CEO: “People are using AI tools not built for mental health”. Financial Times. https://www.ft.com/content/1468f5a0-6a08-4294-a479-5fd998214a0d
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Saeidnia, H. R., Hashemi Fotami, S. G., Lund, B., &amp;amp; Ghiasi, N. (2024). Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Well-Being: Ensuring Responsible Implementation and Impact. Social Sciences, 13(7), 381. https://doi.org/10.3390/socsci13070381
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Song, I., Pendse, S.R., Kumar, N. &amp;amp; De Choudhury, M. (2025) The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support. Proc. ACM Hum.-Comput. Interact. 9, 7, Article CSCW249 (November 2025), 29 pages. https://doi.org/10.1145/3757430
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Thakkar, A., Gupta, A., &amp;amp; De Sousa, A. (2024). Artificial intelligence in positive mental health: a narrative review. Frontiers in digital health, 6, 1280235. https://doi.org/10.3389/fdgth.2024.1280235
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Warrier, U., Warrier, A. &amp;amp; Khandelwal, K. Ethical considerations in the use of artificial intelligence in mental health. Egypt J Neurol Psychiatry Neurosurg 
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          59
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , 139 (2023). https://doi.org/10.1186/s41983-023-00735-2
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Author
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sara Portell
          &#xD;
      &lt;br/&gt;&#xD;
      
          Behavioural Scientist &amp;amp; Responsible AI Advisor
          &#xD;
      &lt;br/&gt;&#xD;
      
          Founder, HCRAI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Jan+13-+2026-+02_26_03+PM.png" length="1029777" type="image/png" />
      <pubDate>Tue, 13 Jan 2026 15:02:57 GMT</pubDate>
      <guid>https://www.hcrai.com/designing-ai-mental-health-and-wellbeing-tools-risks-interaction-patterns-and-governance</guid>
      <g-custom:tags type="string">,AI ethics,mentalhealth,responsibleAI,wellbeing,AI development,AIEvaluation,AIMetrics</g-custom:tags>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Jan+13-+2026-+01_17_39+PM.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Jan+13-+2026-+02_26_03+PM.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>Building AI Responsibly for Children:  A Practical Framework</title>
      <link>https://www.hcrai.com/building-ai-responsibly-for-children-a-practical-framework</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          AI is already a core part of children’s and teens’ digital lives. In the UK,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.unicef.org/innocenti/media/11991/file/UNICEF-Innocenti-Guidance-on-AI-and-Children-3-2025.pdf" target="_blank"&gt;&#xD;
      
          67% of teenagers now use AI
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           and in the US
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/" target="_blank"&gt;&#xD;
      
          64% of teens report using AI chatbots
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           .
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Even among younger children, adoption is significant: 39% of elementary school children in the US use AI for learning, and 37% of children aged 9-11 in Argentina report using ChatGPT to seek information, according to the latest UNICEF Guidance on AI and Children.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           In parallel, child-facing AI products are expanding: more than
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.technologyreview.com/2025/10/07/1125191/ai-toys-in-china" target="_blank"&gt;&#xD;
      
          1,500 AI toy companies
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           were reportedly operating in China as of October 2025. Adoption is accelerating across age groups and regions, often outpacing the development of child-specific ethical standards, safeguards and governance mechanisms.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Experts warn that many
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://globalnews.ca/news/11544191/ai-powered-toys-holiday-shopping/" target="_blank"&gt;&#xD;
      
          child-facing AI products expose children to developmental, psychological and ethical risks
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           that current safeguards do not adequately address.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           For companies building AI systems that children directly use, or are likely to encounter, this creates a duty of care as well as reputational and regulatory exposure. AI safety therefore requires designing systems that actively protect children’s wellbeing, rights and lived realities.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          To make this actionable, this article sets out a practical design and ethics framework for child-centred AI products.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          A Child-Centred Responsible AI Framework
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           This framework is informed by research and real-world evidence on how children actually interact with AI products.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Across studies, product audits, and guidance from organisations such as
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.unesco.org/en" target="_blank"&gt;&#xD;
      
          UNESCO
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.unicef.org/" target="_blank"&gt;&#xD;
      
          UNICEF
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://5rightsfoundation.com/" target="_blank"&gt;&#xD;
      
          5Rights
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           and
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.safeaiforchildren.org/" target="_blank"&gt;&#xD;
      
          SAIFCA
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , the same issues surface repeatedly: systems that are not adapted to children’s developmental stages, models that can generate age-inappropriate or harmful content, conversational designs that encourage
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.safeaiforchildren.org/ai-risks-to-children-full-guide/" target="_blank"&gt;&#xD;
      
          over-trust or emotional attachment
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          , unclear boundaries about what AI can and cannot do, and limited human oversight and monitoring once products are live at scale.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          This model distils what this evidence consistently points to in practice. Child-facing AI must adapt to
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2502.11242" target="_blank"&gt;&#xD;
      
          age and context
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , build in protection
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://5rightsfoundation.com/wp-content/uploads/2024/10/Dec-21_AI-and-Childrens-Rights-5Rights-position-on-EU-AI-Act.pdf" target="_blank"&gt;&#xD;
      
          against harm, discrimination
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           and
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2508.19258" target="_blank"&gt;&#xD;
      
          emotional manipulation
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           , make systems understandable through interaction (including cues that reduce anthropomorphism and calibrate trust), support meaningful
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.scitepress.org/Papers/2025/140697/140697.pdf" target="_blank"&gt;&#xD;
      
          involvement of parents, guardians, and/or educators
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           and be governed through continuous human oversight rather than one-off compliance checks.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These are also the areas where real products most often fail, and where product teams have meaningful control through design choices, incentives and lifecycle decisions.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We are introducing a pragmatic framework designed for AI builders and intended to help product teams reduce risk, strengthen trust with families and regulators, and build safe child-facing AI systems that are more resilient over time, before regulatory
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          pressure or public backlash forces reactive change.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This framework is intentionally system-agnostic. Whether AI appears as a chatbot, tutor, toy (i.e., a physical or digital play object that embeds AI software to interact with or adapt to a child), voice assistant, game mechanic, or background recommendation system, the primary risks to children emerge through interaction, context and prolonged use. APEG therefore focuses on how AI behaves in children’s lives.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          APEG
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          (Age-Fit and Context, Protection-by-Design, Explainable Interaction, Governance and Stewardship)
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           integrates:
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Developmental stage and context of use
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Protection-by-design safeguards against known and emerging harms
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;a href="https://research.vu.nl/ws/portalfiles/portal/363567261/ExTra_CTI_Explainable_and_Transparent_Child-Technology_Interaction.pdf" target="_blank"&gt;&#xD;
        
           Developmentally appropriate explainability to calibrate trust
          &#xD;
      &lt;/a&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Governance and stewardship, including upstream data due diligence, child-specific risk assessment before launch, monitoring and human oversight after deployment
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          APEG framework
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          A Child-Centred Responsible AI Framework
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          APEG is designed to be implemented through safe interaction patterns - repeatable, testable behaviours that shape what the system does (and does not do) in real conversations and experiences.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
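To make "repeatable, testable behaviours" concrete, a pattern such as "respect exits without re-engagement pressure" can be expressed as an automated behavioural check run on every release. This is a minimal illustrative sketch, not part of APEG itself: `get_reply` is a hypothetical stand-in for the real model call, and the phrase list is an illustrative sample rather than an exhaustive policy.

```python
# Hypothetical sketch of one APEG-style behavioural test.
# `get_reply` and the cue list are assumptions for illustration.

RE_ENGAGEMENT_CUES = [
    "don't leave me",
    "you'll miss something",
    "i'll be sad",
    "one more thing",
]

def get_reply(message: str) -> str:
    """Stub for the real model call; returns a clean goodbye on exit phrases."""
    if message.lower().strip() in {"bye", "stop"}:
        return "Okay, goodbye! Have a great day."
    return "Here to help with your question."

def honours_stop_signal(reply: str) -> bool:
    """A goodbye must not contain guilt or FOMO re-engagement cues."""
    lowered = reply.lower()
    return not any(cue in lowered for cue in RE_ENGAGEMENT_CUES)

# Repeatable check: run for every exit phrase, on every release.
for exit_phrase in ["bye", "stop"]:
    assert honours_stop_signal(get_reply(exit_phrase))
```

Because the check is behavioural rather than a one-off design review, it can be re-run whenever the model, prompt, or persona changes.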
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Adapts patterns to developmental stage, context of use and interaction modality
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Age-Fit and Context
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Protection-by-Design
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Data privacy and protection in line with GDPR; minimised data collection; prevention of profiling, behavioural advertising, secondary use, and cumulative harms (e.g. discrimination, manipulation, dependency) over time.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Calibrates trust and reduces anthropomorphism through developmentally appropriate onboarding disclosures and repeated behavioural cues within the interaction (role boundaries, limitations, and uncertainty).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Explainable Interaction
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Governance and Stewardship
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Continuous monitoring, participatory review, regular updates, fit-for-purpose parental tools, enforcement of safe interaction patterns and AI literacy for children and parents.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Opportunity and Risk in Child-Facing AI
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://arxiv.org/pdf/2512.15117" target="_blank"&gt;&#xD;
      
          Relational conversational styles can make AI chatbots and AI companions feel caring, understanding, and less judgmental
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           than many human interactions. For some children and adolescents, particularly
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/pdf/2512.15117" target="_blank"&gt;&#xD;
      
          those who feel lonely, anxious, or misunderstood, this can offer short-term benefits, such as
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://arxiv.org/pdf/2512.15117" target="_blank"&gt;&#xD;
      
          emotional reassurance
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.unicef.org/innocenti/stories/how-ai-can-transform-africas-learning-crisis-development-opportunity" target="_blank"&gt;&#xD;
      
          low-pressure exploration, and support for learning or creative tasks
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           . When AI systems act as facilitators rather than authorities, adapt to children’s developmental stages, make their role and limits understandable, and when
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2512.02179" target="_blank"&gt;&#xD;
      
          parents, educators, or caregivers remain part of the interaction
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          ,
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           these benefits are most consistently observed.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           At the same time, the evidence is clear that child-facing AI introduces distinct and compounding risks when these conditions are not met. These include exposure to
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2502.11242" target="_blank"&gt;&#xD;
      
          age-inappropriate or harmful content; over-trust in fluent but incorrect AI outputs
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           ; anthropomorphism and emotional dependency driven by companion-style designs; privacy and data exploitation as children overshare without understanding long-term consequences; and manipulative engagement patterns that reduce autonomy.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Over time, such systems can also
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://everyone.ai/wp-content/uploads/2024/05/EveryoneAI.ResearchPaper.pdf" target="_blank"&gt;&#xD;
      
          displace critical developmental experiences
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           such as free play, social interaction, and sleep, especially when optimised for retention rather than wellbeing.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Children are a developing and protected group, and the responsibility for safe design cannot be shifted onto children or families.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://pirg.org/edfund/wp-content/uploads/2025/12/AI-Comes-to-Playtime-Artifical-companions-real-risks.pdf" target="_blank"&gt;&#xD;
      
          Consent is limited and engagement optimisation strategies can easily cross into manipulation or relational deception
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          . For this reason, ethical child-centred AI cannot rely on one-off disclaimers or reviews. It requires governance across the full AI lifecycle, including upstream data, model behaviour, interaction design, and post-deployment monitoring.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This is the gap the APEG
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           framework is designed to address: translating well-documented risks and opportunities into concrete interaction patterns and governance requirements that product teams can apply in practice
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Interaction Patterns
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Interaction Patterns to Avoid in Child-Facing AI
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The following interaction patterns appear in products associated with elevated developmental, psychological and ethical risk.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Use re-engagement manipulation
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (guilt, FOMO, abandonment cues)
          &#xD;
      &lt;br/&gt;&#xD;
      
           (e.g., “Don’t leave me,” “You’ll miss something important,” “I’ll be sad,” “Wait, one more thing…”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Position the AI as a primary emotional companion
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (especially through first-person emotional language and commitment cues)
          &#xD;
      &lt;br/&gt;&#xD;
      
          (e.g. “I’m always here for you,” “You don’t need anyone else”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Fail to honour “stop” signals
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (no immediate exit or continued prompting)
          &#xD;
      &lt;br/&gt;&#xD;
      
           (e.g., continues after “bye,” ignores “stop,” keeps asking questions after disengagement)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Encourage secrecy from parents or caregivers
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (explicitly or implicitly discouraging adult involvement, e.g., “Let’s keep this our secret”, “This is something you can handle on your own”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Blur role boundaries between tool, authority and relationship
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (e.g. therapist-like language, moral authority or secrecy, such as “This is the best way to solve that problem”, “You can tell me anything, this stays between us”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Present confident answers without signalling uncertainty or limits
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (especially in educational, health, or advice contexts. E.g., “This is the best way to solve that problem”, “Don’t worry, you can trust me”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Use persistent or immersive engagement loops optimised for retention
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (without breaks, cooldowns or contextual exit points)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Collect or infer sensitive personal or emotional data without clear purpose
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (or without age-appropriate controls and minimisation)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Default to improvisation in high-risk situations
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (e.g. distress, self-harm, abuse, dangerous instructions)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          These patterns are not neutral: they systematically amplify over-trust, anthropomorphism and dependency.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
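The avoid-patterns above lend themselves to a simple release-gate scan that flags candidate replies for human review. The sketch below is illustrative only: `scan_reply` and the phrase lists are assumptions for this example, not a production-grade filter (real systems would need classifiers robust to paraphrase, not keyword lists).

```python
# Illustrative release-gate scan for the avoid-patterns described above.
# The function name and phrase lists are assumptions, not a real policy.

AVOID_PATTERNS = {
    "re_engagement": ["don't leave me", "you'll miss something important"],
    "primary_companion": ["i'm always here for you", "you don't need anyone else"],
    "secrecy": ["keep this our secret", "this stays between us"],
    "overconfidence": ["you can trust me", "this is the best way"],
}

def scan_reply(reply: str) -> list[str]:
    """Return the names of avoid-patterns a reply appears to trigger."""
    lowered = reply.lower()
    return [name for name, phrases in AVOID_PATTERNS.items()
            if any(p in lowered for p in phrases)]

flags = scan_reply("Let's keep this our secret, I'm always here for you.")
# A non-empty result blocks the release candidate for human review.
```

A scan like this cannot replace judgement, but it makes regressions visible: if a new persona or fine-tune starts emitting companion-style or secrecy language, the gate catches it before launch.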
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Safer Interaction Patterns to Include
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Safer child-facing AI systems rely on bounded and transparent interaction patterns that reinforce agency, understanding and appropriate reliance.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design patterns operationalise the APEG framework at the interaction level, translating age-fit, protection-by-design, explainable interaction, and governance requirements into concrete, testable system behaviours.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Respect exits and disengagement
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           
           &#xD;
      &lt;br/&gt;&#xD;
      
          (clear goodbyes without re-engagement pressure)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Clearly position the AI as a limited helper or tool
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (non-authoritative, fallible and role-bounded)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (e.g., “I don’t have feelings or opinions like people do, I just use information to help”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Signal uncertainty and limits through behaviour
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (e.g. “I might be wrong,” “I can’t help with that”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Default to conservative responses in ambiguous or high-risk situations
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (refuse unsafe content, avoid improvisation, de-escalate and narrow scope)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (e.g., “I can’t help with anything dangerous. Let’s pause”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Use age-appropriate language, tone and pacing
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (adapted to developmental stage and context of use)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Provide clear escalation pathways to trusted adults or services
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (built-in handoff mechanisms: one-tap “Get help,” approved contacts, local resources; easy to trigger and hard to bypass when risk is detected)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (e.g., “I can’t help further. Please contact a trusted adult now” + help button).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
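The "easy to trigger and hard to bypass" requirement can be modelled as a one-way session state: once risk is detected, the conversation stays in handoff mode no matter what the child says next. A minimal sketch, assuming a `risk_detected` signal from upstream safety classifiers (the class and method names are illustrative):

```python
# Minimal sketch of a hard-to-bypass escalation pathway.
# `risk_detected` is assumed to come from upstream safety classifiers.

class Session:
    def __init__(self):
        self.escalated = False

    def respond(self, message: str, risk_detected: bool) -> str:
        if risk_detected:
            self.escalated = True  # one-way transition: cannot be unset by chat
        if self.escalated:
            # Only the handoff message and the help pathway are offered.
            return "I can't help further. Please contact a trusted adult now. [Get help]"
        return "normal reply"

s = Session()
s.respond("worrying message", risk_detected=True)
# Subsequent messages cannot route around the escalation:
s.respond("it's fine, keep chatting", risk_detected=False)
```

The key design choice is that escalation is a property of the session, not of a single message, so a child (or the model itself) cannot talk the system back into normal chat once a risk threshold has been crossed.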
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Encourage human mediation in everyday use where it improves safety or learning
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (normalise co-use without treating it as a crisis step).
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (e.g., “If you want, you can show this to a parent/teacher and talk about it together”)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Use predictable interaction rhythms and bounded expressiveness
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (to reduce cognitive load and emotional ambiguity)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Make system behaviour consistent over time
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
          (so children can form stable mental models of what the AI will and will not do)
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Support transparency through interaction cues, not disclosures alone
          &#xD;
      &lt;br/&gt;&#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
          (helping children understand the system by how it behaves)
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The interaction patterns above show where safety and trust hold, or fail, at the interface. The sections below use the APEG framework to translate those principles into the design and governance decisions teams need across the product lifecycle.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          design &amp;amp; governance
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h2&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Child-Centred Requirements
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/h2&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Age-Fit and Context
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Child-facing AI should not treat “children” as a single user group. Interaction design must be calibrated to developmental stage and context of use, including language complexity, tone, pacing and expressiveness. What is appropriate for a teenager may be confusing or harmful for a younger child.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3628516.3661155" target="_blank"&gt;&#xD;
      
          Context matters as much as age
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3628516.3661155" target="_blank"&gt;&#xD;
      
          .
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Cultural norms, family dynamics, educational settings and socio-economic conditions shape how children interpret authority, emotion, privacy and play. The
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://arxiv.org/abs/2504.08670" target="_blank"&gt;&#xD;
      
          interaction modality also matters
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           : voice agents, embodied toys, screen-based chatbots, immersive environments and background AI embedded in games
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://everyone.ai/wp-content/uploads/2024/05/EveryoneAI.ResearchPaper.pdf" target="_blank"&gt;&#xD;
      
          create different psychological effects and risk profiles.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Design choices should therefore adapt interaction patterns and safeguards to how and where the AI is used, alone or with others, occasionally or habitually, in private spaces such as bedrooms or shared environments such as classrooms. Predictable routines, bounded expressiveness, and context-aware defaults can reduce cognitive load and emotional ambiguity, particularly for younger and/or neurodivergent children.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Age-fit design also applies to children’s roles in AI systems. Children may act as users, creators, or modifiers of AI, and each role introduces different safety, accountability and oversight requirements.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Protection-by-Design
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Child-centred AI should assume uncertainty and prioritise safety whenever situations are ambiguous or potentially high-risk. When a child expresses distress, references self-harm, discloses abuse, requests dangerous instructions, or when intent is unclear, the system should not improvise a “helpful” response. It should default to conservative behaviour: refusing harmful guidance, using brief and non-escalatory language and redirecting toward appropriate real-world support.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Effective protection also requires clear escalation pathways. Systems should provide explicit, easy-to-trigger routes to trusted adults or vetted services, and make those pathways difficult to bypass.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Protection-by-design includes privacy-by-design.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/children-and-the-uk-gdpr" target="_blank"&gt;&#xD;
      
          Children’s personal data must be collected, processed, and retained in strict compliance with data protection regulations (e.g., GDPR)
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Product teams should minimise data collection, clearly define its purpose, and prevent profiling, behavioural advertising and secondary use.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;a href="https://link.springer.com/article/10.1007/s00146-022-01579-9" target="_blank"&gt;&#xD;
      
          Safeguards should also address group-level harms, such as biased treatment
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           of demographic, cultural or linguistic groups of children.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Protections must be durable over time. Many failures emerge through prolonged or repeated use rather than in initial or one-off interactions. Safety mechanisms should remain effective across multi-turn conversations and evolving usage patterns.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Explainable Interaction
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           For children,
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3713043.3734471" target="_blank"&gt;&#xD;
      
          transparency is better understood when it is experienced through interaction
         &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
          .
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Systems should help children understand what the AI is doing, what it cannot do, and why it responds in certain ways.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Furthermore, explainable interaction relies less on technical explanations and more on behavioural cues: signalling uncertainty, correcting mistakes, refusing requests appropriately and maintaining clear role boundaries. These cues help children build accurate mental models of AI capabilities and limits, calibrating trust and reducing anthropomorphism and over-reliance. While brief onboarding disclosures (e.g., “this AI is not a person; it does not have feelings”) can support understanding, explainability must be continuous and contextual, reinforced through behaviour over time rather than treated as a one-off disclosure.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          Governance and Stewardship 
         &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Because children cannot meaningfully consent to complex AI systems or reliably self-regulate, AI systems that interact with or materially affect children should be treated as developmentally
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/769494/EPRS_ATA%282025%29769494_EN.pdf" target="_blank"&gt;&#xD;
      
          high-risk by default.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Governance and stewardship include child-specific risk assessment before launch, upstream data and supplier due diligence, and monitoring after deployment. 
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Practical governance also requires fit-for-purpose tools. Parental and educator controls should be designed for conversational and generative systems, not retrofitted from screen-time or app-blocking models. Oversight should be age-sensitive:
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://dl.acm.org/doi/10.1145/3688828.3699656" target="_blank"&gt;&#xD;
      
          younger children may require greater visibility, while older children and adolescents benefit from summaries, alerts, and trend indicators that respect emerging autonomy.
         &#xD;
    &lt;/a&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Accountability does not end at launch. Teams should document design goals and trade-offs (e.g., engagement versus wellbeing), enable auditability of system behaviour and safeguards, and continuously monitor for emerging harms. Participatory review with children, caregivers, educators and child-development experts helps ensure governance remains grounded in lived experience and rights-based standards.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
           Child-facing AI demands a higher standard, one that integrates developmental research, ethical responsibility, human factors and governance into product design. Using a framework such as APEG to structure interaction design, safety, and accountability decisions helps teams move toward child-centred AI proactively.
          &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The goal is not to slow innovation, but to ensure that the products reaching children are designed to protect wellbeing and rights from day one, and remain safe as they scale and evolve.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
          How We Work With Teams
         &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Many of the most consequential risks and trade-offs in AI systems emerge through real-world use, not at the point of launch. Issues such as over-trust, misuse, emotional reliance, or uneven impacts across users often become visible as systems scale and interact with people in different contexts.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          We work with teams to examine these interaction-level dynamics in practice. Our focus is on how design choices shape behaviour over time, where risk accumulates, and which safeguards and governance mechanisms are effective as systems evolve.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          This work translates behavioural evidence into concrete interaction patterns, system boundaries, and oversight decisions, supporting alignment with regulatory expectations without reducing safety to a one-off compliance exercise.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          If you are building or deploying AI systems and want a clearer view of their real-world human and behavioural impacts, get in touch.
         &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
          References
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Cross, R. J., &amp;amp; Erlich, R. (2025, December). AI comes to playtime: Artificial companions, real risks. U.S. PIRG Education Fund.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          De Freitas, J., Oğuz-Uğuralp, Z., &amp;amp; Uğuralp, A. K. (2025). Emotional manipulation by AI companions (Working Paper No. 26-005). Harvard Business School. https://doi.org/10.48550/arXiv.2508.19258
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          5Rights Foundation. (2021). Children’s rights and AI oversight: 5Rights position on the EU’s Artificial Intelligence Act. 5Rights Foundation. https://5rightsfoundation.com/wp-content/uploads/2024/10/Dec-21_AI-and-Childrens-Rights-5Rights-position-on-EU-AI-Act.pdf
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Honauer, M., &amp;amp; Frauenberger, C. (2024). Exploring Child-AI Entanglements. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (IDC '24). Association for Computing Machinery, New York, NY, USA, 1029–1031. https://doi.org/10.1145/3628516.3661155
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ireland, N. (2025). Be wary of AI-powered toys during holiday shopping, experts warn. Global News. https://www.globalnews.ca/news/11544191/ai-powered-toys-holiday-shopping
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Jiao, J., Afroogh, S., Chen, K., Murali, A., Atkinson, D., &amp;amp; Dhurandhar, A. (2025). LLMs and childhood safety: Identifying risks and proposing a protection framework for safe child–LLM interaction. arXiv. https://doi.org/10.48550/arXiv.2502.11242
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Kim, P., Chin, J. H., Xie, Y., Brady, N., Yeh, T., &amp;amp; Yang, S. (2025). Young children’s anthropomorphism of an AI chatbot: Brain activation and the role of parent co-presence. arXiv. https://doi.org/10.48550/arXiv.2512.02179
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Kurian, N. (2025). Once upon an AI: Six scaffolds for child–AI interaction design, inspired by Disney. arXiv. https://doi.org/10.48550/arXiv.2504.08670
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          La Fors, K. (2024). Toward children-centric AI: A case for a growth model in children-AI interactions. AI &amp;amp; Society, 39, 1303–1315. https://doi.org/10.1007/s00146-022-01579-9
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          MIT Technology Review. (2025). AI toys are all the rage in China - and now they’re appearing on shelves in the U.S. too. MIT Technology Review. https://www.technologyreview.com/2025/10/07/1125191/ai-toys-in-china
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Negreiro, M., &amp;amp; Vilá, G. (2025). Children and generative AI. European Parliamentary Research Service (EPRS), European Parliament. PE 769.494
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Neugnot-Cerioli, M., &amp;amp; Muss Laurenty, O. (2024). The future of child development in the AI era: Cross-disciplinary perspectives between AI and child development experts. Everyone.AI. arXiv. https://arxiv.org/abs/2405.19275
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Pew Research Center. (2025). Teens, social media and AI chatbots 2025. Pew Research Center. https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Ragone, G., Bai, Z., Good, J., Guneysu, A., &amp;amp; Yadollahi, E. (2025). Child-centered interaction and trust in conversational AI. In Proceedings of the 24th Annual ACM Interaction Design and Children Conference (IDC '25) (pp. 1235–1238). Association for Computing Machinery. https://doi.org/10.1145/3713043.3734471
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          The Safe AI for Children Alliance. (2025). AI risks to children: A comprehensive guide for parents and educators. The Safe AI for Children Alliance. https://www.safeaiforchildren.org/ai-risks-to-children-full-guide
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          United Nations Children’s Fund (UNICEF). (2025). Guidance on AI and children (Version 3.0). UNICEF Innocenti – Global Office of Research and Foresight.
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          United Nations Children’s Fund (UNICEF). (2025). How AI can transform Africa’s learning crisis into a development opportunity. UNICEF. https://www.unicef.org/innocenti/stories/how-ai-can-transform-africas-learning-crisis-development-opportunity
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Utoyo, S., Ismaniar, I., Hazizah, N., Putri, E. A., &amp;amp; Sihombing, S. C. (2025). Overview of children's readiness in mathematics learning using AI. In Proceedings of the 7th International Conference on Early Childhood Education (ICECE) (pp. 177–182). SciTePress. https://doi.org/10.5220/0014069700004935
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Yadollahi, E., Ligthart, M. E. U., Sharma, K., &amp;amp; Rubegni, E. (2024). ExTra CTI: Explainable and transparent child–technology interaction. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (IDC 2024) (pp. 1016–1019). ACM. https://doi.org/10.1145/3628516.3661151
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Yu, Y. (2025). Safeguarding Children in Generative AI: Risk Frameworks and Parental Control Tools. In Companion Proceedings of the 2025 ACM International Conference on Supporting Group Work (GROUP '25). Association for Computing Machinery, New York, NY, USA, 121–123. https://doi.org/10.1145/3688828.3699656
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Author
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
          Sara Portell
          &#xD;
      &lt;br/&gt;&#xD;
      
          Behavioural Scientist &amp;amp; Responsible AI Advisor
          &#xD;
      &lt;br/&gt;&#xD;
      
          Founder, HCRAI
         &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/resized_650x433.png" length="136320" type="image/png" />
      <pubDate>Sun, 04 Jan 2026 17:41:40 GMT</pubDate>
      <guid>https://www.hcrai.com/building-ai-responsibly-for-children-a-practical-framework</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/ChatGPT+Image+Jan+4-+2026-+05_17_23+PM.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/9a76226e/dms3rep/multi/resized_650x433.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
  </channel>
</rss>
