Bridging the Gap: When AI Output Becomes Real-World Action

Silvia Rocha • May 4, 2026


A practitioner roundtable on AI governance

A few weeks ago, Japmandeep (Sunny) posed a simple but important question to a small group of AI governance and ethics practitioners across the UK and Europe: “When AI systems start influencing real-world decisions, where does governance actually sit?”


What began as an idea quickly evolved into a rich, international roundtable, bringing together perspectives from policy, compliance, system design, behavioural science, and operational governance.


The central issue explored: the real risk does not begin when AI produces an output. It begins when that output starts shaping a real-world decision. That is the point where governance must move beyond static policy and into proactive operational execution.

Meet the Panel

The discussion brought together a diverse group of experts across Europe and the UK to dissect AI Ethics and Governance through multiple lenses:

Japmandeep (Sunny) Ahluwalia (moderator)

Founder of Luxyn Ethics and Governance (LEG) and a seasoned expert in commercial leadership, more recently focused on translating ethical and governance principles into operational "accountability triggers" that fire once AI starts influencing real decisions.

Sílvia Rocha

A product veteran with 20+ years of experience building mission-critical products across Manufacturing, Healthcare, Retail, Gaming, and Enterprise Application Development. Now specializing in Ethical, Responsible AI and AI Governance.

Wouter Kleynen

Co-founder of LexFriend, bringing a mathematical and engineering background to the practical application of AI Governance and EU AI Act compliance.

Lisa Verhoeven

A specialist in the technical design and integration of compliant AI systems within the legal domain.

Sara Portell

Founder of HCRAI and a behavioural scientist with 17+ years of experience in product strategy/UX, research, and innovation at global tech companies across enterprise SaaS, eCommerce, travel, fintech, edtech, mental health/wellbeing, and HR/talent management.

Danielle Hopkins

An AI governance and digital rights specialist whose work sits at the intersection of technology, harm, and humanity. With 10+ years in Data and AI Governance across the public sector, civil society, health, social care, and global education bodies, she brings a rare blend of regulatory depth, lived‑experience insight, and technical fluency.

The Shift from Advisory to Decision-Making:

A Product/Engineering Perspective

THE QUESTION

What actually changes when AI moves from advisory to determining real-world decisions?

Sílvia Rocha argued that as AI assumes autonomous or high-influence roles, governance must evolve from reactive auditing into a "proactive engineering discipline" centered on three pillars: liability, transparency and explainability, and robustness and safety. She emphasized that when AI moves from an advisory capacity to making business decisions, liability shifts from the professional to the product, requiring stakeholders to govern the AI’s underlying logic rather than just its output.


Sílvia advocated for explainability at a level where any user can understand the rationale behind a decision. Her safety framework further mandates rigorous "red teaming" to test resilience, operational "sandboxing" to enforce strict limits (such as capping loan approvals), and "automated halts" that trigger immediate human oversight the moment a model drifts from its performance baseline.
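To make the idea concrete, a minimal sketch of what a "sandbox plus automated halt" guardrail might look like in code. This is illustrative only, not a framework discussed at the roundtable: the metric, thresholds, and the loan-cap example are assumptions chosen to mirror the examples above.

```python
# Hypothetical guardrail sketch: stop automated decisions when either
# (a) a monitored metric drifts beyond tolerance from its approved baseline,
# or (b) the decision falls outside a hard operational "sandbox" limit.
# All names and numbers below are illustrative assumptions.

BASELINE_ACCURACY = 0.92   # performance approved at deployment time
DRIFT_TOLERANCE = 0.05     # maximum acceptable drop before an automated halt
MAX_LOAN_AMOUNT = 50_000   # sandbox cap the model may never exceed on its own

def within_sandbox(loan_amount: float) -> bool:
    """Operational sandboxing: cap the size of any automated approval."""
    return loan_amount <= MAX_LOAN_AMOUNT

def drift_halt(current_accuracy: float) -> bool:
    """Automated halt: True means pause automation and escalate to a human."""
    return (BASELINE_ACCURACY - current_accuracy) > DRIFT_TOLERANCE

def decide(loan_amount: float, current_accuracy: float) -> str:
    # The guardrail output is an oversight routing decision, not a model output.
    if drift_halt(current_accuracy) or not within_sandbox(loan_amount):
        return "escalate_to_human"
    return "auto_decision_allowed"
```

In use, a request above the cap or a drifted model both route to a person: `decide(60_000, 0.92)` and `decide(10_000, 0.80)` return `"escalate_to_human"`, while `decide(10_000, 0.91)` stays automated.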

The Operational Challenge of Compliance

THE QUESTION

Where do organisations struggle most when translating governance frameworks into real implementation?

Wouter Kleynen highlighted that the biggest hurdle is turning abstract governance into something an organization can actually operate and sustain.


He identified shadow AI and insufficient visibility into where AI is being developed or used as major pressure points. Because AI can "learn" and drift from its original goals, Kleynen stressed that governance cannot be a one-time setup.


He also pointed to a significant gap in AI literacy within organizations, noting that the EU AI Act necessitates that all employees understand how to, and how not to, use these tools.

Building on a Technical Foundation

THE QUESTION

What are the main challenges in making AI systems controllable enough to support governance in practice?

Lisa Verhoeven cautioned against the "rush to adopt AI" driven by a fear of falling behind. She argued that AI is only as good as the underlying data and architecture, which are often fragmented, making effective implementation and oversight difficult.


Her advice to organizations is to "fix their data and systems" first and clearly define the problem before applying AI in a "risk-aware way".


From a technical standpoint, she noted that testing must be grounded in "risk levels on the business level," ensuring that AI behavior is provable and deterministic where required.
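One way to read "testing grounded in risk levels" is that the higher the business risk, the more repeated runs a behaviour must survive unchanged. The sketch below is a hedged illustration of that idea; the tier names, run counts, and the rule-based classifier are all assumptions, not Verhoeven's methodology.

```python
# Illustrative sketch: tie test strictness to a business-level risk tier and
# require deterministic behaviour (identical output for identical input) at
# the highest tier. Tier names and run counts are assumptions.

RISK_TIERS = {"low": 1, "medium": 10, "high": 100}  # repeated runs per case

def is_deterministic(fn, test_input, runs: int) -> bool:
    """Run fn repeatedly on the same input; True if every output matches."""
    outputs = {fn(test_input) for _ in range(runs)}
    return len(outputs) == 1

def classify_contract_clause(text: str) -> str:
    # Stand-in for a model call. A rule-based path like this is trivially
    # deterministic, which is the property a high-risk flow may need to prove.
    return "high_risk" if "unlimited liability" in text else "standard"

# A high-risk flow gets the strictest check.
runs = RISK_TIERS["high"]
assert is_deterministic(classify_contract_clause, "unlimited liability clause", runs)
```

For a sampling-based model, the same harness would expose non-determinism immediately, forcing the team to either pin the behaviour (fixed seeds, temperature zero, or a rule-based fallback) or downgrade what the flow is allowed to decide.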

The Human Layer:

Behavior Shaping and Post-Deployment Governance

THE QUESTION

From a behavioural perspective, where do you see the biggest gaps when people interact with AI systems influencing decisions?

Sara Portell emphasized that governance must extend beyond the system's design to include the "human layer".


She explained how AI "framing" shapes behaviour and decision-making in real-world usage; for instance, a clinician might treat an AI's suggestion as a "final answer" because it appears as a definitive prediction presented with high confidence.


These effects often become visible only in use, and only with real users, not synthetic testing scenarios.


And as models drift from their baseline over time, human behaviour evolves alongside them. Portell argued that governance must therefore continue post-deployment, monitoring how systems are used in practice, in real-world settings, to account for the evolving behavioural patterns and downstream human impact they create.
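One concrete signal of the "final answer" effect is how often users accept AI suggestions without any modification. The sketch below shows one hypothetical way such post-deployment monitoring could work; the threshold, event format, and clinician example are illustrative assumptions, not a tool described by the panel.

```python
# Hypothetical post-deployment monitor: track how often users accept AI
# suggestions unchanged. A very high acceptance rate can signal over-reliance
# on the system's framing and warrant a human-factors review.
# The 95% threshold and the event schema are illustrative assumptions.

OVER_RELIANCE_THRESHOLD = 0.95  # flag if >95% of suggestions pass unedited

def acceptance_rate(events: list[dict]) -> float:
    """Share of logged interactions where the AI suggestion was kept as-is."""
    if not events:
        return 0.0
    accepted = sum(1 for e in events if e["action"] == "accepted_unchanged")
    return accepted / len(events)

def needs_behavioural_review(events: list[dict]) -> bool:
    return acceptance_rate(events) > OVER_RELIANCE_THRESHOLD

log = [
    {"user": "clinician_1", "action": "accepted_unchanged"},
    {"user": "clinician_2", "action": "accepted_unchanged"},
    {"user": "clinician_3", "action": "edited"},
]
# 2 of 3 suggestions accepted unchanged: below the threshold, no review yet.
```

The point is not the specific number but that the metric only exists post-deployment, with real users: no synthetic test scenario would surface it.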

The Ground Reality:

Tech Debt and Literacy

THE QUESTION

Organisations often feel they already "have governance." Where does this break down in practice?

Danielle Hopkins spoke to the practical decay of governance caused by "tech debt," short life cycles, and a general lack of technical awareness.


She noted that governance often breaks down when it is treated as a "top-down" policy rather than being broken down into "little bits" that integrate into daily workflows.


Danni also warned about "procedural bias" and the impact of the underlying operating system on AI performance, suggesting that governance must be holistic rather than siloed.

AI governance cannot remain a document on paper.

It must become a living part of the system itself.

Closing Thoughts

The discussion made one point especially clear: AI governance becomes essential the moment an output starts shaping a real-world action. That is where principles such as accountability, oversight, transparency, robustness, and human judgment must stop living in frameworks and start functioning in practice.


Across the panel, a consistent theme emerged: the challenge is not merely defining good governance, but embedding it into the technical systems, organisational processes, and human behaviours that determine how AI is used. Without that operational layer, governance remains superficial, no matter how well written the policy may be.


As Japmandeep Ahluwalia observed in closing the discussion, AI governance cannot remain a document on paper. It must become a living part of the system itself: visible in design choices, embedded in workflows, sustained across the lifecycle, and activated precisely where people, decisions, and accountability meet.

Watch the Full Roundtable

Written by Sílvia Rocha. Roundtable organised and moderated by Japmandeep (Sunny) Ahluwalia, with contributions from Wouter Kleynen, Lisa Verhoeven, Sara Portell, and Danielle Hopkins.

