A practitioner roundtable on AI governance
A few weeks ago, Japmandeep (Sunny) posed a simple but important question to a small group of AI governance and ethics practitioners across the UK and Europe: "When AI systems start influencing real-world decisions, where does governance actually sit?"
What began as an idea quickly evolved into a rich, international roundtable, bringing together perspectives from policy, compliance, system design, behavioural science, and operational governance.
The central issue explored: the real risk does not begin when AI produces an output; it begins when that output starts shaping a real-world decision. That is the point where governance must move beyond static policy and into proactive operational execution.
Meet the Panel
The discussion brought together a diverse group of experts across Europe and the UK to dissect AI Ethics and Governance through multiple lenses:
Japmandeep (Sunny) Ahluwalia (moderator)
Founder of Luxyn Ethics and Governance (LEG) and a seasoned expert in commercial leadership, more recently focused on translating ethical and governance principles into operational "accountability triggers" once AI starts influencing real decisions.

Sílvia Rocha
A product veteran with 20+ years of experience building mission-critical products across Manufacturing, Healthcare, Retail, Gaming, and Enterprise Application Development. Now specializing in Ethical, Responsible AI and AI Governance.

Lisa Verhoeven
A specialist in the technical design and integration of compliant AI systems within the legal domain.

Sara Portell
Founder of HCRAI and a behavioural scientist with 17+ years of experience in product strategy/UX, research, and innovation at global tech companies across enterprise SaaS, eCommerce, travel, fintech, edtech, mental health/wellbeing, and HR/talent management.

Danielle Hopkins
An AI governance and digital rights specialist whose work sits at the intersection of technology, harm, and humanity. With 10+ years in Data and AI Governance across the public sector, civil society, health, social care, and global education bodies, she brings a rare blend of regulatory depth, lived-experience insight, and technical fluency.

The Shift from Advisory to Decision-Making:
A Product/Engineering Perspective
THE QUESTION
What actually changes when AI moves from advisory to determining real-world decisions?
Sílvia Rocha argued that as AI assumes autonomous or high-influence roles, governance must evolve from reactive auditing into a "proactive engineering discipline" centered on three pillars: liability, transparency and explainability, and robustness and safety. She emphasized that when AI moves from an advisory capacity to making business decisions, liability shifts from the professional to the product, requiring stakeholders to govern the AI's underlying logic rather than just its output.
Sílvia advocated for explainability at a level where any user can understand the rationale behind a decision. Her safety framework further mandates rigorous "red teaming" to test resilience, operational "sandboxing" to enforce strict limits (such as capping loan approvals), and "automated halts" that trigger immediate human oversight the moment a model drifts from its performance baseline.
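To make those guardrails concrete, here is a minimal sketch of what operational sandboxing and an automated halt could look like in practice. Every name, cap, and threshold below is an illustrative assumption for this article, not part of any framework discussed on the panel.

```python
# A minimal sketch of the "sandboxing" and "automated halt" ideas above.
# SANDBOX_LOAN_CAP, DRIFT_TOLERANCE, and the function names are all
# hypothetical stand-ins chosen for illustration.

SANDBOX_LOAN_CAP = 50_000  # hard operational limit the model may not exceed
DRIFT_TOLERANCE = 0.05     # allowed drop from the model's performance baseline


def apply_sandbox(approved_amount: float) -> float:
    """Enforce a hard cap on what the model can autonomously approve."""
    return min(approved_amount, SANDBOX_LOAN_CAP)


def check_for_halt(baseline_accuracy: float, live_accuracy: float) -> bool:
    """Return True when the model has drifted far enough to require a halt."""
    return (baseline_accuracy - live_accuracy) > DRIFT_TOLERANCE


def decide(model_output: dict, baseline_accuracy: float, live_accuracy: float) -> dict:
    """Route a model decision through the governance guardrails."""
    if check_for_halt(baseline_accuracy, live_accuracy):
        # Automated halt: the decision is withheld and escalated to a human.
        return {"status": "halted", "reason": "performance drift", "requires_human": True}
    capped = apply_sandbox(model_output["approved_amount"])
    return {"status": "approved", "amount": capped, "requires_human": False}


# Example: the model approves 80,000 while live accuracy is within tolerance,
# so the sandbox caps the amount rather than halting.
print(decide({"approved_amount": 80_000}, baseline_accuracy=0.92, live_accuracy=0.90))
# -> {'status': 'approved', 'amount': 50000, 'requires_human': False}
```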
The Operational Challenge of Compliance
THE QUESTION
Where do organisations struggle most when translating governance frameworks into real implementation?
Wouter Kleynen highlighted that the biggest hurdle is turning abstract governance into something an organization can actually operate and sustain.
He identified shadow AI and insufficient visibility into where AI is being developed or used as major pressure points. Because AI can "learn" and drift from its original goals, Kleynen stressed that governance cannot be a one-time setup.
He also pointed to a significant gap in AI literacy within organizations, noting that the EU AI Act requires all employees to understand how, and how not, to use these tools.
Building on a Technical Foundation
THE QUESTION
What are the main challenges in making AI systems controllable enough to support governance in practice?
Lisa Verhoeven cautioned against the "rush to adopt AI" driven by a fear of falling behind. She argued that AI is only as good as the underlying data and architecture, which are often fragmented, making effective implementation and oversight difficult.
Her advice to organizations is to "fix their data and systems" first and clearly define the problem before applying AI in a "risk-aware way".
From a technical standpoint, she noted that testing must be grounded in "risk levels on the business level," ensuring that AI behavior is provable and deterministic where required.
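As a rough illustration of that point, the sketch below ties a determinism check to a business-level risk tier. The tiers, the decision function, and the test are hypothetical stand-ins, assuming an organisation has already classified its AI-assisted decisions by risk.

```python
# A hedged sketch of risk-tiered, determinism-focused testing. RISK_TIER,
# credit_decision, and the test are illustrative assumptions, not a standard.

import hashlib

RISK_TIER = {"credit_decision": "high", "marketing_copy": "low"}


def credit_decision(applicant_id: str, income: float) -> str:
    """A deterministic decision rule: no randomness, no hidden state."""
    score = int(hashlib.sha256(f"{applicant_id}:{income}".encode()).hexdigest(), 16) % 100
    return "approve" if score >= 50 else "refer_to_human"


def test_high_risk_paths_are_deterministic():
    """High-risk outputs must be provable: identical inputs, identical outputs."""
    assert RISK_TIER["credit_decision"] == "high"
    first = credit_decision("applicant-42", 55_000.0)
    for _ in range(100):
        assert credit_decision("applicant-42", 55_000.0) == first


test_high_risk_paths_are_deterministic()
print("high-risk determinism check passed")
```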
The Human Layer:
Behavior Shaping and Post-Deployment Governance
THE QUESTION
From a behavioural perspective, where do you see the biggest gaps when people interact with AI systems influencing decisions?
Sara Portell emphasized that governance must extend beyond the system's design to include the "human layer".
She explained how AI "framing" shapes behaviour and decision-making in real-world usage; for instance, a clinician might treat an AI's suggestion as a "final answer" because it appears as a definitive prediction presented with high confidence.
These effects often become visible only in use, and only with real users, not synthetic testing scenarios.
And as models drift from their baseline over time, human behaviour evolves alongside them. Portell argued that governance must therefore continue post-deployment, monitoring how systems are used in practice, in real-world settings, to account for the evolving behavioural patterns and downstream human impact they create.
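As a rough sketch of what that post-deployment monitoring might involve, the example below tracks how often users accept an AI suggestion unchanged; a near-total acceptance rate can be an early behavioural signal of the automation bias Portell described. The threshold and window size are assumptions chosen for the example.

```python
# An illustrative sketch of post-deployment behavioural monitoring: watching
# how often users simply accept an AI suggestion versus override it. The
# threshold and window size below are assumptions, not a standard.

from collections import deque

ACCEPTANCE_ALERT_THRESHOLD = 0.95  # near-total acceptance may signal automation bias
WINDOW = 500                       # number of recent decisions to watch


class BehaviouralMonitor:
    """Watches a rolling window of human responses to AI suggestions."""

    def __init__(self) -> None:
        self.events: deque[bool] = deque(maxlen=WINDOW)

    def record(self, user_accepted_suggestion: bool) -> None:
        self.events.append(user_accepted_suggestion)

    def acceptance_rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 0.0

    def needs_review(self) -> bool:
        """Flag when users appear to stop exercising judgment over the AI."""
        return len(self.events) == WINDOW and self.acceptance_rate() > ACCEPTANCE_ALERT_THRESHOLD


monitor = BehaviouralMonitor()
for _ in range(WINDOW):
    monitor.record(True)  # every suggestion accepted unchanged
print(monitor.needs_review())  # True: escalate for a human-factors review
```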
The Ground Reality:
Tech Debt and Literacy
THE QUESTION
Organisations often feel they already "have governance." Where does this break down in practice?
Danielle Hopkins spoke to the practical decay of governance caused by "tech debt," short life cycles, and a general lack of technical awareness.
She noted that governance often breaks down when it is treated as a "top-down" policy rather than being broken down into "little bits" that integrate into daily workflows.
Danielle also warned about "procedural bias" and the impact of the underlying operating system on AI performance, suggesting that governance must be holistic rather than siloed.
Closing Thoughts
The discussion made one point especially clear: AI governance becomes critical the moment an output starts shaping a real-world action. That is where principles such as accountability, oversight, transparency, robustness, and human judgment must stop living in frameworks and start functioning in practice.
Across the panel, a consistent theme emerged: the challenge is not merely defining good governance, but embedding it into the technical systems, organisational processes, and human behaviours that determine how AI is used. Without that operational layer, governance remains superficial, no matter how well written the policy may be.
As Japmandeep Ahluwalia put it in closing the discussion: AI governance cannot remain a document on paper. It must become a living part of the system itself: visible in design choices, embedded in workflows, sustained across the lifecycle, and activated precisely where people, decisions, and accountability meet.
Watch the Full Roundtable
Written by Sílvia Rocha. Roundtable organised and moderated by Japmandeep (Sunny) Ahluwalia, with contributions from Wouter Kleynen, Lisa Verhoeven, Sara Portell, and Danielle Hopkins.