Behavioural Risk Workshop for AI Products

2-DAY WORKSHOP

A practical 2-day workshop to help teams identify behavioural risks, strengthen trustworthy use, and operationalise governance for AI features in real-world contexts.


Limited cohort size for high-touch feedback on industry-specific, real-world use cases.

Why This Programme

Built on Evidence

Practical and Immediately Usable

Apply the method to realistic AI features and leave with governance artefacts, decision tools, and outputs you can use straight away.


Grounded in Behavioural Science

Built at the intersection of behavioural science, human-AI interaction, and AI ethics and governance, with a clear focus on product decisions in practice.


Built for Cross-Functional Reality

Support collaboration across Product, UX, Ops, Legal, Risk, and Compliance with stakeholder-ready tools for ownership, escalation, and oversight.


Led by an Experienced Practitioner

Sara Portell brings 17+ years of experience across product strategy, UX, behavioural science, and AI governance in global technology environments, alongside compliance expertise.


Why AI Products Fail in Production

Many AI products can pass audits. Fewer remain safe, reliable, and trustworthy in real-world use.

Risk often emerges once systems interact with real users in context and over time. Common failure points include:

  • User calibration risk (over-trust, under-trust, system misuse, scope-boundary violations, erosion of user agency)
  • Fairness risk (bias and unfair outcomes that accumulate unnoticed across groups)
  • Performance drift risk (behavioural drift that goes unmonitored over time)
  • Recommendation safety risk (inappropriate or unsafe recommendations for the user's context and demographics, especially in high-stakes decisions and for vulnerable groups)
  • Output integrity risk (outputs that are wrong, unverifiable, or misleading)
  • Accountability risk (unclear ownership and escalation when incidents occur)

From Policy Awareness to Product-Ready Governance

What Most Training Covers

  • Regulations, standards, and model controls
  • Compliance checklists
  • Principles and ethical guidelines
  • Documentation for audit readiness

What This Workshop Includes

  • Requirement-to-interaction mapping for each AI feature or use case
  • Behavioural risk foresight across real user journeys and edge cases
  • Interaction-layer safeguards with mitigation and launch criteria
  • An operating governance model for ownership, decision-making, escalation, red-teaming, and monitoring

What You’ll Work Through in 2 Days

This is applied work on realistic scenarios. 

  • Set the rules: translate regulatory and ethics requirements into product- and experience-level non-negotiables
  • Identify behaviour and risk: surface trust and behavioural failure modes before launch
  • Design for trust and safe outcomes: detect bias and fairness risks in flows and decision points
  • Operationalise governance: define ownership, decision rights, oversight, escalation, testing, and monitoring
  • Prove readiness: stress-test one AI feature end-to-end, with decision rationale and residual risk

What You Will Leave With

A set of methods and portfolio-ready artefacts that can be used immediately in real product decisions:

  • A regulatory and ethics requirement map applied to real-world AI features, use cases, and user journeys
  • A repeatable behavioural risk framework, plus a behavioural risk map and a prioritised mitigation backlog
  • A bias, fairness, and inclusion risk register with defined metric thresholds
  • A governance operating model covering ownership, oversight, escalation, evaluations/red-teaming, and a monitoring plan
  • An audit-ready trail of decisions, rationale, evidence, and residual risk

Who This Is For (and Who It Isn't)

Best Fit

  • Product and UX professionals researching, designing and shipping AI-powered features.
  • Teams accountable for adoption, trust and safety outcomes.
  • Organisations operating in high-risk domains that need auditable decision rights and governance.



Not ideal if…

  • You only require policy-level guidance without applied design/delivery work.
  • You’re not currently involved in designing, developing, or scaling AI features.

Who You Are Learning From

Sara Portell

HCRAI Founder

  • 17+ years in product strategy, UX and research in global tech
  • MSc in Behavioural Science and PhD researcher in Psychology & AI (behaviour and cognition in AI systems)
  • AI ethics/compliance training + ISO/IEC 42001 certified auditor

Reach out to discuss workshop options.
