Build AI Products People Can Trust

2-DAY WORKSHOP

A practical workshop for Product and UX professionals to govern behavioural and AI risk at the interaction layer, before failures become business, legal and reputational problems.


Limited cohort size for high-touch feedback on real use cases. 

Why This Programme

Built on Evidence

Applied Method, Not Theory

Work on real AI features and leave with practical governance artefacts you can apply immediately.


Research + Industry Combined

Grounded in behavioural science, human-AI interaction, AI ethics, and risk and compliance, applied to product design.


Built for Cross-Functional Reality

Designed for Product, UX, Ops, Legal, and Compliance collaboration. You leave with stakeholder-ready tools to manage governance across teams.


Led by an Experienced Practitioner

Sara Portell, MSc in Behavioural Science and PhD researcher in Psychology & AI, with 17+ years across product strategy, UX, behavioural research and AI governance in global tech.


Why AI Products Fail in Production

Many AI products can pass audits.

Fewer stay safe and trustworthy in real-world use, where risk appears at the interaction layer and over time:

  • User calibration risk (over-trust, under-trust, or misuse of the system)
  • Fairness risk (bias and unfair outcomes accumulate unnoticed across groups)
  • Performance drift risk (behavioural drift goes unmonitored over time)
  • Recommendation safety risk (the system gives inappropriate or unsafe recommendations for the user’s context, especially in high-stakes settings)
  • Output integrity risk (outputs are wrong, unverifiable, or misleading)
  • Accountability risk (ownership and escalation are unclear when incidents occur)

From Policy Awareness to Product-Ready Governance

What Most Training Covers

  • Regulations, standards, model controls
  • Compliance checklists
  • Principles and ethical guidelines
  • Documentation for audit readiness

What This Workshop Includes

  • Requirement-to-interaction mapping for your AI feature or use case
  • Behavioural risk foresight in real user journeys and edge cases
  • Interaction-layer safeguards with mitigations and launch criteria
  • An operating governance model for ownership, decision-making, escalation, red-teaming and monitoring

What You’ll Work Through in 2 Days

This is applied work on realistic scenarios. 

Set the rules

Translate regulatory and ethics requirements into product- and experience-level non-negotiables.

Identify behaviour & risk

Identify trust and behavioural failure modes before launch.

Design for trust & safe outcomes

Detect bias and fairness risks in flows and decision points.

Operationalise governance

Define ownership, decision rights, oversight, escalation, testing and monitoring.

Prove readiness

Stress-test one AI feature end-to-end, with decision rationale and residual risk.

A Repeatable Behavioural Risk Loop for Any AI Feature

You will learn and apply a reusable loop for design reviews, launch readiness and post-launch governance.

01

Desired vs likely real behaviour

What behaviour do you want to enable?



What behaviour could actually happen?

02

Failure mode / harm

What failure or harm could follow?

03

Value at risk

What human value must be protected?

04

Legal + governance

What is the ethical, legal and governance floor?

05

Requirements

What operational guardrails are required before launch?


What interaction patterns will implement these guardrails in the user experience?

06

Evidence + monitoring plan

How do you know it works, and that it keeps working over time?


What is the minimum evidence to show each requirement is working?
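
For teams that track governance artefacts in code or tooling, here is a minimal illustrative sketch, in Python, of how one pass through the loop could be captured as a single auditable record. The class, field names and the loan-screening scenario are hypothetical, chosen for illustration only, not part of the workshop materials.

```python
from dataclasses import dataclass, field

# Illustrative only: hypothetical field names mapping the six loop steps
# to one auditable record for a single AI feature review.
@dataclass
class BehaviouralRiskEntry:
    feature: str
    desired_behaviour: str   # 01: what behaviour you want to enable
    likely_behaviour: str    # 01: what behaviour could actually happen
    failure_mode: str        # 02: failure or harm that could follow
    value_at_risk: str       # 03: human value that must be protected
    governance_floor: str    # 04: ethical / legal / governance floor
    requirements: list[str] = field(default_factory=list)          # 05: guardrails before launch
    interaction_patterns: list[str] = field(default_factory=list)  # 05: UX implementation
    evidence_plan: list[str] = field(default_factory=list)         # 06: minimum evidence per requirement

# Hypothetical example of one completed pass through the loop.
entry = BehaviouralRiskEntry(
    feature="AI-assisted loan pre-screening",
    desired_behaviour="User treats the score as one input among several",
    likely_behaviour="User over-trusts the score and skips manual review",
    failure_mode="Unfair rejections go unchallenged",
    value_at_risk="Fair access to credit",
    governance_floor="Internal fairness policy; applicable consumer-credit rules",
    requirements=["Human review required below a confidence threshold"],
    interaction_patterns=["Show uncertainty and rationale next to the score"],
    evidence_plan=["Override rate monitored per user group, reviewed monthly"],
)
print(entry.feature, "-", len(entry.requirements), "pre-launch requirement(s)")
```

Each field maps one-to-one to a loop step, which is what makes the record audit-ready: decisions, rationale and evidence live in one place.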

What You Will Leave With

A set of methods and portfolio-ready artefacts that can be used immediately in real product decisions:

  • Regulatory + ethics requirement map to apply to real-world AI features, use cases & user journeys
  • Repeatable behavioural risk framework, plus a behavioural risk map and a prioritised mitigation backlog
  • Bias, fairness and inclusion risk register, with defined metric thresholds
  • Governance operating model, covering ownership, oversight, escalation, evaluations/red-teaming and monitoring plan
  • Audit-ready trail (decisions, rationale, evidence, and residual risk)

Who Is This For (And Who Isn't)

Best Fit

  • Product and UX professionals researching, designing and shipping AI-powered features.
  • Teams accountable for adoption, trust and safety outcomes.
  • Organisations operating in high-risk areas that need auditable decision rights and governance.



Not ideal if…

  • You only need policy-level guidance, without applied design and delivery work.
  • You’re not currently involved in designing, developing or scaling AI features or AI usage.

Who You Are Learning From

Sara Portell

HCRAI Founder

  • 17+ years in product strategy, UX and research in global tech
  • MSc in Behavioural Science and PhD researcher in Psychology & AI (behaviour and cognition in AI systems)
  • AI ethics and compliance training; ISO/IEC 42001 certified auditor

Reach out to discuss workshop options.


Download the full workshop outline

Download the complete programme, module contents and participation options.