Slide 1

Slide 1 text

Speed Bumps, Not Roadblocks: Designing AI Trust

Slide 2

Slide 2 text

Hello! Daria Tarawneh, Head of Enterprise Design, Miro

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

Frictionless + high stakes = disaster.

Slide 9

Slide 9 text

The design failures:

- No preview of scope (247 files would be affected)
- No risk assessment (PRODUCTION ENVIRONMENT)
- No pause/stop button during execution
- No real-time activity log ("Currently deleting payment.js...")
- No confidence indicator ("I'm 40% sure about this approach")
- No "Wait, that seems extreme" warning
- No compliance with rules and regulations

Slide 10

Slide 10 text

What Marcus needed wasn't ease. What Marcus needed was speed bumps.

Slide 11

Slide 11 text

ACT 1: The reality of AI adoption

Slide 12

Slide 12 text

78% of companies in the US have deployed AI in their organisation.

Slide 13

Slide 13 text

What is the percentage of AI activation in companies in Japan?

- 30%
- 90%
- 50%
- 70%

Slide 14

Slide 14 text

What is the percentage of AI activation in companies in Japan?

- 60%
- 90%
- 50%
- 70%
- 30%

Slide 15

Slide 15 text

84% of companies worldwide say security and compliance are their #1 buying criterion.

Slide 16

Slide 16 text

Only 9.6% of Japanese AI users would feel comfortable handing over full control to AI for business, but 55.2% would be open to using AI with review or oversight mechanisms in place.

Slide 17

Slide 17 text

The shopping list:

- "Show us exactly what it's doing"
- "Let us say 'no' to specific things"
- "Our rules, not yours"
- "Prove to us people are using this safely"

Slide 18

Slide 18 text

How Miro does it

Slide 19

Slide 19 text

No content

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

No content

Slide 24

Slide 24 text

In general, companies are paying 15-30% premiums for security and control features.

Slide 25

Slide 25 text

ACT 2: The Trust Framework

Slide 26

Slide 26 text

The Trust Design Framework

- VISIBILITY: Can I see what's happening?
- CONTROL: Can I stop it if it goes wrong?
- PREDICTABILITY: Does it behave consistently?
- ACCOUNTABILITY: Can I trace what happened?
- RECOVERY: Can I fix it when it breaks?

Slide 27

Slide 27 text

- Visibility: No preview of what the AI would do
- Control: No scope limitations
- Predictability: No indication of how "fix bugs" would be interpreted

Enter task: _______________ [Submit]

Slide 28

Slide 28 text

- Visibility: Full scope preview
- Control: Multiple decision points
- Predictability: Shows the AI's interpretation

🤖 AI Task Analysis
Your request: "Fix all files in production"
Scope detected: 247 files in production environment
⚠️ Includes critical systems:
• Payment processing
• User authentication
• Core API
AI's proposed approach: "Delete files containing bugs"
⚠️ Risk Assessment: CRITICAL
Confidence in this approach: 42% (LOW)
Alternative approaches available:
- Flag files for human review
- Create fixes without deletion
- Generate detailed bug report
[ Cancel ] [ Choose different approach ] [ Set safety limits ] [ Proceed with extreme caution ]
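A scope preview like the one in this mockup can be driven by a simple risk check. The sketch below is illustrative only: the `TaskScope` shape, the thresholds, and the tier names are assumptions, not Miro's actual logic.

```typescript
// Hypothetical sketch: deriving a risk level for a proposed AI task.
type RiskLevel = "LOW" | "MEDIUM" | "CRITICAL";

interface TaskScope {
  fileCount: number;
  environment: "development" | "staging" | "production";
  touchesCriticalSystems: boolean; // e.g. payments, auth, core API
  destructive: boolean;            // e.g. the proposed approach deletes files
}

function assessRisk(scope: TaskScope): RiskLevel {
  // Destructive work against critical production systems is always critical.
  if (
    scope.environment === "production" &&
    (scope.destructive || scope.touchesCriticalSystems)
  ) {
    return "CRITICAL";
  }
  // Destructive actions or broad scope elsewhere still warrant caution.
  if (scope.destructive || scope.fileCount > 50) return "MEDIUM";
  return "LOW";
}
```

With Marcus's request (247 files, production, destructive), this check returns CRITICAL, which is what should gate the "Proceed with extreme caution" path.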

Slide 29

Slide 29 text

ACT 3: The Designer's Toolkit

Slide 30

Slide 30 text

Confidence Gradient
Visual language that changes based on how confident the AI is.
Why it matters: When all outputs look the same, users can't distinguish reliable from risky.
Apply to:
- Background tints
- Border styles (solid → dashed → dotted)
- Typography weight
- Icon treatment (filled → outlined)
- Button states

Source Chain
Making the AI's thought process visible and traceable.
Why it matters: Black-box decisions break trust. Transparent reasoning builds it.
- Always show source count
- Make sources clickable
- Log the AI's reasoning steps
- Present in digestible chunks
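The Confidence Gradient can be sketched as a pure mapping from a score to the visual treatments listed above. The tier cutoffs (0.8 and 0.5) and the treatment values are illustrative assumptions.

```typescript
// Hypothetical sketch: mapping an AI confidence score to visual treatment.
type ConfidenceTier = "high" | "medium" | "low";

interface VisualTreatment {
  borderStyle: "solid" | "dashed" | "dotted";
  fontWeight: number;                  // typography weight
  iconStyle: "filled" | "outlined";    // icon treatment
}

function tierFor(confidence: number): ConfidenceTier {
  // Cutoffs are assumed; tune them per product.
  if (confidence >= 0.8) return "high";
  if (confidence >= 0.5) return "medium";
  return "low";
}

function treatmentFor(confidence: number): VisualTreatment {
  switch (tierFor(confidence)) {
    case "high":
      return { borderStyle: "solid", fontWeight: 600, iconStyle: "filled" };
    case "medium":
      return { borderStyle: "dashed", fontWeight: 500, iconStyle: "outlined" };
    case "low":
      return { borderStyle: "dotted", fontWeight: 400, iconStyle: "outlined" };
  }
}
```

Keeping this mapping in one function is the point of the pattern: every surface (tints, borders, icons, buttons) reads confidence from the same source, so the gradient stays consistent across the product.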

Slide 31

Slide 31 text

The Intervention Point
Specific moments where humans can review, modify, or stop AI actions.
Why it matters: Control without intervention points is an illusion.

AI suggests action: "Delete 8 duplicate files"
Affected files: contacts.csv, data.xlsx...
[ Review files first ] [ Proceed ]

APPROVAL (High-risk)
⛔ AI needs approval: "Send email to 2,847 customers"
This action cannot be undone. You must review before proceeding:
☐ I've reviewed the email content
☐ I've verified the recipient list
☐ I understand this cannot be undone
[ Cancel ] [ Approve and send ]

EMERGENCY STOP (Always visible)
🚨 [STOP ALL AI ACTIVITY]
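The escalation from a soft review prompt to a hard approval gate can be sketched as a function of the action's properties. The `AIAction` fields and all thresholds here are hypothetical, chosen to reproduce the two examples above.

```typescript
// Hypothetical sketch: choosing an intervention level for a proposed AI action.
type InterventionLevel = "notify" | "review" | "approval";

interface AIAction {
  description: string;
  reversible: boolean;   // can the user undo it afterwards?
  affectedItems: number; // files, recipients, records...
  confidence: number;    // AI's confidence, 0..1
}

function interventionFor(action: AIAction): InterventionLevel {
  // Irreversible bulk actions always need explicit approval.
  if (!action.reversible && action.affectedItems > 100) return "approval";
  // Anything irreversible, low-confidence, or touching several items gets review.
  if (!action.reversible || action.confidence < 0.5 || action.affectedItems > 5) {
    return "review";
  }
  // Small, safe, high-confidence actions just notify the user.
  return "notify";
}
```

Under these assumed rules, "Send email to 2,847 customers" (irreversible, bulk) escalates to approval, while "Delete 8 duplicate files" (recoverable from trash) only asks for review. The emergency stop sits outside this logic entirely: it must be reachable regardless of level.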

Slide 32

Slide 32 text

The Audit Trail
A record of what the AI did, why it did it, and what happened.
Why it matters: Accountability requires traceability. You can't fix what you can't see.

Key elements:
- Who: User or AI
- What: Action taken
- When: Timestamp
- Why: Reasoning
- How certain: Confidence score
- Result: What actually happened
- Recovery: Available actions

Scannability: Visual hierarchy, not walls of text
Actionability: Every entry has action buttons
Searchability: Filter by action type, date, confidence
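The key elements above map directly onto a record shape. This is a minimal sketch under assumed field names, not a real Miro schema; the one-line formatter shows how the same record supports scannability.

```typescript
// Hypothetical sketch: one audit-trail entry capturing who/what/when/why.
interface AuditEntry {
  who: "user" | "ai";   // Who
  what: string;         // What: action taken
  when: string;         // When: ISO timestamp
  why: string;          // Why: the AI's logged reasoning
  confidence: number;   // How certain: 0..1
  result: string;       // Result: what actually happened
  recovery: string[];   // Recovery: available actions, e.g. ["undo", "rerun"]
}

// Render a scannable one-line summary for the trail view.
function formatEntry(e: AuditEntry): string {
  const pct = Math.round(e.confidence * 100);
  return `${e.when} [${e.who.toUpperCase()}] ${e.what} | ${pct}% confident | ${e.why}`;
}
```

Because every entry carries structured fields (actor, timestamp, confidence), the trail is filterable by action type, date, and confidence for free, and the `recovery` list is what drives the per-entry action buttons.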

Slide 33

Slide 33 text

We're not just "UX designers" anymore. We're trust architects.

Slide 34

Slide 34 text

Thank you Daria Tarawneh Connect with me