
Designing Trust Between AI, Business, and People

Session by:
Daria Tarawneh / Miro

Spectrum Tokyo Design Fest 2026 Day 2 (2026/02/15)


Spectrum Tokyo PRO

February 25, 2026

Transcript

  1. The design failures: No preview of scope (247 files would be affected). No risk assessment (PRODUCTION ENVIRONMENT). No pause/stop button during execution. No real-time activity log ("Currently deleting payment.js..."). No confidence indicator ("I'm 40% sure about this approach"). No "Wait, that seems extreme" warning. And no compliance with rules and regulations.
  2. Only 9.6% of Japanese AI users would feel comfortable handing over full control to AI for business, but 55.2% would be open to using AI with review or oversight mechanisms in place.
  3. "Show us exactly what it's doing" "Let us say 'no'

    to specific things" "Our rules, not yours" "Prove to us people are using this safely" The shopping list
  4. The Trust Design Framework: VISIBILITY (Can I see what's happening?), CONTROL (Can I stop it if it goes wrong?), PREDICTABILITY (Does it behave consistently?), ACCOUNTABILITY (Can I trace what happened?), RECOVERY (Can I fix it when it breaks?).
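
     A minimal sketch of how this framework could be held as a design-review checklist, in TypeScript. All names here (TrustDimension, TrustReview, unansweredDimensions) and the example values are assumptions for illustration, not anything from the talk or from Miro's code:

```typescript
// Hypothetical sketch: the five trust dimensions as a review checklist.

type TrustDimension =
  | "visibility"      // Can I see what's happening?
  | "control"         // Can I stop it if it goes wrong?
  | "predictability"  // Does it behave consistently?
  | "accountability"  // Can I trace what happened?
  | "recovery";       // Can I fix it when it breaks?

// One answer per dimension: how the feature satisfies that question,
// or null if it doesn't yet.
type TrustReview = Record<TrustDimension, string | null>;

// Return the dimensions a feature still fails, so a design review
// can block shipping until every question has an answer.
function unansweredDimensions(review: TrustReview): TrustDimension[] {
  return (Object.keys(review) as TrustDimension[]).filter(
    (dimension) => review[dimension] === null,
  );
}

// The bare "Enter task" box from the next slide would fail four of five:
const bareTaskBox: TrustReview = {
  visibility: null,     // no preview of what the AI would do
  control: null,        // no scope limits, no stop button
  predictability: null, // no hint of how "fix bugs" is interpreted
  accountability: null, // nothing is logged
  recovery: "undo history", // assume the host app at least has undo
};

console.log(unansweredDimensions(bareTaskBox));
// -> ["visibility", "control", "predictability", "accountability"]
```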
  5. Visibility: no preview of what AI would do. Control: no scope limitations. Predictability: no indication of how "fix bugs" would be interpreted. Enter task: _______________ [Submit]
  6. Visibility: full scope preview. Control: multiple decision points. Predictability: shows AI's interpretation.
     🤖 AI Task Analysis
     Your request: "Fix all files in production"
     Scope detected: 247 files in production environment
     ⚠️ Includes critical systems: • Payment processing • User authentication • Core API
     AI's proposed approach: "Delete files containing bugs"
     ⚠️ Risk Assessment: CRITICAL
     Confidence in this approach: 42% (LOW)
     Alternative approaches available: - Flag files for human review - Create fixes without deletion - Generate detailed bug report
     [ Cancel ] [ Choose different approach ] [ Set safety limits ] [ Proceed with extreme caution ]
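
     A preview screen like this implies a data structure behind it. Below is a hedged TypeScript sketch of what that might look like; the type and field names (TaskAnalysis, RiskLevel, allowOneClickProceed) and the 0.8 confidence floor are assumptions, while the example values are taken from the slide:

```typescript
// Hypothetical sketch of the data behind a task-analysis preview.

type RiskLevel = "low" | "medium" | "high" | "critical";

interface TaskAnalysis {
  request: string;             // the user's original instruction
  scope: {
    fileCount: number;
    environment: "development" | "staging" | "production";
    criticalSystems: string[]; // e.g. payment processing, authentication
  };
  proposedApproach: string;    // the AI's own interpretation, in plain words
  risk: RiskLevel;
  confidence: number;          // 0..1; surfaced to the user as a percentage
  alternatives: string[];      // safer approaches the user can pick instead
}

// Gate the dangerous default: below a confidence floor, or at critical
// risk, the UI should not offer one-click "Proceed" at all.
function allowOneClickProceed(analysis: TaskAnalysis): boolean {
  return analysis.risk !== "critical" && analysis.confidence >= 0.8;
}

const fixAllFiles: TaskAnalysis = {
  request: "Fix all files in production",
  scope: {
    fileCount: 247,
    environment: "production",
    criticalSystems: ["Payment processing", "User authentication", "Core API"],
  },
  proposedApproach: "Delete files containing bugs",
  risk: "critical",
  confidence: 0.42,
  alternatives: [
    "Flag files for human review",
    "Create fixes without deletion",
    "Generate detailed bug report",
  ],
};

console.log(allowOneClickProceed(fixAllFiles)); // false
```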
  7. Confidence Gradient: visual language that changes based on how confident the AI is. Why it matters: when all outputs look the same, users can't distinguish reliable from risky. Apply to: background tints, border styles (solid → dashed → dotted), typography weight, icon treatment (filled → outlined), button states.
     Source Chain: making AI's thought process visible and traceable. Why it matters: black-box decisions break trust; transparent reasoning builds it. Always show source count. Make sources clickable. Log the AI's reasoning steps. Present in digestible chunks.
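
     One way to implement a confidence gradient is as a small set of design tokens keyed by confidence band. The sketch below is illustrative: the band thresholds, token values, and names (ConfidenceBand, bandFor, confidenceStyles) are assumptions, while the solid → dashed → dotted and filled → outlined progressions come from the slide:

```typescript
// Hypothetical sketch of a confidence gradient as design tokens.

type ConfidenceBand = "high" | "medium" | "low";

interface ConfidenceStyle {
  backgroundTint: string;
  borderStyle: "solid" | "dashed" | "dotted";
  fontWeight: number;
  iconTreatment: "filled" | "outlined";
}

// Assumed thresholds; a real product would tune these.
function bandFor(confidence: number): ConfidenceBand {
  if (confidence >= 0.8) return "high";
  if (confidence >= 0.5) return "medium";
  return "low";
}

// One style per band, so every AI output in the product renders its
// reliability the same way: solid → dashed → dotted, filled → outlined.
const confidenceStyles: Record<ConfidenceBand, ConfidenceStyle> = {
  high:   { backgroundTint: "#f0faf0", borderStyle: "solid",  fontWeight: 600, iconTreatment: "filled" },
  medium: { backgroundTint: "#fffbe6", borderStyle: "dashed", fontWeight: 500, iconTreatment: "outlined" },
  low:    { backgroundTint: "#fff0f0", borderStyle: "dotted", fontWeight: 400, iconTreatment: "outlined" },
};

console.log(bandFor(0.42), confidenceStyles[bandFor(0.42)]);
// -> "low" { backgroundTint: "#fff0f0", borderStyle: "dotted", ... }
```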
  8. The Intervention Point: specific moments where humans can review, modify, or stop AI actions. Why it matters: control without intervention points is an illusion.
     AI suggests action: "Delete 8 duplicate files." Affected files: contacts.csv, data.xlsx... [ Review files first ] [ Proceed ]
     APPROVAL (High-risk): ⛔ AI needs approval. "Send email to 2,847 customers." This action cannot be undone. You must review before proceeding: ☐ I've reviewed the email content ☐ I've verified the recipient list ☐ I understand this cannot be undone. [ Cancel ] [ Approve and send ]
     EMERGENCY STOP (Always visible): 🚨 [ STOP ALL AI ACTIVITY ]
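
     These tiers suggest a simple policy function: given an action, decide which intervention the UI must show. The TypeScript below is a sketch under assumed rules (irreversible or wide-reaching actions require approval; a global flag models the always-visible stop button); it is not the speaker's implementation:

```typescript
// Hypothetical sketch of intervention tiers as a policy function.

type Intervention =
  | { kind: "suggest" }                       // show [Review] / [Proceed]
  | { kind: "approval"; checklist: string[] } // block until each box is ticked
  | { kind: "halted" };                       // emergency stop pressed

interface AiAction {
  description: string;
  reversible: boolean;
  affectedCount: number; // files, recipients, rows...
}

// Emergency stop is a global flag checked before anything else, so the
// [ STOP ALL AI ACTIVITY ] button always wins.
let emergencyStop = false;

function interventionFor(action: AiAction): Intervention {
  if (emergencyStop) return { kind: "halted" };
  // Assumed high-risk rule: irreversible, or touching more than 100 things.
  if (!action.reversible || action.affectedCount > 100) {
    return {
      kind: "approval",
      checklist: [
        "I've reviewed the content",
        "I've verified who is affected",
        "I understand this cannot be undone",
      ],
    };
  }
  return { kind: "suggest" };
}

console.log(
  interventionFor({
    description: "Send email to 2,847 customers",
    reversible: false,
    affectedCount: 2847,
  }).kind,
); // "approval"
```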
  9. The Audit Trail: a record of what AI did, why it did it, and what happened. Why it matters: accountability requires traceability. You can't fix what you can't see.
     Key elements: Who: user or AI. What: action taken. When: timestamp. Why: reasoning. How certain: confidence score. Result: what actually happened. Recovery: available actions.
     Scannability: visual hierarchy, not walls of text. Actionability: every entry has action buttons. Searchability: filter by action type, date, confidence.
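
     The key elements map naturally onto a record type, and "searchability" onto a filter over those records. A hedged TypeScript sketch, with all names (AuditEntry, filterEntries) and the sample entry invented for illustration:

```typescript
// Hypothetical sketch of one audit-trail entry, mapping the slide's
// who/what/when/why/result/recovery fields onto a type.

interface AuditEntry {
  actor: "user" | "ai";      // Who
  action: string;            // What: action taken
  timestamp: Date;           // When
  reasoning: string;         // Why: the AI's logged reasoning steps
  confidence?: number;       // How certain (0..1); absent for user actions
  result: string;            // What actually happened
  recoveryActions: string[]; // Recovery: undo, rollback, report...
}

// Searchability: filter by actor, confidence ceiling, or date range.
function filterEntries(
  log: AuditEntry[],
  opts: { actor?: "user" | "ai"; maxConfidence?: number; since?: Date },
): AuditEntry[] {
  return log.filter(
    (e) =>
      (opts.actor === undefined || e.actor === opts.actor) &&
      (opts.maxConfidence === undefined ||
        (e.confidence ?? 1) <= opts.maxConfidence) &&
      (opts.since === undefined || e.timestamp >= opts.since),
  );
}

const auditLog: AuditEntry[] = [
  {
    actor: "ai",
    action: "Deleted 8 duplicate files",
    timestamp: new Date("2026-02-15T10:30:00Z"),
    reasoning: "Files had identical checksums to newer copies",
    confidence: 0.91,
    result: "8 files moved to trash",
    recoveryActions: ["Restore from trash"],
  },
];

// Surface every low-confidence AI action for human review:
console.log(filterEntries(auditLog, { actor: "ai", maxConfidence: 0.5 }).length); // 0
```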