
Human-AI Interaction - Lecture 11 - Next Generation User Interfaces (4018166FNR)

This lecture forms part of a course on Next Generation User Interfaces given at the Vrije Universiteit Brussel.

Beat Signer

May 06, 2024

Transcript

  1. Next Generation User Interfaces: Human-AI Interaction
     Prof. Beat Signer, Department of Computer Science, Vrije Universiteit Brussel, beatsigner.com
  2. Human-AI Interaction
     ▪ New technologies and applications use AI to augment and enhance human capabilities
       ▪ conversational agents
       ▪ recommender systems
       ▪ augmented reality
       ▪ virtual reality
       ▪ social robots
     ▪ AI is fundamentally changing how we interact with computing systems
       ▪ e.g. consistency: consistent interfaces and predictable behaviour save users time and reduce errors, but AI systems are probabilistic and can change over time
  3. Challenges
     ▪ Transparency
     ▪ Explainability (intelligibility)
     ▪ Responsiveness
     ▪ Adaptivity
     ▪ Privacy and security
     ▪ Data quality
     ▪ Bias and ethics
     ▪ AI systems that are fair, accountable and respectful
       ▪ no discrimination, harm or deception of users
     ▪ …
  4. Guidelines for Human-AI Interaction
     https://www.microsoft.com/en-us/haxtoolkit/
  5. G1: Make clear what the system can do
     ▪ Unclear user expectations about supported tasks or domains can lead to disappointment, product abandonment or even harms
       ▪ avoid over-inflated user expectations
     ▪ Possible solutions
       ▪ provide a brief overview of system capabilities or a specific feature
       ▪ provide explanations (user can gain insights)
       ▪ expose capabilities through system controls
       ▪ show possible system inputs
       ▪ show possible system outputs
  6. G2: Make clear how well the system can do what it can do
     ▪ Helps users understand how often the AI system might make mistakes
       ▪ users often over- or underestimate how many mistakes an AI system might make
     ▪ Possible solutions (see the sketch below)
       ▪ communicate that the system is probabilistic and might make mistakes
       ▪ report system performance information
       ▪ alert users about known or anticipated issues with system performance
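A minimal Python sketch of how a system might pair each suggestion with hedged performance information, so users can calibrate their trust. The function name, the confidence value and the 87% validation accuracy are illustrative assumptions, not figures from the guidelines.

```python
# Minimal sketch: attach hedged performance information to every prediction (G2).
# Assumes some upstream classifier supplies a label and a confidence score.

def describe_prediction(label: str, confidence: float,
                        validation_accuracy: float = 0.87) -> str:
    """Return the prediction together with plain-language performance notes."""
    return (f"Suggested label: {label} "
            f"(model confidence {confidence:.0%}; "
            f"about {validation_accuracy:.0%} of suggestions were correct in testing). "
            "Suggestions may be wrong; please review before accepting.")

print(describe_prediction("Invoice", 0.62))
```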
  7. G3: Time services based on context
     ▪ Time when to act or interrupt based on the user's current task and environment
       ▪ capture relevant input about a user's surrounding context and infer appropriate times to act
  8. G4: Show contextually relevant information
     ▪ Show information relevant to the user's current task and environment
       ▪ e.g. an application recommending restaurants might consider a user's current context (e.g. location)
       ▪ consider trade-offs between user benefits and privacy
  9. G5: Match relevant social norms
     ▪ Ensure that the experience is delivered in a way that users would expect, given their social and cultural context
       ▪ e.g. an informal tone might be perceived as friendly in some countries but impolite in more formal cultures
     ▪ Possible solutions
       ▪ increase diversity in development teams and study participants
 10. G6: Mitigate social biases
     ▪ Ensure that an AI system's language and behaviour do not reinforce undesirable and unfair stereotypes and biases
       ▪ societal biases might result from the training data, how models are trained and tested, and assumptions about the users who will interact with or be impacted by the AI
     ▪ Possible solutions (a hedged example follows below)
       ▪ use of responsible AI toolkits
         - Error Analysis
         - Fairlearn
         - …
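As one concrete illustration of the toolkits named above, the following sketch uses Fairlearn's MetricFrame to break model accuracy down per group of a sensitive feature, which can reveal whether one group is served worse. The toy labels, predictions and group assignments are invented for the example; it assumes `fairlearn` and `scikit-learn` are installed.

```python
# Hedged sketch of a disaggregated evaluation with the Fairlearn toolkit.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # ground-truth labels (toy data)
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]                  # model predictions (toy data)
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical sensitive feature

# Accuracy broken down per group instead of one aggregate number.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)       # per-group accuracy (here: A = 1.0, B = 0.5)
print(mf.difference())   # largest accuracy gap between groups (here: 0.5)
```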
 11. G7: Support efficient invocation
     ▪ Make it easy to explicitly invoke or request an AI system's services when needed
       ▪ make it easy to manually invoke a service if the AI system does not activate when needed
         - AI-powered writing assistant
         - design assistant in PowerPoint
         - …
 12. G7: Support efficient invocation …
 13. G8: Support efficient dismissal
     ▪ Make it easy to dismiss or ignore undesired AI system services
       ▪ ensure that users can recover if an AI system activates when not needed or expected
         - AI-powered voice assistant
         - ads that should not be shown
         - …
 14. G9: Support efficient correction
     ▪ Make it easy to edit, refine or recover when an AI system is wrong
       ▪ an AI system might be partially correct and users might achieve their goal by editing the output
       ▪ e.g. manual edit of a route recommended by a navigation app
     ▪ Possible solutions (see the sketch below)
       ▪ allow users to revert to a previous state or undo the AI system's actions
       ▪ enable users to alter the AI system's behaviour by editing, correcting or refining its output, making clear that their corrections will be used as feedback for its learning over time
         - e.g. SwiftKey AI keyboard
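A minimal sketch of the two solutions above: keep AI edits reversible via an undo history and log manual corrections as feedback signals. The class and method names are illustrative, not part of the HAX Toolkit.

```python
# Sketch of G9: reversible AI suggestions plus corrections kept as feedback.

class CorrectableSuggestion:
    def __init__(self, original: str):
        self.history = [original]   # previous states, enabling undo
        self.feedback_log = []      # (ai_output, user_correction) pairs for learning

    def apply_ai_suggestion(self, suggestion: str):
        self.history.append(suggestion)

    def user_correction(self, corrected: str):
        """User edits the (possibly partially correct) AI output."""
        self.feedback_log.append((self.history[-1], corrected))
        self.history.append(corrected)

    def undo(self) -> str:
        """Revert to the previous state, as the guideline recommends."""
        if len(self.history) > 1:
            self.history.pop()
        return self.history[-1]

doc = CorrectableSuggestion("Meet at 9am")
doc.apply_ai_suggestion("Meet at 9 a.m. in room B")
doc.user_correction("Meet at 9 a.m. in room C")   # logged as feedback
print(doc.undo())                                  # -> "Meet at 9 a.m. in room B"
```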
 15. G10: Scope services when in doubt
     ▪ Engage in disambiguation or degrade the AI system's service when uncertain about a user's goals
       ▪ build the AI such that it can compute its own uncertainty and use this information to degrade or scope its services when in doubt
     ▪ Possible solutions (a sketch follows below)
       ▪ elicit clarification from the user (human-in-the-loop) before taking action, to resolve the system's uncertainty
       ▪ avoid the cold start problem by eliciting user preferences
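The following sketch shows one way to scope services by model uncertainty: act when confident, ask a clarifying question when uncertain, and degrade gracefully when very unsure. The thresholds and the intent label are illustrative assumptions.

```python
# Sketch of G10: scope or degrade the service based on the model's own uncertainty.

def dispatch(intent: str, confidence: float) -> str:
    if confidence >= 0.9:
        return f"Executing '{intent}'."                    # confident: act directly
    if confidence >= 0.5:
        return f"Did you mean '{intent}'? (yes/no)"        # uncertain: human-in-the-loop
    return "Sorry, I am not sure what you want; could you rephrase?"  # degrade

print(dispatch("set alarm for 7am", 0.95))
print(dispatch("set alarm for 7am", 0.60))
print(dispatch("set alarm for 7am", 0.20))
```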
 16. G11: Make clear why the system did what it did
     ▪ Enable the user to access an explanation of why the AI system behaved as it did (intelligibility)
       ▪ explanations increase user trust
       ▪ explanations can be provided on a global level (entire system) or locally on the level of individual outputs
     ▪ Possible solutions (see the "what if" sketch below)
       ▪ provide global explanations of how the AI system makes decisions in general
       ▪ provide local explanations for specific actions or decisions of the AI system
       ▪ explain how user behaviour is mapped to system decisions
       ▪ provide "what if" explanations enabling a user to simulate and experiment with alternative input values
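A minimal sketch of a local "what if" explanation: perturb one input value and show the user how the decision would change. The scoring rule is a made-up stand-in for a real model, and all numbers are illustrative.

```python
# Sketch of G11: a "what if" explanation obtained by re-running the model
# with an alternative input value.

def loan_score(income: float, debt: float) -> float:
    # Toy, transparent scoring rule standing in for a real model.
    return 0.6 * income / 50_000 - 0.4 * debt / 10_000

def what_if(income: float, debt: float, new_income: float) -> str:
    before = loan_score(income, debt)
    after = loan_score(new_income, debt)
    return (f"Current score: {before:.2f}. "
            f"If your income were {new_income}, the score would be {after:.2f}.")

print(what_if(income=40_000, debt=5_000, new_income=55_000))
```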
 17. G12: Remember recent interactions
     ▪ Maintain short-term memory and allow users to make efficient references to that memory
       ▪ e.g. in a conversation with an AI-powered voice assistant, the user should be able to refer to previous parts (context) of the conversation, such as "call him back" if they were previously talking about a specific person (see the sketch below)
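A deliberately naive sketch of such short-term memory: the assistant remembers the last mentioned person so that "call him back" can be resolved. Real systems would extract entities from the utterance and handle pronouns far more carefully; here the caller passes the entity in to keep the example short.

```python
# Sketch of G12: short-term conversational memory for pronoun references.

class ConversationMemory:
    def __init__(self):
        self.last_person = None

    def observe(self, utterance, mentioned_person=None):
        # Assumption: an upstream component has already extracted the entity.
        if mentioned_person:
            self.last_person = mentioned_person

    def resolve(self, command):
        # Naive pronoun resolution against the short-term memory.
        for pronoun in ("him", "her", "them"):
            if pronoun in command.split():
                if self.last_person is None:
                    return "Who do you mean?"   # no context: ask, don't guess
                return command.replace(pronoun, self.last_person)
        return command

memory = ConversationMemory()
memory.observe("I missed a call from Alex", mentioned_person="Alex")
print(memory.resolve("call him back"))          # -> "call Alex back"
```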
 18. G13: Learn from user behaviour
     ▪ Personalise the user's experience by learning from their actions over time (see the sketch below)
       ▪ use signals from users to help the AI system learn, improve or personalise its service over time
       ▪ e.g. an AI-powered writing assistant learning about a user's preferred writing style
       ▪ personalised recommendations on shopping portals based on previous purchases
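A tiny sketch of the shopping-portal example: counting past purchases per category and using the counts to rank recommendations. The data and the item encoding are invented for illustration; real personalisation would use far richer signals.

```python
# Sketch of G13: personalise a ranking from implicit purchase signals.
from collections import Counter

purchases = ["books", "books", "coffee", "books", "electronics"]
preferences = Counter(purchases)                 # learned signal from behaviour

candidates = ["coffee grinder:coffee", "novel:books", "headphones:electronics"]

def score(item: str) -> int:
    category = item.split(":")[1]
    return preferences[category]                 # more past purchases -> higher rank

print(sorted(candidates, key=score, reverse=True))   # books-related item ranked first
```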
 19. G14: Update and adapt cautiously
     ▪ Limit disruptive changes when updating and adapting the AI system's behaviour
       ▪ consider the scale and rate of changes
       ▪ understand how changes might disrupt or impede users
       ▪ when updating the model, avoid introducing new errors for tasks where the AI system was previously performing well
     ▪ Possible solutions (a backward-compatibility sketch follows below)
       ▪ the system makes a controlled, deliberate and comprehensive update in response to user behaviour or other additional data
       ▪ the system makes an immediate but local update, maintaining the previous state to a large extent
         - only possible if the model is able to accept additional information
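A sketch of a backward-compatibility check before rolling out a model update: what fraction of the examples the old model got right does the new model still get right? The metric name and all data are stand-ins chosen for illustration.

```python
# Sketch of G14: detect regressions an update introduces on previously
# well-handled examples before shipping it.

def backward_trust_compatibility(y_true, old_pred, new_pred) -> float:
    old_correct = [i for i, (t, o) in enumerate(zip(y_true, old_pred)) if t == o]
    if not old_correct:
        return 1.0
    kept = sum(1 for i in old_correct if new_pred[i] == y_true[i])
    return kept / len(old_correct)

y_true   = [1, 0, 1, 1, 0]
old_pred = [1, 0, 1, 0, 0]       # old model correct on indices 0, 1, 2, 4
new_pred = [1, 1, 1, 1, 0]       # new model regresses on index 1

print(backward_trust_compatibility(y_true, old_pred, new_pred))  # 0.75: new errors introduced
```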
 20. G15: Encourage granular feedback
     ▪ Enable the user to provide feedback indicating their preferences during regular interaction with the AI system
       ▪ enable users to provide explicit feedback on the AI system's output and behaviour to steer how the AI evolves
         - e.g. via rating of the output etc.
       ▪ feedback can also help in monitoring that the AI system works as intended
     ▪ Possible solutions (a sketch follows below)
       ▪ user-initiated explicit feedback on AI system output
       ▪ occasionally ask users to provide feedback
       ▪ implement a user-feedback mechanism to flag output that is problematic, wrong or inappropriate
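A minimal sketch of such a feedback mechanism: every output can be rated or flagged, and the resulting log doubles as a monitoring signal. Field names and rating values are illustrative assumptions.

```python
# Sketch of G15: per-output ratings and flags that also feed monitoring.
from datetime import datetime, timezone

feedback_log = []

def record_feedback(output_id: str, rating: str, comment: str = ""):
    """rating: 'up', 'down' or 'flag' (problematic / wrong / inappropriate)."""
    feedback_log.append({
        "output": output_id,
        "rating": rating,
        "comment": comment,
        "time": datetime.now(timezone.utc).isoformat(),
    })

record_feedback("reply-42", "down", "tone too formal")
record_feedback("reply-43", "flag", "factually wrong")

# Monitoring signal: fraction of outputs flagged as problematic.
flagged = sum(1 for f in feedback_log if f["rating"] == "flag")
print(f"{flagged}/{len(feedback_log)} outputs flagged")
```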
 21. G16: Convey consequences of user actions
     ▪ Immediately update or convey how user actions will impact future behaviour of the AI system
       ▪ help users understand how their actions influence the AI system
     ▪ Possible solutions
       ▪ feedforward: communicate to the user how taking a specific action will impact the future experience with the system
       ▪ feedback: communicate to the user how the action they just took impacts the experience with the system
       ▪ inform the user about consequential actions taken in the past and offer the option to undo or keep those actions
 22. G17: Provide global controls
     ▪ Allow the user to globally customise what the AI system monitors and how it behaves
       ▪ apply system-wide (global) preferences
       ▪ e.g. collected information (location information etc.) and privacy settings
       ▪ e.g. proofreading settings in Microsoft Word
       ▪ e.g. Bing SafeSearch
 23. G18: Notify users about changes
     ▪ Inform the user when the AI system adds or updates its capabilities
       ▪ inform users about major model updates so that they can recalibrate their expectations (guidelines G1 and G2)
       ▪ in some domains the performance might degrade while the overall performance improves
         - use tools (e.g. Backward Compatibility) to check for new errors
 24. Human-Centred Artificial Intelligence (HCAI)
     ▪ Many designs are still based on a one-dimensional model of automation (more automation means less user control)
       ▪ e.g. also in the definition of levels of autonomy for self-driving cars
     ▪ The Human-Centred Artificial Intelligence (HCAI) framework introduces a two-dimensional space
       ▪ possible to achieve high levels of human control and high levels of computer automation
         - more likely to result in Reliable, Safe & Trustworthy (RST) applications
     (Shneiderman, 2020)
 25. HCAI Framework
     ▪ The desired goal is often the upper right corner
       ▪ high level of human control and high level of computer automation
     (Shneiderman, 2020)
 26. HCAI Framework …
     ▪ Bottom right
       ▪ computer autonomy requiring rapid action
         - no time for human intervention
       ▪ price of failure is high
         - extensive testing and monitoring during usage at scale
     ▪ Top left
       ▪ human autonomy where human mastery is desired
         - enables competence building, free exploration and creativity
     (Shneiderman, 2020)
 27. HCAI Framework …
     ▪ Bottom left
       ▪ simple devices such as clocks, music boxes or mousetraps
     ▪ Excessive automation
       ▪ can lead to dangerous designs
         - Boeing 737 MAX's MCAS system
         - Tesla "Autopilot", since designers allowed drivers to ignore safety warnings
     (Shneiderman, 2020)
 28. HCAI Framework …
     ▪ Excessive human control
       ▪ deadly mistakes that could be avoided
         - e.g. collision avoidance systems in cars
     ▪ Top right
       ▪ user has high control but the system might also apply sophisticated AI-based automation
     (Shneiderman, 2020)
 29. Prometheus Principles
     ▪ Consistent interfaces to allow users to form, express and revise their intent
     ▪ Continuous visual display of the objects and actions of interest
     ▪ Rapid, incremental and reversible actions
     ▪ Error prevention
     ▪ Informative feedback to acknowledge each user action
     ▪ Progress indicators to show status
     ▪ Completion reports to confirm accomplishment
 30. Example: Thermostat
     ▪ Users gain better control of the temperature in their home
     ▪ Shows the room temperature and the current thermostat setting
       ▪ clarifies what a user can do to raise or lower the setting
       ▪ a response is shown to any action
       ▪ feedback on how the user controlled the automation to get the temperature they desire
     ▪ Thermostat will automatically keep the temperature at the new setting
     ▪ Machine learning to better accommodate user schedules
     (Shneiderman, 2020)
 31. Example: Elevators
     ▪ Substantial automation while providing appropriate human control
     ▪ Calling the elevator (up/down button)
       ▪ feedback that the user's intent has been registered
       ▪ feedback about the elevator's current floor
     ▪ Entering the elevator and selecting a floor
       ▪ feedback that the user's intent has been registered
       ▪ progress on the floor display
     ▪ Automation deals with many safety issues
     ▪ Scheduling of multiple elevators
     ▪ Override controls (e.g. for firefighters or moving crews)
     (Shneiderman, 2020)
 32. Example: Smartphone Camera
     ▪ Preview of the image to be taken
       ▪ smooth update of the preview image
       ▪ automatic adjustments to aperture and focus
       ▪ automatic compensation for shaking hands
     ▪ Various modes and filters can be selected
       ▪ high level of user control but also high level of automation
     ▪ Mistakes can be corrected by knowledgeable users
       ▪ e.g. use touch to manually set the desired focus point
 33. Physical Intelligence & Liquid Networks
 34. References
     ▪ S. Amershi et al., Guidelines for Human-AI Interaction, Proceedings of CHI 2019, Glasgow, UK, May 2019
       ▪ https://doi.org/10.1145/3290605.3300233
     ▪ HAX Toolkit
       ▪ https://www.microsoft.com/en-us/haxtoolkit/
     ▪ M.K. Hong, A. Fourney, D. DeBellis and S. Amershi, Planning for Natural Language Failures with the AI Playbook, Proceedings of CHI 2021, Online Virtual Conference, May 2021
       ▪ https://doi.org/10.1145/3411764.3445735
 35. References …
     ▪ AI Playbook
       ▪ https://microsoft.github.io/HAXPlaybook/
     ▪ B. Shneiderman, Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy, International Journal of Human-Computer Interaction, 2020
       ▪ https://doi.org/10.1080/10447318.2020.1741118
     ▪ Physical Computing & Liquid Networks
       ▪ https://www.youtube.com/watch?v=QOCZYRXL0AQ
     ▪ Human-AI Interaction (HAX)
       ▪ https://www.interaction-design.org/literature/topics/human-ai-interaction
 36. References …
     ▪ IBM Design for AI
       ▪ https://www.ibm.com/design/ai/
     ▪ Google People + AI Guidebook
       ▪ https://pair.withgoogle.com/guidebook/
     ▪ Responsible AI
       ▪ https://ai.google/responsibilities/responsible-ai-practices/