1. … to Design Practice and Workflow
2. Sensitizing Designers to Existing ML Design Opportunities
3. Developing a Designerly Understanding of ML
(Yang, 2018)
Sketching NLP (Yang et al., 2019)

Five Challenges
(2) … design with data scientists without data at hand?
(3) How to understand and stretch NLP's technical limits?
(4) Within these limits, how to envision novel, less obvious applications of NLP?
(5) How to prototype an intelligently flawed UX?

Three Contributions
(1) a new form of wireframes: illustrates abstract language-interaction design ideas
(2) a set of NLP technical properties relevant to UX design
(3) a new wizard-of-oz-based prototyping method: rapidly simulates various kinds of NLP errors
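The wizard-of-oz idea of rapidly simulating various kinds of NLP errors can be sketched in code. This is a minimal, hypothetical illustration, not Yang et al.'s actual method: the error types, canned responses, and the helper `simulate_nlp_error` are all illustrative assumptions. A wizard-side tool could use something like this to decide, turn by turn, whether to show the intended response or a simulated failure.

```python
import random

# Illustrative failure modes a wizard might inject during a session
# (names are assumptions, not taken from the paper).
ERROR_TYPES = ["none", "misrecognition", "wrong_intent", "no_response"]

def simulate_nlp_error(correct_response, error_rates, rng=random):
    """Return (error_type, response_shown_to_user).

    error_rates maps an error type to its probability; any remaining
    probability mass yields the correct response ("none").
    """
    roll = rng.random()
    cumulative = 0.0
    for error, rate in error_rates.items():
        cumulative += rate
        if roll < cumulative:
            if error == "misrecognition":
                return error, "Sorry, did you say something about the weather?"
            if error == "wrong_intent":
                return error, "Setting a timer for 5 minutes."
            if error == "no_response":
                return error, ""
    return "none", correct_response

# Usage: replay during a WoZ session to observe how users recover
# from each failure mode.
rng = random.Random(0)
kind, shown = simulate_nlp_error(
    "It will rain at 3pm.",
    {"misrecognition": 0.1, "wrong_intent": 0.1, "no_response": 0.1},
    rng=rng,
)
```

Keeping the error rates configurable lets the team prototype an "intelligently flawed" UX at several realism levels without training any model.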
Data-enabled Design (2016, 2018)
(Figure: Dashboard; Data-enabled Design Canvas)

A Two-Step Approach
Step 1: Contextual step
- goal: gain a contextualized understanding of the design space
- an open exploration
Step 2: Informed step
- goal: iteratively design while the prototype(s) stay in the field
- an open and dynamic set of tools for shaping the design space based on remotely collected insights
- a situated exploration (an on-the-fly exploration)
Delft AI Toolkit (van Allen, 2018)
… data resources

Step 1: Simulate, then implement AI
- a "visual marionette system" that allows designers to "wizard-of-oz" AI behaviors in real time while observing users
Step 2: Simulate, then implement hardware
- supports AR simulations of AI products, so designers can rapidly test and update the form/behavior of an embodied experience
Comparison of the three approaches:

                 Data-enabled Design            Delft AI Toolkit                  Sketching NLP
Target           intelligent objects            physical intelligent objects      NLP-involved systems
Approach         component-based                component-based                   a set of wireframes and NLP properties
Prototyping      interact w/ users in the wild  WoZ + AR                          hybrid WoZ and off-the-shelf prototyping
Data or not      design along with data         -                                 design w/o data
Algorithm        N/A, might be simple rules     decision tree                     NN-models
Visualization    HCP dashboard                  AR                                -
Toolkit family   toolkit                        flow-based programming toolkit    Notebook (as a form of wireframe)
(Concept map: intelligibility, transparency, Explainable AI, interact with new materials, understand people, Accessible ML, uncertainty, evolving, learning, trust, control, user expectation, understand acceptance, bias, Engineer, sketching, prototyping, understand data)
Key Concepts
1. … knowledge (i.e., a shared mental model) through an iterative, interactive process.
2. Mutual Benefits
• Human and AI as a team achieve superior results that a single human or AI cannot achieve alone.
3. Mutual Growth
• Human and AI both have a growth mindset, i.e., they learn together, learn from each other, learn with each other, and grow and evolve over time.
Lubart's Framework for Human-AI Interaction (Lubart, 2005)
…
3. Computer as Coach
4. Computer as Colleague

Todd Lubart. How can computers be partners in the creative process: Classification and commentary on the special issue. Int. J. Hum.-Comput. Stud., 63(4-5):365–369, October 2005.
Interactive Explanation
Support different follow-up and drill-down actions after an initial explanation:
1. Redirecting the answer by changing the foil
2. Asking for more detail
3. Asking for a decision's rationale
4. Querying the model's sensitivity
5. Changing the vocabulary
6. Perturbing the input examples
7. Adjusting the model

Daniel S. Weld and Gagan Bansal. The challenge of crafting intelligible intelligence. Commun. ACM, 62(6):70–79, May 2019.
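Two of these follow-up actions, querying the model's sensitivity (4) and perturbing input examples (6), can be backed by a very small probing routine in the explanation UI. A minimal sketch under stated assumptions: `sensitivity_probe`, the toy scoring function, and the loan-style features are hypothetical, not from Weld and Bansal.

```python
def sensitivity_probe(score_fn, example, perturbations):
    """For each feature, apply the given perturbation and report
    how the model's score changes relative to the original input.

    score_fn: callable mapping a feature dict to a numeric score.
    example: the original input as a feature dict.
    perturbations: feature -> alternative value to try.
    """
    base = score_fn(example)
    deltas = {}
    for feature, new_value in perturbations.items():
        perturbed = dict(example)       # copy, then change one feature
        perturbed[feature] = new_value
        deltas[feature] = score_fn(perturbed) - base
    return deltas

# Toy model: score rises with income and falls with debt.
score = lambda x: x["income"] - 2 * x["debt"]
deltas = sensitivity_probe(score, {"income": 50, "debt": 10},
                           {"income": 60, "debt": 20})
# deltas["income"] == 10, deltas["debt"] == -20
```

An interactive explanation interface could surface these deltas as the answer to "what if this input were different?", letting users drill down one feature at a time.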
… Walter Lasecki, and Eric Horvitz. Updates in human-ai teams: Understanding and addressing the performance/compatibility tradeoff. In AAAI Conference on Artificial Intelligence. AAAI, January 2019.
Co-Learning for Music Performance
Jon McCormack, Toby Gifford, Patrick Hutchings, Maria Teresa Llano Rodriguez, Matthew Yee-King, and Mark d'Inverno. In a silent way: Communication between ai and improvising musicians beyond sound. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19).
Co-Learning for Drawing
… Lee, and Bongwon Suh. I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence. In Proc. of CHI 2018.

Human-AI Collaboration
(1) Let the human take the initiative
(2) Provide just enough instruction
(3) Embed interesting elements in the interaction
(4) Ensure balance
… Nicholas Liao, Jonathan Chen, Shao-Yu Chen, Shukan Shah, Vishwa Shah, Joshua Reno, Gillian Smith, and Mark O. Riedl. Friend, collaborator, student, manager: How design of an ai-driven game level editor affects creators. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19.
Co-Learning for Medical Decision-Making
… Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viegas, Greg S. Corrado, Martin C. Stumpe, and Michael Terry. Human-centered tools for coping with imperfect algorithms during medical decision-making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19.
Co-Learning for Writing Support (I)
… Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. Creative writing with a machine in the loop: Case studies on slogans and stories. In 23rd International Conference on Intelligent User Interfaces, IUI '18.
Co-Learning for Writing Support (II)
… P. Chen, and Mubbasir Kapadia. LISA: lexically intelligent story assistant. In Proceedings of the Thirteenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-17).
… Kocielnik, Bowen Yu, Sandeep Soni, Jaime Teevan, and Andrés Monroy-Hernández. Calendar.help: Designing a workflow-based scheduling agent with humans in the loop. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.