
UXLx: Intelligence Augmentation


Slides from a full-day workshop at UX Lisbon 2019 by me and Annina Antinranta. Learn how to design a digital service that includes machine learning.

Daryl Weir

May 21, 2019


Transcript

  1. Intelligence Augmentation - Machine Learning for Everyone. Berlin · Helsinki · London · Munich · Stockholm · Tampere
  2. Us: Daryl Weir, Senior Data Scientist. Annina Antinranta, Design Director. Twitter: @darylweir @thereminion
  3. Nordic roots, global mindset. Future. Co-created. Helsinki, Tampere, Stockholm, Oslo, London, Berlin, Munich. 550+ people, 38 nationalities, 7 offices, 30% YoY growth. Family of companies: eCommerce & growth hacking, artificial intelligence & machine learning.
  4. You? What's your background?
  5. Adding ML: AI & ML add complications. ‣ New terminology ‣ New ways for things to fail ‣ New risks. We need tools to help build common understanding and design AI applications well.
  6. Agenda. Introduction: ‣ AI crash course ‣ Choosing a business problem. Creating a concept: ‣ Customer segments ‣ Customer journeys ‣ Data review ‣ Concept creation. AI-specific issues: ‣ Dealing with errors ‣ Learning loops ‣ Bias & data ethics. Wrap-up: ‣ Prototyping ‣ Finalising concepts ‣ Presentations ‣ Q & A. (Coffee and lunch breaks in between.)
  7. Hyped up: machine learning is shrouded in hype. AI, big data, machine learning… these have been buzzwords for years now. To be able to work with these technologies, all stakeholders need a common understanding of the answers to these questions: What can machine learning actually do? Should I use machine learning to solve my problem?
  8. Defining some terms. Artificial intelligence (AI): computer systems that show human-like intelligence in some task(s). Machine learning (ML): a toolbox of algorithms and techniques that learn rules from data, used to implement "AI" in single, well-defined tasks. Deep learning: one of the most popular tools from the machine learning toolbox, particularly effective when the dataset is very large. (Diagram: deep learning sits inside machine learning, which sits inside artificial intelligence.)
  9. "Intelligence" is a loaded word. Current AIs are not intelligent in the way that humans are. They are narrow: they solve one thing well and have no ability to generalise outside that scope. When you hear the term AI today, what people are talking about is narrow AI, most often using machine learning. So-called "general AIs", which would learn, reason and adapt in a human-like way, do not exist. Some argue they cannot exist.
  10. More on machine learning. Programming is about writing a set of rules to solve a problem. Machine learning helps solve problems where the rules are too hard to write down: it discovers rules from data. There are thousands of algorithms to do this discovery. (Diagram: data goes into machine learning, rules come out.)
  11. Two phases. The set of rules learned by a machine learner is often called a model. The process of learning the rules is called the training process, or training the model. In production, the trained model is applied to turn inputs into outputs. (Diagram: training turns data into a trained model; production applies that model to new inputs to produce outputs.)
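A minimal sketch of the two phases in code, assuming scikit-learn and its bundled iris dataset (neither appears in the original slides):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # a labelled dataset
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# Training phase: learn the rules (the model) from data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Production phase: apply the trained model to turn new inputs into outputs.
outputs = model.predict(X_new)
print(outputs[:5])
```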
  12. What is ML? Example: self-driving cars. Narrow question: given the road looks like this, how should I turn the steering wheel to not crash? Data - given: a camera image of the road; tell me: the steering wheel angle. CMU did this: the car drove 3,000 miles across the US and the machine was in control 98% of the time.
  13. You still can't buy a self-driving car. Some of the key problems of self-driving cars have been solved for 20 years! However, covering the last mile has proven extremely time-consuming and expensive. This is an example of the Pareto principle: 20% of the effort buys you 80% of the results. This is a lesson for AI applications in general: fully automating a complex human behaviour is really hard. (Chart: effort vs. results.)
  14. Human-machine collaboration. Machine learning is great at answering narrow questions. Humans are great at synthesising knowledge and decision making. Why not use the best of both?
  15. Intelligence Augmentation (IA): the use of machine learning to support and enhance human capabilities in a task. The system acts as a smart assistant to the human, rather than completely automating the work (as it does in self-driving cars). Examples: ‣ Google Search ‣ Spotify Discover Weekly ‣ Siri/Alexa/Cortana/Google Assistant
  16. What kind of things can we learn to do? Predict. Personalize. Recognize. Uncover structure.
  17. Predict: predict something about the future, such as a number (e.g. how many umbrellas will my shop sell this month?), a yes/no answer (e.g. will this wheel fail in the next week?), or one from a set of options (e.g. what department will this call go to?). See the sketch below.
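A hedged sketch of the three kinds of prediction target as scikit-learn estimators; the tiny feature matrices and targets are illustrative placeholders, not data from the workshop:

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# A number (regression), e.g. umbrellas sold given temperature and rain probability.
reg = LinearRegression().fit([[20, 0.1], [10, 0.8], [15, 0.5]], [30, 120, 70])
print(reg.predict([[12, 0.7]]))     # predicted sales count

# A yes/no answer (binary classification), e.g. will this wheel fail next week?
clf = LogisticRegression().fit([[0.1], [0.9], [0.5], [0.95]], [0, 1, 0, 1])
print(clf.predict([[0.8]]))         # 1 = "will fail"

# One from a set of options (multiclass), e.g. which department should take this call?
multi = LogisticRegression().fit(
    [[1, 0], [0, 1], [1, 1], [0, 0]],
    ["sales", "support", "sales", "billing"],
)
print(multi.predict([[0, 1]]))      # predicted department
```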
  18. Personalize: tailor system behaviour to specific users or groups of users. Recommend content (e.g. Netflix, Amazon, New York Times, Google Ads) or target communication (e.g. send an email campaign only to interested users).
  19. Recognize: identify information from input sources such as images (e.g. face recognition), sound (e.g. song recognition), or text (e.g. a chatbot).
  20. Uncover structure: identify interesting patterns and information in data. Discover groups (e.g. find common topics in a collection of books) or discover anomalies (e.g. manufacturing defects, fraud detection, unusual medical scans).
  21. Exercise - workshop task: ideate a machine learning concept to enhance customer experience for a fictional startup's product. Work in teams at your tables. Don't take it too seriously :)
  22. Exercise - icebreaker: founding your startup. • You have a business card for your startup on the table • Introduce yourselves to your team mates, and pick your titles within the startup • The sillier, the better
  23. AI is not a technology challenge. To create real value, you need to understand your people, your business and your users.
  24. Machine learning and design. Machine learning is just a set of tools - it can't replace the design process. You still need to do your research to identify pain points and problems worth solving. Machine learning gives you new ways to tackle some of those problems. Not all problems can or should be solved with ML.
  25. "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI" - Andrew Ng
  26. When to use ML - our rule of thumb. You might have a machine learning problem if: ‣ A human expert could do the task in a few seconds or less ‣ The rules are hard or impossible to write down ‣ It's easy to collect examples
  27. When to use ML: is there a cat in this image? Writing rules for the presence or absence of a cat is really, really hard. BUT finding cat pictures is really, really easy.
  28. When to use ML - an extra condition. You might have a machine learning problem if: ‣ A human expert could do the task in a few seconds or less ‣ The rules are hard or impossible to write down ‣ It's easy to collect examples ‣ Knowing the rules would allow meaningful action
  29. Discovery tools. Finding the right business problem for your company could easily be a workshop all on its own. You typically need to interview potential users, analyse the market, benchmark technological options, etc. At Futurice we use our own LSC toolkit when helping our clients with business problem ideation. We don't have time for the full process, so we'll use a single canvas to give you a taste.
  30. Your tools today. The Smart Service Storyline canvas covers: need, key touchpoint, service idea, ML value, transparency, bias, learning loops, discoverability, customer story, service provider story. The confusion matrix canvas crosses the machine's output/prediction against reality/customer reaction: the service takes an action and you think it's correct (true positive); the service doesn't take an action but you think it should have (false negative); the service takes an action but you think it's wrong (false positive); the service doesn't take an action and you agree it shouldn't have (true negative).
  31. Take with a grain of salt. Remember that this workshop is intended to teach new concepts, not create a realistic business case. We'll be moving quickly through a lot of different exercises - remember that in the real world you'd spend a lot more time on each of these. You might end up forcing ML into a problem where it doesn't strictly belong: that's fine, as long as you learn something. If you discover later in the workshop that something you did earlier doesn't work well, change it! Have fun!
  32. Exercise - task: come up with a business problem & ideate a product. Discuss in your team and come up with a high-level business objective for your startup. What is the root problem you are trying to solve? Why is it important? Decide (roughly) what kind of product your startup will offer. Don't worry about machine learning yet - focus on the problem you want to address and your first guess at a product that solves it. Try to come up with a B2C problem, not B2B - it'll be easier to design IA solutions when there is a clear end user for your product.
  33. Finding your fit. Obviously, you don't come up with business problems in a vacuum. You should be informed by the needs and experiences of potential end users. Again, effective user research is a whole workshop on its own, and we don't have time for that. We've prepared some potential user segments based on Finnish market research data - we'll use those to target the products for our startups.
  34. Exercise - task: the user and their need. • Read through the customer segments and choose ONE to focus on (remember, grain of salt) • Ideate the customer's need - this is basically a more refined version of your business problem, taking the user perspective into account • Based on these, you will create a service concept that uses machine learning
  35. Journey mapping. Customer journey maps are a common design tool. They can be used in a number of different ways, most commonly: 1. mapping the current state of the world, in order to identify opportunities for new services; 2. designing the flow of a new service to maximise good experiences for the customer. We'll focus on the first case here - mapping how our customer accomplishes the goal now, before our startup comes along.
  36. Exercise - task: customer journey map. • Map out how the customer accomplishes their need now - where do they go, what do they do, what tools do they use? • Add the channels and touchpoints the user might encounter - use the channel cards for inspiration. • Describe the emotional journey - how does the user feel at each point? Are there frustrating or boring steps involved?
  37. Data opportunities. Now that we know what the customer journey looks like, we can think about what data we have available. You might have heard quotes like "data is the new oil", and to some extent that's true - data is the fuel that powers machine learning, and it's transforming a lot of different business areas. But first, what exactly is data?
  38. What is data? Data is a very broad term: it basically refers to facts or pieces of information. In machine learning, a collection of data is usually called a dataset. The individual items in a dataset are called data points or samples. Data has features: individual properties that can be measured, for example pixels in an image, characters in a document, or medical measurements. (Illustration: a table of Boston housing statistics, 1978 - rows are samples, columns are features.)
  39. What is data? Data may have labels: these are like tags, quantities or categories that tell something interesting about the samples. In machine learning, labels are the things that we try to predict. Most commonly, labels are one of the following: ‣ numbers, e.g. housing sale price ‣ binary options, e.g. is this machine failing? ‣ categories, e.g. which account is this invoice related to? (Illustration: the same Boston housing table, with one column picked out as the label.) A small sketch of features and labels follows.
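A small illustrative dataset in code, assuming pandas; the column names and values are made up for the sketch, not taken from the Boston housing table on the slide:

```python
import pandas as pd

dataset = pd.DataFrame({
    "rooms":      [3, 5, 2, 4],                          # feature
    "area_m2":    [60, 120, 45, 95],                     # feature
    "year_built": [1978, 2001, 1965, 1990],              # feature
    "sale_price": [180_000, 420_000, 150_000, 310_000],  # label: what we try to predict
})

features = dataset[["rooms", "area_m2", "year_built"]]   # each row is one sample / data point
labels = dataset["sale_price"]                           # a numeric label, so this is a regression problem
```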
  40. Where does data come from? Machine learning systems can be trained on many kinds of data. Some of the most common sources include: ‣ Software logs: data generated by existing software systems, for example clickstreams, event logs, and transaction databases ‣ Sensor data: data from a physical sensor in the world, for example GPS locations, camera images, audio recordings, temperature sensors ‣ Public data: publicly available datasets from governments and organisations, for example weather records, company registration info, traffic levels ‣ Purchased data: many large companies sell anonymised data from their services, for example Uber, Facebook, insurance companies
  41. Exercise - task: data sources. • Ideate the available data sources that you could use to create a data-driven product • Remember the customer journey - which touchpoints and channels might generate useful data? • Is there some external data that could be bought or acquired? • As a team, identify the most relevant data sources and add them to the data canvas • Don't worry about features just yet
  42. Putting it all together. Now it's time to come up with a service concept. Come up with a way to use machine learning to address one of the pain points from the customer journey. Make sure to choose something where data either already exists, or will be easy to generate. You might come up with something that has multiple applications of ML: for example, "we'll use face recognition to identify people in our store, then show them product recommendations on our smart mirror". That's fine, but for this workshop you should choose one of those to analyse in more depth in the next section.
  43. Exercise - task: service concept. 1. Based on your customer journey and data sources, create a high-level concept that uses machine learning to help the customer. Look at the interaction cards to remind yourself of common ML use cases. 2. Complete the data canvas: what is the label you will predict, and what features from the data sources might be relevant? 3. Fill out the first row of the Smart Service Storyline canvas - consider the value the ML brings, and the channel where the user sees the ML predictions. Remember: keep it simple! (The canvas prompts: What business are you in? What does your service do? Who is your service for? What channel/touchpoint did you choose? What value will machine learning bring? Should the machine learning be explainable? What kind of problems/bias could affect your service? How will your system learn? The Intelligence Augmentation Design Toolkit free version by Futurice is licensed under CC BY-SA 4.0.)
  44. Uncertainty. No machine learning algorithm is 100% accurate. The state of the system is unpredictable. It is essential to design for failure cases.
  45. Design for failure. The confusion matrix is an important design tool for ML systems. It allows us to think about what happens when we compare the system outputs to reality. Based on that, we can reason about how the user will feel when errors occur, how trust in the system might be affected, and any domain-specific consequences. The matrix: when reality is positive, a positive output is a true positive and a negative output is a false negative; when reality is negative, a positive output is a false positive and a negative output is a true negative. A short code sketch follows.
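A minimal sketch of computing a confusion matrix, assuming scikit-learn; the `reality` and `prediction` lists are illustrative stand-ins for real evaluation data:

```python
from sklearn.metrics import confusion_matrix

reality    = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth: 1 = positive, 0 = negative
prediction = [1, 0, 0, 1, 1, 0, 1, 0]   # what the model output

tn, fp, fn, tp = confusion_matrix(reality, prediction).ravel()
print("true positives: ", tp)   # action taken, and it was correct
print("false positives:", fp)   # action taken, but it was wrong
print("false negatives:", fn)   # action missed that should have been taken
print("true negatives: ", tn)   # correctly no action
```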
  46. Case: Netflix recommendations. True positive: recommending an item the user likes. False positive: recommending an item the user dislikes. True negative: not recommending an item the user dislikes. False negative: not recommending an item the user likes. Here, false positives are worse than false negatives.
  47. Case: fraud detector. True positive: blocking a fraudulent transaction. False positive: blocking a legitimate transaction. True negative: approving a legitimate transaction. False negative: approving a fraudulent transaction. Here, false negatives are much worse than false positives - one way to reflect that in a model is sketched below.
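One possible way to encode "false negatives are much worse" during training is class weighting; this is an illustrative sketch on synthetic data, not a method prescribed by the slides:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic, imbalanced data: roughly 3% of transactions are fraud (class 1).
X, y = make_classification(n_samples=1000, weights=[0.97, 0.03], random_state=0)

# Penalise missed fraud (false negatives) roughly 10x more than false alarms.
detector = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000)
detector.fit(X, y)

# Alternatively, lower the decision threshold so fewer frauds slip through,
# accepting that more legitimate transactions get flagged.
fraud_probability = detector.predict_proba(X)[:, 1]
flagged = fraud_probability > 0.2
print(flagged.sum(), "transactions flagged for review")
```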
  48. Using the confusion matrix. Error costs can be passed on to the data scientists creating the machine learning components. Plan fallback behaviour for the different error cases. Think about whether the system can learn from errors.
  49. Exercise - task: confusion matrix. Fill in a confusion matrix for your service concept. What happens when your machine learning gets things right? What happens when it gets things wrong? How would your customer feel about the different errors? What should the recovery behaviour be for each type of error? (The canvas crosses machine output/prediction against reality/customer reaction: true positive, false positive, false negative, true negative. The Intelligence Augmentation Design Toolkit free version by Futurice is licensed under CC BY-SA 4.0.)
  50. Learning loops: learning from experience. Build-measure-learn is a key concept from the lean startup methodology. It's about forming a feedback loop and continuously improving your designs. Intelligence Augmentation systems allow a new kind of feedback loop - one where the system itself learns directly from interactions with users.
  51. Learning loops: interaction loops. Every time the model makes a prediction, it is presented in the UI. Based on that prediction, the user will take some action. If we can measure what the action means, we can use it as new training data. This lets the system learn over time. However, closing this loop can be tricky: measuring actions is not always straightforward. (Diagram: training data → model → UI → user action → back to training data.) A sketch of closing the loop follows.
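A hedged sketch of closing the loop in code: log each user reaction against the features the prediction was based on, then periodically retrain. The feature names and action strings are assumptions made for the sketch:

```python
from sklearn.linear_model import LogisticRegression

feedback_log = []   # (features, label) pairs gathered from real usage


def featurise(user_age, track_length_s):
    # Stand-in feature extraction; a real system would use much richer features.
    return [user_age, track_length_s]


def record_action(features, action):
    # The tricky part in practice: mapping observable actions to training labels.
    label = 1 if action in {"listened", "added_to_playlist"} else 0
    feedback_log.append((features, label))


# Simulated usage: the UI showed recommendations, users reacted, we logged it.
record_action(featurise(25, 180), "added_to_playlist")
record_action(featurise(25, 240), "skipped")
record_action(featurise(40, 200), "listened")
record_action(featurise(40, 300), "skipped")

# Periodic retraining turns the logged reactions into new training data.
model = LogisticRegression()
model.fit([f for f, _ in feedback_log], [lab for _, lab in feedback_log])
```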
  52. Learning loops: measuring actions. Direct feedback: the user can explicitly give positive and negative feedback, and prevent similar recommendations. Indirect feedback: listening to songs and adding them to playlists counts as positive feedback, skipping songs as negative feedback.
  53. Discussion: design your learning loop. ‣ How could you form a learning loop in your service? ‣ How can you measure when the ML gets something right? ‣ How can you measure when it gets something wrong? ‣ Is it better to collect implicit or explicit feedback, or a mix of both? ‣ Write your conclusions on the Smart Service canvas
  54. Discoverability: new kinds of functionality. Sometimes IA services offer features that may be unfamiliar to your users. How are people going to find out what your service can do? If no one uses your amazing features, they're useless.
  55. Methods of discoverability: conversational suggestion ("Alexa, what can you do?") and a feature list.
  56. Methods of discoverability: contextual suggested next steps, and utilising users' experience of other systems.
  57. Discussion: discoverability. ‣ Do you have any unfamiliar functionality? ‣ How will users find out what your service can do? ‣ Write your conclusions on the Smart Service canvas
  58. "People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world." - Pedro Domingos
  59. Ethical AI: using AI ethically. As data and AI become more and more important to the way modern society operates, they present new risks and failure modes for our systems. As the people developing these technologies, we have a responsibility to act ethically and consider the impact of our work. What are the things we should consider? Once again: this could be a whole workshop :)
  60. Factors in ethical AI systems. 01 Purpose and impact: who does the system affect, and how? 02 Inclusion and fairness: who is included/excluded, and how could bias manifest? 03 Trust and transparency: are the decisions made by the system transparent? Do people trust it? 04 Safety and privacy: is data collected and stored safely? Is user privacy respected?
  61. Ethical AI: purpose and impact. Ask yourself: what is the system we're building for? How does it affect people's lives? If it's successful, will it change something about society? ‣ Respect and be mindful of the impact on people affected by the system ‣ Ensure that the system has a clear purpose and can be trusted to behave as expected ‣ Consider the impact of the system beyond the user, including any positive and negative consequences it might have
  62. Ethical AI: unethical purposes. Most of you have probably heard of Cambridge Analytica. Their machine learning worked extremely well at its goal: targeting swing voters with hyper-personalised political ads. Here, the use case itself was (in my opinion) deeply unethical, even if much of what they did was not actually illegal. As a designer, you need to examine the ethics of your service concept, and act responsibly.
  63. Ethical AI: unintended consequences. Facebook's content algorithm has a simple goal: recommend relevant content in order to keep people using Facebook. Over time, the recommender system learns from its successes and failures, and its behaviour becomes self-reinforcing. This can lead to filter bubbles, where the same kind of predictions appear over and over. Thanks to Facebook's huge user base, this has significant consequences in terms of political division, the rise of fake news, and so on.
  64. Ethical AI: expect the unexpected. There's a lot that can go wrong in a machine learning project. In fact, we've seen enough high-profile failures that you can start to list the common "bugs" that affect ML systems. The Unexpected Bug cards in the IA kit can be used as a starting point for discussion: what would this kind of failure look like for our product? Are we protected against it? Keep them in mind as we go through the next exercises.
  65. Discussion - ethics: impact & purpose. Fill out the first box on the ethics canvas. Ask yourself questions like: ‣ Is the use case ethical? ‣ How will users be affected in the long term if our product is a success? ‣ Is there an obvious way the system could go wrong? ‣ What would a good news headline look like for our product? ‣ What about a bad news headline?
  66. Ethical AI: inclusion and fairness. Data is not truth. Machine learning predictions are not facts. ML systems learn rules based on the subtle patterns found in the training data. If that data has fundamental biases, the only thing the system can do is repeat and reinforce the bias. The team has a responsibility to gather data and design the system in a way that minimises the chance of biased outcomes.
  67. Ethical AI: bias in, bias out. One of the most common forms of bias comes through data collection. If you don't gather data that reflects the context of use for your system, you open yourself up to damaging errors. With better design, the well-known Google Photos incident could have been avoided: the image recognition was not trained with a diverse set of faces, and this was the result. A good rule of thumb: think about how a human could be biased in the task, and collect data accordingly.
  68. Ethical AI: some bias is systematic. COMPAS is one of many systems used in the US to predict the risk of reoffending, in order to set or deny bail. The system was three times more likely to report a false positive for black defendants, despite being trained with a complete set of case data for the counties in question. The data was complete, but the underlying process came from decisions by judges and police officers, some of whom had systematic biases. Again, data is not truth: think about where the data comes from, and ask how bias could arise. (Example from the slide: a defendant with multiple prior robberies and a later grand theft offense was scored risk 3; a defendant with only juvenile misdemeanours and no later offenses was scored risk 8.)
  69. Discussion - ethics: inclusion & fairness. Fill out the second box on the ethics canvas. Ask yourself questions like: ‣ How could bias manifest in our system? ‣ Which data sources could be biased? ‣ What could we use as a test for biased outcomes? ‣ Are there any systematic issues that could be repeated even if we have all available data?
  70. Ethical AI: trust and transparency. In an IA system, we want users to take actions based on the predictions we make. In order for people to modify their behaviour according to a computer system, they need to trust that the system is making sensible decisions. One good way to build trust is to design transparent systems, where the decisions made can be explained directly to the user. Even when that is not possible, systems should be designed to be auditable.
  71. Ethical AI: transparency and interpretability. Machine learning is often used as a "black box" - inputs go in, outputs come out. Some techniques learn relatively simple rules, others learn very complex ones. Complex rules are often more accurate, but hard for humans to understand. (Illustration: a decision tree as a simple model, a neural network as a complex model.) The sketch below contrasts the two.
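A minimal sketch contrasting an interpretable model with a black box, assuming scikit-learn and its bundled iris dataset (not part of the original slides):

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Simple model: the learned rules can be printed and read by a human.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=load_iris().feature_names))

# Complex model: often more accurate, but the learned weights are opaque.
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000).fit(X, y)
print(net.predict(X[:3]))   # predictions come out, but the "why" is hard to explain
```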
  72. Ethical AI: transparency and interpretability. Q1: How important is it that we explain the machine learning predictions? Transparency critical: automatic loan application decisions. Transparency useful: shopping recommendations (e.g. Amazon). Transparency unimportant: ranking stories on a magazine homepage. Q2: If we need to explain predictions, how should we do this? Directly to the user? In a separate admin UI? These design decisions affect both the data science and the UX work.
  73. Ethical AI: visualisation and audits. Visualisations of data flows can be a powerful way to build trust: show which data is being used at each point in your application, and how much weight it has on the output. Visit distill.pub for a selection of state-of-the-art visualisation techniques for modern machine learning algorithms. Audits are another way to ensure your system can be trusted. The idea is that for any prediction made, you should be able to go back and recreate the conditions that led to the prediction, so you need to store information about what training dataset was in use, what the parameters of the algorithm were, and so on. If this is important to your application, you should design the UX and UI of the audit process early on. A sketch of such an audit record follows.
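A hedged sketch of what an audit record might store - enough metadata to recreate the conditions behind a prediction. The exact fields and values are assumptions for illustration:

```python
import datetime
import json

audit_record = {
    "model_version": "2019-05-21-a",            # which trained model served the prediction
    "training_dataset": "interactions_v12",     # which dataset it was trained on
    "algorithm": "RandomForestClassifier",
    "hyperparameters": {"n_estimators": 100, "max_depth": 8},
    "trained_at": datetime.datetime(2019, 5, 20, 14, 30).isoformat(),
}

# Append one JSON record per line, so any prediction can later be traced back.
with open("audit_log.jsonl", "a") as log:
    log.write(json.dumps(audit_record) + "\n")
```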
  74. Discussion - ethics: trust & transparency. Fill out the third box on the ethics canvas. Ask yourself questions like: ‣ Do your predictions need to be explained? If so, how should you present the explanation? ‣ How much trust do people need before they use your system? ‣ At what level does the system need to be audited?
  75. Ethical AI: safety and privacy. Data breaches are becoming more and more common, as companies start using machine learning without also adopting best practices for data security. Public awareness of data privacy and rights is also increasing. This plays into user trust too: you should make it clear how the system uses personal data. Don't collect more data than you need, and anonymise it when possible. In Europe, remember the GDPR!
  76. Ethical AI: malicious actors. Machine learning can present new attack vectors. If the time for the interaction loop is short, users can feed in lots of similar data and intentionally bias a system. Microsoft's Tay chatbot is the most notorious example (screenshots: Tay on day 1 vs. Tay on day 2). If the system learns from people, think about the worst possible thing it could learn, and how to avoid that. With better design and sanity checks in place, this might have been prevented.
  77. Discussion - ethics: safety & privacy. Fill out the fourth box on the ethics canvas. Ask yourself questions like: ‣ How bad would it be if the training data was leaked? ‣ Are we storing any data we don't need? ‣ How could someone attack our system with malicious data?
  78. Prototyping. Until now, we've just assumed that our service concept is a good one. In reality, you never get it right the first time. Success takes testing and iteration. Thus, it's important to validate service concepts early and often. That's where prototyping comes in. While this is nothing new for digital service creation, machine learning does add some new wrinkles.
  79. The basics: goals of prototyping. For those who've never made a service prototype before, doing so has three main goals: 1. Explore 2. Communicate 3. Engage. There are many methods for prototyping.
  80. Method: draw it. ‣ Wireframes ‣ Storyboards ‣ Illustrations
  81. Method: build it. ‣ Lego ‣ Cardboard ‣ Post-its ‣ Photographs
  82. Method: perform it. ‣ Roleplay ‣ Wizard of Oz
  83. Prototyping with machine learning. Building ML models is slow: it can take weeks to train a good model, and days to iterate if something changes. There are lots of assumptions in model building - can we validate those beforehand?
  84. Off to see the wizard. Wizard of Oz studies are a useful tool for ML prototyping. The idea is to have a human perform the intelligent part until you have the model, e.g. a human answers the chatbot questions. Fake it 'til you make it!
  85. Bring your own data. If you have a model that makes personalised predictions or recommendations, you can ask test users to submit some of their own data before they come to use your prototype. Have a human curate the examples and create mockup scenarios for success and failure. Example: a user sends a list of their music library, and you create a few personalised playlists for them. Important: remember to get informed consent and delete the data afterwards.
  86. Discussion: prototyping. ‣ How would you prototype your service? ‣ How will you simulate the machine learning behaviour? ‣ What are the most important things to validate with your prototype?
  87. Discussion - task: prepare your team presentation. 1. Take a one-minute silence and write down the key things you learned during this workshop. 2. Discuss as a team and pick your team's top 3 learnings. 3. Choose one or more spokespeople to present the concept and the key learnings to the other tables. (Use the Smart Service Storyline canvas as a summary tool for your service concept: need, key touchpoint, service idea, ML value, transparency, bias, learning loops, discoverability, customer story, service provider story.)
  88. Discussion: presentations. Max 3 minutes per team. What's your company name? What were the best titles for your team members? Pitch your concept. Share your top 3 learnings.
  89. "AI is likely to be either the best or worst thing to happen to humanity." - Stephen Hawking