
Designing Agentive Technology

UXAustralia
August 29, 2019


Transcript

UX Australia 2019 (AUUXAU2908A) Main Room, Day 1 – 29th August, 2019

CHRIS NOESSEL: Thank you. (Applause) Thanks for the tech support. Hello. Every traveller is an ambassador, and I'm the first American onstage today. Let me say: I'm sorry, we are working on it. I am here today to talk to you about something I kind of made up, and my hope in this first part is to get you to see something new. That's not an easy thing. Wish me luck.

This device is a camera. Specifically, it's a digital single lens reflex camera, a camera I personally own. And because we all walk around with good cameras in our pockets and this looks like a pretty young audience, I will describe how you use it. To get a picture, you pull out the camera and flip a switch on the top to turn it on. If you are smart and not jetlagged, you pull off the lens cap, and there is a heads-up display that gives you information about the photo you are about to take. When you have the shot framed, you press the button halfway, making sure the autofocus kicks in. Then you press the button fully to take the picture, look at the back of the camera to make sure it's the photo you wanted, and if not, you bring the camera back up and start again. That's how you use this object to take a photograph.

This object is called Google Clips. It was not very popular; I looked this morning and it is not for sale on the site. I will describe to you how this object makes photos. You leave it plugged in overnight, and once you unplug it, as long as light reaches the lens, it is taking pictures and video. But it doesn't store them all. It looks for good framing, good lighting, good expressions, and over time it learns the people in your orbit and favours those faces. It saves those photos to a local drive, and at the end of the day it shares them with you and asks you what you want to do with them. You can save one and get rid of the rest, or share that one to social media.

The difference between these two things is important. Even though they are both ostensibly cameras, the thing on the left is a tool for you to take photos, and the thing on the right is an object that gets you photos. That's pretty interesting.
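[Editor's note] To make the shape of that behaviour concrete, here is a minimal capture-and-curate sketch in Python. Everything in it is invented for illustration (the scoring rule, the FamiliarFaces model, the field names); it is not how Google Clips actually works, just the agentive pattern it exhibits.

```python
import random
from collections import Counter

class FamiliarFaces:
    """Toy stand-in for 'learning the people in your orbit':
    the more often a face is seen, the more it is favoured."""
    def __init__(self):
        self.sightings = Counter()

    def observe(self, face_ids):
        self.sightings.update(face_ids)

    def familiarity(self, face_ids):
        return sum(self.sightings[f] for f in face_ids)

def score(frame, faces):
    # Invented scoring: framing, lighting, expression, plus a
    # bonus for faces the device has seen often before.
    return (frame["framing"] + frame["lighting"] + frame["expression"]
            + 0.1 * faces.familiarity(frame["face_ids"]))

def capture_day(frames, keep=5):
    """While the lens sees light, capture continuously; keep only
    the best-scoring shots to share with the owner at day's end."""
    faces = FamiliarFaces()
    scored = []
    for frame in frames:                  # continuous capture
        faces.observe(frame["face_ids"])  # learn who is around
        scored.append((score(frame, faces), frame))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [frame for _, frame in scored[:keep]]  # offered for review

# Dry run over a fake day of frames featuring one recurring face:
day = [{"framing": random.random(), "lighting": random.random(),
        "expression": random.random(), "face_ids": ["ana"]}
       for _ in range(200)]
print(len(capture_day(day)), "photos offered for review")
```

Note where the human sits: nowhere inside the loop, only at the very end, reviewing what the agent chose.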
This is a vacuum cleaner. (Laughter) I think you know where I'm going with this. This is a tool for you to clean your floor. In order to do that, you fetch it from the closet, undo the power cable, plug it in, step on one switch to release the handle and another to start the motor, and you push it around the floor until you are satisfied it's clean.
This is also ostensibly a vacuum. But you tell it what time you want it to go, and you go to work. At the time you specified, the Roomba will leave its cradle, go around your room with an algorithm that optimises for any shape of space, and vacuum your floors. Mine is named Rusty, and it only stops if you forget to empty the dust bin. One is an object for you to vacuum with, and the other is an object that gets you clean floors. That's interesting.

This is a car, the most popular car in the world. It is a tool for you to drive from point A to point B. This is an object that gets you from point A to point B without your having to do too much work. That is interesting.

The first time I saw the connection between these products and services was when I was doing user research and having to travel a lot, and I felt bad for my cat. I had an automatic feeder and an automatic litter box, and a bit of anxiety every time I had to leave, wondering if my cat would eat.

A robo-investor is a piece of software to which you describe your financial goals: I want to retire at this time, I want to buy a house in the next 10 years, my kids go to college in 12. You give it some seed money, promise to give it money each month, and it helps you achieve those financial goals. It does automatic rebalancing every day. If one of the stocks you have selected is tanking, it will alert you and say it thinks you should get rid of it. Or there might be one taking off, and it tells you that you are not taking advantage of an opportunity and alerts you so you can opt in.
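[Editor's note] The division of labour in that description is the agentive signature: rebalancing happens silently, while selling and opting in always come back to the human as alerts. A minimal sketch, with invented thresholds and names; any real robo-investor's logic will differ.

```python
from dataclasses import dataclass

TANKING = -0.15    # invented threshold: flag a 15% drop
TAKING_OFF = 0.20  # invented threshold: flag a 20% run-up

@dataclass
class Holding:
    symbol: str
    bought_at: float
    price: float

    @property
    def change(self):
        return (self.price - self.bought_at) / self.bought_at

def rebalance(portfolio):
    """Stub for the daily rebalancing the agent does without asking."""

def daily_review(portfolio, notify=print):
    rebalance(portfolio)   # silent, every day, no human in the loop
    for h in portfolio:    # alerts, by contrast, always ask the human
        if h.change <= TANKING:
            notify(f"{h.symbol} is tanking ({h.change:.0%}). "
                   "I think you should get rid of it. Sell?")
        elif h.change >= TAKING_OFF:
            notify(f"{h.symbol} is taking off ({h.change:+.0%}). "
                   "Opt in to take advantage?")

daily_review([Holding("AAA", 100.0, 82.0), Holding("BBB", 100.0, 127.0)])
```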
But as I was thinking about this robo-investor, I was thinking it was weird: how do you do a usability test on something that works while you are gone? And I thought of the cat feeder, because when I left, I didn't feel confident it would feed my cat before I got back home. And I thought about the Roomba, vacuuming while I was at my job. And I realised they were all of a piece. We grant them agency to act on our behalf. That is critical.

So I thought I should go and do my due diligence as a designer: I will go and see who wrote a book on that, and I will study it. Nobody had written a book on that, so I thought that somebody really should. So I wrote a book on that. And I wanted to give this kind of technology a name to distinguish it from other types of technology in that space. Since we grant it agency, what is the adjective form of 'agent'? I went to a dictionary and looked it up. It turns out it's a forgotten word: 'agentive'. So I call this agentive technology.

When I first told some technology students these nascent ideas, they thought it was cool, but said these were all first-world problems. Who gives a shit? Who cares if you have to pull out your own camera and vacuum your own floors, you lazy bastard? And I thought that was a fair enough critique. So I tried to find real-world examples. In the top corner you will see something called ShotSpotter.
It's very embarrassing to me that we need it in America, because of our gun problem. The service sprinkles microphones around gun-prone neighbourhoods, and each microphone listens for gunfire. It reports that gunfire to a central server, which accurately locates where the gun was fired. How it is implemented depends on the precinct, but in the most innovative precincts it can identify the closest police officers who are not currently tasked, reducing the response time from 45 minutes to four minutes. That is an agent that listens for gunfire. That is not a first-world problem.

Then there is one you can use when you are going on a date. It goes like this. You tell Kite String a time, then you go on your date. At the appointed time, it sends you a text and asks you if you are OK. You can give it a safe word you have established in advance, and then it knows you are cool and deletes the information off the server. If you don't respond in a certain amount of time, it will send a message you have given it to a number you have given it, and the message might be something like, "I have gone on a date. Here is the address. I am not responding to this service. Can you check on me?" Smartly, they have even thought of what happens if you are kidnapped or have a nosy creeper on the other side of the table: if you give it a fake safe word, it will still send the message. So it's a nice backup chaperone. Kite String.
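[Editor's note] That check-in flow is small enough to sketch end to end. This is not Kite String's actual code; the function names and SMS stubs are invented, but the branching follows the talk: the real safe word clears everything, while silence, a wrong word, or the prearranged fake safe word all quietly trigger the message.

```python
def checkin(get_reply, send_sms, safe_word, contact, message,
            wait_seconds=300):
    """One appointed-time check-in, following the flow described."""
    send_sms("user", "Checking in. Are you OK?")
    reply = get_reply(timeout=wait_seconds)
    if reply == safe_word:
        return "all clear; stored details deleted from the server"
    # Silence, a wrong word, or the fake safe word (for the
    # kidnapped-or-creeper case) all end the same way:
    send_sms(contact, message)
    return "emergency contact alerted"

# Dry run with stubbed SMS plumbing and a date that went quiet:
outbox = []
result = checkin(
    get_reply=lambda timeout: None,
    send_sms=lambda to, body: outbox.append((to, body)),
    safe_word="pineapple",
    contact="trusted friend",
    message=("I have gone on a date. Here is the address. I am not "
             "responding to this service. Can you check on me?"),
)
print(result, outbox)
```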
In the left-hand corner, you will see something that looks like a Roomba, because it was designed by people who used to be at the company. It's like a Roomba for a garden. You will see a green arm sticking out of the edge, which spins around and helps to weed. It helps a farmer or gardener manage much more land than they could before. In the right-hand corner you will see a drone called the Scarecrow. This drone hovers above a herd and watches for dangerous humans. If it recognises the humans, say as park rangers, it doesn't do anything. But if it doesn't recognise them, or recognises them as poachers, the drone flies down between the humans and the herd and scares the herd in a safe direction. In full disclosure, I made that up. But the way I made it up is the important thing: I looked at the patterns I had observed in the other systems and applied them to a problem I care about. I don't mean to be flippant about that. When I was presenting in Delft last year, a student said that was pretty cool, but we are making that already. So I will give a shout-out to the AV Drone, built by students who have graduated from Delft University; it is the Scarecrow scenario.

We are talking about safety, food, the Anthropocene: not first-world problems. Agentive technology is as big as you can think. It isn't a panacea, but it works for a lot more than it originally appears to when I present these ideas.

If you are a sceptic, and I hope there are lots of you in the audience, you are wondering how this is different from automation. The main difference is that automation has an engineering goal of minimising human interaction: if a human has to get involved with an automated system, the system is failing. Agentive tech, as you have seen from the examples, is all about humans. And it gives me an opportunity for stock photos.
So, hopefully at this point I have convinced you that agentive tech is a distinct pattern in the world. Now what I would like to do is convince you that it is an interesting pattern and that you should pay a great deal of attention to it. I have five reasons to share with you.

The first is that it is "new". It's a new thing for you to learn and put into practice. Now, if you look closely and you are a reading person, you will see that there are air quotes around the word 'new'. That is because of the image at the back of the slide. Can anybody identify it? I think I heard "autopilot", and that is what it is: a piece of an autopilot. Does anyone want to guess when an autopilot was first demonstrated at a World Fair? No? You know it will be longer ago than you think: 1914. The autopilot is over 100 years old. When it was initially released it was a bunch of electronics, of the kind that came to be called cybernetics in the '50s, and it is a great example of agentive technology. But it's kind of an exception. Of the three dozen examples I go through in the book I wrote, nearly 90% were launched within the last 10 years. That's because nowadays you don't need millions of dollars and work hours dedicated to solving a particular agentive problem. People walk around with supercomputers in their pockets, they are connected to a global network, and there are APIs of artificial intelligence you can tap into. That is new. So when I say agentive tech is new, it is kind of not, but our opportunity to design for it quickly and at that scale is new and interesting.

The second reason it is interesting is that it is different. If you studied interaction design formally, like I did at grad school, one of the canonical models we would talk about is a hammer. If you were to talk about the design of a hammer, you would rightly talk about its affordances: how does a carpenter, a hammerer, know what this object is for? There is a handle, a hard part, a claw. You would talk about its constraints and its feedback. But none of that makes any sense when we are talking about an agent. If it's doing its work away from you, what are the affordances of a Roomba? What is the feedback mechanism? A better model than a hammer for an agent is a butler. I don't have a butler, but I have seen them in the movies. And the way I understand they work is that they know your goals and your preferences, and they only bother you when you ask something of them, or when there is a problem, or when they're completing a task for you. That is a better model for thinking about an agent than a hammer. And often in workshops, like the one I ran yesterday on this topic, I talk about what we're doing as giving users a promotion: from task doers to task managers. That is a metaphorical explanation, and I want to give you something more concrete.
This is a model called the see-think-do loop. The red part is the human part. Any time a human interacts with a system, they observe that system. 'See' is the word we use for that part of the circle, but it could be any perception, hearing or smell: you perceive some state in the system. Then you think: this is not right, so what do I need to do to change the state of the system? You formulate a plan and then execute it: I will click that blue button, and maybe it will do what I want it to do. That is the 'do'.

The see-think-do loop has a computer on it too, and because of the computer we have different words for its half: inputs and outputs. Between the two we have a loop which describes every step in an interaction, as long as it is a tool you are designing.

It gets different when we're talking about an agent. And remember, the red part is the human and the blue part is the computer. A human is mostly involved at the start and the end… I guess you have to buy the book to see the diagram. (Laughter) Here is the actual diagram. You will see that humans are mostly involved in the setup and disengagement of an agent: "I need you to stop working." And they ride the outside. But there is still another computer system helping them do all the seeing and thinking and doing. That is a big difference between the manual and agentive models. Lots of cameras going up; this is my most photographed slide, so I will give you the URL where you can get a full-size image of this diagram. And what you see on the outside of this diagram is a whole bunch of use cases I have identified as unique and germane to agentive technology, that are not germane to other types of interaction design. That is interesting. That is new.
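[Editor's note] Read as code, the diagram's point is where the human sits. Here is a minimal sketch under invented naming, not the book's diagram verbatim: the computer owns the see-think-do loop, and the human appears only at setup, at disengagement, and on the notification channel.

```python
import time

def run_agent(see, think, do, should_stop, notify, poll_seconds=1.0):
    """Agentive see-think-do: the computer runs the loop; the human
    'rides the outside' via should_stop and notify."""
    while not should_stop():            # human: "I need you to stop working"
        state = see()                   # agent perceives the system
        action, problem = think(state)  # agent plans a change of state
        if problem:
            notify(problem)             # only bother the human on trouble
        else:
            do(action)                  # act without asking
        time.sleep(poll_seconds)

# Toy run: a 'vacuum' that disengages after three passes.
passes = iter(range(3))
run_agent(
    see=lambda: "dusty floor",
    think=lambda state: ("vacuum it", None),
    do=print,
    should_stop=lambda: next(passes, None) is None,
    notify=print,
    poll_seconds=0.0,
)
```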
Third reason: agentive tech represents a Shangri-La of user-centredness. Oftentimes in grad school we would talk about the difference between the amount of work we ask of a user and the amount of value they get from using the system, and of course we are trying to minimise the work and maximise the value. When we talk about an agent that does the work, the work is nearly zero, which makes the value-to-work ratio nearly infinite. That's a pretty good trade-off. I will take that trade. If you are in this business in order to provide value to people, I can't think of a better model than the agentive one.

In the 1990s, a couple of authors called Pine and Gilmore published a book called 'The Experience Economy', and in that book they identify different categories of product that differ according to how much work you do and how much of a premium you are willing to pay. Their example was a cup of coffee, so I will use the same one.
If you wanted a cup of coffee and you were dealing with a commodity, the way you would purchase it is this: you would drive out to a big warehouse with a huge bag, announce your intention to get some coffee (they probably have a minimum size of a barrel), and they would shovel the beans into it. You would have to haul it to your house, store it, grind the beans yourself when you wanted a coffee, and then have that coffee. You would not be paying a lot for that cup of coffee.

Compare that to a product, where some enterprising person said, "Don't worry about coming to the warehouse. We will grind those beans for you, put them in a package that is well designed, and distribute it to a store near you, so you can go to the store, get it off the shelf, take it home and put it into a coffee machine." You are doing less work for a product compared to a commodity, and for that you will pay a premium.

Again, some enterprising person said at some point: don't worry about the product and the store. You come into our establishment, and when you order a cup of coffee we will go in the back, grind the beans, brew it, bring it out to you, and then clean up afterwards. That is called a service, and you pay a premium for it.

Pine and Gilmore were identifying these product categories to answer how Starbucks convinced us that five dollars was fair for a cup of coffee. Their answer was that Starbucks was providing an experience. You go into one of their stores and there is beautiful wood panelling and lovely light and hit music and abuse of the Italian language. And you have a cup of coffee, and you pay a premium compared to the service.

I raise this model in the context of the agentive discussion because I want you to notice something about the four steps of their pyramid: every category of product requires your attention to extract value. And we don't have a lot of attention. Between sleeping and having a full-time job and tending to your loved ones and having a hobby and taking care of your health, the amount of attention you have left to dedicate to a product or service is small. That is a competitive space. But my Roomba, Rusty? He doesn't take much of my attention at all. That is an opportunity space for companies that choose to enter into it. I call it post-attention value, and if you search that term, you can find an article I wrote.

Number four. Peter Singer coined the term 'threshold technologies'. He was writing in the context of war, but I will abstract the term to talk about agents. A threshold technology is one that, once a consumer or a culture has adopted it, they are loath to go back to the old way of doing things. It is super first-worldy, but now that I have Rusty, I find the idea of grabbing my vacuum before a dinner party really tedious. The food might burn on the stove while I'm doing that. For companies that adopt an agentive mode for their products, this introduces an advantage: your customers are loath to go back to your competitors, because you are providing post-attention value. That is really interesting.

The last reason is: it's AI. We talk a lot about AI these days, and I want to make it clear I'm not talking about general artificial intelligence. I am a super nerd about sci-fi, and I have a blog about it.
Today I want to talk about AI in the real world. Full disclosure: I work for the aforementioned IBM. Super awkward, right? (Laughter) I wasn't around in the '30s. The AI I will show you is IBM's Watson. Super awkward... (Laughter) The AI capabilities you can take advantage of with Watson are the things you will need in order to have a product that can do something on behalf of a user: it needs smart processing, it needs to know your goals and your preferences, and it needs to execute in a see-think-do loop. You need to use AI in order to build agentive products.

In the first part of the talk I tried to convince you that agentive tech is a distinct kind of technology. In the second part I tried to tell you it's interesting and you should get to know it. In this third and last part, I will tell you that I have been a little too simple in describing it. I use very easy-to-understand examples like the Roomba and the self-driving car because they are easy to understand; it's easy to see the difference between them and other kinds of technology. But when we take a look at what agentive tech really is, it's a mode of use.

Think for a moment about a circle that represents all the things that an AI can do. Now think of a circle that represents all the things that a human can do. We would call the things in the human circle, the tools human users operate, manual; and we would rightly call anything the AI does on its own automatic. It's when we combine those two and find the overlap that it gets interesting. Most people, when they combine the notion of AI and human tools, jump right to assistive: I will do something, and the AI will help me do that thing. Agentive technology throws a spanner into that. It helps distinguish between when we are helping the AI, in the case of agentive tech, and when the AI is helping us, in the case of assistive technology.

While I often talk about products that live in only one of these zones (you can't pull the handle off the manual vacuum cleaner and kick it on its way, and the Roomba doesn't have a handle you can stick on to push it around the floor), a sophisticated, mature tool can exist across these modes. My favourite example is this one. It is a tool for you to input text, but it has some assistive capabilities: it thinks it knows the ending to the phrase you are typing, so you can swipe right to accept it. And depending on where your attention is, it can act as an agent: if you have misspelt something, it will correct that for you. That is an example of a small piece of technology that exists across those modes, and it is my assertion that you have to manage the mode within this spectrum of interaction with an AI.
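[Editor's note] Managing that mode is a design problem, but its skeleton is small enough to sketch. The models here (predict, autocorrect) and the rules are invented stand-ins, not any real keyboard's logic; the point is that one product slides between manual, assistive and agentive from keystroke to keystroke.

```python
def handle_keystroke(text, key, predict, autocorrect):
    """Return the new text plus which mode the keyboard just acted in:
    manual (plain tool), assistive (offers, you decide), or agentive
    (acts on your behalf without being asked)."""
    if key == " ":
        fixed = autocorrect(text)
        if fixed != text:
            return fixed + key, "agentive"   # corrected without asking
        return text + key, "manual"
    suggestion = predict(text + key)
    if suggestion:
        return text + key, f"assistive: suggests {suggestion!r}"
    return text + key, "manual"              # you press, it types

# Dry run with stub models:
print(handle_keystroke("teh", " ",
                       predict=lambda t: None,
                       autocorrect=lambda t: "the" if t == "teh" else t))
# ('the ', 'agentive')
```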
So it's not as simple as I first represented it; I was trying to be pedagogical. That's the last thing I wanted to assert to you about agentive tech: I speak about it as products, but really it is a mode of use.

And while we're talking about text entry, I will use it to talk about the last thing I want to spend my time on today, which is that there is a pattern in the world whereby UX absorbs AI. I am speaking about agentive tech as if it is speculative, but it will be in your wheelhouse very soon. The example I want to use is spellcheck. I don't know if you were around in 1988 using computers, but at that time spelling checkers were their own pieces of software. You would use a word processor to enter a bunch of text, and when you were done and wanted to check the spelling, you would close the word processor, open up the spelling software, and let it chug along. It only took one year before WordPerfect 6.1 incorporated spelling into the word processor. It was a revelation. You still had to invoke it, to go into the spelling mode. And now even that seems hopelessly outdated, because spelling is now an agent that checks your spelling as you type, and what once was a giant company is now a squiggly red line.
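[Editor's note] The steps of that arc fit in a few lines. A toy illustration with a four-word lexicon, obviously not how any real checker works:

```python
LEXICON = {"the", "quick", "brown", "fox"}   # toy dictionary

def spellcheck_1988(document):
    """Separate program: quit the word processor, run this, read a report."""
    return [w for w in document.split() if w not in LEXICON]

def squiggly_red_lines(document):
    """The absorbed, agentive version: re-runs on every edit, is never
    invoked, and its entire UI is which words get underlined."""
    return [(w, w not in LEXICON) for w in document.split()]

print(spellcheck_1988("the quikc brown fox"))     # ['quikc']
print(squiggly_red_lines("the quikc brown fox"))
```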
You would be hard pressed to find, and I hope you would be flabbergasted by, a company talking about spelling as a feature it was offering you. Artificial intelligence is somewhere between WordPerfect 6.1 and the squiggly red line. We are still talking about it as if it is a market differentiator. Our users are getting used to it, and so are our salespeople; we are maturing these concepts and technologies. But I think eventually we will absorb it, and what we think of today as a special new feature is going to be a squiggly red line in our interfaces. That's good. That gives us a bit of time to master new processes, to master the new resources we have at our disposal, and even to master new models. If we do, we will be making this technology the best it can possibly be. Thank you. (Applause)