August, 2019 Page 4 of 8

human interaction. If a human is to get involved with an automated system, the system is failing. Agentive tech, as you have seen from the examples, is all about humans. And it gives me an opportunity for stock photos.

So, hopefully at this point I have convinced you that agentive tech is a distinct pattern in the world. Now what I would like to do is convince you that it is an interesting pattern and that you should pay a great deal of attention to it. I have five reasons to share with you.

The first is that it is "new". It's a new thing for you to learn and put into practice. Now, if you look closely and you are a reading person, you will see that there are air quotes around the word "new"; that is because of the image at the back of the slide. Can anybody identify that? I think I heard "autopilot". And that is what that is: a piece of an autopilot. Does anyone want to guess when an autopilot was first demonstrated at a World's Fair? No? It will be longer ago than you think. 1914. The autopilot is over 100 years old. When it was initially released, it was a bunch of electronics; the field came to be called cybernetics in the '50s, and it is a great example of agentive technology.

But it's kind of an exception. Of the three dozen examples I go through in the book I wrote, nearly 90% were launched within the last 10 years. That's because nowadays you don't need millions of dollars and work hours dedicated to solving a particular agentive problem. People walk around with supercomputers in their pockets; they are connected to a global network, and there are APIs of artificial intelligence you can tap into, and that is new. When I say agentive tech is new, it kind of isn't, but it is new: our opportunity to design for it quickly and at that scale is new and interesting.

The second reason it is interesting is because it is different.
If you studied interaction design formally, like I did at grad school, one of the canonical models we would talk about is a hammer. If you were to talk about the design of a hammer, you would rightly talk about its affordances. How does a carpenter, a hammerer, know what this object is for? There is a handle, a hard part, a claw. You would talk about its constraints and its feedback. But none of that makes any sense when we are talking about an agent. If it's doing its work away from you, what are the affordances of a Roomba? What is the feedback mechanism?

A better model than a hammer for an agent is a butler. I don't have a butler, but I have seen them in the movies. And the way I understand they work is that they know your goals and your preferences, and they only bother you when you ask something of them, when there is a problem, or when they're completing a task for you. That is a better model for thinking about an agent than a hammer. In workshops, like the one I ran yesterday on this topic, I often talk about what we're doing as giving users a promotion: from task doers to task managers. That is a metaphorical