
The Design of Meaning for the Future of Humanity

UXAustralia
August 30, 2019


Transcript

UX Australia 2019 (AUUXAU3008D), Main Room, Day 2 – 30 August 2019

KATE O'NEILL: Thank you so much, Steve. Thank you, everyone. I have one order of business before we get kicked off, and that is I would love to welcome these five lovely folks to join me here on stage. (Applause) Thank you for that applause for them. I will let them take their seats, and then I want to let you know why they're joining me here on stage. This is a program called #upfront, and the idea is that these five lovely folks have goals of speaking to your group and other groups one day, hopefully soon. We'd love to hear their lovely voices, and they'd like to break down some of the barriers to that by being up here. They're observers just like you, and they're going to get to see you and how friendly you are, which is a cue to you to be friendly. Without further ado, we've got Marissa, Tina and Alex, and they're joining us here, and they will be our #upfront participants. So please, another round of applause. (Applause) Thank you for that. I was supposed to have that slide up there for you. There we go. And there's the Twitter account if you want to look into the program some more. I think it's a wonderful idea.

So, with that said, hello humans! And any robots that are in the audience, hello to you as well. Are there any robots in the audience? Raise your spindles. No, there are none. But you know it's not very far off, right? There are telepresence robots. Some form of robots are going to be sitting with us in events like this in the not-too-distant future. So that's kind of weird. I sometimes make this joke about how I am very happy to be here partly because it means I'm still making my living as a professional speaker, which is a job that's going to be hard for robots to take away from me. But then I got scheduled alongside a robot for a keynote coming up in a couple of months. So, yeah, there's me and there's a famous robot sharing the keynote stage. It's coming, people, they really are coming for our jobs.

But seriously, that notion of how robots, emerging technology, AI, all the different kinds of newfangled gadgets and technology are changing the world around us is very much at the heart of what we're going to talk about this morning. The context for this is the basic question that drives my work: how can we help humanity prepare for what, by all indications, looks like it's going to be an increasingly data- and tech-driven future? I'd like to ask you, as a matter of survey: how many of you worry from time to time about the future of humanity due to issues around technology, like AI and robots and automation and things like that? That's a lot of hands. Just out of curiosity, how many of you worry about the future of humanity due to issues other than technology, like climate change? That's every hand. It's depressing, but it speaks to why this is such an urgent matter for us to discuss, and why we need to talk about issues that relate to user experience and technology and product design and the future of humanity, and how we're going to make it good for ourselves and for generations to come.
So, I want to give you some context about where I've gone with my work and what I'm bringing to this discussion. I have had the opportunity to be an author and speaker. I've spoken for a wide variety of brands and types of organisations, I've even had the lovely opportunity to speak at the UN with the city of Amsterdam, and I do some great media contribution work. Not to brag about my credentials, but I just want to show you that the ideas we're talking about this morning are cross-disciplinary. They're important to everyone; everybody is dying to hear how we're going to address this issue of how the world is changing due to technology, and due to everything besides technology, and what we're going to do about that.

What I find across all of these different types of companies and industries is that there's one fundamental dilemma, and it is this: oftentimes when we sit in meeting rooms inside companies, we talk a lot about what's good for business. We talk about hitting our growth goals, hitting our revenue numbers, how we're going to improve the business, what the strategy is. But often we're not asking what's good for the people who are inside and outside the company, the people we're serving and the people who are part of the company. If we do switch to the other side of the equation, it's often in the context of a nonprofit, or a big corporation's special foundation or special project, maybe a tax write-off or some sort of charitable outreach. Traditionally, it's not usual that we address those questions in the same breath and the same context, and that's why I found it to be a really important dilemma to unpack, because this dichotomy is false and it's dangerous. It's increasingly dangerous the more power and momentum emerging technology brings with it, and I will get to that in just a second.

First, when I say an increasingly tech- and data-driven future, what do I mean by that? What does it mean for us to be facing this tech-driven future? One example I think is vivid, and that you should be familiar with, is this. I'm sure you have at home a thermostat or some smart device that sits in your space, an Internet of Things type of device, a connected device. Is this a physical or digital device? Are we talking about a physical or digital experience? Is this an offline or online experience? It's both, right? It's both. That notion that we used to design under, that we used to plan experiences around, has gone the way of everything being integrated. And that's step one in moving us towards being able to understand how to prepare for the future: recognising that just about everywhere interesting that the physical world and the digital world connect, that connection layer happens through the data transacted in all our human experiences.

That was a central principle of my 2016 book, 'Pixels and Place', and working on that with different companies led me to the further realisations and studies that we did for 'Tech Humanist'. When I talk about the data that's captured in human experiences, by the way, a lot of people find it very easy, in the context of business meetings and reviewing analytics and reports, to forget that business data is largely about human experiences. Another way of saying that, a favourite short phrase, is "analytics are people", which sounds a little green, I will grant you, but it's an important shorthand for remembering that when we're looking at data we're looking at the needs and interests and motivations and desires of real people who are transacting with our business and our brand and the experiences that we create for them.
So, one other consideration that I think helps bring us all together and think about unpacking that dichotomy is to remember that technology always advances, or almost always advances, to fulfil business objectives, right? There are the tinkerers and the makers and the gadgeteers and so on that sit in garages and such, or 'garahges', whatever makes it easier for you to understand. Sorry for my hodgepodge American accent up here. The tinkerers do create technology for technology's sake, but the way that it reaches us, the way that it gets to the mainstream, is through a business deciding that they want to invest some money into it and make some money back, right? So it's always due to business objectives that technology reaches scale.

And when we look at that, we can think: well, maybe the opportunity there, as was the premise of my book 'Tech Humanist', is that if we can just figure out how to align business objectives with human outcomes, then as business succeeds it will bring humanity with it, and we'll be using technology to scale those opportunities. What that looks like in a Venn diagram is this. I realised recently, and sort of blurted it out in the middle of a presentation, that I think in Venn diagrams, Gantt charts and mind maps. That's my brain 90% of the time. Anyone else have that sort of graphical, diagram sort of brain? Yeah, OK. You're my people. The Venn diagram is the nerdiest thing in the presentation, so enjoy, my fellow nerds. Business objectives, human outcomes and technological capabilities: the sweet spot where these meet, I think, is purposeful, meaningful human experiences at scale. The 'at scale' part is the technology; it's using the technological capabilities to accelerate what's possible. The purposeful and meaningful part is the intersection of the business objectives and the human experience. That gives us a way to say that the dichotomy is false, and the way we're going to break it down is to use technology to both make the business better and make human experiences better.

So that's how we're going to get there, and I will tell you what the process looks like. It's three parts. We need to build our best technology, we need to grow our best businesses, and we need to become our best selves, and I will explain what I mean by 'best' in each case. Let's start there. If we want to understand this relationship between tech and business and humanity, it's a pretty good idea to start by asking: what do we mean by humanity? What is it to be human? What makes us human? I'm going to ask you to think of a word. Everyone in the room, for a moment, think of one word that, for you, epitomises the human experience. What one word encapsulates what you think it is to be a human? And I'm not going to ask you to shout it out or anything, I'm going to try to guess it.
Everybody got their word? Give me some nodding. Good. How many of you thought of something like creativity or problem solving or innovation? One hand. A couple of hands. Alright. Very interesting, because that's usually a very popular set of answers. You're not feeling very creative this morning, early, understood, understood. Let's see if I can get a little more pinpointed here. How many of you have an answer like empathy or love or compassion? Oh, that's a lot of hands. You're a very loving group and I like that about you. But it still wasn't all the hands. Not all the hands. Let me see if I can get it on this one last guess. How many of you thought of taking a bus? No hands. No, of course not, because this is absurd. Even if you did think this was the human characteristic that really defines us, I've got bad news for you. It's not uniquely human.

Let's take those answers, though, and unpack them a little bit. There weren't many people who gave this answer, creativity or problem solving or innovation, but it is one of the usual answers. I've asked this question of a lot of audiences, I've done surveys, and it's one of the answers that people often give, and maybe you're all ahead of me and you know that this isn't a uniquely human characteristic. As lovely a characteristic as it is, we know that nonhuman animals use tools and solve problems. I was even watching Blue Planet, and they showed a fish that uses a particular rock and throws molluscs and other shellfish against it to crack them open. So fish use tools. But could you say machines are capable of creativity and problem solving? What about this? Which of these modern art paintings would you guess were created by a machine? All of them. I like that there was a murmur of "all". Yes, all of them. This was a network which was shown images of modern art paintings and created images, among others, that were meant to be representative of what it had been shown. This is AI-produced art. Is it art? What's the difference between images that read as art and art? Hold onto that.

Let's talk about this set of answers: empathy and love. It's a lovely set of answers. I love that you are a very loving room. It's probably the most endearing of the human characteristics but, again, I would say maybe not a uniquely human one. We have probably seen plenty of examples; most of us can be convinced that nonhuman animals are capable of acting lovingly towards one another, showing something that at least approximates love and compassion. We've seen elephants and dogs and chimpanzees be protective of one another in cases where it would actually put them in danger. So that's maybe a close approximation of love. What about machines? Could we imagine a scenario in which machines were acting lovingly or demonstrating empathy? I see a lot of shaking heads. How many of you are familiar with therapy bots? These are chatbots that are programmed to interact with people who are maybe ashamed or nervous about beginning therapy, so the chatbot gives them an opportunity to begin that interaction in a way that's a little lower-stakes, perhaps, and it asks questions that encourage the person to do something more like talk therapy and dig into the roots of the issue. Is that empathy? Or is it behaving empathetically? Behaving as if it were empathetic? And what's the difference? What's the difference between empathy and behaving as if you are empathetic?
What's the difference between art and images that read as art? Well, you may have answers to those, and I would love to hear them afterwards, especially over drinks or something; that's always the right time to discuss those philosophical questions. But I have an answer: I think maybe it has something to do with a deeper truth, or some sense of intentionality. I see some nodding heads. That's kind of the idea. I think just asking those kinds of questions, and trying to get at the heart of that sort of thing, is part of the characteristic that I would say is actually the most human one, which is that humans crave meaning. Humans crave meaning. We seek it. We thrive on it. We puzzle over it. When it's offered to us, we can't get enough of it.

When I say meaning, I'm talking about all kinds of meaning, meanings of meaning. I was a linguist by education, so I come naturally by an interest in the semantic level of meaning: what we're trying to communicate with one another when we talk, what sort of package of communication is happening in our interactions. But there are also these other layers: status, significance, purpose, truth, all these things, all the way out to the layers of cosmic ponderings, like what's it all about, Alfie? Why are we here? Right? I even think it's interesting, when you take a word like significance and break it down, that there's this word signify and this word significant: signify is about what something represents, and significant is about what's important. So you actually have this kind of how and what that are part of the questioning, as well as, within the idea of purpose, the why. There are a lot of really fundamental questions being asked within these embedded concepts of meaning. It's really deep.

What's interesting about that is when you start thinking about how to apply the notion of meaning, and its many layers of meanings, to experiences, you can start realising some interesting fundamentals about technology and about designing products. I think, for example, that augmented reality is going to be one of our most powerful examples of exposing layers of meaning for people. I'll tell you that I've been working in the technology field for about 25 years, and I would say that I have had that scalp-tingling, crawling, back-of-the-neck sensation twice. One was when I saw a graphical browser for the first time in '93, '94, and I was like "This is going to change everything", and it did. The next time was when I saw augmented reality for the first time and I was like "This is going to change everything", and it didn't. Not yet, anyway. Not in the same time frame, but I haven't given up on it yet, because I think there's an awful lot of interesting potential and I don't think people have really realised what to do with it yet. This is a screenshot from looking at a Starbucks coffee cup of mine. I don't know how well you can see in the image that there are some words and a tag showing up around it: coffee, cold and medicine. So clearly it knows me very, very well. But these are labels, these are words, these are concepts that describe this coffee, potentially. It could describe its relationship to me, it could describe its relationship to its surroundings, to the context that it's in. Those could all be different attributes that are being exposed through the layers depending on what's relevant, what's significant at the time. Meaning is about what matters. It's about what's important in every case.
So, maybe that's starting to make sense, maybe you're like, OK, I see where you're going with this. But how does talking about meaning get us closer to helping humanity prepare for our tech-driven future? I'm glad you asked, because I will tell you. Meaning is about what matters, but innovation is about what is going to matter, or at least that's how I like to think of it. If you use that framework, if you ask yourself those two questions as you're thinking about the technologies that you work with and the products that you are designing or the experiences that you're creating, what matters and what is going to matter, you're going to be in a great position to set those experiences up for the most meaning to be embedded within them.

So, I have a client: I've been working with Kelly Services, the staffing company. Are they someone you're familiar with in Australia? I see nodding heads. They've been around for decades. The 'Kelly Girl', that's their tradition. You would have called them for, like, a temporary secretary in the sort of 'Mad Men' era. They've evolved, and now, looking ahead at the future of human work and the future of automation, they're trying to figure out what this is all going to look like for them. What's it going to look like to be the Kelly Services that a CEO might call when they need additional capacity within their organisation? It's not going to be about human workers all the time. How are they able to make sure they're positioned well to address the bigger picture? They've set up an R&D lab and are trying to understand how to be the service that provides human and machine augmentation of the workplace. There's no easy answer on this, but here, at least, they're thinking about what matters and what is going to matter. So that's an entry into how to think about our best businesses.

Remember: our best tech, our best businesses and our best selves. Tech is going to add capacity and scale like we've never seen before. That's the truth. I've been doing a lot of keynotes for organisations that focus on robotic process automation, and I've seen some incredible examples of RPA that's in place already, and has been for years, within different organisations. They're doing a phenomenal job of making automation very, very reliable for organisations, and they're reducing the need for human labour in those roles, and that's very, very unsettling. But it will add capacity and scale like never before, and I think what's important for us to realise here is that our actions and our decisions will have outsized consequences. They will. Some of those are going to be unintentional, and I think we need to try to anticipate as many of the consequences as we can to minimise the unintentional ones.

So I will give you an example, which I think is kind of a fun one. Amazon Go. You must all be familiar with it at this point. How many are familiar with Amazon Go? Pretty much everyone. It's a grocery store, a retail concept with this 'just walk out' idea where you have a gate at the front of the store. You have an app that you sign in to, it has a QR code, and you walk in, scan through the gate and you're signed in. It's a grocery store like any other, so you just gather up the food that you want, figure out what it is that you need and then, you know, buy things you don't need, because that's the grocery experience, right? Is that just me? Buy a tonne of things you weren't coming for, and then just walk back out through the gates. You never stop at a cash register, you never interact with a cashier, and you may certainly be thinking: wow, that's a conversation we need to have, what happens to all these cashier jobs as this thing scales out?
That's a conversation to have but it's not the one I meant to have with you today.
The thing that I want to bring up is that when you start the app for the first time, you get this onboarding tutorial and it says, you know, because anything you take off the shelf is automatically added to your virtual cart, don't take anything off the shelf for anyone else. Does that raise any red flags for anyone? Yeah. Right? I'm sure it must be common here in Australia, just as it is anywhere else in the world, that when you can't reach a product on a shelf you ask someone to reach it for you, right? Right. This is so common, in fact, that recently I was rewatching the movie Double Indemnity. Have you seen this movie? Great movie. Classic. There's a scene where a random woman asks Fred MacMurray's character to reach a product for her. What if this had been an Amazon Go back in 1944? You know, I don't do a very good Fred MacMurray impersonation, but he'd have to say, "I can't get that for you, this is an Amazon Go store." Or she would never have asked him in the first place; she'd be like, "Of course, this is an Amazon Go store. We don't do that." I don't want to spoil it for you in case you haven't seen a 1944 movie, but she's actually interrupting just as they're conspiring to cover up a murder, too. So maybe she actually foiled their conspiracy. It could be. So when we ask each other for help in grocery stores, we could be breaking up murder cover-ups. You never know.

But my point is: what happens if we get used to the idea that we just don't ask each other for help in the Amazon Go store? You might think that's not such a big deal, but how many of you are thinking, oh yeah, Amazon acquired Whole Foods too? I was thinking about that. 475 stores worldwide. And then Amazon announced at the end of last year that they plan to open 3,000 more Amazon Go stores by the end of 2021, which is right around the corner, right? So now we're talking about some scale, right? Now we're talking about pretty much the dominant experience that's going to be what you have when you go grocery shopping, and what's to say that's where it stops? It's probably going to be, if not the future of retail, part of the future of retail. We can expect that, right? So now what I'm saying is we don't help each other in any store, and how long do you think it is before we get conditioned to that idea, and socialised to that idea, and we just don't help each other at all? Anywhere. Ever.

So maybe that sounds a little hyperbolic, like, Kate, you're overreacting a bit, calm down, quit being so American. I understand. But my point is that's what happens when you decide what an experience is going to be. Experience at scale does change culture. Do you know why? Because experience at scale is culture. The decisions you make, the actions you take within the products that you design and the experiences that you're creating are creating culture, so we need to be very, very mindful of the consequences that could come from that. And, of course, my Amazon Go example is meant to be a stretch, but it's meant to be an illustration to help you think about what could happen with the products you're designing too. It's an absurd example, right, but I like this tension between the idea of meaning and absurdity. Any artists in the audience? Yeah, got a few hands. You know there's this tension between what is true and what is maybe weird, right? You can play with those ideas. Artists have done so for a very, very long time.
I feel like anywhere that you have not defined what is meaningful, it leaves this void into which absurdity can flow, and it's not an intentional kind of absurdity. It's the kind of absurdity where we have the same meetings over and over again and nobody knows why, or we use weird language at work that none of our friends would understand us saying, right? That's the kind of absurdity I'm talking about. We don't want that kind of absurdity. We don't want the kind of absurdity that creates experiences that distance us from each other. VR is super cool, and when you're in a completely immersive experience, it's amazing, right? But to everyone around you, you look like an idiot. You're flailing around and crashing into stuff. That's not an experience that keeps you connected with the people around you and the place around you. Maybe that's OK now and then, but we have to be very intentional about what we're creating and why and what the context is going to be. We cannot give absurdity a chance to scale, because it will. It will scale and go everywhere. We have to be very clear about defining meaning and being very intentional about purpose and significance, what matters and what is going to matter. In that way we can keep that unintentional absurdity from entering into the equation.

Having said that, how many of you share my love/hate relationship with automation? OK, I feel you. I appreciate that automation is super cool and powerful and can do great things for us if we deploy it well, but that "if we deploy it well" is kind of the catch, right? Yeah. I think Bill Gates really said it well when he said that automation applied to an efficient operation will magnify the efficiency, and automation applied to an inefficient operation will magnify the inefficiency. I think you can substitute meaning and absurdity here: automation applied to an absurd experience is going to magnify the absurdity, and that's what we need to be cautious about. Even if you're not designing automated experiences now, the experiences that you are designing will soon be automated, or you will soon be designing for automation. That's just how everything is moving.

I want to show you the other piece of the consequences of this; it gets us back to that Amazon cashier jobs consideration. There's a study that was done a few years ago which showed the potential impact on jobs that are likely to be at highest risk of being automated, and I apologise that it's the United States; that's where they did the study. I'm from New York, so I zoomed in on New York, and you can see the green bubble says 55% of jobs are potentially automatable. You can see that green bubble is not one of the bigger bubbles on the map. Las Vegas, which I can't seem to point to, the really, really big circle in the lower left, has got to be an enormous number, and it's a lot of customer service jobs, trucker jobs, things like that. We know what sort of jobs that represents and we know those are easily automated. Think of the socioeconomic consequences of this many jobs being changed, and maybe replaced, by automation in short order. I did find a map that pertains to Australia. You may have seen this type of study before, but this was published through the Committee for Economic Development of Australia, and it shows the local government areas with more than 100 workers that have the greatest risk, the greatest probability, of job loss through computerisation and automation.
You can see the red and orange areas are the highest risk. It's a pretty significant number of areas. So this is going to impact all of us worldwide. We need to be very thoughtful about this. What I'm saying is not that it's a disaster, but I think what we need to recognise is that the main effect of automation on human work is going to be augmentation, and augmentation just means change, right? You've all had your work changed by technology at some point. Think about the last tool that got inserted into your workflow, maybe a CRM or something. Whatever that tool was, it changed your workflow. That's all I'm saying. It's going to change our jobs, but there are significant numbers of jobs, and types of jobs, that are going to be displaced and replaced by machines, by automation, by self-driving vehicles. Truck drivers are going to be hard hit by this. We also know, particularly in the US, that we can identify particular populations and demographics that are hardest hit by this. We know that communities of colour and marginalised populations are most likely to be affected by this displacement and replacement. So we're talking about some major, major socioeconomic issues. It might sound like that's outside the scope of a UX discussion, but it is not. I think this all belongs in the consideration set of those questions that we ask in our meeting rooms when we think about what's good for the business and what's good for humanity. Let's have those conversations, let's be open, eyes wide open, about this. We can't necessarily stop progress, but we can think about what these consequences are going to be and have these conversations out in the open.

I do want to point out this last word on the slide: created. Yes, jobs will be created by automation. There will be new types of jobs. Imagine that there are nuance engineers or something, or, you know, you might be somebody who looks over curated sets that AI has pulled together, reviewing images that were gathered by image recognition and saying yes, that's the one, no, that's not the one, and so on. There's lots of opportunity for humans and machines to work side by side. The future of the workplace is probably not going to look like this, although it's kind of cute. I saw this illustration accompanying a 'Wall Street Journal' article and I just loved it. I reached out to the illustrator and I was like, can I, can I, can I use this illustration? I asked what he wanted, and he said to just mention his name. It's done by Kiko. I think it's a hilarious representation of this flipped-over world, but I do think we have to think about all the possibilities here. What is it going to look like for humans and machines to work side by side? And I think it forces us to recognise that automation is a business and human opportunity, for sure. There are plenty of good things that can come out of this. We just need to be very clear about the problems it presents strategically, culturally and in terms of design. So let's think about how humans and machines can work side by side, not necessarily robots looking at (inaudible).
But what does it look like in a real sense? One thing that's interesting to me: when you think about how humans work best, what are the conditions under which humans thrive in a workplace? Humans thrive on having a sense of meaning about our work. We like having shared, common goals with each other, as a team, and we like having a sense of fulfilling something bigger than what we're doing, being part of something bigger. That's what works for humans. For machines, all they want is clear instructions, simple code, efficient, elegant. Don't put more in than is needed, keep it streamlined. You know what's very interesting about these two very different sets of requirements? You can meet them both by articulating a very clear sense of purpose. If your organisation, your company, your team, your project, is able to articulate what it is trying to achieve at scale, it is in a much better position to give its humans and machines the right information to have meaningful success within the work that they do, because purpose is the shape that meaning takes in business. That's what I think, anyway. When you think about meaning as it applies to a workplace: business is a human concept, it's something we created, and the way we make it feel more human is to have a sense of meaning or purpose behind it. Why are we doing what we're doing? And what are we trying to do at scale?

I think a very easy, clear illustration of this, and a fun one, is Disney theme parks, because their purpose statement, when you boil it down to three words, is "create magical experiences". Just that. Create magical experiences. Simple. And the entire organisation, hierarchically, across different job functions, can figure out how to solve a problem if it's brought to them, assuming they have the autonomy to solve it, if they just think: how do I solve this problem to create the most magical experience? What's very interesting about this is that even when you think beyond the work culture or human elements of this, when you think about digital transformation and how you deploy a data model and a technological set that addresses this, well, they did that. They can actually justify their $1 billion investment in the MagicBand wearables and infrastructure because it so clearly represents creating magical experiences, right? It embeds payment information and your room key and park admission, all sorts of information, into the wearable, into the wristband, and makes the entire visit feel seamless, or magical. They know your name when you go to check in for a ride or you go to check in for your dinner, and your reservations are all there, all of it is there.

I guess the thing for you to take away is: even when you think about the projects that you're working on, can you be this clear? Can you, in three to maybe five words, articulate what it is you're trying to achieve at scale? Three to five words. It really helps. Because how often do you get this approach to digital transformation or deploying new technologies: hey, what's our AI strategy? Anybody ever had an executive slap a news clipping down on their desk and go, "VR, what are we doing about it?" I've had that happen. Yeah, something, something, cloud, I don't know.
Obviously that's not the ideal way for us to be thinking about this. The ideal way to think about it is: how do we amplify the strategic purpose that we've already articulated? How can we be very intentional about taking what we know we're trying to achieve at scale, and using data modelling and various kinds of emerging technology, whatever is relevant, to achieve that sort of alignment and amplification? All I know is that, out of everything we do, humans cannot leave the determination of meaning up to machines. It is not their strong suit. That will be how we get to our worst tech, not our best tech. That's not what AI does well. AI, at this point, has not been particularly great at determining nuance. This is a pretty famous image recognition problem: muffin versus puppy. It's funny because we can feel a little smug about it. I can easily see the difference, and you can, because you're using emotional memory. You know the fluffiness of the puppy and the sweetness of the muffin, and that's helping you recognise these things. There are sensory experiences to help you make these distinctions. Here's more, by the way. You're not having any trouble knowing which one is fried chicken. Humans are generally pretty good at nuance, and it's because nuance is meaning. We are wired for meaning. We're good at this stuff. So it should be on us. It should be on us to infer what is meaningful about the experiences that we create.

It's going to get harder, by the way. A few speakers yesterday also mentioned this increasing phenomenon of deepfakes. We're dealing with a big problem here. This is an example that you may have seen before that shows a model that can use one reference image, such as the Mona Lisa here, and create very lifelike, convincing animations that you would not imagine were not this person making these gestures and facial expressions. It's going to be a problem. It's not going to be as easy for us to know what is meaningful, what is true, what is significant, what matters and what is going to matter. By the way, have you seen this one? I think it's kind of interesting. These are landscapes that you can create. You can see the colour bars at the bottom are a design palette of sorts. It looks like MS Paint, but there's this vocabulary at the bottom: if you draw in the brown, that's rock, and the teal blue is sky, and it interpolates your instructions and does a composite that looks like a real landscape. This kind of thing is going to be incredibly problematic, because we're going to have a very hard time knowing the difference. So far we don't quite have the sophistication for machines to even know the difference. We're getting there; there's some work happening in that space of using machines to detect a deepfake, but it's always going to be a matter of staying one step ahead. We need to be aware this problem exists, because we're back to this: not everything is what it seems. And we have to be aware of that interconnection of meaning and absurdity that's happening, and be very sophisticated about it. Here's a fun example.

>> SPEAKER: "Me ten years ago would have played along, but me now ponders how this data can be mined for age progression and age recognition." Well, Kate O'Neill, who wrote that, joins me now from New York.

KATE O'NEILL: Everybody remember the ten-year challenge? Yeah. Did you play along? A couple of you are like, oh yeah.
I don't necessarily believe that this was a conspiracy that anyone engineered. I had this tweet that went viral, and I unpacked it in a series of tweets, and that went viral, and Wired asked me to write a piece to explain the thinking behind it, and that went viral. I ended up on all these news channels all over the place, and it was because, as I said earlier, everybody cares about this stuff. We can all recognise at some level that we're in danger and we need to understand what is happening behind the scenes. Here's the article; like I said, it was super popular. It was at the top of Reddit and 'The Daily Show' referenced it. A huge phenomenon. The point of that is that this is a really, powerfully important area, an area in which humans are being duped on a regular basis, and we now have very little way to distinguish between when something is just for play and when something has some insidious motive behind it. So we have to be on the lookout. We have to be thinking about this interplay of meaning and absurdity. We have to be thinking about what we do when we participate online. We also have to be thinking as creators of experiences: how can we make sure that we're building trust into the experiences that we create, that we're operating with the most integrity, applying the greatest care to the work that we do and using human data respectfully?

Am I describing a dystopia as I'm talking about this? I don't think so. I think we have to be intentional and use the technology to create something better. We can create our best tech. And it starts with understanding that, and I think a few other people talked about this, the idea of leaving behind this term of "user experience". At some level of abstraction it doesn't serve us to think about user experience and patient experience and so on, because it's compartmentalising the human experience; we need to integrate the human experience. We need to be very holistic and intentional about it. And so in 'Pixels and Place' I proposed a model called integrated human design, and it looks at "integrated" as online and offline all blurred together, and "human" as the integration of all those roles. We're all of those roles, or most of them, at various times throughout our lives. "Experience" I define as the intentional layer of interactions and transactions that we create, and "design" I very intentionally define as the adaptive execution of strategic intent. In other words, you have some kind of strategic intent. It should be derived from your sense of purpose, right, your sense of why you're doing what you're doing, what you're trying to achieve at scale, what matters. And then you approach it in an adaptive way: you're going to try to get better and better as you go. And then I also built a model on top of that (my friend Jeffrey refers to that human experience design model), to talk about the idea of creating meaningful machine-led human experiences. So if we assume that more and more of the experiences we create for humans are somehow going to be data-driven, automated, or somehow led by machines, we have to be very intentional and clear about how to do that very well. And so here are some of the principles.
I think one of the things, like number one here: when we talk about automation, oftentimes we talk about automating the menial things, the idea that we're going to automate the things that are routine and get in our way so that we'll be able to do the meaningful work, and that's fine. I think that's an important piece of making ourselves more productive and having more fulfilling jobs, but if you take that to scale and imagine a world where everything that can be automated is automated, and it's all menial and meaningless, that's a really horrible dystopian image. Like we're surrounded by meaninglessness. So wouldn't it make sense to try to infuse some of those experiences with more meaning? For some of the things that we automate, try to make sure they have some sense of what is meaningful about the interaction we're creating and about the relationship between the company and the consumer, or the brand and the customer, or whatever the relationships are; try to create some connection.

And then automate that sense of empathy. Not empathy in the sense that it was unpacked yesterday afternoon, where it can feel condescending, but empathy where you're trying to feel what the person in the situation is going through, where you're trying to bring that into the consideration, because that will help create the right context for automating meaning into those experiences.

And then use human data respectfully. That's so critical. We have to be aware that the data we're collecting represents actual people, and we have to treat it with respect. I agree with this idea of minimising the data we collect: be very conscious about what we're collecting and how we're using it.

And then, and this is a piece that I offer to executives when I speak to corporate leaders: as we gain efficiency and profit through these automated tools, invest some of those gains back into your human resources, into humanity and into human experiences. It won't always be about taking the efficiencies we've gained and creating new jobs. Sometimes it might. But it might just be about making those experiences nicer for the people who are outside the organisation, because ultimately, as this progresses through time, I think it's easy to imagine we're going to enter a world that is increasingly automated, and whatever the situation looks like, whether it's universal basic income or however it's structured, we're going to have less and less to do with those jobs, and we need the experiences that are created for us by machines to be meaningful. We can't thrive without meaning.

When we talk about using human data respectfully, by the way, one of the ways to do that is to recognise that relevance is a form of respect, offering something relevant when it's relevant, but discretion is a form of respect too. And then protect human data. I love this screenshot I took from Twitter: today's selfie is tomorrow's biometric profile. I don't think it behooves us to be frightened by the future, but I think we have to be very, very conscious of the data we're collecting and what that does, not only to the people who are the constituents we serve, but to us and to the people we care about. So use our data. Absolutely use it. But use it to make meaning when you can. Create meaning in the work that you do.

So can tech help us create more meaningful experiences? I think it can. I was having a conversation during yesterday's break with my new friend Adita. Will you raise your hand? I was telling him about this time-shifting app that has helped me. I'm here in Sydney and I live in New York, and obviously that's a big time difference.
This app has helped me with travel to Hong Kong, Mumbai and so on, and I've never had jet lag because of it. He was like, that's technology helping us have a better human experience. I'm like, you're right. Here's that app. That's my husband's hair in the background on my home screen. You set it up and you say, this is where I'm starting, here's my flight time and arrival time, and it gives you a plan to follow in terms of when to sleep, when to get bright light, caffeine exposure and so on, and if you follow it really closely, in my experience, you don't get jet lag. I'm standing here in front of you telling you I don't have jet lag. It's amazing. If we have the data, if we have the opportunity to use even just simple technology tools like that to make our experiences better, why wouldn't we do that? What if we could use data and emerging technology to transform experiences around what makes humanity thrive? I think we can do that. I think we can take that opportunity.

There's one more roadmap for us. I mentioned earlier that I had the opportunity to speak at the United Nations, which was a thrill of a lifetime, and I've been invested in the work I've been doing around their Sustainable Development Goals, trying to help corporate leaders understand how to align the work of their business with these human objectives. This is what I talked about at the beginning: align business objectives to human outcomes and use tech to amplify that, so that as the business scales and succeeds, it will bring humanity with it. These are the ways to do it. This is a very clearly articulated set of 17 goals that will help everyone in the world have a better experience, a better life, more quality of life. Because you're not future-ready if you're not thinking about the way your decisions impact the planet and the future of all the people on it. That's just the truth. All the conversations you're having in your workplace need, at some level, some recognition of this. Remember at the beginning, when I asked whether you worried about the future of humanity due to things like climate change, and every hand went up? We have to have these conversations. We have to.

I'm going to have to skip a bit ahead; I dragged a bit, I think. I want to make sure that we understand that machines are what we encode of ourselves. When we think about creating the new technologies that are going to create experiences for us in the future, we have to remember that we are deciding what's important. We are encoding our values and our biases into those lines of code, into those algorithms. Why would we not encode our best selves, our most egalitarian viewpoints? We do have that opportunity. You have that opportunity. I would encourage you to think about those best-selves opportunities in every conversation you have around the work you do. What are you trying to do at scale? Ask that question at your work. For me, that answer is: I want to help humanity prepare for a tech-driven future. Longer than three to five words, I'm cheating. I want to do that by helping create more meaningful experiences. For me, the way to accomplish that is speaking with groups like yourselves and asking you to do the work. I'm genuinely trying to help you make those decisions in a way that aligns better with the opportunities we have, because the tech-driven future isn't going to be strictly dystopia or utopia.
It's going to be what we all make it. We have that power. We get to make those decisions. So please make them wisely, because I really think that the power and the capacity and the scale that emerging technology brings can bring us truly the best futures for the most people. That is truly powerful, and an opportunity within our reach. It genuinely is. I don't want us to feel like dystopia is imminent. I don't want you to feel that sense of anxiety that we hinted at with the survey question earlier. I want you to feel a sense of hope and an opportunity to make a difference. Make it better, please. And please use the robots wisely. Thank you. (Applause) Thank you, my friends. I think you can join me off the stage now. I'm going to take my laptop with me.