
Leisa Reichelt Transcript

UXAustralia
March 19, 2020


Transcript

  1. 1 www.captionslive.com.au | [email protected] | 0425 904 255 UX AUSTRALIA

    Design Research 2020 Day 1 Thursday, 19 March 2020 Captioned by: Gail Kearney & Rebekah Goulevitch
  2. 2 LEISA REICHELT: Thank you, Steve. All right. Let me

    just negotiate the technology quickly. Give me a thumbs up if you can hear me. No? OK. Give me a thumbs up if you can see my screen. Fantastic. OK. Off we go. All right. I've got to say it's super strange to know that you're all out there and I can't see you and I can't hear you, so I'm just going to imagine a lot of very friendly, excited people. We're at the beginning of a really exciting time, I guess, in many ways. It's obviously anxiety-provoking, but I also see how exciting it is that the constraints we are living in have forced us to be more creative and more human in a lot of ways than we've been before. I hope wherever you are, you are keeping well and looking after everybody around you. I was wondering, when we do remote conferences, how the Welcome to Country works. I feel like I should give you a quick welcome to my part of the country. I'm speaking to you from Sydney, which is the land of the Gadigal people of the Eora nation, and I want to pay respects to their elders past, present and future as well. I'm currently working from home, since last Thursday, and my set-up is not particularly fantastic, I have to say. I'm sitting at my kitchen bench right now with no second screen or anything like that. I have my speaker's notes here on printouts in case I need them. I've even sent my sons, Reuben and Dune, to the dog park, which should buy me half an hour of guaranteed quiet. You may get a cameo appearance at some stage. I apologise. I shouldn't apologise. I'm Leisa Reichelt. I know that, particularly in Australia, nearly everybody knows the name Atlassian, but a lot of people don't know what we do, so this might be useful context for you. We make software for teams who are collaborating together, and the mission at Atlassian is to unleash the potential of every team, which is particularly interesting in our current times. You might know us from software such as these, which are a small number of products from the overall suite. Today I want to talk about how we can unleash the potential of user research in our organisations and how we can address some of the places
  3. 3 where research can come a little bit unstuck, particularly

    as we are looking for opportunities to scale it. I don't know if it's like this where you work, but my experience has been that since this guy started coming out and talking about customer obsession, things have started to change. Where once it used to be a complete battleground all the time to get everybody to think about potentially involving customers in our process, we have many fewer fights these days about that. Instead, I feel like in a lot of places we have started to shift to a new challenge, and that challenge is: how do we meet the demand from all the people who want to talk to all of the customers all of the time? And what this has started to unlock is a new philosophical debate as to who should be doing this talking to customers. Whose job is this? There are some researchers, and I have to admit I have a lot of empathy for them, who believe that talking to customers, the research work, should be done by people who are trained and experienced in doing that work. So perhaps researchers should do that work. In some parts of the world, though, this is a redundant question, partly because there is just so much demand that we cannot hire enough researchers to meet that need, to meet that demand. I think it's quite a different story in Australia, though. We have a different issue here, which is that we still don't seem to actually train or hire specialist researchers. I did a little bit of research because I wanted to try and put some numbers, some evidence, behind this feeling that I have. And so I thought, where better to conduct research than on LinkedIn? I started with the town I lived in before I moved back to Sydney, which was London. Let's take a look at what it's like to get a job in London. These are the research jobs posted in the last week within 25 miles of Central London. And, for context, let's say London has a population of 8.9 million; it's quite a big place. 590 jobs came up, which seems like an astounding number of roles. 590 research jobs posted in the last week. Take a moment to look at the
  4. 4 kind of roles that we're talking about here. They

    are user experience research leads, seniors, heads. These are senior specialist roles. Let's fly over to another part of the world - not fly, let's magically transport ourselves in a germ-free way to San Francisco. San Francisco is tiny, only 300,000 people, so let's zoom out to the Bay Area. This has a population of 9.7 million people, and it has a lot of our best known tech companies, for example. Here we can see 602 jobs posted in the Bay Area in the last week. Again, take a look at the kind of roles that are available there right now. They are pretty nice looking roles, and all specialist research roles. So now let's return to Sydney. If you didn't know, Sydney has a population of 5.2 million. So London and San Francisco had roughly 600 jobs each last week and populations of around 8 to 9 million people. Then what's your guess of how many jobs there might have been for user researchers in Sydney posted last week? Would you say maybe about half that, so maybe 300 jobs? Maybe a third, maybe 200 jobs? If you have been looking for a specialist research role in Sydney in the last week or so, you will definitely know that's absolutely not true. 48. 48 measly research jobs. And you can see we have been really quite flexible here on what qualifies as a user research role as well. So I think the reality we have here is that in Australia we're still not really taking research particularly seriously as a specialist discipline. I know multiple people who have moved back to Sydney from abroad and have been told by people in our field here that you need to be able to be a designer in order to do research work as well. I think that's really disappointing. I hope we are at the tipping point now of moving away from that and towards taking research a lot more seriously than we have previously. The context here is really quite different, I think, than it is in other parts of the world. So we need to ask a slightly different question here. In the US and UK there is a huge amount of demand. They are trying to hire as many researchers as they can and they can't do it. They are
  5. 5 having to democratise in a lot of places. Here

    in Australia it seems as though we are democratising too. Who should talk to customers? Whether it's by choice or necessity, this is now a redundant question. The question is not whether democratised research is good or not. Product managers, marketing managers, all kinds of people in these organisations believe that it is part of their role to conduct research with customers and users, and that they are not doing their job if they are not talking to customers and users on a regular basis. Our starting premise is that people being closer to customers and users will lead us, in theory, to better experiences, better products and better services being delivered. But it's not as simple as just getting closer to customers, just talking to customers, of course. When we're doing that, we have two very important jobs that we need to get done. The first is our old friend empathy. Closeness to customers and users needs to help us build empathy. The way that I think about this is that it needs to help us to have the ability to understand and share the feelings of our customers and users in various contexts, in a range of contexts that are really different to our own. This means that when we're thinking about design, we have this kind of database of contexts that we can refer to, and we are more likely to avoid designing for ourselves, designing for our organisational context. But in addition to that, our interaction with customers also needs to provide us with evidence, and evidence is obviously the facts, the information, that indicate whether a belief or a proposition is true or valid. This is the knowledge that helps us drive forward. It helps us to make decisions. My experience is that these two very different jobs require very different skill sets, but they are very often conflated in our organisations. And this lack of appreciation for the two very different jobs that research has to do creates the preconditions, I think, for the five dysfunctions that I want to talk about: the dysfunctions of democratised research, or how
  6. 6 research gets wobbly when we try to scale it.

    Now let's start with dysfunction number one. I call this the speed dysfunction. I suspect this one will be quite familiar to many of you. I wonder how many of us consider ourselves to be working in an engineering-driven organisation? You know that you're working in an engineering-driven organisation because it's tacit knowledge in the organisation that the real work, the important work, is done by the engineers or the developers, and any work that precedes the writing of code is just basically adding time between when we've had a great idea and when we can do it. Speed really matters. In software teams around the world, designers are racing to feed an engineering machine to get the new features shipped to achieve product-market fit. Although many teams will claim they are being data-driven and customer-obsessed, how often do we hear questions raised about the quality of the research and the quality of the resulting evidence? I'm going to say not so often. And so as a result teams are pleasing their stakeholders by taking the fastest and most effortless approach. And this means that we are making trade-offs. Where are these trade-offs happening and do they really matter? The first place that you will often see a quality trade-off is in recruitment, the selection of who gets to participate in the study. Quality research really relies on ensuring our research participants are realistic, that they have the relevant experiences, that they are able to give feedback and behave in a way that might be representative of many other similar users in our audience. Sometimes, as I'm sure you know, getting the right participants into your study can be really hard work. This is particularly true if you are working in a business to business environment. Recruitment can be very time consuming. Sometimes it can take several weeks to get the right participants lined up at the volumes you need for a study. So unsurprisingly a lot of teams compromise, and they might do this in one of two ways. One is that you simply relax the recruitment brief, you are less
  7. 7 fussy about who you involve in your research. The

    more recent one is that we are less fussy about where we recruit from. We have access to a lot of online panels through some of our online tools, and these panels are subject to misrepresentation - people who are clever at making their way through screening successfully, even if that is not an entirely honest representation of who they are. Now, we can call this pragmatism. We can say, look, we will get to run the research more quickly if we are less fussy about who participates, and that's a valid argument. The other part of the process that can be time consuming, after recruitment, is analysis and synthesis. We had a rule of thumb growing up as baby researchers: we were told you should be allowing two hours of analysis for every hour of research. Now, if I could see you all, I would ask anybody who ever actually gets this amount of time to do their analysis to put their hand up and do a crazy happy dance. It's pretty unusual these days. So instead we see these much scrappier, much leaner approaches, which look like this: you have a couple of participants, and at the end of all of the sessions you do a huddle with the team, you debrief, you compare notes, you decide what to do and then you move forward. What are the implications of the trade-offs that we make in recruitment and analysis to help us move forward more quickly? Are we being thoughtful about what we are putting at risk when we make these trade-offs? Let's talk about recruitment first. One thing that we know about research participants, and I'm sure you already know it, is that they want to play along. No matter what you ask them to tell you or to do, they're very likely to have a really good crack at it. And so this is something that I see alarmingly frequently, because from the team's point of view it is a wonderful way to save time. You've got questions for developers and for product managers, so you get a developer in, ask them the developer questions and then at the end you get them to imagine that they are a product manager
  8. 8 doing sprint planning because I'm sure they've seen it

    before in their team, so they can have a pretty good crack at it. But what do I do with that product manager data, how much should I be trusting it? I am going to say that despite the developer trying their absolute hardest and doing their very best impression of a product manager, I personally wouldn't trust this data too much. I wouldn't put a huge amount of confidence in these responses. So you can see that if you have a study that is populated with people answering questions that are intended for different people, you're potentially entirely compromising your study and rendering it effectively invalid. What about analysis and synthesis? What trade-offs are we making when we don't give them the time and attention we would like to? These processes bring us many benefits. They bring great value. They make sure that we glean everything there is to learn from a study, so we are reducing waste and increasing value, but they also play another really important role. These methods help us to counteract some of the more powerful cognitive biases, like confirmation bias. Cognitive biases are shortcuts; they help our brains go faster, but they can have a counterproductive effect when it comes to processing research data. Because without taking the time to do this proper analysis, it can be easy for teams to distort their research data, to validate the assumptions and beliefs they had coming into the study. We can find teams paying attention to the most recent participants, or to the things that they discovered that are most palatable to stakeholders. So we find that by skipping or rushing the analysis, we can lead ourselves to really biased outcomes in the research. Now, none of this is to say there is no room for compromise. Of course, there has to be. But the important thing is that when we are making these trade-offs, we are aware of the compromises, we are aware of the risks, and we are taking informed decisions.
  9. 9 So what can we do to mitigate this dysfunction?

    There are three things that I've tried that work quite well. The first is planning. I think it's easy for us to assume that people are deliberately shortcutting the processes when in actual fact a lot of the time they don't understand what good looks like. So helping teams to really understand what a really thorough recruitment process looks like, what a thorough analysis process looks like, what the steps are - that can help them have the option of including that as part of their process. What I found is that when you really step people through what this looks like, they will scale down from a much larger starting point than they would have before. Instead of going, well, we only need a day for this, they might allow a week, for example. So the first thing is helping people to understand what this process involves so that they can plan for it, and providing somewhere for that project planning to happen. The second is making it easier: give clear guidelines about how we want people to work. With things like analysis it can be as simple as, do we have enough space to do this properly? And then finally, education. This is really all about helping teams to make informed decisions, helping them to understand the risks that they are introducing and to take those risks knowingly. This can be really important in empowering them to take appropriate risks, knowingly. OK. Dysfunction number 2. This one is called the silo dysfunction. I'm going to guess this one might be familiar to many of you as well. We will illustrate this one with a picture. I wonder - I know you can't, but if you could - what you would tell me this scary sea animal might be? Who guessed duck? Nobody ever guesses duck unless they've seen it before. This is a really important image that I try to keep in mind all the time when I'm doing research, especially when I'm feeling the pressure of teams wanting greater acceleration and greater focus. I want to ask myself: are we getting enough of the picture to know what's really going on, to be able to make a good decision? Are we looking
  10. 10 at the shadow rather than looking at the duck?

    Are we looking at the cause or the effect? So imagine that your team is tasked with designing a chair. Would it be one of these beauties? Or perhaps one of these, looking as empty as they would be right now? It could be a chair for this, or this. Or maybe one of these. These are all chairs, right. They all have to let you sit in them. But the context that they're in demands a whole secondary set of requirements that could render them next to useless if not considered. It's exactly the same when it comes to features. This is another, more eloquent, way of saying it: always design a thing by considering it in its next larger context - a chair in a room, a room in a house, a house in an environment, an environment in a city plan. The context we are designing for is important. It's important that designers and people in our teams care about this context as well as the feature they've been tasked with. Even if they can't control the context, even if that context belongs to other teams, we need to care about it and be interested in it. Noah, who is one of our senior product people, says: you are working so hard designing that light switch, but you are not sure if you are designing it for a room or a car. We can be so focused on our thing that we get it wrong because of that huge amount of focus. When we are working with the Jira team, we are saying: this is the room for your chair, this is your duck and not the shadow. Because it matters what people do when they are away from the screens and not using the products. It's easy for the team to forget that many of their customers and users have this kind of stuff all over their walls and that there is a relationship between this data and their software. It's important to understand that the teams using this software have these kinds of meetings, where there are no laptops out, and to understand how these kinds of meetings in these situations relate to this product. This is thick, messy, imprecise data but it is important. It helps us to design better products.
  11. 11 Why is this relevant to research? Well, because of

    human behaviour, because of the query effect. Jakob Nielsen says people can make up an opinion about anything and they will do so if asked. You can thus get users to comment at great length about something that doesn't matter and which they wouldn't have given a second thought to if left to their own devices. We know that people will give us information, and so how we frame our questioning, how we frame our research, the breadth that we give it, whether we focus in on the feature or look at the broader context, is going to influence what information we get back from the people we're doing research with. This is really, really important, particularly for feature teams who are often really tempted, and sometimes even encouraged, to just stick to their knitting: focus on your thing, trust that everybody else understands everything else. But that laser focus can bring real problems when it comes to research. We find people who will happily tell us about the feature and answer our questions, but they may only be talking to us because we asked, not because they care or think it's important. They often won't tell us that they don't think it's important. We can miss out on a lot of critical information because it's out of scope for our team. Let's take a look at the trade-offs we are making as part of the silo dysfunction. First of all we have this laser focus. We are focusing only on the things that belong to this team, that they can do something about. I think a lot of the time this is what teams mean when they talk about actionable insights. They don't mean this is an insight that can be acted on. They mean: is this an insight that this team, that we, can act on right now? Researchers are encouraged towards actionable insights, bringing things that this team can work on right now, and this is often perceived as efficiency, as a way of helping teams to go faster. The other thing we often see with these teams is that they are looking for definitive answers, or another word for that might be validation. The team
  12. 12 are looking for signals they are on the right

    pathway with how they are looking to deliver the feature. They are not in the market for new questions; they don't really want anything that can confuse the issue or extend the scope. They want to get the job done. The problem is that research focused too tightly on our feature increases the risk of a false positive result, a result that wrongly indicates that a particular condition or attribute is present. It leads us to think that something is going to go right when it actually won't. It undermines the reliability of our evidence. We see plenty of this in formative research but, even more interestingly for me, I think, in usability testing of tightly focused features we often tend to see more positive results than if we were testing in a realistic context. These results lead teams to believe there is a greater success rate or a greater demand for the product or feature than is the case. On top of that, they believe that they are taking a user-centred, evidence-based approach. So what is our mitigation? This is a guy called Liam Maxwell; he was the CTO of the UK Government when I was working there. He had a phone case, famous amongst UK Government circles, printed with the question 'What is the user need?', which he would pull out whenever a meeting had drifted too far away from the need we were seeking to meet. I think this question is the best mitigation for the silo dysfunction. If we can balance this laser focus on the feature with a rich understanding of the underlying user need, we can create a really effective counterbalance. We can decrease the risk of false positives, and we can do this by introducing the expectation, or perhaps the requirement, for every team to be able to speak with knowledge and with evidence about the user need. We want people to be able to answer: what is the user need that this feature is contributing towards resolving? And you have to be careful with the answer. You have to make sure that the answer doesn't have the name of the feature in it, because that's a common cheat.
  13. 13 What is the user need that status updating is

    trying to meet? 'It's trying to meet the need for status updating.' That is not the correct answer. There has to be an answer that doesn't involve your feature. And then the other thing that we always want to ask is: have we actually seen this in the customers that we've met? Let's talk about where we've seen this need being experienced and how the way we're looking to solve it might help those particular people. OK. On to dysfunction 3. Now we're going to talk about using research as a weapon. I wonder how many of you have experienced this? I think it was about a year ago that my research operations team were reaching breaking point. In particular, the research recruiters were about to implode, or go home and not come back again. They were overwhelmed by recruiting for studies all over the organisation. We dug in and had a look at all this work to understand why there were so many studies being requested. We found this big chunk of work that was very micro focused. It was things like: does this icon work better than that icon in the navigation? Does this date picker work better than that date picker? So very hyper-focused feature work. I had a chat to some of our design managers. I wanted to understand why this work was being commissioned. One design manager explained it to me this way. She said, the thing is, Leisa, the designers are not being trusted unless they bring data to the table. So we discovered this group of teams where there was an almost combative relationship between the designer and the product manager. They both had strong ideas about how the design should be executed and they were regularly doing battle to see who would win. For some teams it had come to the point where the main thing that they talked about when they talked about design was data, the research evidence. But there were some very familiar problems with this research. It had to be done quickly, because it couldn't possibly delay the team. It had to be extremely focused on the feature at hand. So we had both the silo and the speed dysfunctions in play with all of the problems that they
  14. 14 introduced. But we also had another problem. We found

    that designers seemed to be losing the ability to talk about their design. They were trying to win design arguments with graphs and tables instead of being able to talk about the design decisions in a way that was understandable and defensible and deliberate. So our product design craft and our research craft joined forces to try to address this problem. Here are some of the things that we do to try to remove this kind of crutch of research and make the team relationships a lot less combative. We use exposure hours, so we try to make sure that everybody in the teams is seeing the reality of what it's like to use the product on a really regular basis. We do design critique coaching, coaching designers to build their skills in giving and receiving design feedback. We talk a lot about risk. Does this decision actually need its own specific research project? Are there other ways we can be reducing risk, or is the risk so low that we should just ship and learn? We look at heuristics and conventions. I was interested to find that designers don't look at heuristics. Some of them said they were outdated, that they were from the 90s and couldn't have a place in our practice. I'm sure you understand there is so much we can learn from, lean on and rely on from the work that has preceded us over the prior decades. Finally, we talk about end to end journeys. We try to put the work in the context of a larger end to end journey. This is the most powerful mitigation against weaponising research. The reason for that is it helps us keep teams' eyes on the bigger picture. I found that too often we would go to war over the finer details, picking out one aspect of the design and really focusing on that. Meanwhile, incredibly important details and other aspects of the journey are causing enormous harm to our customers and our business and nobody is looking at or attending to them. These problems often sit in the gaps between the silos, in the journeys. Focusing on the end to end journeys can make sure we have shared priorities and that we are clear we are focusing on, working on and prioritising the most important things. It
  15. 15 helps us keep connected to the customer purpose, the

    overall goal that our customers are trying to achieve. Now, one of the ways we've started doing this at Atlassian is to change the way we do demos. Instead of being able to pop up and demo the particular feature you are working on and get lots of claps, we make sure teams are demoing the end to end journey. We are talking about the user context, we are talking about the goals, what the user is trying to achieve in this process as well. And we do this even if it means we are demoing work that other people or even other teams have done, which is a pretty radical concept, surprisingly. What we found is there is a double win from this as well. If we are focusing on these end to end journeys and doing research on these end to end journeys, this creates the environment for us to do more efficient research, more realistic research and ultimately to get more reliable research outcomes. So there's a lot to be gained from shifting the focus from the specific tiny thing that teams are interested in to understanding it in the context of the end to end journeys involved. OK, dysfunction number 4, the quantitative fallacy. You may not have heard of the quantitative fallacy before, perhaps you have, but you have almost certainly experienced it. I'm going to let Lord Kelvin, who devised the absolute temperature scale, attempt to explain it to us. Kelvin says: I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it. But when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind. I don't think Lord Kelvin would have got on with qualitative researchers. He would have pissed them off. There seems to be an intuition that knowledge that is not able to be expressed numerically is less satisfactory. Quantitative data is attractive to organisations because it creates this sense of both reliability and expediency. And that is pretty much surveys in a nutshell, I think. There's an amazing essay that Erika Hall wrote on surveys. She described the survey as an attractive nuisance: a hazardous object likely to attract those unable
  16. 16 to appreciate the risk posed by the object. And

    surveys are extremely popular in organisations because they do two of the things that organisations love best. They give numbers and they do it quickly. Now, surveys in and of themselves are not bad. They can be an extremely valid research method. But doing surveys well is hard work and actually takes time. It requires an awful lot more than just agreeing on a bunch of questions and banging them into a SurveyMonkey form. Getting the questions right, for example, is absolutely critical to the reliability of our data. But very few people put any real effort into ensuring that they do in fact have good survey questions. So for those of you who are involved in doing surveys in your work, I wonder whether cognitive interviewing is part of your practice. This is common in academia but extremely scarce in commercial design research work. And this is the kind of research work that we need to do to feel confident that our survey questions are being understood and answered in the way that we intend. So this is critical for data quality. Why do we tend not to do this work? Is it ignorance? Do we not know as much about surveys as we thought we did? Do we have blind faith? Do we believe that our survey respondents will make sense of the questions in the same way as us? Or do we actually not really care so much about the validity? Whatever the answer to that, the reality is that numbers hog the limelight. They get the attention of our senior stakeholders. I did a presentation to our senior leadership not that long ago. It was full of videos and all sorts of nuanced insight, and it had a number, two numbers, in it. After that presentation, every single person who came back to me and gave me feedback and wanted to talk about my presentation talked about the number. That was the one thing that resonated with them and stayed with them. Many people are much less comfortable with ambiguity than we are. People are typically much more comfortable with a boldly stated number than with a verbatim or two. So what do we do about it?
  17. 17 How do we reconcile this quantification with the knowledge

    that a richer picture is good for product and service design? How do we get organisations to care about that too? Well, my strategy with this one is basically just to roll with it. We lean on good practice in research and we use mixed methods. So in my research team at Atlassian, we've really invested in our surveying capability; we have a good, strong team of people who work exclusively on survey work. They are mostly people who have come from a market research background; they design good surveys and they are good at stats and numbers. Our surveys are some of our best-known outputs. They get attention from the very top all the way down. This work has really helped to build visibility and credibility across the organisation. And we also work closely with our product analysts and our data scientists and make use of their numbers when we are trying to make research resonate within the organisation. We use these numbers to open the door to the much more nuanced and complex picture that is supported by the qualitative work that we're doing. All right. Dysfunction 5, the very last one. This one I call failure to thrive. I'm sure that many of you will know this guy: this is Steve Krug. He wrote one of the seminal books on usability testing, 'Don't Make Me Think'. That was written 20 years ago, in 2000. How time flies. Now, Steve has spent a lot of his life encouraging organisations to take their very first steps into usability testing. He goes out of his way to make it seem easy to do and not scary, and so his advice is that testing one user is 100% better than testing none. Now, I know there are a lot of people in my survey team that would argue with that maths, but we won't get into that. There are a lot of people in our design and research community who are encouraging people, no matter what their background or training, to try doing some research. There are plenty of people who think this is a terrible idea, but the really important thing to understand here is the context. Because
  18. 18 Steve is not giving this advice to organisations who

    have a mature human-centred design process. This is advice for organisations who are scared to get started. This is the advice that gets them on the first step of the journey. And the question is, what happens next? I am pretty sure I can speak for Steve to say it's not his intention that doing usability testing with one person is the approach he would recommend for an organisation forevermore. He is hoping that people will see the benefits, they'll get excited about the opportunity and they will want to continue to grow their knowledge and their capability when it comes to research. When it comes to maturing research capability, I have a little model that I've been playing with to try to think about that. It starts with what I think of as an organisation's approach to human-centred design. I start with customer-led. This is where we are listening to our customers' feedback and taking it literally. If they tell us to add this new widget to the app, that is precisely what we do. It is easy to get trapped in this stage, because at the time you're doing it, it just seems completely logical. Like, what on earth could be more user-centred than giving somebody what they ask for? Then a couple of designers join the organisation and they press the organisation to mature into what we might call a customer-involved state. Here we start to see a more deliberate approach to research: interviews, usability testing, other techniques. Basically we're starting to involve our customers in the design and development process. But we're still doing it in a way that is very much centred on our product or on the widget that the customer has requested. We haven't yet got to the final stage of maturity, which is being customer experience led. This is where we are truly seeking to deeply understand our customers and their needs so that we can identify and design the best solution in response to that. All right. Then on our left axis, we've got stages of research maturity ranging from ignorance and fear through to enthusiasm and
  19. 19 capability. Let's look at how this plays out in

    combination. Pretty obviously, if you have a lot of ignorance about research, if you don't know much about it, you are pretty likely to have not enough research going on. That's not really a surprise. What can be surprising, though, and we touched on this when we were talking about weaponising research, is that it's not uncommon for teams to progress up to enthusiasm without maturing their approach to human-centred design. What we can see here is a lot of research going on, but not the kind of research that advances our understanding and our ability to deliver good products. You have to be conscious that the volume of research going on in an organisation doesn't necessarily mean it is a high maturity organisation. Instead, we really want to see a pathway through both increased research maturity and increased maturity in our human-centred design approach until we progress up to the gold star of research maturity. What we are looking for here is balance, I think - balance in methods - and we are also looking for some bravery. We are trying to make sure we have got the right mix of methods being applied at the right times, and we want to hear strong and coherent opinions about the direction and priorities for the product. So if we have high research maturity in an organisation that's doing human-centred design in a mature way, we will see the empathy and the evidence that are crucial to supporting organisations who want to create good experiences and ship them. If you have been in a situation of trying to advocate for greater maturity in research approaches, you have almost certainly had this objection: "But Leisa, the research doesn't have to be perfect. This is not an academic study, I need enough to help me make a decision." Again, I think it's this mindset that kicks into play as well when we are thinking back to the speed dysfunction. What do we do when we get an objection like this? It can be a really tricky one to answer. I think the mitigation here, the ultimate mitigation when it comes to the failure to thrive dysfunction, is framing research around risk management - matching method to risk. We'll
  20. 20 never get to do all the steps we would

    like to do, and we will never get to do it in the way we would perfectly love to do it, but the important thing is for us to be able to understand when it really counts, when we really need to try hard to do things well, and when we can take the foot off the pedal. How can we reduce risk to the organisation? When I'm talking to our senior stakeholders about the role of the research team at Atlassian, a lot of the time what I'm talking about is risk mitigation, which is often not what they are expecting. They are expecting the warm fuzzies and I'm talking about risk. It works quite well. I have this simple two by two that I'm using to talk about how we think about the level of investment that we're putting into the research that we're doing. So we're really asking ourselves, what is the level of risk that we're taking on? Like, what will happen if we get this wrong, how bad will it be? We are also looking at knowledge. How much do we know about this? Is this a really well-known area? And this is not only about what we know in the organisation; it can come back to industry conventions - do we as an industry know a lot about this? So we can plot the project we are talking about on to this two by two, and different positions end up with very different research approaches. If we have something that is reasonably low risk and we have a reasonably high amount of knowledge about it, this is a situation where we could say, look, let's make a hypothesis about what we think is going to happen, let's ship it and learn from the product data and feedback we get. That is a completely adequate approach to research in that case. But then in other areas where the risk is really high, where we really do need to get it right but we don't know huge amounts about it, we want to invest a lot more in doing a much larger discovery effort to help build the confidence and understanding that we need in order to increase the likelihood of getting it right. OK. Those are our five dysfunctions. Let's just go back over them quickly, because I know it has been some time since the beginning.
  21. 21 Dysfunction number 1 was the speed dysfunction. We talked

    here about how teams need to feel as though they're moving quickly, but how helping them understand how to plan research, making good behaviours easier, and educating them to understand the risks that they're potentially taking can all help teams make better choices about research. We talked about the silo dysfunction. We talked about the false positives that often emerge from focusing too tightly on a feature, and how important balancing that with talking about the underlying user need can be in mitigating that risk. We talked about weaponising research: how research can become a weapon in design decision making and undermine the ability of teams to have good discussions, to talk well about design, and how focusing on end to end journeys and the overall goal that our customers and users are trying to get to can help people make better design decisions and lead to much better quality research as well. We talked about the quantitative fallacy, about how organisations love numbers, about how doing surveys well is harder than it might first seem, but how we can win attention and credibility with numbers and then use that to follow through with the richer, more nuanced picture. Finally, we talked about how often research fails to mature in organisations, and how we can use a risk framing to make sure we are doing the right kind of work, at the right quality, in the right kinds of places. That's it. I think there are plenty of ways for scaling and democratising research to go very, very wrong. But I also believe that in our teams we already have many of the tools we need, tools that we can put into play to help research be much more effective, much better quality and of much greater value to the organisation. Thank you.