
Sean Smith Transcript

UXAustralia
March 20, 2020


Transcript

www.captionslive.com.au | [email protected] | 0425 904 255

UX AUSTRALIA - Design Research 2020, Day 2
Friday, 20 March 2020
Captioned by: Gail Kearney & Rebekah Goulevitch
SEAN SMITH: I would like to start by acknowledging the traditional owners of the land and pay my respects to elders past, present and emerging. I've been working in this field for 25 years and am the owner of U1; we've also been around as a company called UsabilityOne. As Steve introduced, I'm going to be talking about a project we completed which, talking to others, seems novel, and we thought the wider research community might be interested in hearing about it.

One day I was at the pub and I received an email from John Dirks, following a recommendation from a good friend of mine. It was about a project that Blink was delivering on behalf of a local client, a ubiquitous tech firm based in Seattle, and it involved a significant Australian component. A few things in the email stood out. The first was that in Australia they were looking to conduct 300 remotely moderated interviews, each 90 minutes in duration. They were running the same number of sessions in the US, so the project totalled 600 remotely moderated sessions. The project was on a fast track, the sessions needed to be conducted in the first two to three weeks of June, and John suggested I take a deep breath. Before I took that breath I had a panic reaction, but I took the breath.

For a bit of context, U1 is a small business; we are a team of six. The scope of this project was really mind-boggling, and my thoughts immediately turned to how we could possibly service this type of project. John is from Blink, a company of 150 people, and he freely admitted that the proposed scale of the project was challenging even for them. Despite being daunted by the scale of the project, we were really excited by it. I quickly got back to John and said yes, we would be interested, but I was up front about the size of our business and some of the challenges we could see we would encounter in trying to scale up quickly in order to be a reliable partner.
We teed up a call with the team from Blink leading the project. From there things progressed over the course of a couple of weeks until they decided to work with U1 to deliver the Australian component of the project. That was exciting but scary at the same time. While I can't name the client who sponsored the research or talk about the findings for commercial reasons, I'm not so concerned about that; what I'm more interested in talking about is how we went about delivering the project and not screwing it up. Spoiler: we didn't screw it up, which was great. Plus some of the things we learned along the way.

At a high level, the client was interested in establishing a benchmark of the user experience of one of their product suites, within one specific sector, across two key markets: the US and Australia. There were a few primary goals. The first was identifying opportunities for individual products within the overall product suite. They wanted to create a benchmark against which future iterations could be compared; despite the scale of the project, with 600 participants, this was a qualitative benchmark as opposed to a quantitative one, exploring whether the product was meeting user needs. They were also interested in comparing the user experience against competitors' product suites. All of this research would inform the overall user experience and product strategy for their product suite.

Initially we were somewhat overwhelmed by how we might manage and resource 300 interviews over the course of two to three weeks. You can see how our minds were bubbling: 300 people, how were we going to do that? And this was traditionally the busiest part of the year. Then we learned the intention was to chunk the project down into explorations of individual products, including competitor products. We went from thinking about how we were going to do 300 interviews to having it broken down into 10 individual but interconnected projects running simultaneously.
That clarified how we needed to approach resourcing the project. It meant we needed 10 individual researchers, one per product, along with a project manager to keep the wheels moving, plus quality assurance support internally. It was a bit of a loaves and fishes moment for us: we now understood exactly how we needed to approach it, but we had to work out how to turn our team of six into the team this required. That approach was mirrored between the US and Australia, which meant we could pair each US researcher with an Australian researcher from U1, and it allowed for 60 interviews per product across the two markets. The intention was for the researchers in the US to start their fieldwork a week and a bit ahead of us here in Australia, so that we could benefit from any learnings they had, with the goal that both teams would reach the finish line at around the same time.

As the approach was fleshed out, it became apparent the timelines would need to flex. In that initial email, as I highlighted, they spoke about two to three weeks to complete the fieldwork; as they dove into the logistics with the client, it was agreed those timelines could push out to five or six weeks, which was quite a relief for us.

We couldn't manage the project with our permanent team alone; we had to go out to our networks to find additional people. I've been hiring researchers for a really long time, and in my experience true researchers are the hardest people to find and quite rare. Compounding this, they are very much in demand. Our challenge was to quickly find at least eight of these rare and in-demand individuals who were available when we needed them, at quite short notice. To our relief, our networks delivered on this challenge, to the point where we became connected with far more than the minimum of eight researchers we needed to deliver the project.
Along the way there were some people we met who we were keen to work with, but in the end we had to count them out based on factors I will touch on later. Overall we found ourselves in a comfortable position in terms of resourcing, which was great: we went from scratching our heads at the start to realising the talent we needed was out there. It also allowed me to step away as one of the researchers. My initial thought was that I could be a researcher and also be the lead on the project, but as it turns out my hands were full just being the lead, so I can't imagine how I could have delivered on that and the role of researcher.

One of the confronting aspects of scaling the team quickly, adding 10 people to our existing tight-knit team of six, was that in the end we were engaging six people none of us had any direct experience of working with; the other four people we brought onto the team we had worked with previously. (My colleague John is interested in the talk so he's going to come in and sit in. Apologies for that; I've now got somebody to perform for in person.) While we felt there was risk here, it was mitigated somewhat by everyone being known to us, either as someone we had worked with directly before or as a recommendation from someone in our network. We were also comfortable that our internal processes would be sufficiently robust to put us on solid ground, along with an experienced and motivated project lead in me, who really didn't want us to screw this up.

So after a lot of phone calls, emails and coffee catch-ups we assembled our team: 10 researchers. Keira is a member of our team, but the other nine researchers were all contractors. As I mentioned, some of them we had worked with before, either individually or as a business, but there were a number of people we hadn't worked with previously.
We put together a quality assurance team, which involved me and two of my colleagues; we engaged a project manager, Sid, who was integral to delivering the project successfully; and I fulfilled the role of project lead.

One aspect of the protocol that worked in our favour was the decision to remotely moderate the research, because we benefited from being able to distribute our team. In terms of the physical capacity of our office space to accommodate 10 researchers all running sessions at the same time, we don't really have that kind of facility, and it would have felt more like a call centre than a research vibe, which we didn't think would be optimal. The ability to distribute the team was great: people were able to work from home, and three of our team were based in Sydney.

So now we had our team. Before we onboarded them there was prep to do, so that once we kicked off everything would be in place and we could hit the ground running. Our role was to apply an Australian filter to the recruitment materials from Blink and the end client. They did require some changes and tweaks, to the recruitment approach as well, and we had to get sign-off on the changes before proceeding with them to ensure they didn't impact the consistency of the approach or the intent of the research.

In order to mitigate our risk with recruitment, we decided against putting all of our eggs into one basket and engaged KB Research and Farron Research, both companies we've worked with a lot. They were tasked with sourcing 360 participants in total, over-recruiting against the 300 required sessions to allow for no-shows, so that we wouldn't get held up if people didn't show up and would have someone else to fill a time slot. We also ran pilot sessions for each of the two audiences. The decision to run two recruiters made things more complicated, because we had to duplicate communications across the two providers and keep them straight in our heads, but I would do it again. We were completely up front with both companies that we were splitting the recruit between them.
In order to free up our researchers from admin as much as we could, our project manager Sid managed the recruitment and the interaction with the recruiters. The recruiters had to set their schedules with Sid, and he was a master of the spreadsheets; he was clocking a lot of spreadsheets.

Now to the protocol that was decided upon in order to deliver on the client's objectives. I mentioned earlier that the interviews were 90 minutes in duration and remotely moderated; we used Zoom to run the sessions. Each interview consisted of two main components: a set-up phase and a data collection phase. The set-up covered the typical things, an introduction to the researcher and an introduction to the objectives so the participant understood the purpose of the research. Then, because participants would be sharing their screens, there was a bit of technical set-up to go through as well. We wanted to get to the point where they were ready to share the screen without exposing any personal private information, and I will go into that in more detail later. Then we moved into the data collection phase of the interview, which was your traditional task-based usability testing, capturing the benchmark data that was of interest to the client. We also had a warm-up or sample task that let the participant get a feel for how the session would run.

Once we were moving into the real stuff, the participant and the moderator turned off their cameras. That preserved bandwidth, and I think it also removed some potential for distraction. The session recordings were of real interest to the end client, and turning off the cameras meant that nothing was included in the recordings other than the activity on screen.
I'm going to whizz through a couple of slides to show you some of the benchmarking data captured. I'm not going to go into it in any detail, but there was quite a lot, and Jeff Sauro's benchmarking approach was used. Once all 600 sessions were completed, there were over 7,000 data points to consider, which is considerable.

We all got familiar with Zoom, if we weren't already. Each of our researchers had their own Zoom meeting room where they ran their sessions from. Those of you who have used Zoom would know you can create individual instances, each with a unique ID, or have your own ongoing, persistent meeting room. We went backwards and forwards about which was the better approach and decided on one single meeting room per researcher rather than a heap of different IDs for every session. You can attach a waiting room to a meeting room, so the researcher has to admit each participant; that removed the risk of a participant crashing into a live session because they had the same link and had arrived early or had the wrong time. Participants were advised of the name of their researcher, and the waiting room was labelled with the researcher's name, so the participant knew they were in the right room. Our PM monitored all the sessions; if a participant had made a mistake with their schedule, he could follow up with the recruiters without bothering the researcher. The recruiters sent participants instructions and links on how to install the Zoom client, so that hopefully they would be ready. We also used a digital consent form that participants had to sign, and our PM was able to remind researchers to ask a participant to complete the form if they hadn't done so before we started the session proper.
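As a rough illustration of the bookkeeping this involved, here is a minimal sketch of how the schedule might be modelled; the names, fields and placeholder room link are hypothetical, not the actual spreadsheets Sid ran.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    """One booking: each researcher had a single persistent Zoom room
    with a waiting room attached, so an early (or mis-scheduled)
    participant waited to be admitted instead of crashing a live session."""
    participant_id: str
    researcher: str               # the waiting room was labelled with this name
    room_url: str                 # the researcher's persistent meeting link
    starts_at: datetime
    consent_signed: bool = False  # chased by the PM before the session proper

def next_session(schedule: list[Session], researcher: str) -> Session | None:
    """Roughly what the PM checked when someone appeared in a waiting room:
    which booking is this, and is the consent form done?"""
    upcoming = [s for s in schedule
                if s.researcher == researcher and s.starts_at >= datetime.now()]
    return min(upcoming, key=lambda s: s.starts_at, default=None)
```

The design point is the persistent room: one stable link per researcher keeps the schedule simple, and the waiting room acts as the safety net for early arrivals.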
During the set-up phase the researcher asked the participant to log out of all accounts, close any open folders, close any applications and browsers, and then open a fresh browser session and share that browser only, in order to maintain their privacy; I will go into this in more detail later. They were asked to open two browser tabs. The first was used to access the survey, which we used to deliver the task wording and capture time on task, ease of use, confidence, et cetera. The second tab was used to access the product that was the subject of the usability test and its associated tasks. So the participant would read the task out loud and paraphrase it so we knew they understood the intent, initiate the timer, move to the second tab, complete the task, come back to the first tab, press the button that said "task completed" to end the timer, and then move on to the metrics questions. Each participant had a unique ID used to track their participation and the data collected by the survey tool.
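To make that flow concrete, here is a minimal sketch of the per-task record the survey tool was effectively capturing; the field names are illustrative, not the actual tool's schema.

```python
import time
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One benchmark data point per participant per task, keyed by the
    participant's unique ID. Illustrative only: the real survey tool
    handled the timing and the post-task ratings."""
    participant_id: str
    task_id: str
    started_at: float | None = None
    ended_at: float | None = None
    ease: int | None = None        # post-task ease-of-use rating
    confidence: int | None = None  # post-task confidence rating

    def start(self) -> None:
        # Participant initiates the timer before switching to the product tab.
        self.started_at = time.time()

    def complete(self) -> None:
        # Participant returns to the survey tab and clicks "task completed".
        self.ended_at = time.time()

    @property
    def time_on_task(self) -> float | None:
        if self.started_at is None or self.ended_at is None:
            return None  # e.g. the participant forgot to click "start"
        return self.ended_at - self.started_at
```

Several metrics per task, several tasks per session, 600 sessions: that is how the study ends up with over 7,000 data points.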
The team at Blink were coordinating the data analysis, which needed to be consistent across two different companies and 20 different researchers, and we had a note-taking template. It was customised per product, because the products did differ, but the templates were otherwise identical, which standardised data collection across the 20 researchers. We were also provided by Blink with a findings document template that the Australian researchers completed; that standardised the approach to analysing the findings, and our team were clear on what they needed to provide and how to focus their efforts.

So recruitment was underway and we had our team of researchers for a period of five weeks, working to the following schedule, which I will go through in more detail. We had quite a lot to do in five weeks, and the first week was really crucial for setting the foundations for success. A key element of that was Tom, the head of research at Blink, spending that first week embedded with us. Our relationship with Blink had got off to a really strong start, and having Tom with us solidified the connection between the two teams.

We kicked off in week one with an all-hands meeting in our offices in South Melbourne, with some of the team connecting via Zoom from the same location in Sydney. I was up with the guys in Sydney, and Vivid was on, so it was helpful that the trip happened around the same time. The focus was to get each researcher individually on the same page as their counterpart researcher in the US, and we started that process by getting them to watch recordings of three to four sessions that had been completed in the US for their products. I mentioned that the intent was for the US team to start ahead of us, so they had a number of sessions in the can. We wanted to get everybody familiarised with the tools and protocols being employed, so we paired up our Australian researchers and they each took a turn at being moderator and participant. As well as getting them used to the tools and protocols, that gave them insight into what the participant experience would be like, which we thought was really helpful. We also scheduled a pilot session for each researcher so they could get a simulated experience, I suppose, of what to expect when we moved into live data collection. The pilots were live-streamed back to the States, which allowed the client stakeholders to watch the sessions (there were a number of different product teams interested, as well as the people who commissioned the research), and we uploaded the recordings of the sessions as well. The Blink US researchers watched those sessions and gave feedback to the Australian researchers, so they connected and had a chat about what they had seen and things to look out for, to ensure we were maintaining consistency of approach across both locations. The live streaming and recordings also gave the stakeholders back in the US the opportunity to watch, which helped build their confidence, I suppose, and their trust that what we were doing here in Australia was very similar to what was happening in the US, where they were much closer to the process and the team.
The Blink team did a great job of sharing any relevant feedback from stakeholders so we could filter it back to our team; there was a lot of feedback, but they picked out the things that were useful and we passed those on. At the end of the week we reconvened as a team here in Australia: were there any questions, how was everyone feeling, was there anything we needed to tick off before we got ready for data collection the following week? What was good during this week was that the Australian and US researchers started to establish a bond. There was a lot of communication through Slack and Zoom, and that rapport grew over the course of the project.

Week two was our first week of data collection. This was a busy week, with Sid working the spreadsheets and the recruiters. At the end of the first day I checked in with everyone individually: how was everyone going, any concerns, anything unexpected? Overall things seemed to be going quite well, and I felt the prep we had done had set a good foundation for the week as it unfolded. Amazingly, we completed all of the required 150 sessions that week, and we found ourselves catching up with the US team, who had started a week and a bit ahead of us with the same number of interviews but hadn't had as good a run with recruitment as we had.

In week three our team turned to data analysis and completing their findings documents, with more communication between the US and Australian researchers as they conferred about their findings. We also ran a pilot session for audience two this week, and as with the previous pilots, the recordings were uploaded and there was an opportunity for feedback where it seemed appropriate.
At the end of the week our researchers submitted their findings documents to their appointed U1 QA person, and Sid was busy ensuring everyone stayed on track.

Week four was our second week of data collection. Our QA team completed the QA process and handed the feedback to our researchers, who were able to action it in between sessions and send their documents on to Blink in the US. What was interesting with this second audience was that their availability differed a little, which meant more evening sessions, and they struggled more with the technology as well. As a result we didn't complete all 150 interviews within the Monday to Friday; some spilt over to the Saturday, but by the weekend we had finished all of the sessions, which was quite amazing. Again, the Blink researchers in the US hadn't finished.

Then we got into our final week of the project, in terms of our engagement with our researchers. They turned their hands to the data analysis for audience two, and by mid-week they were ready to hand it over for our QA process. There was a lot of similarity with the earlier findings, which meant they could turn things around more quickly. We quickly turned around our QA process so our researchers could action any feedback, they submitted their documents to the team at Blink, and that was the end of our researchers' engagement; they moved on to their next gigs and commitments.

So how did we do? We think we did pretty well. We would say that, but as with everything there was a little wrinkle here and there; overall things went really well, but there were certainly hiccups and issues along the way, which I will cover now. Among the small things: occasionally participants closed the survey tab we were using to capture the metrics and the task wording. It wasn't a big issue, as they could open the survey again, but we then had to stitch the two resulting data files back together. They also didn't always click "start"; not a huge deal, but it meant we had to manually calculate the time on task and incorporate it into the data file.
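The stitching was conceptually simple. A sketch of the idea, assuming hypothetical export files and column names:

```python
import pandas as pd

# Hypothetical file and column names: when a participant closed the survey
# tab mid-session, the tool produced two partial exports for one session.
# Rejoin them on the unique participant ID plus task ID.
part_a = pd.read_csv("export_before_dropout.csv")
part_b = pd.read_csv("export_after_dropout.csv")
stitched = (
    pd.concat([part_a, part_b])
      .drop_duplicates(subset=["participant_id", "task_id"], keep="last")
      .sort_values(["participant_id", "task_id"])
)

# Where a participant never clicked "start", time on task was recalculated
# manually; here, from a hypothetical manually noted start timestamp.
missing = stitched["time_on_task"].isna()
stitched.loc[missing, "time_on_task"] = (
    stitched.loc[missing, "ended_at"] - stitched.loc[missing, "manual_start"]
)
stitched.to_csv("stitched_export.csv", index=False)
```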
Something else that sat with me: I made the mistake of putting too much into single communications. Inevitably people overlooked what I was asking them to do, and I had to bring it up with them individually when I noticed they weren't following instructions to a tee, or were asking questions I had already covered. While I found that initially frustrating, I was keen not to be a dick about it; I realised they were busy, so I wasn't going to say "I told you about this". I had to relax and be content to repeat myself. The goal was to make sure everybody had what they needed and that they were being consistent in the approach they were applying. It was important to remind myself that these guys had a lot going on: there were a lot of channels and communications flying around, and things were moving quickly. In future I would look for strategies to mitigate that. One thing I might consider is using checklists, for example, so people could work through and tick things off as appropriate, rather than relying on them reading things or listening to me talking, which you guys are unfortunately having to bear the brunt of right now.

Among the bigger things: internet reliability in Australia is worse than in the US, something the client and Blink had not been aware of before committing to the remote approach. Based on this client and other clients outside Australia, there is an assumption that our internet is good; our ranking of 62nd in the world for internet speeds is surprising to people outside Australia. Fortunately it didn't turn out to be as big an issue as we thought it might be, but there's no doubt it impacted a number of sessions and, of course, caused frustrations for participants and researchers alike.
This is particularly relevant, I think, given the current situation, where we are all going to be leaning heavily on remotely moderated research.

I mentioned the Blink teams starting ahead of us, with the intention that we would reach the finish line at about the same time. The idea was that we could learn from them: they could problem-solve and make decisions before we encountered the same issues. We had a slightly more aggressive timeline, they started to fall behind on their schedule, we caught up, and in the end we finished ahead of them. That meant we ended up encountering issues first and had to deal with them ourselves, which wouldn't have happened in an ideal world, I suppose.

Related to this, U1 didn't come into the project until it had kicked off in the States, when the Blink team had already been working with the client stakeholders and product teams on the protocol, creating the moderation guides, et cetera. We were at some distance from the product stakeholders: the leadership team here met the clients, but we didn't meet the product teams, whereas there was a direct relationship between the Blink researchers and the individual product teams. That became a bit challenging, along with the time difference, once we started running ahead; when our team encountered issues or had queries, they had to go through their Blink counterpart rather than having a direct line to the product team. We didn't always have tolerance for the lag associated with that and had to make our own calls on some occasions.

I mentioned before that we were completing findings documents. That was our role; the Blink team were writing the actual reports that would be submitted to the end client. One of the issues with that approach was that we submitted those findings documents after our first week of data collection, and the guys in the US had a look at them and said, yep, that looks good, they were roughly on par with the things they were seeing.
It wasn't until they sat down to write their reports at the end of the entire data collection that they started to realise they had additional questions and queries they wanted to clarify, and by then our team had pretty much all moved on to their next gigs. It went from communication being on tap to there being some lag. Our team were fantastic, because they wanted the project to succeed and made themselves available, but the reality was it had to fit in around the commitments they had at that time. Everyone in the US was accepting of that, but it certainly caused them issues and some frustrations, I suppose. So the Blink team have said that if this were to happen again, they would get us involved more in the up-front process as well as in the report writing.

I've touched on a few of the wrinkles; to reiterate what went well. We got ourselves a Sid. Every project needs a Sid; I would say hire him if you have the chance. We had a team of researchers motivated to deliver the results; whether that's down to culture or fluke, it went well. I can say that our recruitment partners did an excellent job, and we would happily recommend them. And we had a great partner in Blink. This could have gone really badly, but it worked, and part of that is that we were clearly well aligned in terms of research best practice and the culture of the two businesses. We also had an end client that was clear on the complexity of what they were asking us and Blink to deliver, and they were appreciative of the work and commitment that went into it.

There are some learnings I would like to share related to remotely moderated research, which is relevant, I think, to the current situation. The quality of internet connections was a concern going into the project; while it did impact on sessions, it wasn't as significant as we feared it might be.
And you are possibly experiencing those issues right now, because there is somebody making noise next door which I have no control over; I apologise if that is disruptive.

A bigger challenge was participants' level of technical competence and knowledge when it came to managing hardware and software. When they had to deal with their webcam and mic, making changes to those was a significant challenge for some, let alone sharing their screen. We could lose up to 25 minutes getting a participant to the point where we could start the actual interview. We learnt a lot as we went along and built it into our troubleshooting approaches, but they always managed to surprise us in some way. As we lean more heavily on remotely moderated research, let's not fall into the trap of thinking everyone is like us and has the same access to technology. Some participants were also using their employer's hardware, which was locked down, meaning they couldn't install the Zoom client or access the mic and webcam.

Participants were required to do homework in preparation for the session and were instructed to install the Zoom client ahead of time, as the client is more reliable than the browser plug-in. Many hadn't completed the preparations and had to be stepped through them; we had to get on the phone to talk them through installing Zoom, or give them time to do their homework, which ate into the time available for the research. We let our recruiters know that was happening, that we would be cancelling sessions and that they would need to source replacements. It was clear they were keen not to lose participants, so they were really strong in their follow-ups with participants, making sure they were ready, and that definitely helped. We also gave our researchers licence to cancel sessions if they felt the participant hadn't done the required prep; that was rare, but it did happen on a few occasions.

It was important to the client that the experience for participants was as natural as possible.
They were strong in their desire for the participant to share their screen. The downside of this is that the risk of participants exposing personal information is much higher, and it was a big contributor to the time lost in set-up that I mentioned. We had protocols to preserve participants' private and personal information: we got them to shut down as many things as they could, to log out, and to share only a browser that was in guest mode, not their desktop. But they routinely ignored some, if not all, of those instructions. It wasn't wilful at all; it was for the reasons mentioned previously regarding technical competency and, I would say, a somewhat blasé attitude to their own privacy. In the interests of the session proceeding we had to let it go in the moment, and we dealt with it by getting the researchers to take a timestamp of where it happened and editing the recordings afterwards. That took a lot of time and resources to address.

One other privacy aspect related to the waiting room. While waiting to be admitted to the meeting, some participants wandered away from the computer, perhaps because our researcher was running behind. The webcam was on, and it gave us a view into their home that they seemed unaware of; we were witness to activities they probably would have preferred not to share.
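On those recording edits: given a researcher's timestamp for an exposed span, the cut itself can be scripted. A sketch using ffmpeg, which is my assumption here; the talk doesn't say what tooling was actually used:

```python
import subprocess

def redact(src: str, cut_start: float, cut_end: float, out: str) -> None:
    """Remove the span [cut_start, cut_end] (seconds) from a recording:
    keep everything before and after the flagged span, then join the two
    parts with ffmpeg's concat demuxer. With stream copy ("-c copy") the
    cuts snap to keyframes, so they are not frame-accurate; re-encode if
    a precise cut is needed."""
    subprocess.run(["ffmpeg", "-y", "-i", src, "-to", str(cut_start),
                    "-c", "copy", "part1.mp4"], check=True)
    subprocess.run(["ffmpeg", "-y", "-ss", str(cut_end), "-i", src,
                    "-c", "copy", "part2.mp4"], check=True)
    with open("parts.txt", "w") as f:
        f.write("file 'part1.mp4'\nfile 'part2.mp4'\n")
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "parts.txt", "-c", "copy", out], check=True)

# e.g. cut out 12:10 to 12:45 of one session recording:
# redact("session_042.mp4", 730, 765, "session_042_redacted.mp4")
```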
Some considerations that came up for us. As we were scaling up for this project, we were running another project for a client exploring a product they were launching into the Australian marketplace. An important thing for them was to ensure that the researcher was Australian, which to them meant sounding Australian and being connected to Australian culture. Their goal was to ensure their product appealed to and resonated with a local audience and didn't sound too American, and their concern was that a researcher from outside Australia might inhibit the feedback participants would provide. This client was more than capable of doing the research remotely themselves, but they invested a lot in a partner in Australia that could give them a truly local read on the product without influencing the feedback. So the timing was really interesting. We met researchers we were interested in working with who had only been in Australia for a few months, and we felt on that basis we couldn't work with them on this project; we would happily work with them on other projects, but given the client was interested in cultural differences, that had to wait. That didn't mean we wouldn't work with people originally from outside Australia: we had researchers from Bulgaria and the US. We are a multicultural nation, so we should expect that of our researchers as well.

Flowing on from that, we had to ask ourselves: does it matter if someone is Australian but not in Australia when it comes to moderating research remotely? The end client and Blink had invested significantly in engaging a local partner. We spoke with some researchers, again people we were interested in working with, who weren't going to be in Australia for some periods of the project. Given we were conducting the sessions remotely, we had to ask ourselves whether there was an issue with using researchers based outside Australia. In the end we decided against it. It was a difficult conversation to have with those guys, and hard to rationalise the decision; we could put it down to the vibe. It didn't feel right to be commissioned to do research locally and then use people based outside Australia, and in the end Blink could have done that themselves.

Last slide, just a few reflections. Being honest with partners is really important. As a business owner or a leader there is a temptation to be a little creative, to have a good poker face: yep, we've done that, no worries. But that wouldn't have served us well here. It was crucial to be up front about the size of U1 off the bat, and also about our concerns about our ability to scale within the aggressive timeframe we were looking at. Blink indicated that honesty up front was a significant reason why they decided to work with us.
It's also important to be honest with yourself. As the owner of a small business you are always mindful of the bottom line, and I originally thought I might be able to be one of the researchers and manage the project. While that would have saved money and meant one less person we didn't know previously, it became evident I would be exposing us to more risk: I couldn't have done justice to being researcher, project lead and quality assurance, on top of everything else I have going on as a business owner, without compromising one or all of those roles.

Blink and U1 took a leap of faith in working together; we both had a lot to lose if this went badly. We had an introduction between John and myself from someone we both trust, and when it came to scaling our team we relied on our networks to find people. A recommendation from someone in our network allowed us to trust that the individuals they were introducing were the right people. Processes are important: our internal processes stood up to a massive test in delivering this project. But so is instinct. As we met people who might work with us on the project, I relied a lot on instinct and feel. Despite not having worked with people before, getting a good feeling about them was important; if we had the right sorts of people, we could back them up with our processes.

From a personal perspective, every now and then I reflect on the relevance of someone in their mid-to-late 40s (me) to an industry geared toward youth and innovation, which is implicitly associated with overthrowing the old, including old people. This industry has amazing people much closer to the start of their careers than I am. But I have been doing this for a while now, and the longevity of U1 and my grey hair all helped Blink to have confidence that we knew what we were doing and were able to deliver. I mentioned instinct before; experience feeds that instinct.

Finally: size doesn't matter.
Small or large, company or project, research is research. Working with a company 25 times the size of ours demonstrated that when it comes to our practices as researchers, we face the same challenges, whether that be managing client expectations or delivering quality research.

And that's it. Thanks for your attention; I know I spoke quite quickly there, and I apologise for that. We would love to have a chat, and out of interest, U1 is here; we were supposed to be across the way today, but things got in the way of that. So thanks, everyone.