Planning for a Usability Study | Cory Lebson | Skillshare

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

  • 1. Intro (1:00)
  • 2. What is a usability study? (1:43)
  • 3. The Big Picture (2:56)
  • 4. Remote or In Person? (4:26)
  • 5. Moderated or Unmoderated? (1:40)
  • 6. Finding a Venue (6:39)
  • 7. Drafting a Screener and Recruiting Participants (7:41)
  • 8. Developing a Script (4:24)
  • 9. Setting up equipment (4:31)
  • 10. Note taking options (4:36)
  • 11. Observation Room (3:36)
  • 12. Analysis and Reporting (4:12)
  • 13. Conclusion (0:21)


194 Students

About This Class

What kinds of things should you be thinking about as you plan for a usability study? 

In this course, user researchers Cory Lebson and Amanda Stockwell cover all the basics!

  • What should you be testing and when?
  • Should you do in-person testing or remote research?
  • Is moderated or unmoderated research best?
  • Where can you plan to test and what tools should you be using?
  • Who should your participants be and what tasks should you give them?
  • What other kinds of things should you consider?

Meet Your Teacher


Cory Lebson

Freelance UX/Research Consultant

Teacher

Cory Lebson, author of The UX Careers Handbook (CRC Press, 2016), has been a user experience consultant for over 20 years. Through his company, Lebsontech LLC, Cory is focused on user research and evaluation, accessibility, UX training, and mentoring. 

Cory also speaks frequently, has been featured on the radio, and in addition to his recent book, has published a number of articles in a variety of professional publications. 

Cory has an MBA in marketing and technology management, as well as an MA in sociology and a BS in psychology. Cory is a past president of the User Experience Professionals Association (UXPA) International and is also a past president of the UXPA DC Chapter. 

Find Cory on LinkedIn and on Twitter @...

Level: Beginner

Class Ratings

Expectations Met?
  • Exceeded! 0%
  • Yes 0%
  • Somewhat 0%
  • Not really 0%


Transcripts

1. Intro: Hi, everyone. Thanks so much for joining us today. I'm Amanda Stockwell and I run a little strategy company called Stockwell Strategy. I'm Cory Lebson and I've been doing user research for decades now. You sound old when you say that. Decades. Cool. So today we're going to be talking about usability testing. When we talk about usability testing, we're going to talk about what usability testing is, how you get started, how you find the right people, how you conduct the test, how you conduct the research, and how you analyze that data once you get it and create actionable information that can help your team. We'll give you an inside, backdoor peek at all the details and logistics that go into creating a really successful usability test. We're both consultants, so we have a lot of experience, but you don't have to be. You might work internally in a company already, you might have lots of UX experience, you might be brand new. This course is for anyone who's interested in running their own usability test. Hopefully this course is for you. 2. What is a usability study?: You might be wondering what usability testing is. Before we really delve into what usability testing is, we should talk about what usability is. Well, there's an official ISO standard, ISO 9241-11, which says usability is the extent to which a product can be used by specified users to achieve specified goals with three things, effectiveness, efficiency, and satisfaction, in a specified context of use. Usability testing is simply that. Can users use what they are expected to be able to use? Can they accomplish the goals they're expected to accomplish, in an efficient time, feeling good about what they did? You might have heard of lots of other sorts of UX research: trying to figure out who users are, what the problem space is, or what stuff you should be building. All of that is awesome, but it's not usability testing.
Really, what usability testing is, is the stuff that helps us at the beginning, when we have maybe something very basic to show, something very formative. It could be halfway along, something that we've partially developed and are ready to show to some users as is, and even at the end, which can also be the beginning of a redesign. At all those times, we can figure out: can users use what they're supposed to use and do the tasks that we want them to do? It's not marketing research; usability testing isn't for assessing the problem space. It's really specifically for assessing how well users are able to accomplish the goals that we have for them. Now that we know what usability testing is, we should think about how we can plan for that usability study. Yeah. Is it right for you? We'll find out. 3. The Big Picture: A usability test is pretty simple in concept, but there's actually a lot that goes into doing one successfully. There's a lot of stuff you need to figure out right up front and then figure out along the way. One of the first things you need to figure out is really, what are you going to be testing? Do you have a fully fleshed out product? Are you going to be looking at competitors? Do you have a prototype? If so, what fidelity is it? Who are you going to be talking to? What users are you trying to target? Are they already existing users of something you have? Are they potential users? Really, those are the first two things you need to figure out: what are you testing and who are you testing with? Then you've got to figure out: are you going to find users locally where you are? Are you going to go to them? Are you going to do something remote, where you're talking with them in real time but they might be located around the country or around the world? To that end, you can do something called moderated testing, which is traditional usability testing where someone like one of us is sitting and talking to a person.
But you can also do unmoderated studies. There are lots of tools now where you can set up tasks ahead of time and not even be present, and participants can answer asynchronously. We'll dig into the details of that stuff in the middle, but this is all stuff you need to figure out right away. Then pretty soon, you're going to have to figure out where you're going to do testing, particularly if it's local or somewhere that you go. Are you going to find a venue? Is there a place that you can go to do the testing? Are they going to come to you, wherever you might be? You've got to know that. If you're going to be doing things remotely or unmoderated, there are logistics like picking the tool; there are lots of online tools that you can use. Or even just figuring out, are you using a video conferencing tool if you're not going to be in the same physical location as someone? All those logistics you need to start getting sorted out. Then of course, beyond the venue, you need to figure out who exactly you want. Because when you do usability testing, you want either actual users, if that's the study you want to do, or you might want representative users, people who are like actual users but are naive to the product that you have. Of course, it can be said that you should be building something that's easy enough for anyone to use. But especially depending on what you're testing, there will be specific contexts that some people might have that other people won't, and you'll really get your best feedback when you talk to either real or representative users. You've got to figure out who they are and then also where to find them. As recruiting is progressing, once that's already in motion, you'll have time to develop a script. The script will go through what exactly you want to talk with them about, what tasks you want them to do, what makes for a good task and a great question that doesn't lead participants, and what makes for a not-so-good usability question.
Then, thinking through the technical logistics as well is pretty important. We'll dive into all of that. Yeah. 4. Remote or In Person?: So one of the first things that you really need to decide with usability testing is, will that testing be remote or will it be in person? Remote usability testing has some great advantages. For one, you don't have to go anywhere; there's no travel time, there's no travel cost. Simply from your desk, you can do the entire usability study. You can interact with people from all over. They could be local to you, but they could also be other places in the country, other places in the world. It's all good. Granted, that can give you some time zone issues, but in general, it's really easy to conduct remote usability testing. But the problem might be that your data is a little more shallow. Sometimes you can't rely on the participants having a webcam, so you may not see their face. Their face provides a very rich data set: "Oh, it's wonderful," or "I really don't like it," or "this is pretty bad." You lose a little bit of your data; you lose that in-person interaction, that synchronization that happens between a moderator and a participant. It also requires a level of technical sophistication from the participant. We say, of course, we want people who can use the web if we're testing something on the web, but we can't necessarily count on them being super proficient, able to quickly start Hangouts or GoToMeeting or WebEx or what have you. All the more so if you want them to share their mobile screen; that gets even more complicated. Being aware of that could be a caution against remote usability testing. Just think about every time that you've tried to start an online meeting, even with the people within your own company; you can spend five to 10 minutes trying to make sure that everybody can see and hear and all that stuff.
There's certainly some benefit, especially when you're trying to talk to a lot of people all over the world, and it can be a bit easier to convince people to sign on for a 30-minute phone conversation. But there are also some great benefits of doing moderated, in-person testing. One thing that Cory already mentioned is that, as a moderator, you really get to build a better rapport when you're in the same room with someone, and you can read not just their face but their body language. If someone is frustrated or confused, you might be able to pick up on that better. That's really great. It's great to be in person. However, there are some downsides too, and a huge part of that is travel time, both for you and for the participant. If you're not talking to people who already live in the city that you live in, either they need to travel or you need to travel, and you need to figure out the logistics of where you're going to host these sessions. If you have a lab at your office, that's great, but you might need to find a third-party location like a facility. There are plenty around. We're in a hotel right now. [inaudible]. We'll talk about that in a second. But you do have to consider the pros and cons. As you might have guessed, we have a favorite. I personally love in-person usability testing, like we're doing this week. I really appreciate the opportunity, whenever possible, to really meet with participants, to see participants, like we've been doing this morning, in fact. You can also really dig deeper. Besides building rapport in the session, you can talk to them beforehand about how their day is going, what the traffic was like. We both happen to be traveling right now, so I asked someone earlier for a recommendation for a place to get dinner later. It helps ease the tension a little bit, and you really get that richer, deeper data and the ability to read body language and facial expressions.
Of course, when we're doing mobile testing in person, life is just so much easier. If something is not a computer, it's so much easier and so much better. I'm in user research for the in-person interaction, whether it's with the participants or, in both our cases, with the clients; it's just a lot of fun to do that. You may not be able to choose all the time; you might have to go with remote sessions. But we recommend, when you can, to do it in person, and we'll give you some tips about how to make it the best either way. 5. Moderated or Unmoderated?: As we mentioned, we recommend moderating, and ideally being in person, when you can. But there is also an option where you can do unmoderated usability testing, which means that you set the tasks beforehand and participants respond on their own time. There are lots of digital tools and setups where you can do this. It can be really great if you need to get results and data really quickly, and especially if you're looking to get participants from all over the world, because people can do it on their own time and they can answer at the same time and [inaudible] going to see. But of course, you don't get the ability to dig in and ask them questions or follow up on things. I find sometimes that clients will say to me, "We want you to do unmoderated testing so that we can get large numbers." There is something to be said for that quantifiable data, but quantifiable data, while great, can also be shallow. You can quantify, but then if you really want to dig deep, you wonder why. What were they really thinking? Sometimes it's too late. Even if you were to contact a few people later, they may not remember. In general, we recommend moderating whenever possible. But if you need to do unmoderated sessions, make sure that you're extra careful and clear about how you write the tasks.
Ensure that they're not leading, and when in doubt, do a run-through with clients or colleagues to make sure that everything is clear and that you're asking the questions in the right way. You know what? Really, unmoderated or moderated is up to you. I personally, as a researcher, prefer moderated testing. Whether it's remote or in person, the real-time interaction is still going to give a richer data set, even if it's not necessarily quite as quantifiable. Yeah. 6. Finding a Venue: As you're thinking about moderated usability testing, real-time usability testing, and you're thinking that this moderation will be in person, you may ask, where should I be doing this testing? The answer is, anywhere. You can do testing anywhere. You could do testing in any state in the United States. You could do testing in any country. It's all good. There's somewhere for you. To some degree, it's really going to depend on your budget, how much money you have for a space. Yeah. But really, the most important thing is that you want to try to find a high concentration of your users, because either you or they or both of you are going to have to travel somewhere if you're going to be meeting in person. Hopefully, you have a nice big budget. If that's the case, what we recommend is going to a research facility. These facilities are already designed and set up to help you take care of some of the logistics around recruiting and the space. They might already have a recording setup, they're probably easy to find or have good signage because they're used to having people come in all the time, and they'll be really attentive to your needs in terms of what the space looks like. There's often a room for both the participant and the moderator, and also an observation room with that traditional one-way glass you see in the focus group commercials. All that stuff is already in a research facility; you don't have to do it.
Often, they have extra perks like snacks and drinks and stuff that makes it a comfortable experience. Yeah, they bring you food. While we could go get our own food, it makes clients, it makes stakeholders pretty happy to feel like someone is attentive to their needs. Then in some ways it creates this aura around the research, like, oh, this research study is robust. It's in a robust facility with that one-way glass, with this great observation opportunity. Yeah, it feels more polished and it gives a bit of power to what you do. So if you have a big budget and you're able to, we really recommend that. A lot of times those facilities can also help with recruitment, which we'll talk about later, but they can be really good for making sure that the logistics of the study go smoothly. Yeah. Now, if you have a medium-sized budget, maybe not quite as big, you may ask, well, where else could we go? Well, we are in a hotel right now running a usability study. You'll see our equipment setup a little bit later on. But for now, we chose this facility. We got two rooms, one room for us doing the moderation and interacting with participants, one room for the observers, and it's perfectly nice. Now, we don't have one-way glass anymore, but we have virtual one-way glass by streaming the sessions, and it's still a fairly robust, fairly in-person experience. They're still setting up some coffee and pastries daily for the participants and the stakeholders, so we're a good portion of the way there. Now, of course, when you go this route, not all hotels, not all alternative facilities, may have really solid Internet connections, and that's pretty important if you're testing something on the Internet. You don't want the usability study to fail because the hotel's connection is bad. Once in a hotel, I even had the Internet connection go out. Yeah, I was there. Yeah, that's true. That wasn't so good.
But basically, you just need a place, again, that's comfortable, where participants feel good coming and going. Yeah. On that note, we want to be clear we aren't in hotel rooms; we're in conference rooms within a hotel. It can be slightly uncomfortable to walk someone into a room with a bed. You want to be sure that it's an appropriate place, someplace that has at least a desk or a table set up, so that it still feels professional. Your goal is to make the participant and the stakeholders feel comfortable. Yeah. Actually, I did a test once where we had a Murphy bed. Participants walked into a hotel room, and one guy walked by and said, "I'm coming by a little early, I want to see if this was legit." There we were, the bed was up, so he felt it was legit. He came back later for his scheduled session. Well, all that is good if you have a medium-sized budget. If you have less than a medium-sized budget, you still have some good options, because both of those are paid for. If you work internally for a company, a nice parallel to the two rooms in a hotel is just two conference rooms or two offices within the company. If you're lucky and you have a lab, that's awesome, but you don't need to be that fancy. You could just have two separate rooms within one place. Yeah. Even a small office could be enough for just two people, just the moderator and the participant. Yeah. There's also a couple of techniques that you can pull from. You might have heard of guerrilla recruiting or guerrilla usability testing, and this is the classic stake-out-and-intercept approach. Whether that's at a mall or a coffee shop, or, I used to work for a big online retailer, so I would go to their retail stores, you intercept people where they actually are. Now, there are some things to consider there. You need to make sure that you are at a place where the people you want to target will actually be.
So if you're looking to talk to, say, truck drivers, maybe go to a truck stop or a gas station. If you're looking to talk to lawyers, you might not have such good luck at a truck stop. You need to think about that stuff, and you can't be as choosy about who the people are, but it's a great way to get quick feedback and totally free space. Yeah, I remember once, I was doing testing at a conference because the users were association members. The association gave us a little vendor booth, and we were like, "Hi, we're not selling anything, but would you please participate? We have some gift cards for you." Hit or miss, but we got enough people in the end. Also, you know what, as consultants, you may say, well, we don't really have an office that we go to, a really robust place. But clients have offices, and there are plenty of rent-an-office places everywhere where you can rent an office for a short time, fairly cheaply compared to a hotel, sometimes almost free. Actually, another good option that I have previously used is a public library that had some meeting rooms that you could reserve for free. There are lots of options no matter what. But basically, what you need to look for is a place that is fairly convenient for both you and especially your participants to get to, and an atmosphere that will make your participants and your stakeholders feel comfortable. 7. Drafting a Screener and Recruiting Participants: One of the things that we mentioned earlier is that you really want to try your best to get actual or representative users to be participants in your usability tests. It is great to design something that is easy enough for everybody to use, but your participant pool shouldn't be just everybody, and you have to really think about the details of who these people are, what's important to them, how they're using what you're going to be testing, whether they're truly the right participants.
For instance, right now we're doing some usability testing on a really specific piece of software that, if you are not in a certain profession, will make no sense to you no matter how well it's designed. It's important to really think through that stuff, and the answer for a usability test really never should be absolutely everybody. Many times that's what I hear from a client: "Well, we want everyone, we're for everyone." No, you're not for everyone. So how do you get these people? The way that you should really think through it is with a screener. What's a screener? A screener is a set of rules saying, this represents criteria for participants we do want, and this represents criteria for participants we don't want. If a participant has this attribute, they're good; if a participant has that attribute, however, we don't want them. Maybe it's a certain knowledge, maybe it's part of their profession, maybe it's some background that they have, some experience, some other software that they might have used. Now, I often hear from clients things like, "Well, it really matters to have gender criteria, it really matters to have race or ethnicity criteria." Maybe, but often it doesn't. Or they're saying, well, we need to exactly replicate our demographics. Probably not. It's not about replicating your demographics; it's about finding users who can use the system from different perspectives and different lenses, and getting the universe of those users, not hitting every demographic category. In fact, if you construct your screener in such a way that you've got a laundry list of demographics, you make it really hard to do the recruit and find the people that you need: the people with the core education, the core knowledge, the core background, the core experiences. Because that's what matters; that is your topic of the day. Yeah.
So you really want to pay more attention to behavior and experience and knowledge, less to those other things like, where are you from? How much money do you make? Some of that stuff really depends on the context of what you're doing; it might be important, but when in doubt, aim for a shorter screener, a shorter list of things that you want to pay attention to and look for. Also, always try to put the things that you don't want first, because by virtue of the fact that someone is answering your screener, they want to be in your study; try to screen people out early on rather than asking a whole litany of questions and only screening them out at the very end. Sometimes in your screener, you want to construct questions that aren't just asking, do you do this, yes or no? You want to give them a list of five options and say, which of these do you do? Maybe only option 3 is the one you want, but that way you prevent people from just saying, "I want to participate, sounds like so much fun," and then getting participants who aren't giving you the data that you want because they don't understand what you intended for them to understand. Yeah. You also have to be just a little bit aware of the professional usability tester, or the serial participant. Both Cory and I have had experiences where people have shown up whom we've seen before, where we weren't quite sure that they were really the right fit, but they said yes or figured out the answers we were looking for. If you write the screener really effectively, you can make it much harder for people to guess what you're looking for, so that you really, truly get the people who fit the criteria that you mean. Yeah, for sure. Of course, the question then becomes, how many participants do you really need? You may hear anything: three participants, five participants, eight participants, 100 participants at an hour each. Don't do that to yourself. Good luck with that.
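The screener rules described above (screen out early, prefer behavior and experience over demographics, use multiple-choice questions where only one option qualifies) can be sketched as a simple filter. This is an illustrative sketch, not part of the class: the question ids, the "manage case files" criterion, and the thresholds are all hypothetical.

```python
# Illustrative sketch of screener logic as described in the class.
# All question ids, criteria, and thresholds below are hypothetical.

def screen(answers):
    """Return True if a prospective participant qualifies.

    `answers` maps question ids to the participant's responses.
    Screen-out rules come first, mirroring the advice to disqualify
    people as early in the screener as possible.
    """
    # Early screen-out: filter serial participants up front.
    if answers.get("studies_last_6_months", 0) > 2:
        return False

    # Multiple-choice question where only one option qualifies, so the
    # "right" answer isn't guessable ("Which of these do you do weekly?").
    qualifying_activity = "manage case files"  # hypothetical criterion
    if qualifying_activity not in answers.get("weekly_activities", []):
        return False

    # Behavior/experience criteria matter more than demographics.
    return answers.get("years_in_profession", 0) >= 2

candidate = {
    "studies_last_6_months": 0,
    "weekly_activities": ["manage case files", "email clients"],
    "years_in_profession": 5,
}
print(screen(candidate))  # True: this candidate qualifies
```

Putting the disqualifying checks first means most non-fits exit after one or two questions, which keeps the screener short for everyone else.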
But ultimately, it's up to you to consider: do you need the big-picture things? Do you need every granular bit of data? Are you trying to balance the big pieces of data against getting too much repetition, where you've recruited 20 people to come, but by participant 10, or even eight, you already know what the issues are? Think about that, and think about how many unique tasks you have. Do you have different user types that might actually get different permutations on tasks, where you have to word things just a little bit differently? Maybe you want more participants then, because you divide them up. Yeah, another thing to consider is whether you're doing a one-shot usability test or iterative testing. If you're intending to do one set of testing, then maybe fix or change some things, and then do another set of testing, you can get away with fewer participants each round, because you know that you're going to be validating some of your previous assumptions and continuing to learn along the way. In general, we always recommend doing iterative testing because that's just good, smart thinking, but you can get away with fewer people, and so each individual, discrete usability test is a bit cheaper, a little bit less resource-intensive when you have fewer people. As a matter of fact, this week what we're doing is 10 participants, and we don't necessarily expect all 10 to show up. In this case, we'd say at least eight will show up, and that will give us enough information for this iteration. Then we're going to wait two months and schedule another 10 participants to come back, a different set of participants. Again, hopefully we're going to get eight, and in that way we have our iteration. We've got two months because that's what the development team feels they need, which is totally fine; different development teams work at different speeds, and there will be different complexities of things to be fixed.
While we scheduled 10 sessions, 10 one-hour slots with 15-minute intervals, you could also schedule floaters. A floater might be there over three sessions; they get paid a little bit extra to stay for the three sessions, and they probably just bring a book or their laptop or their iPad or whatever and do their own thing. They might not be used; however, they're there should someone not show up. But I would say, personally, I prefer to schedule extra sessions and not deal with the complexity of floaters and having someone sit there over three sessions. Yeah. I would say a general rule of thumb, and take this with a big grain of salt, is that for every type of user group per usability test, aim for about 5-6 sessions at minimum, and I like to schedule one extra participant for about every five that I hope to include in my study, because even with the best recruiters and the most reminders, stuff happens and people occasionally can't show up. Yeah, and actually, speaking of the best recruiters: they call people the day before, they e-mail them the day before and say, "You're coming, right?" They wait for a response too, and if they don't get that response, they'll flag it and say, "By the way, I wasn't able to get in contact with so-and-so, just be aware that I'm a little concerned." Good recruiters will follow up multiple times, depending on when that initial scheduling takes place. Cory is talking about using a third-party recruiter. If you want more tips and more detailed information on writing your own screener and doing the recruiting yourself, I actually have another Skillshare class. 8. Developing a Script: When we think about usability testing, we're really thinking about task-based sessions, and those tasks are activities that users might do in real life. In fact, when participants are recruited, they should be recruited with an understanding of what those tasks really are and why you're asking them to do them. Now, often you may give them fake data.
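Amanda's rule of thumb above is easy to turn into back-of-the-envelope arithmetic. The sketch below is mine, not from the class: the function name and the choice to round the padding up are assumptions, so treat the numbers as illustrative only.

```python
# Back-of-the-envelope sketch of the scheduling rule of thumb:
# at least 5-6 sessions per user group, plus roughly one extra
# recruit for every five you hope to include, to absorb no-shows.
# Function name and ceil-based padding are my own assumptions.
import math

def sessions_to_schedule(user_groups, per_group=6):
    """Sessions to put on the calendar, padded for no-shows."""
    needed = user_groups * per_group
    extras = math.ceil(needed / 5)  # about one spare per five participants
    return needed + extras

print(sessions_to_schedule(1))  # 6 needed + 2 spares = 8
print(sessions_to_schedule(2))  # 12 needed + 3 spares = 15
```

With two user types getting different task permutations, that padding is the difference between a wasted morning and a full data set when two people cancel.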
You're not necessarily going to say, well, this is all personal, but at least you want them to have an experience that mirrors real life. As you give them tasks, you're going to say, "Okay, try this. This is what you're going to do, here's our scenario, go for it." You watch them go for it. Once they're done with one individual task or the whole set of tasks, you can talk with them. You can debrief them on a per-task basis, or you can debrief them at the end, or both. But the critical thing you need to be aware of is that when you develop a script and you list all your tasks, and maybe you have your probing questions, nothing you do early on should bias what happens later on. You should also make sure that the tasks are related to goals that the team has. You don't want to ask people to find something just for the sake of finding it; you want to make sure that you're really asking questions and developing tasks around what your core goals are. For example, if you're working for an online retailer and they're having trouble with people bowing out before they get to add something to the cart, you'll want to create goals that are centered around those things to make sure that there aren't problems there. Try to think about the context and why you're asking those specific questions, and then you also want to make sure that you're asking them in a way that is clear and concise but not leading. When in doubt, you want to take out any references to positive or negative emotions and really just set the context: imagine you're trying to do X; show me how you would do it. You definitely want to balance what you say: what do you like or not like about this? What is good or not good about it? You want balance so that you never bias. I mean, I've been in or observed studies before where someone might say, "Oh, yeah, what do you like about it?" That's a big no-no. Then the participant says, "Oh, you want to know what I like?
I like it all; it's wonderful." In fact, you're pushing them only to the likes and not the dislikes, or vice versa. It's important to know that people will automatically, without meaning to, tend to skew more positive when they're talking to you, so it's important to write those questions in a non-biased way, and then to really observe what they're doing. They might say that they love something after clicking around for 10 minutes, not being able to find it, getting really frustrated, and then finally completing the task much later. Write the questions in as neutral a way as possible, try to order them so that they don't bias the later questions, and make sure that you set the context so that participants really understand what you're asking them to do without giving away any hints. When in doubt, say nothing and just watch. I like to frame the conversation from the very beginning by saying, "You know what? I'm not the developer. If you like something, awesome; if you don't like something, still awesome. Either way, we keep learning." One other thing that I think about with usability testing and writing scripts is when client teams come to me and say, "Okay, we're ready to write a script, but we have the script already; we used it for functional testing." In a functional testing script, you basically say the user will do this, then this, then this, and the next path is this. A usability script is different: we're not telling the user what to do. In fact, you tell them the minimum necessary to give them the real context of what they're doing, and then you set them off on their own. If they say to you, "What do I do next?" (and that happened today, three times: "What do I do next? What do I do next?") you say, "Well, you know what? Imagine I wasn't even here, and do what you can."
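The scripting advice here (goal-driven tasks, a realistic scenario, neutral wording, no hints) could be captured in a simple structure like the following. This is only a hypothetical sketch to make the ideas concrete; the field names and the sample retailer task are illustrations, not a format from the course:

```python
# Hypothetical sketch of a usability-test script: each task ties back to a
# real team goal, gives the participant a realistic scenario, and avoids
# step-by-step instructions or leading language.

tasks = [
    {
        "id": 1,
        "goal": "diagnose drop-off before add-to-cart",  # real team goal
        "scenario": ("Imagine you need a birthday gift for a friend "
                     "who enjoys cooking. Show me how you would go about it."),
        "success_criteria": "participant adds any item to the cart",
        "debrief_probes": [
            "What were you expecting to happen at that point?",
            "Tell me more about what you were looking at there.",
        ],
    },
]

def check_wording(task):
    """Flag words that push toward positive or negative framing."""
    leading = {"easy", "simple", "like", "love", "hate", "confusing"}
    words = set(task["scenario"].lower().replace(".", "").replace(",", "").split())
    return sorted(words & leading)

for t in tasks:
    flagged = check_wording(t)
    print(t["id"], "leading words:", flagged or "none")
```

A checklist like `check_wording` is obviously no substitute for judgment; the point is simply that each task records the goal it serves and a neutral scenario, so reviewers can spot tasks that exist "just for the sake of finding something."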
If they get totally stuck, then we say, "Okay, we know they can't do that task." We might show them how, if it's not going to bias the later tasks. But ultimately, we really need to get that authentic experience to whatever extent possible. 9. Setting up equipment: Okay, here we are. We're very lucky today to be in a hotel with a real setup for usability testing. In fact, we have not cleaned anything up; we have not changed a thing. This is exactly where we've been all day doing our usability study. What do we have? Well, first of all, we have the participant computer. Our study today is on a computer, of a website, so the participant is using this computer. However, you'll notice this is a laptop, and we didn't want the participant to have a laptop experience if they don't normally use laptops. So we set up this station here for the participant. They sit on this side, using a real monitor with a keyboard and a mouse, and really have a full desktop experience. Now, this computer right here is where the moderator sits. The moderator, that's me, will be sitting here, talking with the participant. The moderator hands the participant task cards. We don't always use task cards, but this study is a little more complicated; the tasks are a little deeper, and so in this case we are using task cards. That's especially useful when you have things like lots of different logins or passwords, which for this particular study we do have. But we always try to talk naturally to set up the context and then give them the task card as a piece of reference, rather than just reading it directly to them. Here we have the script that we're actually using today, and this script has the tasks and some intro material to really warm the situation up. Besides that, we have the logger's laptop right here.
The logger's laptop lets the logger record in real time as sessions go on, and of course we'll talk about logging shortly. But suffice it to say, what I have set up here is the Morae software, where you can log notes directly into the videos as they're being recorded. Meanwhile, this here is a basic Logitech webcam. I like that it has two microphones and good audio quality; it gets both the participant and the moderator. Then for mobile studies, although we're not doing that today, I brought this anyway: my IPEVO document camera. It goes up, and then it can capture the device that a participant might bring, just like that. In this way, we can actually have participants bring their own device without the complexity of saying, "Okay, let's install this, let's install that," and so on. While we're using Morae today, it's not the only tool you can use. You can pretty much use any video conferencing or screen sharing software. Especially if you use a Mac, which a lot of us do, Morae isn't always the best choice, but it's a really nice one because you can take notes right in the video files as you go. Really, any tool works as long as you can see what the participant is seeing while you're taking notes and moderating from the other side of the table. Another thing to notice with this setup is that we are off to the side, not directly in front of the participant. That's so that we can see them and their body language, and also glance up and smile once in a while, rather than just being heads-down taking notes. You ought to be conscious of the actual physical space of your setup so that the participant feels comfortable and doesn't feel like they're being grilled or interviewed. You'll also note that we've got the two of us in here, but that's what you want to max out at.
For sure. In fact, sometimes it makes more sense for the logger to be in an observation room with the client or stakeholders, which also has some political value: you know what conversation is going on in the background. It really all depends on what's going to be best for any given situation. In terms of positioning, of course, if it works better off to the side, or sitting next to the participant, great. But one thing I think about is that I often have a chat going on the moderator laptop, so I am wary of a participant seeing my laptop, seeing the conversation, and getting clues about what went right or what went wrong. Yep, that's a good point. We mentioned an observation room, and that's where we're headed next. 10. Note taking options: Now that you have everything set up and you are running the study, one thing we haven't quite talked about yet is why there are two of us here running a usability test, because you only need one person to talk. However, it is super helpful to have two people, so that one person can be focused on moderating, building rapport, and following up with the participant, and the other person can be taking notes. It's really important. One of two things is going to happen if there's only one of you. Either you're not going to get the level of notes you need (you'll get some notes, while frantically typing away), or you'll have to go back and re-watch recordings and spend the same time a second person would have taken anyway, except it takes longer because you've delayed the analysis until you have the notes. Yes. There are lots of different ways you can take notes, but one thing to pay attention to is that you're not trying to transcribe every single thing that happened. You're trying to take note of the key things that are related to the tasks and the goals. Were they able to do the task successfully? Did they get all the way through, or did they fail miserably?
Even if they got all the way through, was there other feedback that they had? Did they have different expectations? You're trying to take notes of little tidbits of information that will help you as an analyst later. Really, there are two problems I've seen with note-taking that cause all sorts of trouble when you try to analyze the data. One is notes that don't actually capture what happened. The other is notes that capture everything that happened, including every word that was said and every action that was taken. People say, "Well, I want to be thorough." The problem with being so thorough is that as you try to create a report for a client, you may just be overwhelmed with data. You want the level of depth that gets you the report that you need and that you can synthesize later. Another thing that's important, especially if you're not in the same room, but even if you are, is to have an ongoing channel of communication between the person who's taking notes and the person who's moderating. We have a Google Chat set up when I'm note-taking and he's moderating, or vice versa. If the note-taker is not physically there and is watching separately, you can use text or whatever won't be too distracting. A lot of times the person who is taking notes can observe things, or maybe make suggestions, that you won't have the mental space to notice if you're concentrating on moderating. As moderator, I know the logger might notice something that I may or may not have caught myself. I'll sometimes say, "Hey, you know what? I just noticed this one thing; it was very minute, very quick," and just have a little redundancy there. Even with dedicated logging and dedicated moderation, you still can share back and forth. As a matter of fact, it works the other way too, whether we're in the same room or different rooms. The logger can say, "Hey, you know what?
You missed this one probe that would've been really good." It wasn't in the script, but it would have been a really good thing for the logs, since as the logger you're thinking about the analysis. Either way, you can be flexible, you can see what the interaction is like, but make sure that you capture the right data in a way that will help you do the final report, do the analysis, and talk with the stakeholders about what really happened. If you're both in the same room, it can be especially nice, particularly when you're wrapping up, for the person who was quietly note-taking to pipe in and ask some additional questions. That note-taker can also be the problem solver if something goes wrong, like some sort of connection issue. When in doubt, both for note-taking quality and for the overall smoothness of the day, I always recommend having two people on hand. Sometimes I'll hear from clients or stakeholders, "Oh, the moderation, that's the important thing, that takes experience. The note-taking? You know what, I've got an intern who will just take some notes." No, the truth is that they're both equally important. As a matter of fact, it's nice to have people who are comfortable doing both; then you can switch it up a little bit, going back and forth, as long as you take notes in the same format, of course. Both are really important skills to have experience in and to grow that experience in, again, towards good reporting and good analysis. 11. Observation Room: Okay, so we saw where we've been all day, doing the usability testing, meeting with the participants, and so on. Now here we are in the back room, where the stakeholders or the clients are sitting, and this is really how they left it; we haven't adjusted a thing. This will really show you what's happening behind the scenes.
Here you'll see a great big monitor. This is where they're watching. Here's a laptop; it's running Morae, which is what we told you we're using in the other room. The stakeholders are actually getting to watch picture-in-picture: they see a little picture of the participant and a big picture of the participant's screen. Now, they don't get that one-way glass, but that's okay. We have a pretty authentic, realistic experience with only maybe a three-second lag, and you know what? They don't even notice. They're clustered around, watching the big monitor. And do you think we actually traveled with the big monitor? No. We just said to them, "Hey, do you have an extra big monitor lying around? It'll be helpful," and they said, "Sure, no problem," and they brought one over. Now you'll see too, not only do we have the tech, we have the food. In fact, we've got some awesome food here: cookies and brownies and all sorts of sugary stuff to keep the stakeholders happy, and happy stakeholders make for a happy study. Especially on a long day. It's tough to moderate all day, but it's also tough to watch all day. Most of the time our team members are not in the weeds of usability testing the way we are, so anything you can do to make the experience comfortable for your team members, whether that's clients who are paying you or internal stakeholders, will encourage them to observe and participate, which is always good. Yeah, and plentiful coffee, very important. Yes, caffeine is good. Again, happy stakeholders, happy study. One other thing to note is that in this room we still have lines of communication with whoever is in the testing room. If the person who's taking notes and logging is in that same room, that's great. Or they can be sitting with the rest of the team.
Which can be especially useful if you want to hear their reactions and the discussion that goes along; seeing them react to the participant's reactions can be super useful. And it doesn't have to be a room right next door. Observers could be anywhere, as long as you have that screen-sharing technology: at home, at their desks, really wherever. I personally have logged into a usability study from an airport before. I don't necessarily recommend that one. But it is also especially nice to have a room to greet participants, particularly if someone comes a little early. You can do the initial setup with them there, like if you've got a consent form for them to sign, or if you have incentives afterwards; you can use the stakeholder room as a home base to make everybody feel comfortable. Yeah, actually right here, you can't see it so well, is the signature form where each participant signs that they've received their consent copy and a gift card, a Visa gift card. Overall, I feel like this has been a very comfortable experience, being at the hotel. Maybe not quite like the one-way glass at a market research facility; that is full service. But you can make it work pretty well in a place like this. Yeah, you can run a great usability test pretty much anywhere. Yeah, for sure. 12. Analysis and Reporting: The final step of a usability study is to figure out what to do with all the data you've collected and compile it into some digestible format for your clients and your stakeholders. The data analysis is going to look different depending on what your goals are. Some usability tests are really focused on how long it takes people to do things, so you might analyze measures like time to completion or time on task. You might be looking at something as simple as did they pass or fail, or did they get things right or not?
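The pass/fail and timing metrics just mentioned take only a few lines to compute. The session records below are made-up sample data, and the field names are hypothetical, not taken from Morae or any other logging tool:

```python
from statistics import mean, median

# Hypothetical per-task session records: one entry per participant per task.
sessions = [
    {"participant": "P1", "task": 1, "completed": True,  "seconds": 95},
    {"participant": "P2", "task": 1, "completed": True,  "seconds": 240},
    {"participant": "P3", "task": 1, "completed": False, "seconds": 300},
    {"participant": "P4", "task": 1, "completed": True,  "seconds": 130},
]

# Completion rate: share of participants who finished the task.
completion_rate = mean(1 if s["completed"] else 0 for s in sessions)

# Time on task, reported only for successful attempts. The median resists
# outliers, like a participant who wanders for ten minutes before succeeding.
success_times = [s["seconds"] for s in sessions if s["completed"]]
time_on_task = median(success_times)

print(f"completion rate: {completion_rate:.0%}")   # 75%
print(f"median time on task: {time_on_task}s")     # 130s
```

The qualitative "clues and snippets" discussed next are what turn numbers like these into recommendations; the numbers alone only tell you where to look.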
You also want to look for other clues, the snippets that we talked about with note-taking, and look for trends, so that you can not just report back what's wrong but also offer some recommendations about how you might fix it, along with other things people might want to look into, anything you can pull out that will be actionable for the teams you're working with. Really though, the report itself can take any format. For example, you can have a set of slides, whether in PowerPoint or in Google Slides: here are some images, some screenshots, here's a little callout, here's a little inserted video clip where you can see what really happened across three participants who said remarkably the same thing. The other thing that can be really helpful is if you're recording the participants' faces and you capture a particularly strong emotion, especially a strong negative emotion. That can be really powerful to convince your stakeholders and clients and help them understand what to do, why it matters, and how it's impactful. I have a story from an old job where we actually captured someone crying. We had a screenshot with a big fat tear, and we changed our whole plan for months based upon that one image. It can be that detailed, and it can be great to use emotion, but it can also be something as simple as an email with some bullet points. I also especially like to have debrief meetings, where I don't just send off a report or some deliverable and call it a day. I like to talk through things, talk about the experience, say, "This is what we noticed," and give some context along with the feedback. Of course, whatever you write or put together should be clear enough to stand on its own, but it's really more powerful if you can discuss it.
For sure, and sometimes there's a discussion immediately, even before you analyze the data: "What just popped out in your minds, everyone?" Then, some time later, "Okay, now that we've had time to go through it carefully..." Sometimes I find clients ask for one when they mean the other. They say, "Oh, let's do a debrief right away, and tell me the most salient critical points that span participants," when what they really want is the synthesized data. No, wait: we can talk, but give me a couple of days to analyze, to synthesize, to really understand what we have. Conversely, while we talk about these really quick reports, I've also had clients who, sometimes for official purposes, want a Word document, with paragraphs. As a usability moderator, as a tester, that's always a lot of work; it takes more time, but sometimes that's what they want. Be flexible: okay, this client or stakeholder wants the detailed report, that one wants the quick PowerPoint with annotated images and screenshots, that one wants just the bullet points and can have those bullet points quickly. Understanding what your stakeholders and clients want is so important. Yeah, and the most important thing is to make sure that you capture the key takeaways and anything actionable that your group needs to work on. It could be: here are the things that went well, here are the things that didn't go so well, here are some things we need to test further, here are the things that definitely need to change. But really, whatever the goal of your team, your client, your internal team, whoever you're working with, you need to make sure that you meet their goals, just like the participants need to meet theirs. 13. Conclusion: Thanks so much for joining us for this overview of usability testing. We hope you learned a lot. Yeah, it was great to be with you today and get to share some things with you.
Don't forget to fill out the activity sheet and upload it so that everyone can give feedback. Yeah, and feel free to find us on social media, on LinkedIn, on Twitter and the like, and follow us. Yep, good luck with your own tests. Great. Thank you.