
Intro to UX: Fundamentals of Usability

Marieke McCloskey, Director of Research, UserTesting

10 Lessons (1h 21m)
  • 1. Introduction (3:11)
  • 2. What is usability? (11:00)
  • 3. Why does it matter? (6:42)
  • 4. Designing Digital Products for Ideal Usability (10:59)
  • 5. Types of Usability Testing (11:20)
  • 6. Usability Testing Best Practices (5:48)
  • 7. Reporting on Findings (11:14)
  • 8. Demo: Usability Study (4:07)
  • 9. Demo: Heuristic Evaluation (11:21)
  • 10. Final Thoughts (5:39)

About This Class

Join UserTesting's Marieke McCloskey for an insightful dive into usability — what it means, why it matters, and how you can optimize your product, service, or business.

What: In today's competitive market, it's never been more important to make sure that your user experience is easy, clear, and enjoyable. This 90-minute class gives you the frameworks, tools, and tactics to create a stand-out user experience, with key lessons in: 

  • what usability is
  • designing digital products for optimal usability
  • ways to evaluate usability (with hands-on demonstration)
  • best practices for reporting findings and recommendations

Who: This class is ideal for designers, marketers, product managers, developers, and everyone involved in crafting great customer experiences.

Why: You'll gain a richer understanding of usability as a discipline, empowering you to design, evaluate, and iterate on the user experience.

_________________

Explore more Skillshare classes by the experts and strategists at UserTesting, a user experience research platform for fast, real-time feedback.

Transcripts

1. Introduction: My name's Marieke McCloskey and I'm the Director of Research at UserTesting. UserTesting has a platform that you can do usability testing on, and we also have a services side, basically a consulting arm where we offer help to our enterprise clients in using our tool. So today we're going to talk about the basics of what I think is just good design: making something usable for everyone in any situation. There's lots and lots of conversation about user experience now and how important user-centered design is, but today I want to focus just on usability: what usability is for products, why we should care about usability, and how to figure out if the product that you're building or designing has good usability. Usability and UX get used interchangeably a lot, and there are also lots of different definitions for them floating around. Today I'll focus on thinking of usability as a component of user experience, so it's one of the things that matters if you think about creating a great experience for the people who will be using your product. Making sure that your product is easy to use in any situation is essential to a great product, and even just a little bit of awareness can go a long way. So even if you don't start doing usability testing and evaluating your interfaces frequently, just knowing that this is a problem and something you should think about will already improve products out there for everyone. This course will be good for anyone who's at all involved in designing a product, even if you're on the planning side, where the business decisions have to be made around what you're going to build and what features to include.
I can also see this class being really great for designers, UX researchers, anyone involved with figuring out what goes where in the interface, either as a refresher or for people just starting out. And if you have more of a physical product rather than a digital product, it'd be really interesting to think about how this applies to the digital space and what things to watch for there. You do not need to know anything about any of this stuff beforehand, even if you've never heard of usability or user experience (which you hopefully have). You don't need to have taken any class before this. It's generally a topic that people really like when they have a psychology degree, because they're so interested in people, but it's really, really valuable for anyone from any background. Your project in this class is to conduct your own usability evaluation, and it's actually a two-part project. In the first part, you look at a design that you've chosen and, using guidelines that are agreed upon in the industry (we call them heuristics), evaluate the interface for possible usability issues: places where users might get stumped or frustrated. The second part is to conduct your own usability study on any interface, any product that you've chosen. I'm really excited to have you join me today to learn about usability and how to create great experiences for your customers, your users, your clients, whoever you're designing for. This is really the essence of creating great experiences, which is something I believe every person in the world is entitled to. 2. What is usability?: For our first lesson, I want to dive into what usability is. When I talk about usability, I'm really focused on making sure that products are easy to use for everyone in whatever context they're meant to be used in, so that it's really clear to people what they're supposed to do next and how to get the task done that they need to get done.
So, if you're thinking from a design perspective: what are the top five most important things that people must be able to do on my app or on my website? You want them to be able to do those without much thought, without much frustration, and to easily complete those activities or tasks. This does not apply just to digital products; it also applies to physical products. Really, everything has usability. You can think about the usability of any product. A camera: is it clear how to turn it on? Is it clear how to work it? Scales are really funny products to think about, because an old analogue scale is really easy to get on and get your weight, while digital scales can all of a sudden be much more complex. Usability is much more frequently thought of and discussed when it comes to digital products, because digital products are more complex, whether they're websites and apps or computers built into other products. There's a whole industry of people thinking about designing products that are easy to use. If you think of industrial design, the whole design thinking movement and philosophy applies to processes and products, not just websites or apps. One of the questions I get a lot is how usability is different from user experience. People's definitions of usability, and how usability has evolved over the last 10 or 20 years, have shaped how we think about it. But at its core, usability is just how easy a product is to use. When we think about using a product, though, there's a lot more that comes into play in creating a great experience, and this is why the term user experience has really caught on. Even more recently, people talk a lot about the customer experience, calling it CX rather than user experience: something much broader than the digital experience, just how your customers interact with your brand.
So instead of the product just being usable, which is one of the components of this honeycomb (my favorite way to think about what user experience is and all of its different components; there are actually lots of different infographics out there), usability is really the blue one in the top left, about how usable the product is. Equally important for the experience that your users or customers have is whether or not the product is useful: is there a reason that they should need to use this product? Then, do they enjoy using it? A lot of the empathy and the enjoyment is much more frequently talked about when you think about the experience rather than just usability, although in doing a usability evaluation and usability testing, you'll also uncover a lot of the emotions that play a part in that, just as you'll figure out whether something is findable or credible. Typically, though, those are all components that come up when you think about the user experience and all the touch points of interacting with a company. All of these things are important: not just can someone accomplish this task, but do they trust the company in accomplishing the task? The same with desirability: do they want to use your site versus your competitor's site? If it's easy to use, you get a leg up, but if they really get some other joy from it, you stand out even more from your competitors. Usability is about how people use the product and interact with it, and whether they can accomplish the task that they have set out to accomplish with the product. Accessibility is whether or not they can even get to the content and the features that allow them to accomplish something. So you could think of accessibility as a component of usability, because if someone can't get to the information, it'll never be usable.
It's also totally okay to think of them as two separate criteria that you need to evaluate in making sure that your product is easy to use for everyone. A really simple example: let's say you're using an iPad and there is a widget on a website that uses Flash. That's inaccessible to you because you're on your iPad. Or if you're on your phone and you happen to have poor signal strength, you can't get to the app or its features, or it doesn't load fast enough for you to do what you need to do, which makes the app inaccessible. That hurts the usability because you can't even get there, but it is slightly different from usability itself. I thought at this point it would be good to show you an example of poor usability. What does it look like when we talk about usability and people using an interface? You want it to be a smooth process, so I thought I'd show a video where the user is trying to book a flight on the Virgin America website and runs into problems, and this hurts the experience of booking a flight. "Connecting flights or non-stop flights? Okay. So, here's where it's getting a bit confusing. I'm clicking over each of the, I see, that was a little bit confusing. I was clicking over the time and it was highlighting the entire bar, and I thought that would be enough. But actually, there's three other boxes to the right: main cabin, main cabin select, and first class. So, I'm guessing I have to click on that. So, I'm going to click on the main cabin." In the video that you just saw of a user trying to book a flight on the Virgin America website, the study was done right after Virgin America had relaunched their flight booking process, so they had made some changes. He had actually had a fairly smooth process until this point: he had put in from where he was flying, where he wanted to fly, and the dates.
But then he actually had to select a flight, and he thought that just by hovering over the times that he wanted, he would be able to select the flight, when he actually had to click on a price point. It just took him a couple tries, a little bit of trial and error, which is really common. You see there's frustration or confusion in the design. So, he figured it out, he was able to accomplish it, but that would still be considered poor usability because there was a moment of hesitation, frustration, or confusion. I want to point out that this is a fault of the designers. When you see videos like this, or just happen to watch someone using a product and struggling, it's really easy to blame the user and say, "Well, he should have known better. It's so clear," or "He's probably just a novice Internet user." But really, we as designers, in the broadest sense of the word, anyone involved in creating products, have to make it as easy as we can for people to use the products. "Okay Google Now, navigate to Outback Steakhouse on Lindbergh." "Navigating to Outback Steakhouse." "That works. No, that's not the right Outback Steakhouse, that one's not anywhere near Lindbergh." "Head north, then east, and start to drive towards Lindbergh Drive." "Stop it. Okay Google Now, navigate to Outback Steakhouse on South Lindbergh. Maybe there's two of them." In this video, you saw a user trying to use Google voice commands to find a particular restaurant. It's not actually returning the result he wants, and he doesn't know how to adjust the search to get the restaurant he means rather than the one it found, so he keeps repeating the same thing, which is also really common behavior: I know this must be doable, it must be possible, I'm going to keep trying. But at some point, people stop trying.
You're lucky if you're Google and people will keep trying, or if they don't have a choice because they're on their phone and this is the voice command option they have. But it still creates frustration, which is something we want to avoid in the design. What I fall back on a lot in explaining usability, especially if I'm explaining it to people I sit next to on an airplane or to my grandparents, is what Steve Krug said many, many years ago and has continued to say: "Don't make me think." That's really what usability comes down to, and it's a really nice way to summarize it: we don't want to make our users think. If they're faced with an interface or a product, they need to know what they can do, what their options are, and how what they do next will impact what happens after that. And depending on their goals, what should they click on, what should they tap on, which button should they press? It should be really obvious. They should never be stumped about what to do or where to go. A really clear example: think of just any website and knowing what you can click on or not, so what's clickable. I get a lot of designers who like to argue, "Well, users will just look for it. They like the exploration, the hunting for where they could go or what something might mean." Or they want to use terminology that's a bit different and stands out. I have had many discussions, not just with designers but with engineers and marketing people too, where they want to use a cool term, something that's a little different. But if users are stuck and stumped and say, "I don't know what that means. I don't know where that will lead me and what that will get me," they actually won't click.
So, they won't get there, and especially if you have a lot of competitors, they'll just go and use another website, or they'll redo their Google search and go somewhere else if it's not obvious where to find the information that they're looking for. One of the things that comes up a lot in user experience or customer experience discussions is the user journey, especially if you're thinking about how the people who use your products interact with your company at multiple different touch points. Usability fits into that. Let's say your customers typically touch three different channels: they might go to your store, use your website, and use your app, or maybe they also call customer service. There are all these different touch points with your company. If they encounter poor usability at any point, that negatively impacts the experience of that journey, and it could actually interrupt it so much that they just stop and don't continue. So, you could lose someone along your ideal customer journey because you have poor usability. 3. Why does it matter?: In all of this thinking about usability, we obviously have to convince others why it's important. If you're just designing your own thing, it's easy to convince yourself, or maybe it's not, and I should convince you why usability matters. But a lot of the time this comes up in the context of larger companies, where you have to convince someone why you should even be thinking about usability and doing usability evaluations, and it really comes down to there being a lot of competition out there. If you're designing a website or an app and you have competitors, then users are going to use the one that's easier to deal with and work with. Creating a product with a smoother experience and a smoother interaction is going to be more pleasurable, more interesting, and people will come back to it. People encounter usability problems because of just plain human nature.
Going into a design and building a product thinking that no one's going to have a usability problem is really, really unlikely to pan out, and it's probably the wrong philosophy and approach to take, because as humans, we just have limitations. We can do certain things really, really well, and at other things, we're just limited. We're limited in our attention span and short-term memory, certain cognitive processes are just hard, and then you add on environmental factors. You might be in a busy work environment, or on the bus, or waiting in line; you might have just had a really bad day and be really stressed. All of that has an impact on how your mind is working and functioning, and all of it will impact someone's ability to use a product. So, I think a much better approach to design is to assume that people will have problems, and to think about how we can ensure that we're at least trying our best to prevent those errors from happening. One of the arguments that I've heard is, "Well, my users are really smart. They have PhDs, or they are doctors, or I deal with scientists or IT professionals. They're really, really smart. They're going to understand the content on the site, or they're going to be able to find this information because they're used to digging through lots of information." But think about these people's day-to-day lives at work: if they're really busy, they are probably thinking about all the stuff that has to happen after that little bit of information they need from your website, or that one task they want to complete. It's also a really natural thing for people to encounter usability problems. We as humans just make mistakes, and it's okay that we make mistakes. So, as designers, what can we do to prevent those mistakes, but also help people recover from them?
That's also a huge component of thinking about usability: not just preventing the errors from happening, but helping people through possible mistakes that they might make in using a product. One of the concepts that has helped me a lot in thinking about design and psychology, and in understanding why we can't just effortlessly design products that are easy to use, is mental models. As humans, we have mental models about everything; for any situation that you can think of, you have a mental model of what it might be. Another way to think about it is as your expectation of what is going to happen or what something might look like. If someone says, "Oh, I bought you a new cell phone," then you have certain expectations about what that phone is. Is it a smartphone? What features does it have? Maybe if you have an iPhone, then you expect to be getting an iPhone. What your mental model is is hugely dependent on your past experiences. We have mental models not just about things, but also about events, activities, and processes. If you think about going out to buy a cup of coffee in the morning from a cafe, you have certain expectations about what's going to happen, what it's going to look like, how it's going to be set up, and it's all dependent on where you're from, where you live, and what you've experienced in the past. If we apply that to the design of digital interfaces, people have expectations of how a website or an app is going to work based on how they've used them in the past and how they've been designed in the past. So, as designers, what we want to do is map our model of what we're designing to meet users' expectations. That's really the core of designing products to avoid usability issues: making sure that we're mapping our model and the intention of the design to what people are going to experience.
The hardest thing about that is that our users are coming from all different backgrounds and places. I'm not saying this is an easy thing to do, but it might help in understanding why you can't, as a designer, engineer, or product manager, just say, "We're going to choose this type of design and this flow because it works for me, and I understand that it's going to work for all my users." They might just have a different expectation based on what they've done in the past. Because this idea of a mental model is very conceptual, I like looking for graphics online that explain what a mental model is and why we have differing mental models. One of my all-time favorites shows a conversation between two people talking about ice and something that's made out of ice, but one is thinking iceberg when they hear "ice" and the other is thinking ice cubes. Without enough context around what you mean by ice, people can interpret it to mean two totally different things. My husband and I actually joke about this, because when we think of what a house is, we have very different mental models based on where we grew up: I grew up in the Netherlands, where there are lots of tall houses all attached to each other, and he grew up in Arizona, with lots of one-story, single-family homes not attached to the homes next to them. So, we have very different expectations of what a house is. The big takeaway in all of this is that designing products that are easy for people to use requires a lot of empathy, a little bit of basic psychology, and caring about the people who will use your product.
So, you have to spend a little bit of time with your customers and users, thinking about who they are, what they need, and, especially when thinking about mental models, what past experiences they've had, so that you know how they're coming into this, what they're thinking about, and how they might attempt to use your product. It's that human element, that human component, that will make you a much better designer, a designer with empathy, which will create products that are easier to use. 4. Designing Digital Products for Ideal Usability: This lesson and the following lessons take what usability is and apply it to product design, and specifically digital interface design. Best practices in design are standard design patterns and design elements that are used frequently. We see this a lot in websites but also in apps, where, for example, having the search bar in the top right corner is a design best practice that evolved over time. Users have certain expectations of how a website is going to work, and from the many, many websites that they've used before, they know that the search bar is going to be in the top right corner. If you then move that search bar, you're causing a moment of confusion or frustration, and you're making your users think, which is not what we want. So, following design best practices will just allow users to follow their mental model of how the website is going to work, and can set you up for success. This is where looking at pattern libraries or lists of design best practices can give you a really good foundation. It's just looking at what else is out there: what are other apps doing that are like my app? What are other websites doing? If you have something like a video player, what do other video players look like?
Thinking about design best practices and starting off with a good foundational design is definitely something that applies mostly to the actual visual designers and interaction designers on a product. Maybe a UX researcher would get involved if you are on the outskirts of that core team of designers and part of the design process, but even for a product manager, knowing what the standards are is still really helpful and can be brought in as a reason for potentially changing a design or suggesting an alternative. There are so many different ways to design a product; there's not just one way to do it. So, looking at what other people have done is beneficial for everyone. Even in the concept stages of planning for a product or for a new feature within your app, it can be really helpful to look at what's out there, what the best practices are, and what people's expectations are. This applies to everyone, but probably most of all to the designers who are actually going to be creating the visual look and feel of the final design. If you're interested in learning more about best practices and design patterns, I've included several links to articles and books on this topic in the class resources. There's something else that you can use, principles you can follow in designing to avoid usability issues. These are more guiding principles, more of a mindset to take on and a philosophy to follow. They're typically called usability guidelines, or a certain set of heuristics, and this obviously applies to many more fields than just design, usability, and user experience. They're things like "show empathy for your users" or "avoid distractions," so each one is less specific than a design guideline or design best practice, which would be something super specific.
This is a little bit higher level, and these usability heuristics can be a really nice manifesto for your team: we're going to follow these principles in designing, and that's going to apply to everything that we do. This is also the basis for one of the components of the project for this course, doing a heuristic evaluation: looking at the list of heuristics and asking, do we avoid distractions for our users, and what are possible distractions that could come up? It could be a really interesting and fun exercise to sit down with your team and create your own usability heuristics. I would recommend looking at some of the existing resources that are out there; just a quick Google search for usability heuristics brings up a lot of different lists. The three that I tend to refer back to the most are Jakob Nielsen's, Whitney Hess's, and Susan Weinschenk's. Susan talks a lot more about the psychology, the reasons for usability issues, which is what I talked about in the previous lesson. But it can be a really helpful reminder: if your team doesn't show a lot of empathy or doesn't have a lot of knowledge about users and people's mindset, it can be really helpful to think about why you're trying to focus attention, and that people have a limited attention span. Whitney Hess's and Jakob Nielsen's usability heuristics are much more specific to avoiding usability issues in the design, rather than the psychology behind them, but they all have the same outcome. Just as an example, Whitney Hess lists presenting few choices as one of her usability heuristics (she actually calls them design principles). The idea behind it is that if we overwhelm our users with too many options, they don't know where to go, what to click, or what to do next, and that if we can narrow down the choices, it'll be much easier for them to pick one.
Another one of Whitney Hess's that I really like is "provide a strong information scent," and this isn't something she came up with herself. All three of these lists of usability heuristics, and I think the really good ones you'll find out there, are based on years of research, studying human behavior, and looking at what makes a good interface. Whitney Hess's "provide a strong information scent" relates to a word or a phrase in an interface and whether or not people know where it's going to take them. If you have a link in your app and you want people to click on it, it has to be really, really clear to them what it's going to take them to and where it's going to lead. Just as animals hunt for food, humans hunt for information, and so you want to make sure that it's really clear where they're going to end up. A link like "click here" has really poor information scent, because it could go anywhere, whereas a link like "books" is really straightforward. This is also why sometimes you need a much longer link: you might need to link to "download class resources" and have all three words clickable, so that it's clear what you're going to get by tapping or clicking on that link. One of Jakob Nielsen's usability heuristics is visibility of system status. What this really means is making sure that your system, the product that you're building, gives feedback to users, so that when they take an action, it's clear that the system has heard them and understood that something is going on, even if it needs a moment on the back end to process it. So, if someone taps a button and the new page has to load (this happens a lot with very image-heavy websites and apps), there's a little loading icon showing that something's happening.
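The information-scent idea above can even be sketched as a toy lint check over link labels. Everything here is illustrative, a hypothetical helper rather than any standard tool: the list of generic labels and the function name are assumptions, not an industry rule set.

```typescript
// Labels that tell users almost nothing about where a link leads.
// This list is illustrative, not exhaustive or authoritative.
const GENERIC_LABELS = new Set([
  "click here",
  "here",
  "read more",
  "learn more",
  "more",
]);

// Return the labels with poor information scent, so a reviewer can
// swap them for descriptive text like "download class resources".
function linkScentWarnings(labels: string[]): string[] {
  return labels.filter((label) => {
    const normalized = label.trim().toLowerCase();
    // Very short or generic labels carry almost no information scent.
    return GENERIC_LABELS.has(normalized) || normalized.length < 3;
  });
}

console.log(linkScentWarnings(["Click here", "Books", "Download class resources"]));
// flags only "Click here"
```

A check like this obviously can't judge context, but it captures the rule of thumb: the link text alone should predict the destination.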
Just that little bit of feedback gives users such a sense of security and calmness. Without it, people end up tapping the same button many, many times, or clicking the link over and over again, because they're not sure if something happened, or they panic and think that the computer might have frozen. So, that's one that I tend to think about a lot, and it's one where you can actually walk through the use of your system and make sure that at each moment it's clear to the user that the action was understood: when they click on something, tap on something, or fill in information and hit Enter, is it processing? If you're considering coming up with your own set of usability heuristics for your team to follow, I definitely recommend looking at some of the lists that are out there already; doing a quick search for usability heuristics brings up a lot of them, and I've also included several of my favorites in the class resources. Then go through and pick and choose the ones that really resonate with you or with your team. Jakob Nielsen has 10 in his list of usability heuristics. Whitney Hess has 20. You can find lists of 30, and there are probably 50 usability heuristics out there; sometimes they're called design guidelines. But depending a little bit on your team size, and also on their familiarity with user-centered design, with thinking about who's going to be the end user of a product, I would definitely try to limit the list to the five or 10 that have the most impact. Another way to think about it: pick a heuristic that addresses something that's been a problem for you or your team in the past, something you know you haven't done a good job at. Maybe when people hit errors on your site they tend to hit a dead end, and you don't do a really good job helping them recover from those errors; helping users recover from errors is one of Jakob Nielsen's usability heuristics.
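The visibility-of-system-status walkthrough above can be sketched in code. This is a minimal, hypothetical example (the UiState shape and the fetchFlights parameter are assumptions, not anything from the class or a specific framework): the point is simply that the loading flag flips on before the slow work starts, so the UI can show a spinner immediately, and flips off on success or failure.

```typescript
// Minimal sketch of "visibility of system status": signal that work has
// started before doing it, and clear that signal on success or failure,
// so the user is never left wondering whether their tap registered.
interface UiState {
  loading: boolean;
  results: string[] | null;
  error: string | null;
}

async function searchFlights(
  state: UiState,
  fetchFlights: () => Promise<string[]>, // hypothetical back-end call
): Promise<UiState> {
  // Immediate feedback: the UI should render a spinner from this state.
  state = { ...state, loading: true, error: null };
  try {
    const results = await fetchFlights();
    return { ...state, loading: false, results };
  } catch (e) {
    // Surface the failure instead of leaving a frozen-looking screen.
    return { ...state, loading: false, error: String(e) };
  }
}
```

In a real app, the intermediate loading state would be pushed to the UI (for example via a state setter) rather than held in a local variable; the sketch only shows the ordering of feedback relative to the work.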
So, providing them a way out, a way back to the homepage, a way back to other places, would be a reason to include that heuristic over one that you know you already do well and that everyone believes in and follows in their designs. I think following an iterative approach to anything is healthy. Say, "this is what we're going to start with," and see how that works. It can be really good to use the set that you think you should start with and then reevaluate six months later. It's probably not something you're going to be changing on a two-week basis, because you really want to instill this new way of thinking, this new philosophy, in your broader design team. So adapting and changing is probably a good approach, but not too frequently. Following design best practices and usability heuristics is a really, really great place to start, because it means you're not starting from scratch; you're following certain standards that are out there and that users expect. But it's not the only way to ensure a great design, and on its own it can still leave lots of usability problems, because there are many, many ways to avoid distractions in an interface, for example. So, really and truly, the only way to know whether or not your system is going to work well for the people who will be using it is to actually watch people use it: to put the design, even if it's a very, very early sketch of something, in front of people who might be your end users and see what they do, see what they think, see how they interpret the information. This also comes back to mental models: you intended for it to be used a certain way, you intended for that link to mean a certain thing, but does it actually mean that to your users? This is really the basis of usability testing, which just comes down to giving people activities to do that they would typically do on or with your product and then seeing what happens. 5.
Types of Usability Testing: Usability testing comes down to seeing how your users interact with your product. It's really that simple. It's taking people basically off the street and giving them your product, giving them a set of realistic activities, sometimes called tasks, to do, and then seeing what happens. Where do they have a great experience, where does the interface work exactly the way they had expected, and where do they hit moments of confusion or frustration, and then how does that impact the experience? Because you ask people to think out loud and share their thoughts as they're using the product, in these moments of frustration or confusion they might actually tell you what they had expected to happen or what they think a link means. So, you actually get some ideas for how to improve the designs. Ideally you would be able to just watch over people's shoulders as they're using it in their natural environment. So, if you have an app for ordering food, you would want to just watch people at home on their phone or at work ordering food. That is a research method; it's more ethnographic. But even then, if you're actually literally looking over people's shoulder, it becomes a less natural situation. Usability testing takes a little bit of that context away and says, now imagine that you have this activity, imagine that you have to order food, go do that. There are situations though where someone might actually be in the process of booking a flight, or they might still have to book their flights for the holidays, and you can bring them into what we call a lab and then actually just ask them to book the flight that they were intending to book all along. It's still not going to be on their own time when they want to do it, and it might not even be on their computer.
So, you're taking a little bit of that natural environment away but still learning so much, because if they have trouble even in this somewhat more controlled environment, they're going to have more trouble at home with lots more distractions going on. I definitely recommend inviting people to usability sessions. If you watch your users interact with the product and they struggle, that creates such empathy for your customers, and you want to fix the problem for them. If you do that in isolation, and you see and observe the problems, but the people who either assign the budget for your team or for usability testing in general don't see it, they don't see the benefit of it. Or if the designers and engineers that have to actually go back and fix the problems don't see what comes out of it, they might not believe you. They feel like it's just you saying what you took out of it, and you can really create change by having them come see real people trying to use your product. Especially if there are moments where they struggle or have confusion, it's one way to really change people's mindset and have them see how important usability is. One thing I want to clarify about usability testing is that it can be done both qualitatively and quantitatively. If you don't know anything about research, I'll explain what those mean. It comes down to whether or not you need numbers and hard facts to prove a point. So, if you're trying to say our interface has 15 usability issues, or for this one problem we expect 80 percent of our users to have it, in order to draw those conclusions you have to run a study with a really large sample size, so that you can be pretty sure that what you see in your study is representative of the whole world of your customers. Especially if you think of benchmarking yourself against competitors. So, are we better than our competitors? How much better?
Are we doing worse? Or benchmarking yourself against yourself at a previous point in time. So, you can say we're going to run a quantitative usability study in April and then again in December after making all these big changes, and we want to know what improvement we saw. So, if we want to be able to say it takes people 30 seconds less to complete this task, and 30 percent more of our users are able to complete this task successfully, those kinds of statements require a much larger sample size. I'm going to just throw a number out there: at least 20 or more participants to be able to draw conclusions like that. The type of usability testing that I'm much more interested in is qualitative usability testing, where you actually run the same study on a much smaller subset of users, typically anywhere between three and 10 depending on your situation and the interface that you're testing. You can't then draw conclusions: if you do a study with eight participants and five out of eight of them have a problem, you can't draw any conclusion about what all your users might or might not do. What you do know is that that's probably a big problem, because five out of eight people had it. What I hinted at before is getting the insights from the study. So, not only do you know that there's a problem, but because people are thinking out loud, you're learning what they had expected to happen or what they think something might mean. Sometimes users even give recommendations. I usually ignore their recommendations if it's something like, I don't like green, I like blue better, but once in a while users will actually say something like, oh, I really wish it had this feature, I really wish I could do this thing. It's amazing what that insight will spark in a discussion with a design team.
So, if we think about the goal of usability testing really being to drive change in a product and to improve the interface, there are more frequent points in time where it can be helpful to do a qualitative evaluation of your system, so that you can figure out what the big issues are, the low-hanging fruit, fix those, and then keep going, rather than doing large studies less frequently. You can run your own usability study. So, I'm going to say this to anyone, all of you listening, all of you taking this course: you can do this. You have the resources and the manpower to conduct a study, because at its simplest form, literally grabbing someone off the street, or someone who works down the hall, or someone at Starbucks, and asking them to look at your design and even just tell you, what can you do here, or what does this company do, or giving them a question like, show me how you would do X, Y, Z, can be really insightful and really helpful. This is typically called hallway testing, where you grab anyone and ask them a few quick questions. It's definitely limited in scope in what you can ask, and you also have to keep in mind that you don't know that much about who the participants in the study are. But it can still give you some really great insights into what works and what doesn't in your product. One step up from hallway testing is setting up some kind of lab. So, this is typically called lab-based usability testing, where you could rent an office somewhere, or you could use your office, a quiet office somewhere. You could even rent a true official usability lab, which is really the same kind of lab that market research companies use for focus groups and things like that. But it's basically any setup like what I have here, where there is a desk and you can sit with a participant at a desk. It's a little more structured than hallway testing.
So, you plan your tasks ahead of time, and you have to recruit participants to come to you, to the lab, and that gives you a little bit more control over who your participants are as well. Then you ask them to conduct certain activities, they conduct those activities, you watch to see what happens, where things work well, where they struggle, and you prompt them to speak their thoughts out loud. If there's something where you think there's a little bit more insight they can provide into why that was really great or why that wasn't so great, then you can ask them to follow up on that experience. Both of those examples, so both hallway testing and lab usability testing, have the participant and what is usually referred to as a moderator, sometimes a facilitator, in the same place at the same time. So, you're physically located with the participant. There's also a huge movement toward doing usability testing remotely. With remote moderated testing, you don't have to recruit your participants to come to you. So, scheduling is a lot more flexible. You can have them take the test at work, at home, anywhere they happen to be. The only time they need is the time required for the study. Typically, for lab usability testing and for remote moderated usability testing, a session takes 60 to 90 minutes. With remote, I've also cut it short in the past, because when you're not physically with a participant, it's hard to build a rapport and they might be distracted by stuff going on around them. I've done some really successful 30-minute sessions. Hallway testing can be anywhere from two minutes to maybe 10 minutes. The last type of usability testing that I want to talk about is remote unmoderated. So, you're not actually in the same location as the participant. You're also not communicating directly with them.
So, you set up a set of activities that you want a participant to conduct, and then they do that on their own time and record their screen, either on their mobile device or their computer, as they're attempting those activities that you set out for them to do. They speak their thoughts out loud as they do them. Then what you get back as the researcher, in this case, is video recordings of them doing those activities and tasks. A huge benefit there is that you can actually distribute this to a wide range of participants in a larger group and have them all conduct the studies on their own time, whenever they happen to be ready. So, all you have to do is wait for the videos to come back and then analyze those videos on your own time. So, it's a much easier scheduling situation. The biggest downside is that you don't have the ability to prompt participants to speak their thoughts out loud more if there's something that you thought was interesting and they didn't elaborate on it. You also don't get to redirect participants if they might have misunderstood one of your activities or they stopped before you really wanted them to stop. It's really interesting to see where they would stop naturally, but sometimes even in a usability study, you want to encourage them to keep going just to see what might happen if they made it that far. So, it's weighing the pros and cons a little bit, and your budget, and also where your participants are. So, if you have participants in the UK, or that's where your users are so you want participants from the UK, but you're based in the US, it can be really hard to set up a lab study, because you either have to make your users travel or you have to travel. This is where remote testing is really, really beneficial and has also gained a lot of popularity. 6. Usability Testing Best Practices: To ensure that you run a good usability study, there are a couple of ways to ensure success.
One of the things that I encourage you to do is not to make the activities that you ask the participants to conduct too tricky. You don't want to intentionally trip them up or make it hard for them to do something. I only ever give activities that I know are possible to do in an interface, because I don't want to make users feel like they're being tested. This is really truly a test of the interface itself, and so by far the best way to think about it is: what are the top things that all our users must be able to accomplish, and can they accomplish those? Sometimes that means breaking things up between different site sections or running a really long study, but it's just thinking, what is the user's goal? So, users must be able to purchase products from our website, or they must be able to look up this kind of information. Another way to make sure that your usability study is going to be successful is to make sure that you're testing with your actual users or your target audience. So, I ran a study years ago in my last job where we were testing an auto insurance website, and we had recruited people who made the decision about auto insurance in their household, but somehow this woman had slipped through the cracks of our recruiting agency. She sat down, and we started talking, and I gave her the first activity, and she immediately looked confused and was like, you know, this is something my husband takes care of, I'm not the one that makes this decision in my household. So, we stopped the study short there and said, well, thanks for coming in, it's really helpful. I actually did end up giving her one or two more tasks and asked her a couple of questions, because I was curious what someone who has no idea about auto insurance would look at. But as for her insights and findings: she's not a user of the product, so she doesn't make a good participant in the study.
One of the additionally hard things in usability testing, and in watching usability testing, is that people like to give their opinions. Especially as you encourage them to think out loud and share their thoughts as they're working, you'll notice that participants, some more than others, will have really strong opinions about the design. So, they'll say things like, I really hate the color red, I hate all websites that use red, or I really hate this image. What you really want to get at is what their expectation was and what they had hoped to see, and why red is interfering with that or impacting that. Or why that image creates a negative feeling for them: not so much their opinion, but what behavior that drives. So, what did that prevent them from doing? Did that distract them? Is there another reason for that comment that they made? So, you're looking way more for the kind of feedback of, I wasn't expecting that to happen, I wasn't expecting to have to sign in now, I wasn't expecting that pop-up to appear, or I was trying to click on that and I don't know if my click was registered, because this image appeared on top of it. I do tend to take notes of their opinions, because it does have an impact: if for some reason people have a really strong negative reaction to the look and feel of a website, that does cause usability problems. But I'm much more interested in the things that impede their behavior rather than them just not liking something. Moderating usability sessions can be really hard. It's really hard to take a step back and let the participant go and do, and have them try to figure it out. You definitely have a tendency to want to step in and help them, especially if they're really nice people and they're struggling to use a system. You want to be like, but just click that button, it's right there, just click, it's okay, that's the one you want.
What I tell myself is to count to ten before interrupting, before saying anything. So, I ask participants to think out loud at the beginning, and then when they don't, when they're not sharing their thoughts, I still tend to try to wait, because more often than not they'll vocalize what's happening. There are definitely situations where you have to prompt someone and remind them. So, even just saying something simple like, remember to tell me what you're thinking as you're using this. I like to give that reminder at the beginning of a new activity; it sometimes helps to say that during an activity as well, or even just, any thoughts, as they're working. Any thoughts, if they're not sharing what they're thinking, that one works really, really well. Again, when you're observing sessions, moderating usability sessions, watching recordings, I can't stress this enough, but it's really important to note what's working well. The point of usability testing isn't just to point out all the things that are wrong, because you're giving advice to the designers, or maybe you're the designer yourself, as to what should be changed to improve the experience. If we spend the resources changing things that aren't broken, we're wasting our time. So, we should be spending that time fixing the stuff that needs to be fixed, and that's why it's just as important to focus on what's good and what's working well, so that doesn't get changed. This is also really helpful in thinking of future designs, because if you've done X number of usability studies and the way you design sign-up forms just works really, really well, then that becomes your design best practice and your pattern library to use in future designs. The reason for doing an evaluation of a design is to improve it. So, to talk to the design team, whoever that may be, and the engineers, and say this is what needs to be fixed. So, when you're doing usability testing specifically, you want to focus on those high-impact changes.
What design elements caused people to trip up, and then what did you learn from watching them use the system that can be an idea for an improved change? 7. Reporting on Findings: Once you've run your usability study, and maybe even your heuristic evaluation, you have all this data, so what do you do with it? What's the best way to present that information? Something to keep in mind that I want to make sure to say is that there's really no one good way to present your findings. There is also not a best practice for this in the field of usability. Also think about just keeping track of findings over time: you run a usability study and you might learn 50 things but only want to share five, because those are the impactful ones. What do you do with those 50 findings? So, there are lots of Excel spreadsheets, lots of PowerPoint, lots of Word documents, and none of those are the best. So, my caveat is that we don't have a great way to do this, and I'm going to share the things that I think work really well and the best ways to do them, but this is a place where we can innovate. So, especially if any of you taking this class have ideas on a better way to present findings, I'd love to hear them, but we also as an industry need to change how we report on our usability findings. If you run a quantitative usability study, so a larger sample size, and you're doing that to draw conclusions about all your users, not just the participants in the study, it's usually more around numbers. Graphs can be great, and graphs are really nice to convince other people of things, especially if you're X points better than your competitors. There's nothing that shows that better than a graph of how much faster people can accomplish a task on your site, how much happier they were about it, how much more successful they were, or even, between tasks, which ones are easy to complete, which ones aren't, and where the most errors occurred.
So, in a quantitative study, you have the benefit of being able to show graphs and numbers and show them as percentages. So, quantitative data is actually not what I want to spend a lot of time talking about. The qualitative data is much harder to present in a way that's easily digestible, because you end up with lots of quotes, transcripts from sessions, and videos. So, there's not one way to do that, and actually presenting qualitative data in graphs can have a really bad effect, because you can't extrapolate from three out of six participants and say 50 percent of users; it was only three out of six participants. Even if you show a graph where it's only three out of six participants, it might drive a senior executive at a company to make some huge sweeping change that isn't really what you had intended or hoped for. So, it can be a little misleading to use graphs for qualitative data. One of the benefits I see to qualitative data is that you get to show video clips. So, this is also an encouragement to record your sessions, whether they were remote or in person, because there's nothing better than being able to show someone what happened rather than having to describe it. So, you might be in a meeting where you're trying to convince someone of the benefit of having done a round of usability testing, or maybe it was several rounds of usability testing that you did and you made changes in between, and you want to show how that experience was improved for your users. In a boardroom where people see graph after graph and PowerPoint presentation after PowerPoint presentation with lots of text, a video can be such an amazing change for them, and they also just immediately get the empathy that you want from them, where they're willing to drive change, because it doesn't matter that only three out of six people had a problem if they see people struggle and it's such a frustrating experience.
Even if it was just those three people, we should fix the problem for those three people. So, it doesn't matter that it was only three out of six and that we don't know how many of all our users will have that problem; it's a big enough issue. So, I really like using video clips. At UserTesting, we do a lot to include video clips in our reports to give people a little bit of that sense of what the experience is like for the people who are using our product. So, an example of using a video clip rather than text or a description is an evaluation I did of the Stitch Fix website. So, Stitch Fix sends you clothing; they are an online personal stylist. So, you tell them what your style is and what you're looking for, your size, and then they ship you a box of clothes either every month, every other month, whatever frequency you want. So, in the usability study that we did on their website, we found that people had trouble using the sizing chart, because they were looking for numeric clothing sizes, and it showed small, medium, large letter clothing sizes. I can describe that like I just did. I could show you just a screenshot of the sizing chart. Much more impactful is to just show you what happened, and even if I just show you one example, it's representative of users interacting with the sizing chart; actually, in this case, all the participants had problems at some point in time trying to figure out what size to select. So, what I'm actually going to do is show you the clip that I would show in a meeting where I want to drive the change and encourage people to improve this sizing chart. I'm not really sure what number size, it's so subjective. So, I'm going to leave it as, go to size chart, it's right there. Thank you very much. So, here is the weight, here is the height, so it's small. I don't see a number size. Currently, we're okay. Yes, but I don't see the 0-14 indicator around here that's telling me what a small would be.
My favorite example of sharing videos is someone who worked at a company where they did testing every week; actually, every Thursday they would do some kind of usability testing. So, on Friday, he would send out clips from the sessions, no more than three clips of things he wanted to make sure everyone at the company would see, and he would send this to the whole company. Probably more realistic is that you would send an email like that out to a much smaller team, or actually include the videos in a PowerPoint presentation. So, as much as I complained about PowerPoint at the beginning of this lesson, it is still a really powerful visual tool to present anything to a team, and so I also really like taking advantage of screenshots and taking notes on screenshots. So, that's a little easier. The problem with videos is that someone has to sit still and watch what happens if they want to quickly remember what something was about. If the video is attached to a screenshot with maybe a small summary of the finding or what we learned from the participants, that's easily tied together in PowerPoint, which is how I usually present these results. A totally different approach to making sense of the data and results that come from usability testing is to do something that's called affinity diagramming. I've included a link in the course resources to really delve into what affinity diagramming is, but it takes advantage of having multiple people watch the same sessions that you're watching. So, if you have recorded videos, you might all sit down and watch the videos together. If you're doing in-person, live usability testing, you invite observers to come. What you do is hand everyone a stack of sticky notes, and they write down each observation on a separate sticky note, and then at the end of all the sessions you go around the room and place all the sticky notes on the walls, and you group the related observations together.
So, in the first round you just silently group observations, and then in following rounds you can refine the groups while also talking. What's really great is that in the final round of affinity diagramming, you can give everyone stickers and it's basically a vote. So, I like to give people three votes, and they get to go around the room, and you might have ended up with eight observations, things that you learned from the sessions, and everyone gets to vote only on the top three that they think are the most important. What happens is that you actually end up with a top three, maybe a top five, that get the most votes. There's usually actually one big winner. So, not only do you have your top five, you have it organized and ranked, and everyone in the room is bought in, because they all had something to say about what the insights were from the study and then which ones are the most important to fix. To really decide what should and shouldn't be changed, something to keep in mind is how much design and engineering resource is required to make a change. So, something might require a complete rebuild, and that's going to be a lot of engineering resources. You really need someone else at the company to help you decide, because the one thing that might make the biggest impact for users might be really, really expensive to the company and just not worth it at the time. So, someone has to do that cost-benefit analysis. It could be a senior manager on the product team or engineering team, or even higher up, depending on your company. Something that affinity diagramming provides, if you can get the right people in the room, is that that happens together, and everyone decides together that these are the things that we must change, especially with our goal in mind to drive change and get the design improved. You want to think about who your audience is.
So, let's say you're actually the researcher and not the designer, or you're maybe the product manager doing some research, and you're presenting what you learned to designers. I personally really like showing annotated screenshots. So, I'll take a screenshot of the different pages that the user went to, the different screens they saw in an app, and I'll circle the things that either caused trouble or the things that worked really well and call out the specific design elements. For engineers, besides showing the screenshots, I really like using clips, because it can be especially hard for engineers to think about how users might be using something differently than they had intended. So, they might have had an idea about how something would work or what the interaction would be if someone clicked on something, and it's not what the participants in this study experienced. So, just showing them that this is the experience users had will give them enough empathy to want to make the change. So, for managers or executives, I will do screenshots and videos. I like to have an executive summary that's really short and to the point. So, maybe it's the top three things that worked really well; I like starting with things that worked well that you shouldn't change, and then the top three things that will make the biggest impact if changed. In presenting your usability findings, the goal is to create change. You have to indicate and show where the changes have to happen. So, you have to indicate where there were usability problems or things that impacted the experience. That's not you being mean, that's you trying to be helpful. So, you have to take a step away from feeling like this is just sharing all the negatives and all the things that are wrong. So, start with the things that are really good: what are the key things that are working great that shouldn't be changed, and then where are the areas for improvement. I also like to call it that.
So, the most successful presentations I've seen have that structure. 8. Demo: Usability Study: So, thanks, Emmy, for helping me out today with the study. My name is Marieke, and I'm a usability consultant. What that means is that I look at how easy or difficult it is for people to use websites, and then I help companies make their websites, but also their mobile apps and their software, easier to use. So, what we're going to be doing today is I'm going to give you an activity to do on a website, and I want you to do it as if you were at home doing this activity. So, do what you would normally do. The one thing I am going to ask you to do that's a little bit different from how you might use this at home is to think out loud as you're using the website. So, when you click on something, tell me why you clicked on that. If something didn't work the way you expected it to, explain that to me. If you really like something and something is really helpful to you, let me know. But also, if something doesn't work, something's not working the way you want, let me know. Okay. Sounds good. Do you have any questions for me before we get started? Okay. Wonderful. So, I'm just going to be sitting here taking notes. You probably won't hear anything from me. Okay. I might ask you a question here or there. That's all part of the process. Sounds good. Great. So, if you could do me a favor and read this out loud, and then do what it says. Okay. Using the website allrecipes.com, find a recipe you'd be interested in making for dinner next week. Make sure it takes less than 30 minutes to prepare. Should I just get started? Just go ahead. Alright. So, I'm on Google Search, and I'm just going to go to "allrecipes," which is actually right there. So, a pop-up came up, and it says, ''Calling all cooks. Create your profile.'' I actually just want to look for a recipe. So, I'm going to close that out.
I am actually thinking about Brussels sprouts, because I had a really good meal at a restaurant that had Brussels sprouts with apples, so I'd like to make that myself. I see a few recipes pop up. Roasted apples and Brussels sprouts. So, actually, what I'd like to do as I bring these up is somehow be able to save a few of these recipes that come up, so I can look at all the ones I like later. So, let's see. There's a heart on each one of these recipes. Yeah. I scroll above it, and it says, "Save this recipe." So, I'll do that. Oh! It takes me to a page where I have to join. I actually don't want to do that right now. I just want to look at recipes. So, I just have a couple of follow-up questions. Your feedback was tremendously helpful in understanding what you liked or didn't like about the website. Was there anything about any of the pages that you went to that made it particularly easy to use the website? Yeah. What actually made it really easy is just the photos. So, the photos of the different recipes gave me an idea of what type of recipe it was, so I didn't have to actually look into each one. I just want something simple with apples in it, and I didn't want anything more complicated than that. So, it made it really, really easy for me to pick the ones that I actually was interested in looking at. That's really helpful. How about anything that might have made it more difficult than necessary, anything that got in your way of using the website? I think the save feature. Yeah. I mean, it was just the one thing I wanted to do, but I didn't want to log in at that point. When I was on the site, I just wanted to really look for the recipe I was looking for. Yeah. That's really helpful feedback. Before we wrap up, any other thoughts, anything else you want to say that you really liked, that you didn't like so much?
What I really liked, though, is that at the top of each recipe that we looked at, it said how many minutes the recipe took, and so it was really helpful to be able to see that, because that was part of the parameters too, and it gave me an idea of how much time I needed to prepare the item. Great. Thank you so much. Thank you. 9. Demo: Heuristic Evaluation: So, I'd like to show you an example of what it might look like to do a heuristic evaluation. This is just one of the ways you can do it. I actually chose the Airbnb app because it's one of my favorite apps. So, it's not an app I've had a lot of trouble with. It's not one that has lots of issues, and yet still there are things that could possibly cause trouble for users. Which is really one of the reasons for doing a heuristic evaluation: just to go through, based on some principles in the field, and say where users might get tripped up because the design doesn't comply with these standards or these principles that we've decided are important. I chose to pull up Jakob Nielsen's list of usability heuristics, and of the 10 that he has, I selected five that I'm going to walk through, and evaluate the Airbnb design against those five. The five that I selected are visibility of system status, match between the system and the real world, user control and freedom, recognition rather than recall, and aesthetic and minimalist design. So, one of the things I like to do before really delving into the heuristic evaluation is just to look around and click on a couple of different things, see what I can do, see what the options are. Already here I'm noticing that it's giving me feedback that something's happening, it's loading, it's working, and so right off the bat I see visibility of system status. So, these are listings in Los Angeles.
I can look through different ones, and notice the little heart icon, which I'm thinking I know what it means, maybe because I'm familiar with the app; I'm not sure everyone would know that it saves the listing to your favorites. I'm also going to go back to the beginning. Seeing the heart just now reminded me of the icons that I had seen on the first screen, and this brings up match between the system and the real world. This comes down to making sure that whatever you have in the design is clear and understandable. I talked about information scent in one of the lessons in this class, which is very much related to this, and icons are actually one of the things that tend to not have a great match to the real world. But as you look at them: the envelope, we have a pretty strong association to mail and an inbox, so that one's pretty good. Of the other ones here, a heart is maybe something you might love. What these other icons do, or where they bring you, might not be immediately obvious. So, this is one of the ones where I might make a note to say it could potentially cause problems, because it doesn't exactly match between the system and the real world, what we might expect to see. So, let's actually look at Palm Springs. Seems like a nice place to go. So again, we see the loading icon. Depending on when you use this, you might not see it at all. So, if you have a strong Wi-Fi connection, you might never see it, but they've designed it in there, which is really good; it's giving me feedback, which as a user I know I really need. It's like, I tap something, I want to make sure it's still working, it hasn't totally crashed, and I don't have to restart the app. So, those are our first two. User control and freedom is another heuristic that I find really helpful to think about, and it's making sure that, as a designer, I'm not forcing users to go down one path; I'm giving them lots of different ways to use the app.
So, I can select a city like I just did, I can search for a city; there are lots of different ways to do that. So, going back into Palm Springs, I can select a date that I might want a listing on, how many guests I want, and all of these are things that are giving me, as a user, control and freedom to search in the way that I want to search. So, I'm going to save those filters. Again, I see that it's doing something, it's hopefully listening to what I want it to do, and so I feel like I'm in control of searching. Airbnb could have decided that they just wanted to show me a bunch of listings, or only let me search by a city, and then I'd have to go through them, but it actually lets me narrow down the options, which is really helpful. As I'm going down here, this is actually another really good thing, and I realized, as I'm using the iPhone app (I've also used the Android app), the iPhone app actually does something really interesting that the Android app doesn't do. On the Android app, I don't see a reminder. It doesn't remind me what filters I selected, whereas I'm seeing that here on the iPhone app. It's saying at the top: December 3rd to December 12th, four guests. If I add a shared room, I'm curious now if it's going to show me that too. It still just gives me some of that information. This actually relates to recognition rather than recall, which has to do with not making users rely too much on their short-term memory. If users are making a bunch of comparisons or doing multiple tasks on different screens, it's really hard to remember what they've chosen and selected on one screen when they're on another screen, or if they're comparing multiple listings to each other. So, in this case, I made a bunch of filter selections, and the iPhone app is actually doing a really good job reminding me. So, it's letting me use recognition rather than recall for the filters I selected, at least up to a point.
So, it's telling me the dates and guests, but it's not telling me that I chose a shared room versus an entire home, or what price range. So, Airbnb chose those two. My guess is that it would probably be worth doing some further analysis and evaluation to make sure that date and number of guests are the two most important ones that people would want to know they filtered by. I could imagine price being another really important one. Another thing when it comes to recognition and recall: if I want to look at this rental, I can look at it, review it, I can go back. There's not actually a super easy way to compare multiple listings to each other, which requires users to rely very heavily on recall, because there's no way to easily recognize the differences between two listings. What we tend to see in usability testing when that happens, if you were to just watch people in their natural environment, is that they tend to have a notepad next to them when they're doing this kind of research, where they take notes on the different listings. We see this a lot in reviewing hotel sites, where people want to take notes on things like price, and does it have a pool, and does it have Internet and breakfast. So, that's one where it might not be too big a problem, but I could see it being potentially an issue, and something to consider is building in some feature where people can make easy comparisons between listings. Then the last usability heuristic that I had pulled out, the one that I wanted to focus on, was aesthetic and minimalist design. This really talks about not having a lot of unnecessary information on a screen. So, if you click on a screen, all the space is used wisely, there's also a lot of whitespace so that people don't get overwhelmed by the information, and especially there's a clear visual hierarchy. It's actually one of the things I think Airbnb does tremendously well.
On the home screen you see the large images, which are really nice, so it creates a good emotional reaction to using their app and wanting to go different places. Also in here, seeing images, focusing very much on the atmosphere at the rental rather than a whole bunch of information, but still providing you the information that you probably want to know, like what's the summary of the rental, where is it, how many reviews. Then even when you click in, you can tell that they've thought about using the space wisely. What I'm having a hard time doing in this heuristic evaluation is deciding whether or not this is the most important information at this time. So, this is where, for evaluating the listing detail page, as I'm going to call it, I would want to do some more research into users, maybe interviews or surveys, and talk to people a little bit more about what they need when they're booking a hotel or rental, but then also a usability study to see if they can actually find a listing that works for them based on the information that's provided here. So, what I've done here is actually write down the list of heuristics that I have, the ones that I wanted to focus on for this review, and then made some notes as to what is and isn't working. Another way that I might present this is to take some screenshots of the app and then actually take notes on the screenshots. So I could have even not had the live app and just had different screens, and marked them up as I was going. The easiest way to do a heuristic evaluation is, once you've decided on the list of heuristics that you want to use in the evaluation, to pick one and use the system thinking about just that one: what is and isn't working, or where the design might cause trouble because it's not following this guiding principle.
You might, as you're doing that, notice that there are things related to other heuristics, and it's totally okay to take notes on that, but it can get really overwhelming to just start using the app and thinking about all the heuristics out there. So, going one by one is a really nice way to break this down. I think if it's overwhelming to look at the whole app, what you could do is just pick out a couple of screens or focus on maybe one or two tasks. So, I could have said that for Airbnb, finding a rental that meets my needs is probably the most important. Another really important one is actually being able to go through the booking process, but just reviewing different listings matters too. So, I could decide to narrow my focus to just comparing and looking at listings: does the design match up and meet these principles? Rather than thinking about everything. I didn't, in this demo evaluation, look at everything and all possible features. That's one way to break it down. Doing heuristic evaluations definitely gets easier over time, and there are actually quite a few usability experts in the field who don't think that just anyone should do them. What especially helps me in doing them is that I've done a lot of usability testing. I've done a lot of research just observing people using technology, both on the phone and on laptops or desktops, and then doing a lot of usability testing. So, I also know what typical behavior is when people are shown certain design elements or certain implementations of design elements. It can be hard to delve into this, but even without any experience of how people use technology, you can follow these principles and say, "I think this is doing a pretty good job." Even if you just think it's doing a good job, there might be spots where you're like, "There's something here that's not quite right that I think we should look at."
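The note-taking process described above, picking a short list of heuristics, walking the product one heuristic at a time, and logging what is and isn't working per screen, can be sketched as a simple log structure. This is a minimal illustration in Python: the five heuristic names come from the demo, but the `Finding` record, the severity labels, and the example notes are hypothetical conventions, not anything prescribed in the class.

```python
# Minimal sketch of a heuristic-evaluation log, reviewed one heuristic
# at a time as the demo suggests. The Finding fields and severity labels
# are hypothetical conventions, not part of the course material.
from dataclasses import dataclass

HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Recognition rather than recall",
    "Aesthetic and minimalist design",
]

@dataclass
class Finding:
    heuristic: str  # which principle the note relates to
    screen: str     # where in the app it was observed
    note: str       # what is or isn't working
    severity: str   # e.g. "positive", "minor", "major"

def findings_for(findings, heuristic):
    """Filter notes so you can review them one heuristic at a time."""
    return [f for f in findings if f.heuristic == heuristic]

# Example notes taken from the Airbnb walkthrough in the demo.
notes = [
    Finding(HEURISTICS[0], "search results",
            "loading indicator gives feedback while results load", "positive"),
    Finding(HEURISTICS[1], "home screen",
            "unfamiliar icons may not match real-world expectations", "minor"),
    Finding(HEURISTICS[3], "listing detail",
            "no easy way to compare multiple listings side by side", "major"),
]

for h in HEURISTICS:
    for f in findings_for(notes, h):
        print(f"[{f.severity}] {h} / {f.screen}: {f.note}")
```

Grouping the output by heuristic mirrors the advice above: it keeps each pass focused on one principle instead of trying to hold all of them in mind at once.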
Even if all this does is bring up a discussion with your team, that would be incredible and a really positive outcome of this. If you're interested in truly getting feedback from users and not walking through the system yourself, either because you're the one that designed it and it's really hard for you to take a step back, or because you don't feel confident in the type of feedback that you think you're providing through this heuristic evaluation, I encourage you to run a usability study. I have actually included resources with this class on how to run your own usability study, either in person and moderated, with a participant that you've recruited (a friend, a family member, or someone who actually uses your product), or using the UserTesting tool. I've included a lot of resources on how to do that, and some ideas for tasks to use, which can be a great way to learn more about how your product is doing in terms of usability. 10. Final Thoughts: If there is nothing else that you take away from this lesson, I want you to know that you are not the user, and so the way you might use something is likely not how the end user is going to interact with it. A lot of this has to do with mental models, and you've spent a lot of time thinking about the design. Have empathy with your end users and spend some time thinking about where they're coming from, and then spend some time watching them use your product. It's really, really insightful to see what happens when they're actually interacting with it. One of the questions I get a lot that I haven't touched on yet is when to do usability testing. So, when is this important? Every product has usability. It's either usable or not, and some components might have really good usability, others might have poor usability. When you think about evaluating the usability of a product, the question I get is when to do this. When should we do this?
Since the goal is to drive change, the earlier the better. It's actually really common for companies, when they start thinking about this, to do usability testing after everything is done and built and designed. But then it's really hard to actually go back and make changes. So, the best time to do usability testing is as early as possible and as often as possible. So, even if you have a sketch, an idea of something, you can actually do usability testing on that. You can have people walk through the system even if it's a bunch of drawn screens, and see where they would click and what they think would happen if they clicked something, even if they can't. Then, as you increase the fidelity of those designs, so it can be a clickable prototype, you can include some visual design, you can keep doing usability testing. It's actually another reason to break up your budget. Let's say you have budget to test with 30 participants in a quantitative study; you could break that up and do three studies of 10 users, or six studies of five users, and actually get a lot more data as you're designing, and learn as you go. Because you also get to evaluate your second, alternative design. So, let's say your navigation didn't work and you want to change the content; you can actually go make those changes and then test again, and make changes and test again. We've talked about a lot of things in this course. I started by giving a definition of usability, of how usability is tied to user experience. It's really one component of the overall experience, but that depends a little bit on your definition. That's okay if your definition of usability is much broader, very similar to what user experience is. Then, we talked about the underlying psychology of all of this: why people run into usability issues when interacting with products. We talked about avoiding usability problems in your design.
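The budget arithmetic above, splitting one 30-participant study into three rounds of 10 or six rounds of 5 so you can test, redesign, and test again, can be sketched as a tiny helper. This is just an illustration; the function name `study_plan` is hypothetical and not from the course.

```python
# Splitting a fixed participant budget into iterative testing rounds,
# as described above: 30 participants can fund one big quantitative
# study, three rounds of 10 users, or six rounds of 5 users.
def study_plan(total_participants, rounds):
    """Evenly divide a participant budget across testing rounds,
    spreading any remainder over the earliest rounds."""
    per_round, leftover = divmod(total_participants, rounds)
    return [per_round + (1 if i < leftover else 0) for i in range(rounds)]

print(study_plan(30, 1))  # [30] -- one quantitative study
print(study_plan(30, 3))  # [10, 10, 10] -- test, redesign, test again
print(study_plan(30, 6))  # [5, 5, 5, 5, 5, 5] -- learn as you go
```

The trade-off the instructor describes is that smaller, more frequent rounds let each round's findings feed the next design iteration, including testing the alternative design after a change.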
So, starting off with a really good foundation by following best practices, usability heuristics, and design pattern libraries, and then we concluded with how to do a usability study, which is by far the best way to understand whether or not there are usability problems in your product. Then, how to present the findings from that to actually get designers and engineers to make the changes that you want to see. If you haven't done the project yet, I really encourage you to start that. It's a two-part project: one part where you just do an evaluation of a design, and one where you actually run a usability study. If you're designing your own product, then this is a great opportunity to understand whether or not there are usability issues, but you can run a usability study on anything. Pick any app or website that you use a lot, and that you either really like or really dislike, and share your findings, share what you learned from doing this project. I definitely encourage you to take advantage of the opportunity to ask each other questions. So, upload your project as you're going and ask for feedback from other students, or if you get stuck somewhere, ask for feedback, even if it's just on coming up with which heuristics to use, which set of usability guiding principles you're going to go by, or which tasks you want to use for the usability study. These are great questions to ask each other and get feedback on. I also encourage you to check out some of the other classes that we at UserTesting have put up here on Skillshare, for a deeper dive into some of the topics that I talked about today around usability. I've included several resources with this class: some blogs that are really fun to follow, just to see what's happening in the user experience field, who the big players are, what's being highly contested, what changes are going on, and also just to stay up to date on what's happening. Then also some books for beginners, but also for more advanced user experience professionals.
So, definitely check out the resources that I've included. Thank you so much for taking the time to learn about usability with me, and to learn how to evaluate your own designs for usability. If nothing else, what we want is to create great experiences for people, and one of the ways to do that is to prevent usability issues by testing early and often. If you want to delve further into the topic of usability and user experience specifically, I highly encourage you to look at the other UserTesting courses that we have up on Skillshare. If it's around the user experience and the customer experience, around how people interact with your company at different touchpoints, our course on User Journeys and Omnichannel Experience is great for unpacking how people move and transition between different channels. If you're really interested in the research side of it and the different ways to get user feedback, I highly recommend Janelle's course on different feedback methods.