
Intro to UX: Conducting Smart User Research

Janelle Estes, Director of Research Strategy, UserTesting

8 Lessons (48m)
  • 1. Introduction (1:38)
  • 2. What is user experience research? (9:33)
  • 3. What are your research goals? (3:53)
  • 4. Ethnography (9:17)
  • 5. Focus Groups, Surveys, and Interviews (9:01)
  • 6. Usability Testing (7:45)
  • 7. Web Analytics (3:45)
  • 8. Conclusion (2:41)

About This Class

Join UserTesting's Janelle Estes for a straightforward look at how to gain smart, useful feedback about how users experience your product, service, or business.

What: Understanding your users' experience (behaviors, attitudes, and perceptions) helps you gain user empathy, see new opportunities for change, and prioritize product improvements. But how do you gain that understanding? This 45-minute class walks you through 4 popular research methods so that you can truly understand your product's user experience.

Key lessons include:

  • defining user experience research
  • 3 core research goals
  • 4 methods for capturing and sharing data
  • FAQs like how to capture feedback at different stages of product development

Who: This class is ideal for everyone involved in crafting great customer experiences: product managers, marketing managers, community managers, designers, and entrepreneurs.

Why: You'll gain a richer understanding of user feedback methods so that you can make smarter, more strategic decisions about the future of your business.

_________________

Explore more Skillshare classes by the experts and strategists at UserTesting, a user experience research platform for fast, real-time feedback.

Transcripts

1. Introduction: Hi, I'm Janelle Estes, and I'm the Director of Research Strategy at UserTesting. This is a class on user experience research on Skillshare. So, UserTesting is a technology platform where companies can run usability tests or any type of UX research. We're going to talk about different user research methodologies and the pros and cons of those. We're also going to talk about when to use which methodology, and I can help you decide what the most appropriate methodology is for you to use based on what your business goal or research question might be. I'd love for you to use your own product for the exercises throughout the course. This could be a website, it could be an application, it could be a physical product. Basically, what I'd love for you to do is to define what that product is, as your assignment, as well as where it is in the product development phase. So, is it live? Are people using it now? Are you in the prototyping phase? Or is this just a concept or something you're thinking about? That will help frame the exercises that we will do throughout the course, where we're going to talk about different research methods, and then I'm going to ask you to think about the research questions that you might have for your own product or design that would relate to that research method. For those of you who might not have a website or design to use for the exercises, I'll be providing a hypothetical example that you can download in the class resources section.

2. What is user experience research?: To start the course, I'd really like to break down user experience research: talk about what it is, why it's important, and the value that it provides. When you think of user experience, it really boils down to how people feel when they actually use something, and that something could be a product, it could be a website, it could be an app, it could even be an in-store experience.
Every touch point that someone has with a company or with a product adds to the user experience that that organization is providing. The interesting thing about user experience and our own personal experiences is that we often remember really great experiences and really bad experiences. So, that is sort of the spectrum of user experience. You can have a really amazing, delightful experience, or you can have a really terrible experience. However, we have to keep in mind that there's still something in the middle, where you can have an average experience or a sense of complacency, and that's okay; it's certainly better than providing a terrible experience. There are two types of research that you can conduct: market research and user research. Typically, we recommend doing more user research, but there is a time and a place for market research, and we'll cover more of that in a bit. Market research is very much focused on what people say. A traditional market research technique is something called a focus group, where we have a group of people in a room with a moderator, and they're talking about whatever it is the moderator wants feedback on. It could be the design of something, it could be the taste of something, and really people are just talking about what they like and what they think they would do, and they speculate on their own behaviors and actions. With user research, we do more observation. We are looking at how people actually behave. So, instead of asking them how they typically order something online in a focus group setting, we actually watch them do that using a website, and it could be on a mobile device, it could be a computer, whatever it is that we're interested in studying. There's a really interesting example that shows the difference between market research and user research. It's a fairly simple example that was referenced in Malcolm Gladwell's book Blink.
The study asked coffee drinkers how they typically drink their coffee, and the coffee drinkers responded by saying things like, "We like our coffee strong, so we put very little milk in it and we don't put very much sugar in it, maybe no sugar at all." Then the researchers actually watched these people make their coffee at some point, and they noticed that people were actually adding milk and adding sugar, so they liked their coffee creamier or milkier than they originally said they did when they were asked. This is a perfect example of how what we say we do is often very different from what we actually do, and it's really interesting because not a lot of people realize that this happens. Often when I give this example people say, "Well, who are the people that they included in that study? Don't they know what they're talking about? Don't they know what they do?" Oftentimes we think that we're the exception to the rule, that we don't engage in the irrational behaviors of other people, but in fact we do, and there are entire books dedicated to this; Predictably Irrational is one. As research shows, we ourselves are terrible at predicting our own behavior, so there's no possible way that we could predict the behaviors of our target audience, our target users, the people who are engaging with us and our products. That's why it's so important to get out there and look at what people are doing, understand their behaviors, understand their needs and their goals, so you can design an experience that meets those needs and goals, and maybe even delivers some element of delight. Getting feedback from users, understanding users at the very conception of a product or service, is so important.
If you don't take users into consideration, or you don't take your target audience into consideration as you develop a solution, you might actually end up solving the wrong problem or answering the wrong question. That's why it's so important to really understand at the very beginning: what is it that we're trying to solve? What is the need that we are trying to meet? Without a solid understanding of that, you may end up with something that is beautiful and works well but doesn't actually help people. It doesn't meet a need, and so it's not accepted. It's not adopted by users, and that is a huge problem for user adoption in general. User experience research can be conducted at any phase of your product development process, or at any phase of service development even. The first type of research that you can do is more about concept exploration: thinking about what is it that we're actually trying to fix? What are we trying to solve? What is the solution that we are hoping to provide? The second type of research that you can do is more evaluative. This is when you actually have something that's tangible, when you are first coming up with a design and you're iterating on that design and improving it over time. It's not live yet, but it's going through different versions and improving along the way, and there's research that you can do during that phase as well. The last type of research that you can conduct is about measuring. You can measure things like the impact and the effectiveness of the experience that you provided, and you can find out whether or not it met the need that you had identified and whether or not it's easy to use. So, it's important to keep in mind that user experience research is a process, but it's a process that often runs right alongside the process of developing a product or developing a website.
And so, it's not a separate track or effort that lives on its own; it really keeps up with the business and the internal development of whatever it is you're working on or creating. Regardless of the type of research that you're doing and where you are within your product development cycle, there are a couple of dimensions, or decisions, that you need to make. The first decision is whether you want to gather information on what people say they do versus what people actually do. We really want to rely on observing natural behavior, but sometimes there's a time and a place to conduct research where we're asking for opinions, attitudes, and preferences, and we'll talk about that a little bit later. The second dimension of user experience research is how big a sample size you want to include. Of course, with user experience research we need to test with our users, but how many users do we want to test with? Is it a small sample size of four or five people, or is it a larger sample size of thousands of people? Throughout the course we're going to be referencing these dimensions, and we'll be talking about the different research approaches that fit into each of the different quadrants, and why you might use one research approach versus another. One other thing I want to point out is the importance of including your target audience, or your target users, in any research that you conduct. Sometimes it's easier and faster to have somebody who works alongside you test out an interface, or to ask your neighbor or a good friend. You certainly can do that if they fall into your target audience, but oftentimes they don't. So, if you have a specific audience that you want to target, it's important to identify their unique characteristics. Do they have to buy things online? Do they have to spend a certain amount of money online over the course of a year? Do they have to have certain interests?
Do they have to have certain professions? These are all things that you should define up front, and then you can use those characteristics as screening requirements as you look for target users for your research. So, with all of that in mind, let's dive into the different methods we're going to cover today and how those can relate to and impact what you're working on.

3. What are your research goals?: There are three basic research goals. The first goal is understanding people's attitudes, perceptions, and opinions. This is when you might be exploring new ideas or identifying what problem it is that you're trying to solve. Another goal that you might have is to improve the experience that you already provide. This could be as you're working on a prototype and iterating over time, and you're trying to understand: are we headed in the right direction? Do people understand how to use this? Is it easy and straightforward? A third research goal is to measure the performance of something. How well did we do? How successful was the design or product that we created? It's important to consider what it is specifically that you're trying to learn. As you see the goals that are overlaid on this chart, it starts to make sense that certain goals are tied to certain dimensions on the chart. So, for example, if my goal is to understand people's attitudes and perceptions, to explore new ideas, then I probably would want to gather information on what people say, and this can be done in a quantitative way through a survey, or in a qualitative way with a focus group or an interview. Another goal that you may have is improving the experience that you already have. So, you have something tangible. It might be in early prototype form. It might be live on the website, but your goal is to make it better.
In this case, we typically recommend doing research with a small sample size, and we're looking at what people actually do. How do they use this thing? Where are the problems? What's working well? What's not? Where can we improve? The goal of this type of research is to quickly get insights, make improvements, and move on. The third type of goal is about measurement. This is typically when something is already live. You can do both behavioral and attitudinal research for this type of goal, but it's largely quantitative; you can't really make too much of a point with a qualitative sample of three or four people. Some things that you might do for this type of research goal would be to look at your website and see what people are doing, where the traffic is going, and where they're dropping off. That's behavioral data at a quantitative scale. You could also gather attitudinal data, or what people say, at a quantitative scale as well. You could do something like surveys again, or you could ask people for their preferences between a couple of different design options, again with a larger sample. As you go through your product development process and iterate and improve on your product, you can refer back to this to say, "Hey, I want to learn more about measuring how well this thing is doing. Okay, I can head into a behavioral study with a large sample size to answer the question of how successful this item I'm interested in measuring is." In the next four lessons, we're going to talk about each of these research approaches and the specific methodology that you would use, as well as how to conduct that methodology.

4. Ethnography: So, before we dive into each of the four methods that we're going to cover today, I'd like to frame the discussion with an example.
I'll be using this example as I talk through each of the methods, so you can also think about it as you work on your own assignments and apply it to your own situation. For each of the four methods that we'll walk through today, imagine that I work for a large retailer, and we have a website that people order home supplies from. What we're considering doing is setting up a way for people to order their home supplies on a subscription. So, maybe they order a few things that are delivered on a regular basis, maybe once a month or twice a month or whatever they might specify. Keep in mind that this is something that doesn't actually exist yet on our site; it's something that we're considering building. The reason that we want to conduct research at each phase of the process is to make sure that what we're building, or what we plan to build, makes sense from a business perspective, that it's meeting a need, that it's easy to use, and that when all is said and done, it has an impact on online sales. Before we design anything, we really want to understand what people need. We can't just go out and ask them what they need, because oftentimes they don't really know what they need or what they would want. This is why we would want to go to our users and see: what's the problem? What are we trying to solve? How can we help? So, in the case of a retailer who's trying to come up with a subscription service for home supplies, I, as a researcher, would have some questions. How often do people order their home supplies? Where do they store their home supplies? How do they remember to order their supplies? Do they keep a list? If they keep a list, is it on paper? Is it on their phone? How do they typically order them? Do they go online? Do they go to the store? What prompts them to order new supplies? Is it when they run out of supplies?
Or is it when they realize that they're getting a little low? How quickly do they need their supplies? Do they need them right away, or can they wait a couple of days? These are all questions that, as a researcher, you would want some context around before you develop something new, because you have to understand the context and understand the need in order to develop the best solution for users. Another question might be: who is the typical user? Is it a single person who is responsible for ordering the home supplies? Is it a group of people? Is it a family effort? Are there different target audiences that we need to think about? So, as a researcher with those sorts of questions, I'd really want to make sure that I truly understand people's behaviors. I want to go visit them in their homes and see what their home setup is like. Where are their home supplies? As we discussed previously, we really want to get at that natural behavior, because as we know, asking people what they do is not as reliable as actually observing what they do. This type of research method is typically called ethnography; however, you might hear it referred to as contextual inquiry or home visits. This method is when you, the researcher, go to the user. You could go to the user's home. You could go to their place of work. You could even follow them on, say, their commute home. The idea is that you are going to them because you need context. You need to understand their environment. You really need to understand the story of the consumer and the problem that you're trying to solve. With this type of research, because it's fairly time-consuming and can be expensive, you typically want to limit your sample size to about four or five per target audience. So, if you have a couple of target audiences and they're very distinct, you would have four or five users, potentially, in each of those user groups.
The goal of this research is to understand the context and the environment that you are solving within. This type of research is very rich and valuable, and it tells a great story, but it also requires a researcher or a moderator to be alongside the user throughout the length of the home visit or the observation period. When you're conducting this type of research, it's often difficult to keep track of all the data that you're collecting, because there are lots of things that you're paying attention to. My recommendation is to take notes in whatever way makes the most sense to you: pen and paper, typing on a laptop, or voice transcription. I've also seen people record the session with a video camera and review the footage later, taking notes at that time. One of the things that you may find surprising, especially if you do a home visit, is the amount of effort the participant wants to put into being a host. Because you're a guest in their home, they want to make sure you're comfortable, make sure that you have a glass of water if you want one, and sometimes they show you around their house. It's always good to be polite, but sometimes you may have to move the session along a little bit and get past what we call the host phenomenon. Another tip that I typically give researchers who are doing this for the first time is to encourage participants to just keep doing what they would normally do, and ask them not to do anything different just because you are there observing them.
After you conduct this type of research, you'll probably have pages and pages of notes from your four or five visits, and so the big question is: what the heck do I do with all of this information? This information lends itself very nicely to user stories. It's really a narrative of the flow or experience as you observed somebody in their home, at work, or on the go. In addition to the narrative that you might be able to create based on your home visits, you might also start to see some trends in people's behaviors. In my example, where I'm visiting people in their homes and looking into, say, where they store their home supplies, maybe there are some trends or patterns that I notice over my five home visits. Maybe most people store them in a bathroom or in a closet, or wherever it may be. You'll start to see some of these trends. While this information is rich and useful, I would never recommend doing only ethnographic research to inform your user experience. You would want to pair this method with the other methods that we will discuss in the next three sections. For your project, start to think about your own product, service, or website, and think about whether it makes sense for you to conduct some ethnographic research. If so, what are the questions that you have? What information do you need? What do you want to know? Jot those down. Also, think about where you might actually conduct the contextual inquiry or ethnographic research. Think about asking questions around the need and why there is a need. If you're struggling to come up with your list of questions, keep in mind that this method is on the qualitative side of user experience research. Qualitative research focuses much more on the why and how questions, as opposed to the quantitative questions of how many and how much. Keep in mind that ethnographic research isn't going to be a good fit for every company at any given point in time.
It's typically done at the beginning of the development phase, before something has been developed or when a new feature or function is being developed. If you're having trouble coming up with questions for the product or website that you're using for your project, this might not be a good research method for you to use at this point in time. In this lesson, we talked about how to gather data on what people do in their own environments. In the next lesson, we're going to talk about what people say, which can be done with focus groups, surveys, and interviews.

5. Focus Groups, Surveys, and Interviews: So, if you're interested in learning about people's attitudes, their perceptions, and their opinions, there are a couple of ways that you can get at this information. You can do something like a focus group or an interview, which gives you more qualitative data, or you can do something like an online survey, which will allow you to get more quantitative data. The questions that you may want answered in a focus group or an interview are different from the ones that you might want answered in a survey. Think about the example of the retailer who's interested in adding a subscription home goods delivery service to their website. In a focus group, they might want to ask target users: what other subscription services do you use? What do you like about those services? What don't you like about them? The goal is to really dig into their emotions around their experiences using subscription services, so the retailer knows what to emulate and what to avoid. In surveys, we'd want to ask more quantitative questions. So, in my case study, I might ask something like, "How often do you pick up home supplies?", "When you purchase home supplies, how much do you typically spend?", or "What brands do you typically purchase when you buy home supplies?"
In order for a focus group session to be successful, you really need a skilled moderator to run the session: somebody who is able to facilitate conversation, to get people who are quiet or not participating to speak up, and also to manage people who might be speaking or interjecting too much. Another tip is to ask people questions and have them write down their answers on a piece of paper, and then go over the questions and their responses as a group. This can help you avoid things like groupthink, which is very typical of focus groups. When you have a lot of people in a room together, usually the loudest person, or the person who speaks first, will have a really big impact on what other people say. So, if you can get people to record their thoughts on paper before you ask those questions out loud and have everyone converse, you often get answers that aren't biased by other people's responses. Focus groups are usually limited to about an hour to two hours, and they're often done in the evening, so working professionals can attend. Typically, you'll have a recording of the focus group if you conduct it in a formal lab or marketing lab; usually, they offer that as a free service. If you're doing this at your own office, maybe using a conference room, I would recommend recording the session with a video camera if you can, then going back, listening, and taking notes. You can take notes in real time, but it's very difficult to do that if you're also trying to moderate the session. For focus groups, the data really is much more anecdotal. You might pull some key quotes from participants to share with the team. You might also ask participants some questions about their preferences or attitudes, and you could present that in graphical form as well. When you're planning a focus group, you have to prioritize what it is that you want to cover.
This goes back to your main research goals and objectives, what you're trying to learn, and that can help you scope the type of questions or the type of information that you want to get from your users. Focus groups are one way to assess people's attitudes and perceptions, but at a smaller scale. An online survey is really just a series of questions that you ask people to answer. The questions in a survey can be open-ended, which means it's typically just an open field where people can provide their comments and feedback and answer the question using their own natural language. You can also ask closed-ended questions, which are questions like multiple-choice questions or questions with a drop-down box, where people can select one option or more than one option. You also have rating-scale questions. Essentially, a closed-ended question is one where people have to make a choice or a selection, and that could be one selection or more than one. There are a lot of resources, books, websites, and even online tools where you can learn how to write and create really great surveys. With survey questions, it can be tricky to ask a question in a way that doesn't influence somebody's answer. As an example of this, let's say I wanted to know if people spent over $500 on home supplies in the past year for my project. I could ask, "Have you spent over $500 on home supplies in the past year? Yes or no?" That could be a survey question. However, the way that it's written, and the options that are provided, basically hint to users that I'm looking for the answer to be yes. A better way to ask that question would be, "How much have you spent online in the past year on home supplies?" and give them a range of options to choose from. Maybe it's under $100. Maybe it's between $100 and $200, $200 and $500, or over $500.
There are ways that you can craft the question so it's not as easy for users to guess the answer you're looking for. There are a couple of ways that you can implement a survey. You can send it to users directly, maybe through an email. You can also ask users to fill out a survey when they visit your website; you might be familiar with the pop-ups that appear on websites asking you to fill out a survey. When crafting your survey and deciding on its length, you have to keep your research goals in mind. If you want to ask lots of questions, say upwards of 200, you have to keep in mind that you might not get as many people to complete the survey as you would if you asked 15 questions, for example. Also, the types of questions that you ask will impact completion rates. If you ask a lot of open-ended questions that require typing, you might get fewer completions than if you ask closed-ended questions where people basically have to rate something or choose something from a list. Survey data in itself is quantitative, so it lends itself really nicely to graphs and visuals that can tell a story. There are programs out there that you can input your data into that will produce a graphic, and there are also survey programs that do that work for you, but typically survey data is displayed in a visual way. In terms of running focus groups and surveys, you can certainly run them at the same time. However, it may be a good idea to run one, learn from it, and then let what you learned impact what you do for the other. One of the other side benefits of surveys is that you have a set of questions that you've asked your target audience, and you can conduct that same survey again, maybe next quarter or next month. You can compare what you get in that survey to your original survey to see what improvements were made or where some scores or ratings have slipped.
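The class doesn't prescribe any tooling for this, but as a quick illustration of why closed-ended questions lend themselves to quantitative analysis, here is a minimal Python sketch that tallies responses into the spending buckets from the example question and compares two survey waves. The response data and the two "wave" lists are entirely hypothetical.

```python
from collections import Counter

# Buckets from the example question: "How much have you spent online
# in the past year on home supplies?"
BUCKETS = ["Under $100", "$100-$200", "$200-$500", "Over $500"]

# Hypothetical responses from two runs of the same survey
wave1 = ["Under $100", "$200-$500", "Over $500", "Under $100", "$100-$200"]
wave2 = ["$200-$500", "Over $500", "Over $500", "$100-$200", "$200-$500"]

def tally(responses):
    """Count how many respondents chose each bucket."""
    counts = Counter(responses)
    return {bucket: counts.get(bucket, 0) for bucket in BUCKETS}

def compare_waves(first, second):
    """Shift in each bucket between two survey runs (positive = grew)."""
    t1, t2 = tally(first), tally(second)
    return {bucket: t2[bucket] - t1[bucket] for bucket in BUCKETS}

print(tally(wave1))
print(compare_waves(wave1, wave2))
```

Counts like these are exactly what survey tools feed into bar charts, and the wave-over-wave diff is the "compare to your original survey" step described above.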
In your project for this class, think about the perceptions, attitudes, and opinions that you might want to know about your target audience, and decide whether or not you want to run a focus group. If so, what questions might you ask in that focus group? Or decide whether you want to run a survey instead, and what questions you might include in it. Or, if you want to do both, think about what questions you might ask in a focus group versus in a survey.

6. Usability Testing: If your goal is to improve the experience, you need to actually put something in front of people and have them use it, so you can get feedback on how they actually behave and how they interact with the experience. You would do this with a smaller sample size, because the goal is to get some insight, iterate on the design, improve the experience, and then test again. If you're looking for behavioral qualitative data, the best way to gather this information is to conduct usability testing. Usability testing is when you put something (a design, a mockup, a screen) in front of somebody, a target user, and you ask them to do typical tasks that they might do using that experience, and you watch what they do. You see what they like, what they don't like, what's easy to use, what's difficult to use, and where you can make improvements to the experience. There are a couple of types of usability testing that you can conduct: moderated testing and unmoderated testing. Moderated testing is when you are testing with a participant and there is a moderator present. That moderator is leading the session, giving the participant tasks, asking them follow-up questions, and really just managing the session in general.
Moderated testing is typically done in person, so the tester would come to a lab or an office space to meet the moderator and run through the study, or it can be done online, where you might connect with the participant over web conferencing software like GoToMeeting or Webex. The other type of usability testing is unmoderated, which is when there isn't a moderator present. Typically these studies are done remotely, meaning the participant uses some technology to access the test and the tasks that they're being asked to complete, and they record themselves as they do these activities. Sometimes they think out loud; other times they don't. With usability testing, you typically have research questions or research goals, and these aren't necessarily the same questions that you would ask a participant. You would never really ask a participant, "Do you think this design is easy to use?" You would give them an activity that they would typically do, and you would watch them as they complete that activity; you, as the moderator or researcher, would be observing: is this actually easy to do? In my example, you can imagine conducting some usability testing on a rough design of the subscription signup process. Let's say my main research question is: is it clear how to set up a subscription using the site? What I would do is give people the task of setting up a subscription and observe them as they walked through that activity. I'd ask them to think out loud and tell me their feedback as they're conducting the task, and from that information, I'll be able to gather some insights into what's working well, what's not, and what we can improve. So, we can make some improvements and then, eventually, test that design again. When you're conducting usability tests, there are a couple of different types of data that you can collect.
You can collect things like behaviors, and you can also capture more quantitative elements, in the sense that you could get an average time on task: how long did it take people to sign up for the subscription using the design? Or how successful were people? For example, four out of six people were successful. One great thing about usability testing is that you don't always have to be testing your own products. You can test your competition, and you can test best-in-class experiences to get inspiration on what to include in your new design. Usability testing is one of the most effective ways to get feedback on your design, if not the most effective way. You can get the feedback you need very quickly. So, if I put a design in front of three different users and all three of those users are struggling with the same problem, I know that I have something that I need to go back and fix, and I need to iterate and make it better. The main goal of usability testing is to understand behaviors and to observe people using things naturally. But the other thing you really want to dig into is the why, and get people to talk about why it is that they behave the way they do. Why was it that they had trouble interacting with a widget on your website? Or why was it that they didn't understand the information? Getting them to elaborate and explain their challenges, their thinking, and what they like can really provide valuable insight for you and your team as you go back and iterate on the design. Sometimes there's confusion about the difference between ethnographic research and usability testing. In general, ethnographic research is the process of you, or a researcher, going to a participant and observing them in their natural environment doing the things that they would normally do even if you weren't there. Usability testing is when you bring users to you and ask them to do certain activities using your design.
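The two quantitative measures mentioned here, success rate and average time on task, come down to simple arithmetic over per-participant results. Here's a minimal sketch; the session times are hypothetical, with None marking a participant who didn't complete the task:

```python
# Hypothetical usability-test results: task completion time in seconds
# for each participant; None means the participant failed the task.
task_times = [142, 98, None, 210, None, 175]

completed = [t for t in task_times if t is not None]

# Success rate: completions out of total participants (here 4 of 6).
success_rate = len(completed) / len(task_times)

# Average time on task, computed over successful attempts only.
avg_time_on_task = sum(completed) / len(completed)

print(f"Success rate: {len(completed)}/{len(task_times)} ({success_rate:.0%})")
print(f"Average time on task: {avg_time_on_task:.0f}s")
```

With the small sample sizes typical of usability testing, these numbers are best read as directional signals to pair with the qualitative "why", not as statistically precise measurements.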
With all of these methods, though, I wouldn't recommend just doing one over the others; it's really the combination of these methods that's going to give you the most power and the most insight into your users, what they need, and how to create the best experience. With usability testing, it's ideal to do more than one test. You really don't ever want to do just one usability test, because after that test, you're going to go back and improve the design, iterate on it, and you're going to want to test it again. So, that's why we recommend smaller sample sizes. Let's say that you have a budget of 20 participants that you can test with over the course of your design process. You wouldn't want to bring all 20 into the lab and have them test the initial design, because if you have all 20 go through the test, then you won't have any participants left to test the new design or the new iterations. In your project for this class, think about whether or not you're at a place where you might want to do some usability testing. Even though I reference doing usability testing throughout the design process, you can also do a usability test of a live design or something that already exists. If you feel like usability testing isn't a good fit for where you are, think about getting creative with it. Maybe you could test some competitive websites to see where the opportunity is and where you can do a better job of differentiating yourself. Maybe you can look at websites or apps that are known for providing really great experiences and learn what they are doing that is so great, what users are really attracted to, and what they're struggling with. This can give you inspiration for improving your design. 7. Web Analytics: Once a website is live, or once an app is live, you often have the question: how successful is it? How am I doing? Is it actually better than what I had before?
A great way to get that insight and that data is by looking at your web analytics. One tool that you may have heard of before is Google Analytics, a popular tool that's used to measure web analytics. With web analytics, you really understand what is happening on your site: how many people are coming to the site, where they're going, what they're clicking on, where they're converting, and how many are converting. But what you don't capture with web analytics is the why, and the understanding behind it: why are people clicking on certain buttons? Why are people dropping off on a certain page? If you want to understand the why behind those behaviors, that's where it's perfect to bring in usability testing. So usability testing gives you insight into why people behave the way they do, and web analytics can give you insight into what people are doing. Similar to survey data with larger sample sizes, web analytics data lends itself very nicely to graphics and visuals. The tool that you use for web analytics will typically provide you with some graphics that you can share with others. The key thing to keep in mind is: what is the main research question that you're trying to answer by looking at the web analytics? Narrow and focus what you're actually presenting, because lots and lots of graphs can be overwhelming. Oftentimes with web analytics, teams will look at the numbers and speculate about reasons why things have increased or conversions have decreased, and they won't actually do the research into why that truly happened. That's why it's a great idea to pair web analytics data with usability testing, to understand why some of the fluctuations in the web analytics data might be happening. For your project, if you don't have anything that's live yet, think about what metrics you might want to track with web analytics. If you have something live already, what are the metrics that you're currently looking at?
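The "what" that analytics surfaces, visits, signups, and conversion, boils down to simple arithmetic over the numbers your analytics tool exports. Here's a minimal sketch for the subscription example; the visitor count, plan names, and signup figures are all made up for illustration:

```python
# Hypothetical monthly figures exported from a web analytics tool.
visitors = 12_500
signups_by_plan = {"3-month": 310, "12-month": 95, "24-month": 45}

total_signups = sum(signups_by_plan.values())

# Overall conversion rate: signups divided by visitors.
conversion_rate = total_signups / visitors

# Share of signups going to each plan length. A skew toward the short
# plan is the "what"; a usability test or survey is still needed to
# learn the "why".
plan_share = {plan: n / total_signups for plan, n in signups_by_plan.items()}

print(f"Conversion rate: {conversion_rate:.1%}")
for plan, share in plan_share.items():
    print(f"{plan}: {share:.0%} of signups")
```

Watching a small set of ratios like these over time is what makes the fluctuations visible in the first place, so there's a concrete trigger for follow-on research.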
Which ones are important to you and your business? Think about whether there is one metric that you could look at that could help influence your research strategy. Maybe if you see it fluctuate, you can think to run a usability test or a survey to get more insight into why the metric or the number has shifted. In my example, I may want to track things like: how many people who come to my website sign up for the subscription service? Also, what is the average length of time that people sign up for? So if I notice that people are signing up for a three-month subscription, when we really want them to sign up for a 12-month or a two-year subscription, I want to know why that's happening. My web analytics are not going to answer that question for me. So what I would want to do is some follow-on research, like a usability test. In a usability test, I could learn whether there is anything in the design or interaction that's prompting people to choose a three-month subscription over another length of subscription. If I wanted to learn a little bit more at scale, I could even do an online survey asking about the length of subscription people chose and why they chose the option they did. 8. Conclusion: In thinking about the methods that we covered in user experience research, it's important to keep in mind that while the information and insights that you get are interesting and useful, you really have a main goal of impacting your business metrics or your KPIs. These could be anything depending on the type of organization you are. It could be your conversion rates, or your average sale amount if you're an e-commerce website. It could be lead generation, or the number of leads that your website brings in. It could even be something like brand awareness and brand equity. So creating great experiences will ultimately impact your business metrics and the things that are important to your organization.
User experience teams and user experience research can live in many places. They can live in the product team, in the marketing team, or in engineering. It really just depends on the organization. More mature organizations have dedicated teams related to customer experience and user experience, and these are sometimes centralized and sometimes decentralized; it really varies from organization to organization. There are plenty of conferences and online courses that you can take to educate yourself about user experience and user research in particular, and there are also a lot of great books out there that you can access. I would also strongly recommend networking and attending events related to UX in your area. There are a lot of groups out there, and joining the UXPA, which is the User Experience Professionals Association, is a great way to meet people who might be new to the field as well. In today's class, we've covered quite a bit. We've talked about user experience and the importance of good user experience. We've also talked about user experience research and the various methods that you can use to gather insight on how to improve your experience. I encourage you to use the framework that we talked through today to help you create your strategy for conducting user experience research for your own organization or your own experience. So, now it's your turn. If you haven't started already, I strongly encourage you to begin your project, and as you work on it, share it in the gallery. Thanks so much for attending the course today. If I could ask you to take away one thing, it would be to keep your user, or your target audience, in mind through every stage of the design and development process.