Transcripts
1. Intro to UX Research: The most efficient way to create a high quality customer experience is to start research early in the design process and to keep testing every step of the way. As customer experience decision makers, we can make better product choices if we know the basics of conducting user research, but research can sometimes feel out of reach. You don't need to have a master's degree to effectively learn things about your customers. I think it's important that more people develop empathy for those they design for. I hope this class inspires and encourages you to get out of the building and talk to customers more often. In this class we'll explore what user research is, how it fits into the experience design job family, why it's essential, and, most importantly, how you can start doing it effectively right away. This is a great class for UX and UI designers, graphic designers, web designers, product managers, developers, entrepreneurs, and freelancers; anyone who makes decisions that impact customers can make better decisions if they learn about those customers. We'll look at strategies for analyzing the data you collect, creating effective research deliverables, and making a case for research with your clients or at your company. There are probably hundreds of different research methods one could employ to learn various things about their customers, and we won't attempt to learn all of them, not by a long shot. Instead, this class focuses on building a foundation for customer experience practitioners. My name is Carly. I'm the instructor for this course. I'm currently a user experience designer at Amazon. Before getting into design, I was a full time Irish dancer; that was my job. I taught, toured, performed, did a lot of choreography, developed dance curriculum, and I absolutely loved it. And I still do a lot of dance in my life. I got into design because I was designing and writing for my own dance websites and realized that design was something I could do that would give me the freedom to be able to continue dancing after I didn't want to do it full time. My favorite people to do design work for are small business owners, especially people who are working in the dance or arts fields. What I love about doing design for those folks is that I can help them really grow their online reach and business in a way that they wouldn't be able to if they didn't have a designer working for them. Thanks so much for taking time to take this class with me. I am really excited that there are people who are interested in talking to customers and really trying to figure out what products we should build and why we should build them. The best way to learn how to do research is to start doing it. I hope this class gives you enough information to get started and then to go out on your own and start researching.
2. Experience Design: Welcome back to An Introduction to UX Research. In this section, we're gonna talk briefly about what experience design is and how research fits into the experience design umbrella. Experience design is the practice of designing products, processes, services, events, and environments with a focus placed on the quality of the customer experience and culturally relevant solutions. Experience design is a meta discipline. It draws from many other disciplines, including interaction design, interface design, content strategy, customer research, visual design, and information architecture. There are many other disciplines that roll up under the umbrella of experience design as well; these are just a few of them. At the core of what experience designers do is discover how people use and interact with technology products or services, and translate those discoveries into relevant technology products or services. If you're a designer and you're not basing design decisions on customer research, you're not practicing experience design. It's the experience design team's job to help other parts of the business discover how people use and interact with technology, which gets to the point of this class. UX research is incredibly important because the information we get from research is at the core of everything an experience designer does. In the next section we'll look at what experience design research is and why it's so pivotal to experience design.
3. Design Research: Welcome back to An Introduction to UX Design Research. In this section we'll talk briefly about what design research is and how it fits into the experience design continuum. According to Interactions magazine, the research part of the UX continuum is defined as an action where you investigate something systematically. In UX research, you apply various techniques in order to add context and insight into the design process. Research is needed to reach new conclusions, establish facts, and find problems. In addition, UX research will help you understand the users and their needs and identify the requirements of the product. It's important to note that good research involves applying the right technique at the right time in the product development process. According to the Nielsen Norman Group, spending about 10% of a product's budget on research will more than double your product's desired quality metrics. Quality metrics could be things like self-reported customer satisfaction scores, attach rates, conversion rates, average spend on a retail site, or engagement. It's also the right thing to do. Designing products that people actually want reduces waste and can improve our quality of life; reducing customer frustration and time spent on mundane tasks is the ultimate goal of new technology. You can have the best technology in the world, but if it's hard to use, it's worthless. Not understanding your customers or their needs leads to things like spending time, effort, and lots of money on products that won't work for people; losing customer trust in your brand and/or your product; misaligned team goals; and general unhappiness. Next, we'll dig into the types of UX research you can do, when you might use what method, and what type of outcomes you might expect.
4. Types of Experience Design Research: Welcome back to An Introduction to UX Design Research. In this section, we're going to talk about the different types of user experience design research. There are gonna be a lot of definitions in this section, so prepare yourselves and bear with me. Keep in mind that a lot of experience designers don't understand this basic terminology, so you'll be ahead of the game after this lesson. The first thing to do is to understand two important terms you'll hear a lot in the world of user experience design research: qualitative versus quantitative research. First, qualitative research. As the sociologist William Bruce Cameron stated, not everything that can be counted counts, and not everything that counts can be counted. Qualitative methods can give you an in-depth understanding of something in ways that are impossible to reduce to numbers. Qualitative methods of research provide you with more detail about a situation or an answer, because you can always follow up with a new question or go back to your participants for more detail. For example, if your participants do not use fitness trackers, an interview allows you to ask them why not in a way that is open to all kinds of answers, whereas in a survey response, typically used in a quantitative method, you have to list a limited number of possible reasons. Finally, qualitative research methods do have the advantage that they often require less time and fewer resources than quantitative research methods do, so they're easier to fit into an iterative development or design process. Qualitative methods get to the why. So when you're trying to understand customer behavior, you use qualitative methods to understand why customers behave in a certain way. There are some limitations to qualitative research. When conducting qualitative research, we assume and accept that the facilitator becomes a co-creator in the outcome, because their personal biases are inherent and unavoidable. As a follow-up to that, a facilitator's personal biases can have a much greater effect on the research; they're hard to control for and can influence the outcome. Numbers found in qualitative research can't be mapped one to one to the real world. For example, if six out of 10 participants clicked the orange button in your qualitative study, that does not mean that 60% of people will click the orange button if you launch the product. Qualitative research doesn't give us any definitive answers; it's hard to say anything with 100% confidence. Qualitative research is also hard to replicate. You can't go back and redo the same study and expect to come back with the same results. It also requires a practiced facilitator. A good facilitator will be sincerely interested in understanding the customer and will listen, even when it might take them off script. Often, a good facilitator has to know when to follow an interesting train of thought versus when not to; that takes a lot of experience. Listening to hours and hours of recorded customer interviews can be tedious and time consuming, and trying to make sense of hundreds or thousands of separate customer thoughts and opinions takes a long time. Some research methods that you might employ when you're conducting qualitative research are things like customer interviews, focus groups, contextual inquiry, diary studies, intercept interviews, customer observation, and participatory design. On the other hand, quantitative research is measurable.
Measurable data means it can be turned into statistics, which means it can be generalized to a greater population. Quantitative data formulates facts based on patterns in customer behavior or customer attitudes. Some limitations of quantitative research include things like the following. It doesn't allow for the unknown: because of the structured nature of quantitative work, the researcher cannot allow for what they do not know about their customers. For example, in a quantitative survey, a respondent has to select a response or set of responses. If the survey isn't getting at the right problem, the customer has no way to tell us what the problem actually is. We generally can't understand goals and motivation from quantitative research; we can only see what the behavior is, unless we were to do an in-depth study and change one variable at a time. For example, in public health research, we could tell quantitatively that certain populations with a specific behavior are at a higher risk for a specific disease, but we wouldn't know why they were at higher risk without further investigation. In the product design world, we could use quantitative data to see that a certain color treatment on a button is not increasing conversion rates, but we wouldn't know why that color wasn't working without further work. In order to measure customers' actual behavior patterns, we have to observe and analyze what people are actually doing. That means a fairly significant upfront effort, making it harder to fit quantitative data into iterative design cycles. Some research methods that you might employ if you're conducting quantitative research are things like A/B testing, customer surveys, online polls, systematic customer observation, longitudinal studies, and interviews. The best research findings will come from a combination of qualitative and quantitative research. The key to doing successful design research is to know what type of research to do and when to do it. In design, neither qualitative nor quantitative research should stand alone; you should always try and do both. For example, if you were building a new product, you might design a large internet survey to prove or disprove some initial ideas or hypotheses. After getting the results, you might conduct some customer interviews with the same population to understand what motivated them to answer in specific ways. The next two big terms to have an understanding of are generative versus evaluative product research. The goal of generative research is to look to the world around you to find opportunities for solutions and innovation. These solutions could be new products or experiences, or they could be an update or improvement to an existing one. The goal of evaluative product research is to measure the effectiveness of proposed or actual solutions. It's important to remember that you can do qualitative or quantitative research with either of these research types. For example, in order to generate new ideas, you might look at website traffic data to see what customers are clicking on, then conduct in-person, in-context interviews to understand why customers clicked on a specific part of your site. You might ask them what they were hoping to find, what they were looking for, why they left, etcetera. In the same vein, you might notice quantitatively that one design layout is converting more people than another; in order to find out why, you could run a usability test with five customers to see what they say and do.
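To make the earlier statistics point concrete, here is a minimal Python sketch, using only the standard library and a simple normal-approximation confidence interval; that choice of formula and the numbers are my own illustration, not something covered in the lesson. It mirrors the orange-button example above: an observed 6 out of 10 is compatible with a very wide range of real-world rates, which is why small qualitative counts can't be generalized the way quantitative data can.

import math

def proportion_ci(successes, n, z=1.96):
    # Approximate 95% confidence interval for an observed proportion.
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

low, high = proportion_ci(6, 10)
print(f"Observed 60%, but the plausible range is roughly {low:.0%} to {high:.0%}")
# With only 10 participants, the interval spans roughly 30% to 90%.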
Generative research is an attempt to gain a deeper understanding of who our customers are, what they need from our service, what problems we should attempt to solve for them, and how we might solve those problems. In order to identify new and innovative solutions, you must define the problem you're trying to solve. This requires you to truly understand people's lives, environments, behaviors, attitudes, opinions, and perceptions. Indi Young, a user experience consultant and a founding partner of Adaptive Path, believes you must feather your nest with generative research. This includes rich data about your target audience and their needs and goals before defining the problem and crafting a solution. Without generative research, you could very easily create something no one wants or needs, for a person who doesn't exist. When conducting generative research, the most important thing to do is to keep an open mind. You don't actually know what problem you are trying to solve yet. You don't develop the best solution by doing lots of evaluative research and refining your design; you develop the best solution by properly identifying the problem with generative research. On the other hand, evaluative research is a means to measure how well something is or is not working. We often talk about usability when we talk about evaluative research. Usability is a quality attribute that we can measure. Evaluative research should happen at all stages of the design process. If you do it too late, you could build something people want but can't use. Evaluative research should always be part of the iterative design process. Getting designs in the hands of our users as soon as and as often as possible ensures that the experience will be shaped and refined to truly meet customer needs and expectations. Evaluative research can and should happen in all stages of the design process; it's not limited only to the end product. In fact, only doing evaluative research at the end can cost us a lot of valuable time and money. In the next section we'll do a deep dive into generative research.
5. Conducting Generative Research: Welcome back to An Introduction to UX Design Research. In this section, we're gonna do a deep dive into generative research: when to do it, what outcomes to get from it, what methods you might use, and what artifacts you might produce. Remember, generative research is the process you might use to learn who your customers are and what their needs and motivations are. Generative research is very helpful at the beginning of a project, before you've decided on product requirements or committed to a particular solution. Generative research can give you an idea of the product's future directions, specific pain points your customers might be facing, what your customers want or expect from your product, or ideas for new features for an existing product. Approach generative research with an open mind. You don't know what you don't know, which is what makes generative research so valuable. Remember to use generative research when you want to learn why something is happening or why a specific solution will solve a customer problem. There are several types of methods one might use to conduct generative research. Remember, the goal is to generate lots of ideas. The most straightforward way to conduct this research is customer interviews. That's when a facilitator sits one on one with a customer and asks them questions about the topic they're interested in learning more about. Contextual inquiry is similar, but in contextual inquiry a facilitator works with a customer in their specific environment and observes what they're doing instead of just asking them what they're doing. Focus groups are another type of customer interview, but you normally conduct them with more than one customer at a time. Diary studies are something customers can do without a facilitator: they record what they do in regards to a specific topic over a specific period of time. Intercept interviews are a method of research where a facilitator stands outside a store or in a mall, intercepts customers from whatever they were doing, and conducts a quick interview. Participatory design is a method of gathering ideas from either customers or other people in your business or organization to generate a lot of ideas about what product we should build or what feature we should add. Customer observation is another means of getting information about how customers actually behave, so a researcher would sit and take detailed notes about what customers are doing in a specific situation. As an output from generative research, you might see a lot of different artifacts. Artifacts could be things like personas, storyboards, customer journey maps, empathy maps, or a research report. A large part of your job as a UX practitioner is to help stakeholders and your design team make customer-focused design choices. The best way to influence the design process is with research artifacts that make everyday appearances in your work. Visual artifacts are generally more compelling and memorable than a written research report, but depending on where you work, written reports may be the expectation. The output of generative research could be any number of things, so don't feel limited by this list. These are some of my favorite tools for telling stories about generative research findings, but there are many others in the design world.
Find the ones that work best for you and for the story you're trying to tell. To go back to this list of outputs: a persona is a fictional representation of many different customers. A well-built, research-based persona will seem like a real person, someone you might actually know in real life. Personas should be posted around your design space and referenced often when defending or making design decisions; they help your team remember that they're designing for that person. A storyboard is similar to a persona, but it tells a story of a specific persona going through their day. The storyboard might describe a customer's problem or might describe your proposed solution. A customer journey map is similar to a storyboard, but looks at a customer's emotional state when they're trying to deal with a problem or when going through your proposed solution. An empathy map is similar, but allows other people on your team to better understand a customer's feelings when they're dealing with a specific problem. A research report is more self-explanatory: it's a written report that details all of the data and findings from your specific research project. In the next lesson we'll be diving into the first class project: conducting generative research and creating storyboards to communicate findings.
6. Class Project Break: Customer Interview Practice: Welcome back to An Introduction to UX Research. In this section, we're going to take a break from me lecturing and start on a class project. Our class project is going to be customer interview practice. In this project, we're gonna interview someone and create a storyboard to identify their problem. We're gonna use the Five Whys method. The Five Whys method is a great technique for structuring an open-ended customer interview because it can generate a lot of ideas about a particular subject fairly quickly and without a lot of overhead. The Five Whys method is also a good reminder for facilitators to continue to push towards the root cause of a customer's problem instead of only the surface issue. The Five Whys method is called a root cause analysis because it attempts to help a facilitator understand the root cause of a problem rather than focusing on the consequences of that problem. It can also uncover problems that were potentially unknown. The Five Whys is a method of exploring the cause and effect relationships of a situation. It's easy to set up because a user interview script isn't needed; you just need a basic idea of what you'd like to talk about, and you let the conversation happen naturally. It's as simple as it sounds: the facilitator continues to ask why until they get to the root cause of someone's problem. Your task is to interview a friend or family member using the Five Whys method. Your interview will hopefully lead to an idea for a product or service that might help this person solve their problem, so select an interview topic that makes sense in that context. You might choose something like commuting, cooking, social life, etcetera. If you already have an idea for a product, base your interview around that topic, but remember not to ask about the product idea you already have. Keep an open mind to learn what you can. It's helpful to audio record your interview so you can go back later and remember what you talked about. I also like to jot down quick notes as reminders to myself to listen to a particular piece of the interview; I'll often write down timestamps of when my participant said something I want to be sure to go back to. We'll go into more detail about making participants feel comfortable later in the class, but for now, remember that the person you're talking to wants to be set at ease before they share their experiences with you. Introduce yourself. Explain that you want to hear what they really think and that your goal is to better understand their situation so you can make better product decisions. Be sure to ask your participant if it's okay to record their voice, and reassure them that you'll only be using it internally, not publishing it anywhere. It also helps to throw in some easy questions to start the interview: What do they do for a living? What do they do for fun? Ask them if they have any questions for you. Once you dig into your topic, encourage your participant to talk by actively listening, nodding, and looking at them. Don't be afraid to jump in in order to learn about something that sounds interesting to you, but be wary of interrupting your participant or finishing thoughts for them, even if you know the person very well. Use the Five Whys method to get to the root of the problem or experience someone is describing. They might say, I hate driving to work every morning. Ask them, why? Because I hate sitting in traffic. Why? Because I find traffic really stressful. Why?
Because I feel that I'm wasting valuable time; I could be working or reading or exercising instead. That might be the root of the problem. Always thank them for their time and give them another opportunity to ask you any questions they might have had about the interview. The next thing to do is to document what you learned in your interview by creating a storyboard. A storyboard is a sketch that helps tell a story of a particular character. It's a design tool that helps communicate a customer problem or solution. In this case, we're going to use a storyboard to communicate the customer problem you identify after analyzing your customer interview. A storyboard can be as simple as a quick sketch or can be a beautiful piece of art that gets shared broadly; it's up to you and what the storyboard will be used for. Focus on one customer problem that you want to tell the story about. The simpler, the better. You might have text on your storyboard, or you may let the visuals do all the talking. It's up to you. As an example, this is part of a storyboard I used recently to communicate to a team of developers and product managers before leading a brainstorming session for improvements to a product. I created this sketch on an iPad. Here's another example of a storyboard I used that's lower fidelity and includes text. This one tells the whole story, and I created it with a Sharpie on a piece of paper. As you can see, the fidelity of your storyboard doesn't really matter. The point is to be able to communicate your idea in a visually compelling way. I'm excited to see what you come up with. Please post your storyboards and any questions you have about the process to the project section of this class.
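If you record your interview, it can also help to log each why alongside an audio timestamp so you can jump back to the exact moment later. Here is a minimal Python sketch of one way to do that; the structure and the example answers are hypothetical, drawn from the commuting example above, and are not part of the original project brief.

from dataclasses import dataclass

@dataclass
class WhyEntry:
    timestamp: str  # position in the audio recording, e.g. "04:12"
    question: str
    answer: str

five_whys = [
    WhyEntry("02:10", "Why do you hate driving to work?", "I hate sitting in traffic."),
    WhyEntry("03:05", "Why is sitting in traffic a problem?", "I find it really stressful."),
    WhyEntry("04:12", "Why is it stressful?", "I feel I'm wasting time I could spend working, reading, or exercising."),
]

# The last answer in the chain is the candidate root cause to build your storyboard around.
print(five_whys[-1].answer)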
7. Conducting Evaluative Research: Welcome back. Now that you've had a chance to get your feet wet conducting a user interview, we'll get into the other form of research you might do a lot of: evaluative research. Evaluative research is used to measure how well something is or isn't working. Designers use evaluative research to make better decisions in UI design, information architecture, visual design, copywriting, page layout, content strategy, and more. It's something that should happen throughout all phases of the design process, from the earliest product ideas to after your product has launched. Not evaluating the effectiveness of your product can lead to costly errors in the usability of your design. Remember that usability can be measured both qualitatively and quantitatively. Most often, as a designer, you'll be asked to measure something's usability. Usability has impacts on other measurable data points like conversion, sales, engagement, etcetera. Evaluative research seeks to measure both how well something is working or not working and why it's not working. Usability is defined by five quality components. Learnability: how easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency: once users have learned the design, how quickly can they perform tasks? Memorability: when users return to the design after a period of not using it, how easily can they re-establish proficiency? Errors: how many errors do users make, how severe are these errors, and how easily can they recover from them? Satisfaction: how pleasant is it to use the design? There are many other important quality attributes. A key one is utility, which refers to the design's functionality: does it do what users need? That gets back to what we were talking about earlier, and you would have to decide that based on generative research methods. There are several different methods we might use to measure the usability of a design, and as you might have heard, usability testing is a popular one. But it's not the only way to measure something's usability. Heuristic evaluations and UX audits rely on the expertise of UX practitioners to uncover potential pain points in a system. Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This means, rather than showing users a rough draft and asking, do you understand this, usability testing involves watching people try to use something for its intended purpose. A/B testing is a means of measuring something's effectiveness quantitatively: two different experiences are built and launched, normally with one variable changed, to see which performs better against the desired metrics. The best output a designer could make from evaluative research is a recommendation on how to make the design better; bonus points if you can order the priorities based on customer pain. It would also help to talk with developers before making a recommendation, to better understand what changes would take the smallest amount of dev effort compared to the potential impact on the customer experience. You might make these recommendations in various ways, depending on where and how you work. Changes could simply be reflected in your next iteration of the design, so you create a new design and present that to stakeholders. Or they could take the form of a long document with research results supporting your recommendations.
You could also mark up screenshots to show stakeholders which elements are causing the most problems on a particular screen or flow. In the next section, we're going to jump into a project where we'll practice conducting a heuristic evaluation.
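To make the A/B testing idea above a little more concrete, here is a minimal Python sketch, standard library only, of the kind of calculation often used to read the result: a two-proportion z-test comparing the conversion rates of two variants. The visit and conversion numbers are hypothetical, and real experimentation tools handle this (and much more) for you.

import math

def two_proportion_z(conversions_a, visits_a, conversions_b, visits_b):
    # z statistic for the difference between two conversion rates.
    p_a, p_b = conversions_a / visits_a, conversions_b / visits_b
    p_pool = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

z = two_proportion_z(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}")  # an absolute value above about 1.96 is the usual 95% significance threshold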
8. Class Project Break: Heuristic Evaluation Practice: Welcome back to An Introduction to UX Research. This part of the class project gives you a chance to practice looking at a website with an eye for its usability. A heuristic evaluation is something that can be done independent of talking with customers, which makes it a really great way to practice understanding usability. So, as I was saying, you can do this evaluation independent of customers, which makes it very low cost while still identifying key issues. A heuristic evaluation is a method of inspecting a user interface and measuring the usability of the interface against specific heuristics. A heuristic is a practical method of measuring usability. It's qualitative because it's not guaranteed to be optimal or perfect, but it is sufficient for the immediate goal of determining how usable something is. The heuristics we'll be using were developed by Jakob Nielsen. They're the 10 usability heuristics for user interface design, and they're widely recognized in the UX design world. I'm gonna go through them here, but if you want more information or you don't understand something, you can visit nngroup.com for more info about the heuristics. So here we go. The first one is visibility of system status: the system should always keep people informed about what is going on and should provide timely feedback to customers. Match between the system and the real world: this means the system should follow patterns and conventions that occur in the real world, and we should speak the customer's language. User control and freedom: this means not leaving the customer facing dead ends; we offer options to get out of lengthy tasks if a customer wants to. Consistency and standards: symbols, words, and actions mean the same thing across the system. Error prevention: don't let customers make costly errors without warning, either because it's designed to not let errors occur, or because there is some second opportunity to reverse an action if needed. Recognition rather than recall: customers should not have to remember key actions in lengthy flows; options, actions, and objects should be visible to the customer. Flexibility and efficiency of use: experienced users may see elements that accelerate their interaction, which allows users to tailor their own experience for frequent actions. Aesthetic and minimalist design: simple interfaces that give customers just enough information without overwhelming them. Help users recognize, diagnose, and recover from errors: this means that error messages should be in plain language and should contain an actionable solution. Help and documentation: if help is needed, and hopefully in a well-designed system it's not, it's easy to search, is searchable by the customer's task, and lists concrete steps to figure out whatever problem they're trying to solve. Now that you have a better understanding of what the heuristics are, you're gonna conduct an evaluation on a site of your choosing. One best practice for conducting a heuristic evaluation is for more than one expert to conduct the evaluation independent of the others; then you compare notes later to identify all the usability problems you found. You can choose to do that today or not. Either take notes or record your voice and talk out loud as you're going through the experience. Keep the heuristics next to you as you go through the interface so that you can map your experience back to a specific heuristic.
Go through the interface as a customer several times, making notes each time. Clearly write or say which heuristic is not being met and where in the interface the problem occurs. The next thing to do after you conduct a heuristic evaluation is to clearly communicate your results. Here's an example of part of a presentation I created to highlight specific problems in an interface I was using, based on the Nielsen Norman Group's usability heuristics. As you can see, I marked where the problem occurred and then included some text about what the problem was. When you're communicating your results, what's important to remember is to clearly write or say which heuristic is not being met and where in the interface the problem occurs. You can put that information in an Excel file, image, or document to communicate what you found. Remember, you can communicate heuristics that are working well, not just the ones that aren't working. Once you've conducted your heuristic evaluation and created your presentation or Excel file, upload it to the project section of this class. Remember, I'm always available for questions about the project or about the content of the course in general. Please post those questions to the discussion section of the class, and I look forward to seeing what you come up with.
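If you'd rather keep your findings in a structured file than in a slide deck, here is a minimal Python sketch of one way to do it; the file name, fields, and example findings are hypothetical and not part of the original assignment. Each row records the heuristic, where the problem occurs, a severity, and a short note, which makes it easy to compare notes across evaluators.

import csv

findings = [
    {"heuristic": "Visibility of system status", "location": "Checkout, step 2",
     "severity": "high", "note": "No feedback after tapping 'Place order'."},
    {"heuristic": "Error prevention", "location": "Account deletion dialog",
     "severity": "medium", "note": "Destructive action has no confirmation step."},
]

# Write the findings to a CSV file that can be opened in Excel or shared as-is.
with open("heuristic_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["heuristic", "location", "severity", "note"])
    writer.writeheader()
    writer.writerows(findings)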
9. Research Design: Hi, everybody. Welcome back to An Introduction to UX Research. In this section, we're gonna go over how to plan, design, and execute a research project. This includes how to write a research plan, the necessary steps to setting up a research session, best practices for facilitating research, and following up on your research. I've included several templates that will help with this section: a research plan template, usability testing scripts, research report templates, and a screener template. Please use them and modify them to your specific needs and style. Depending on where you work, a research plan may be very formal or not formal at all. No matter what fidelity your research plan is, it's important to have some idea of why you are doing research and what you hope to gain and learn from said research. In some places, a research plan may also include a budget, information on vendors you're using for recruitment, etcetera. There are probably hundreds of different methods of conducting research, and we've only gone over two of them in this class. Luckily, many of the principles of customer interviews carry forward to many types of research, and understanding usability heuristics is instrumental in evaluative research. With this foundation, you should find it easier to search for and understand other research methods and select methods that will work best for you. Designing your research plan should start much like any good design project: first understanding what you need to know, why you need to know it, and what you're going to do after you know it. The key to conducting actionable research is knowing what type of research to do at what time. It's easier to figure this out if you know what you want to learn first. Do you want to learn how well something is working, or how well a specific interaction will work? Or are you more interested in what to build or who might be interested in a particular product? Maybe your product manager has a great idea for a new feature, and you want to understand if people might use it. Understanding what you want to learn will form the basis of your research plan. The next thing to do is to develop a hypothesis. You can better focus your broad research plan by clearly stating what you think the answer or problem is with a hypothesis or hypotheses. Ask yourself what you think is going on, write it down, and develop research questions from there. The third thing to do when you're developing your research plan is to think about it as your results document: after the research is complete, you'll go back to the planning document, fill in the results, and be done. After you have a broad idea of what you want to learn, develop a set of key research questions you'll answer with your research. These could be things that relate to usability if you're doing evaluative research, like: do customers understand how to use the interface in order to make a purchase? Generative questions might be more like: what are customer pain points in their current email client? Now that you know what you're trying to learn and what your key research questions are, you can decide what method or methods you can use to get at the questions you want answered, and you'll develop your plan from there. Once you know what to look for, there are a ton of free resources available for customer research planning and documentation. I've provided some templates, but usability.gov has a lot of templates as well.
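As a rough illustration of the structure described above, here is a minimal Python sketch of a research plan captured as data: a goal, a hypothesis, key research questions, a method, and a placeholder for results you fill in afterwards. The field names and example content are hypothetical; they are not the templates provided with the class.

research_plan = {
    "goal": "Learn whether customers can complete checkout in the new mobile flow",
    "hypothesis": "Customers abandon checkout because the shipping form is confusing",
    "research_questions": [
        "Do customers understand how to use the interface in order to make a purchase?",
        "Where in the checkout flow do customers hesitate or give up?",
    ],
    "method": "Moderated usability test with five participants",
    "results": None,  # filled in after the sessions, so the plan becomes the results document
}

# Quick check that the plan states what you want to learn before you recruit anyone.
for field in ("goal", "hypothesis", "method"):
    print(f"{field}: {research_plan[field]}")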
Before you start talking to customers, there are a few things to have in place so things go smoothly. Depending on where you work and what you're working on, you may need different documents. It's important to understand if participants need to sign an NDA, and if they do, make sure you're using one that's approved by your legal department. Customers should also sign a consent form for participation. If you're audio or video recording, you'll need to get consent; check with your legal team about how they would like to get customer consent. Depending on what type of research you're doing, you'll want to have a script written up before the big day. Scripts help you and any other facilitators stay on track and ensure each participant is asked the same questions. A screener is a script or a survey that helps determine if certain participants are eligible for your research study. For example, if you're only interested in talking to customers who have shopped on your site in the past three months, a screener given to participants before they come in for an interview is crucial. A screener also helps determine if the customers you plan to work with are part of your target demographic and helps ensure a diverse study (there's a small sketch of this screening logic at the end of this section). Finding and scheduling participants can be the most time-consuming part of customer research, but it is key to getting the best results. In some cases, you'll have a vendor or other team members who can contact, screen, and schedule the participants, but in many cases you'll be responsible for that task. It's important that your participants meet the criteria you develop. For example, if you want to talk to people about purchasing a car, it's important that the people you talk to intend to purchase a car or have purchased a car in the past. It's also important that your participants are diverse. Talking to people with similar key experiences from different backgrounds is essential to making sure that your research reflects the diverse population of people your product is designed to serve. Diversity can mean age, income, career, location, gender, ethnicity, language spoken, and the list goes on. Participants in research should always be compensated for their time, generally in the form of cash or gift cards. Not compensating people not only means you'll likely have a hard time getting folks to participate; it also reflects poorly on the company you are representing with your research. There is not a hard and fast rule for how much to compensate participants, but remember that time is valuable to everyone. If they had to come to you, take time off work, park, and spend an hour or more in an interview, etcetera, it's fair to compensate them between $75 and $100. If you're conducting an online survey, something that gets emailed to all your customers and might take them five minutes to fill out, compensation can look like an entry into a small giveaway or a coupon for the next time they shop. If you plan to show or test any prototypes, make sure those are ready to go, working as expected, and set up before participants arrive. The same goes for testing a live product: make sure accounts are set up properly and the system is working as expected before participants arrive. Prepare any documents or methods of collecting the data from your research. This could be an Excel sheet that is ready for inputting answers from an interview, or a means of audio
recording your contextual inquiry conversation. Well before your participants arrive, at least a day before, do a practice test on a colleague. Inevitably, there's probably something in your research plan, test script, or prototype that isn't quite working out the way you thought it would. Potentially it's a question that doesn't make sense, a broken button on your prototype, or a piece of missing information. These errors are best caught when you don't have a customer sitting in front of you, watching you tread water to try and fix the problem. The most important thing to remember when you're working directly with customers is that they're people, too. Treating people with respect goes a long way. In order for folks to be honest with you, it's your job as a facilitator to gain their trust. This starts with respecting their time. Make sure you're on time, set up, and ready to go when they arrive. Tell them before you start how long you expect the session to take, and make sure you end it on time, even if you haven't finished getting through your whole script. It usually helps to break the ice with some small talk before jumping right in, so have some questions in your back pocket, like: What do you do for work? How did you hear about this? Etcetera. Set expectations for your participants. Remember, they have probably never done this before. Let them know why they're here, how important it is that they're honest, and that you're not testing them. I always remind them that if they feel lost or stuck when interacting with the prototype or a live site, it's because there's something wrong with the design, not with them. I also always remind them that they are not going to hurt my feelings. Remember to ask if they have any questions about the process. Listen actively, and actually listen. Sometimes we can get so caught up in conducting our tests that we miss the most important parts. Let your participants talk, let them finish their thoughts, and let them explore the interface without interruption. Silence is okay; give them time to think and breathe. This ties into the next point, which is: don't lead. Remember that you want to learn what they think, not what you think or already know. Your questions should be designed in a way that doesn't give participants the answers. Instead of saying, how would you add that item to your cart, say things like, what would you do next, or what would you do if you wanted to purchase that item? Most importantly, remember that design research is about the design. It's not about you, the designer, developer, or product manager. If someone is stuck in the interface, thereby uncovering a usability problem, it's OK. That's why you're doing research. No one designs the perfect interface the first time, and finding usability issues is a blessing. I can't tell you the number of times I've created something I thought was awesome, only to have it completely fail in usability testing. Don't get caught up and don't get your feelings hurt. It's not personal.
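Here is the small screener sketch mentioned earlier in this section: a minimal Python illustration of the logic behind a screener, deciding whether a participant's answers match the study's recruiting criteria. The criteria, field names, and thresholds are hypothetical examples based on the shopped-in-the-past-three-months scenario above.

def is_eligible(answers):
    # True only if the participant matches every recruiting criterion.
    shopped_recently = answers.get("months_since_last_purchase", 99) <= 3
    is_adult = answers.get("age", 0) >= 18
    not_in_industry = not answers.get("works_in_ux_or_market_research", True)
    return shopped_recently and is_adult and not_in_industry

print(is_eligible({"months_since_last_purchase": 2, "age": 34,
                   "works_in_ux_or_market_research": False}))  # True: invite to the study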
10. Class Project: Conduct and Compile Research: No matter what part of the product design process you're in, customer research can and should start now. There is something to be learned at every stage of the process, and your product will only improve with research. Remember that research does not have to be expensive or time consuming; talking to even one customer is better than not talking to anyone. Any bit of feedback is useful and important. As I said at the beginning of this class, the best way to start learning how to do customer research is to start doing it. You've practiced two methods already. You could try one or both of these again, or try a different method. Your goal for this project is to create a research plan based on research questions and/or a hypothesis, plan and facilitate your research, and create an artifact that helps your stakeholders understand what you learned. If you're not sure what to do, I'd suggest trying to plan for and conduct a usability test. You can use a website or mobile app that already exists; doing research on a competitor's product is a great way to come up with new feature ideas or product improvements. When usability testing, remember the goal is to watch a customer interact with the system or product and try to learn what is or isn't working and why. Use the documents I provided to get started, and if you get stuck, there are a ton of resources available online to help you along the way. Post your research plans, scripts, reports, and/or artifacts in the project section of this class. I hope you feel more comfortable now with the idea of doing UX research, especially the idea of talking directly to customers. If your interest has been piqued enough to keep trying it, I encourage you to keep exploring the world of UX research. I'm always available for questions or discussions, and I'd love to hear from you.