Building Better Products Using User Mindsets: Collecting Data | Jasson Schrock | Skillshare

12 Lessons (31m)
    • 1. Introduction (2:28)
    • 2. Project Overview (0:21)
    • 3. Qualitative vs Quantitative (0:43)
    • 4. What to ask? (2:03)
    • 5. Writing survey questions (5:15)
    • 6. Finding the right users (1:04)
    • 7. Building a screener (3:40)
    • 8. Advanced concepts (3:26)
    • 9. Building and sending the survey (4:14)
    • 10. Analysing the results (3:10)
    • 11. Reviewing and presenting the results (2:01)
    • 12. Final thoughts (2:21)

About This Class

As a business owner, creative entrepreneur, or someone interested in solving a problem, it is essential that you understand your core user needs. When you discover the needs of your target users, you can build a product or service people will use.

In this class, you’ll learn how to collect useful data about your users and how to ask non-leading questions to ensure you’re gathering the right data. This quantitative data will help you define your product as well as develop more probing questions for user interviews.

Most user personas are descriptive instead of predictive. Even when they are predictive, they often miss the core needs of all users. Before personas are created, it's critical to know the base needs of all your users. In this class, you'll learn how to break out of the persona-only perspective and build your product's base with user mindsets.

No prior knowledge is necessary for this class — anyone who wants to create more robust products or services is welcome!

Transcripts

1. Introduction: Hello, my name is Melanie. I'm a UX designer and art director focusing on product development. Sounds like a lot! I'm from Austria originally, now living in Berlin, and I'm excited to do this Skillshare class today. My name is Jasson, also a UX designer, and I've led multiple design teams all over the place. I'm now living in Berlin, but I'm originally from the States. Today we're going to talk about how to build better products by using user mindsets. User mindsets are, in the best case, created out of qualitative and quantitative data, which means a combination of both, and we've cut the whole class into three parts. The first one is finding the right requirements, which means you do some sort of briefing; this briefing will later be the basis for your survey questions. Before you start with the survey questions, there's the second part, which is the screener. The screener covers who you're going to talk to. The third part is the survey itself, where we will talk about how to find the right questions, what kinds of answers you're going to choose or use, and, of course, how you evaluate and analyze your results. One of the reasons we're looking at user mindsets as opposed to personas, which many people have heard about, is that personas tend to be too specific. They are often — and I love this term — descriptive as opposed to predictive: they're wish lists of what you'd like individual types of users to be for your product, and they don't really predict what the user is going to do. With mindsets, we're building a base and finding the really key elements that will work for everybody. From there you can build up the product, and then you can start going into the different predictive personas later on. Who should take this class?
It's pretty much anybody that really wants to improve their product, because when you find your core user needs, you can develop the product in a way that will be enjoyed by everybody, and I think that's really important and beneficial for everyone involved. The skills you learn here are how to build a survey, how to pick the right people to answer your survey, and then how to take the final results and analyze them to get your key insights — the base for your mindsets and your product — and then you can build from there. We're really looking forward to doing this class, and we hope you enjoy it and learn. 2. Project Overview: It's critical to understand your users' core needs to build the best experience possible for them. To do that, I like to use user mindsets instead of personas, which go down too specific and get too detailed. Personas are good later on, but user mindsets help you find the core needs and the base for what you need to do to build your product. 3. Qualitative vs Quantitative: Qualitative versus quantitative data — we talked about it before, and it's crucial for user mindsets to have both of them. Qualitative data means observing user behavior, versus quantitative data, which means focusing on numbers and things you can measure. Quantitative data is tracking and studying everything users have done so far, which means if you already have a product, in the best case you prepare it beforehand so you can combine it with the qualitative data. Qualitative data means everything users do: it's their needs, it's how they react to things. If you do interviews, for example, you see their facial expressions. That's qualitative data. 4. What to ask?: When it comes to writing survey questions, it's really important to start with what you want to get out of these questions. I made this mistake many times before: I just jumped into asking questions when I never really knew what I wanted to know.
So if you start with the results you want, it turns the whole process around and you will have more valid questions later on. What kinds of questions you ask, or what you want to know, depends a lot on your product, and it also depends on where your product stands already. If you already have data, you can use it; you can evolve your product with the survey and make it better. If you don't have a product yet, it's more or less a mix of wishes and attributes of where you want to go in the future. One of the things to keep in mind is to look and see: are there any things that are habits, or things that you can turn into habits? If they're not habits, then you also need to investigate why. Some other items to keep in mind are around performance: is there an emotional connection to the product, and is the speed fast enough? Speed isn't always the right answer, but often it's an important one. Another item to keep in mind is how many questions you should ask, and that comes up quite often. I would say no more than ten for this particular section. We'll have some further questions in something called the screener, which will be coming soon — we'll explain that. But for this area, when you're writing the questions about the product, I would not go beyond ten, because you'll just overwhelm the users. What's always helpful is if you create more or less a story out of the questions. For example, if you have a product that users already use, you start with what behavior they have, then how they react to content, and in that way you'll figure out the flow you need to validate the data you have later on. 5. Writing survey questions: So we created a worksheet for you to get started with this. If you look at the first page, it'll tell you how to get and make your own copy of the file.
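To make the "one item per question, no more than ten questions" advice concrete, here is a minimal sketch of a survey represented as plain data. The field names, questions, and the crude single-topic check are hypothetical illustrations, not taken from the class worksheet:

```python
# A minimal sketch of a survey as plain data: each question covers exactly
# one topic, and the whole survey is capped at ten questions so users
# aren't overwhelmed. All field names and questions are hypothetical.

MAX_QUESTIONS = 10

survey = [
    {"goal": "usage frequency",
     "question": "How often do you use the product?",
     "options": ["Daily", "A few times a week", "Rarely", "Never"]},
    {"goal": "habit trigger",
     "question": "When do you usually open the product?",
     "options": ["Morning", "During work", "Evening", "No fixed time"]},
]

def validate(survey):
    """Reject surveys that are too long or that bundle two items into one question."""
    assert len(survey) <= MAX_QUESTIONS, "no more than 10 questions"
    for q in survey:
        # a crude heuristic: a question containing ' and ' often asks
        # two things at once, which the class warns against
        assert " and " not in q["question"].lower(), q["question"]

validate(survey)
print(f"{len(survey)} questions, all single-topic")
```

Writing the goals down next to the questions mirrors the worksheet's "what do you want to know / what will you ask / what options" split described below.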
Then we also have some examples of questions to ask for a survey, and then some screener examples as well — we'll get into what a screener is shortly. We also have some examples of the Likert scale, which is a way of measuring people's preferences for different things. Once we have that down, you can build your own survey from there. If you haven't done it yet, open our worksheet; it will help you a lot with finding the right questions and answers. So just do it. If you look at the product questions, we have it broken down into three different areas: what do you want to know, what questions will you ask, and what options will the user have available to them? Now, it's really important that you only ask one item per question. So under "what you want to know" you put down your basic ideas for what you really want to find out — what answers you're expecting to receive from the users. Then you go to "questions you will ask"; these are the actual survey questions, and you'll see that some of them actually have two questions for one thing you want to know. That's because you have to break these down so there's only one item per question; you don't want to load multiple items into a single question. Then with the options, there's a whole variety of different things you can try. More often than not, you want to make it easy for the user to pick something that fits them, and you don't want to exclude any options. After talking about how to find the right questions for surveys, we're going to talk about what kinds of answers you could get, and for that we have different options, which you're going to see on a slide now. There's a text box, a yes-or-no answer, multiple choice or its opposite, the checkbox, and the last one is a rating scale. Text boxes are, I'd say, the most difficult ones to evaluate at the end, because you get open answers, but they give you the most insight at the same time: you see how users react and how they're feeling while they're answering the questions. If you just want yes and no answers, they're very easy to evaluate, but they don't have the range of context you get, for example, with text boxes. If you have multiple-choice answers, you offer the users multiple options, but they are more or less forced to figure out which is the best one for them. The opposite is the checkbox: you again have multiple items to choose from, but you can go crazy and pick as many as you want. A rating scale allows the users to sort multiple items according to their preferences. Additionally, there's the Likert scale, which we're going to talk about later on, and multiple other options. If you have any questions about these, we're very happy to answer them on Skillshare, or Twitter, or wherever, later on. When you're putting your questions together, one of the things you really want to avoid are leading questions. Leading questions are where you're trying to get a particular answer from your participants. So, for example, you might say something like "Isn't it true that you're happier now that you're using Product X?" You're already telling the user that they're going to be happier because of the product; that's just a really bad question to ask. Another potential example is "Would you prefer to use Product X because the prices are lower, or Product Y?" You're forcing in a detail about the prices being lower, and the possibility that you really want people to pick Product X over Product Y. You really want to avoid asking questions like that. The question should always be neutral; it shouldn't give the user an example of what they should pick. This might sound very obvious, but once you start creating questions, it's a trap you get into way too easily, so it's very, very important: do not ask leading questions. So, for example, here we have two questions that aren't being asked the right way.
They're very, very leading. The first one: "Do you have any problems with your boss?" That instantly implies that there are problems with your boss, so you want to reframe it by saying something like "Tell me about your relationship with your boss." That one is neutral: it could be good, it could be bad, it could be somewhere in between, it could be mixed, but you're not framing the question beforehand. Another example we have here is "Most people feel that $8 is way too much to spend for a simple burrito. Would you pay $8 for a burrito?" Now, that last question at the end is fine, but that whole sentence beforehand is something you want completely removed, because it's telling the user that most people wouldn't spend $8 for a simple burrito. You're already putting them in a group where, if they're outside of it, they're bad or wrong or incorrect. So you want to remove stuff like that and just ask the questions as simply as possible. 6. Finding the right users: After the first part, which covered the requirements, questions, and answers, we're going to the second one now, which is the question of who you're going to talk to: finding the right users. This part is crucial to have because, on one hand, it will cover your target group, which you will need later on anyway, and you will find a representative sample of people that can answer your questions. Another important part here is that you need to make sure your data is clean, and to do that we need to build a screener. Clean might sound like an odd word for data, but there are actually things and people that you want to include and exclude. You want to exclude your friends, your family, and your competition, but at the same time you want to collect data about the demographics of your group: their age, their gender, maybe their locale.
I think there's a bunch of things there where you want to make sure you include certain people and exclude other people. 7. Building a screener: So if you go back to the survey worksheet we created for you and go to the screener questions, you'll see that we have two sections: areas for users you want to include and users you want to exclude. You'll also see some categories similar to what we had in the survey questions, but these are a little bit more precise, because it's really about picking the right people to talk to. If you look under "include users", for example, you want to see who you want to include. Then you have to go into the specific criteria for why you're picking these people. After that, you have to turn that into a question, and then you have to give them the options to pick from, which is something we've already discussed with the other survey questions. So, for example, say you want information only from existing users that have been using your product for a while. You might ask whether they have been using the product for more than a month: if they have, they're in the include bucket; if they haven't, they're in the exclude bucket. You may also have one for age, where you don't want to have minors — say, maybe somebody under 17; again, that's where the question covers both the exclude and the include area. You might also want to figure out something about the different devices they're using: if this is for, say, a mobile app, you might want to ask what kind of platform they're working on. If social networking is important for you as well, then you could also have that as a potential question. These types of questions are really about making sure that you're picking the people that are your true users.
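The include/exclude bucketing just described can be sketched as a simple filter. The criteria and field names below are illustrative assumptions, not the class's actual screener:

```python
# A sketch of screening logic: respondents are sorted into an "include"
# or an "exclude" bucket based on their screener answers.
# All criteria and field names here are hypothetical examples.

def screen(respondent):
    """Return True if the respondent should be included in the survey."""
    # exclude friends of the team and competitors — they bias the data
    if respondent.get("knows_team") or respondent.get("works_for_competitor"):
        return False
    # exclude minors
    if respondent.get("age", 0) < 18:
        return False
    # include only people who have used the product for over a month
    return respondent.get("months_using_product", 0) >= 1

respondents = [
    {"age": 25, "months_using_product": 6,
     "knows_team": False, "works_for_competitor": False},
    {"age": 30, "months_using_product": 12,
     "knows_team": True, "works_for_competitor": False},   # friend of the team
    {"age": 16, "months_using_product": 3,
     "knows_team": False, "works_for_competitor": False},  # minor
]

included = [r for r in respondents if screen(r)]
print(f"included {len(included)} of {len(respondents)} respondents")  # -> included 1 of 3
```

In a real survey tool these rules become screener questions rather than code, but the merged, de-duplicated criteria end up playing exactly this filtering role.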
So, for example, if you look at the exclude area here, you want to make sure you don't pick people that know you or people at your company directly, because they're always going to be biased. Then you want to make sure you exclude the competition: you don't want somebody who is a direct competitor giving false information in your surveys, so you want to make sure those are excluded as well. Once you put these all together, you'll see that there are a lot of duplicate questions; you merge those together, and then you end up with your final screening questions. So let's dive into the worksheet a little bit more and go to the Likert scales page. On there you'll see some examples. There are more than this, but these are the ones that you'll probably use the most. We have here scales for importance, frequency, satisfaction, agreement, quality, and likelihood. There's a whole bunch of other ones, but these you can use as answers for potential questions you have for your participants and your users. Take importance: the highest level would be a five, and that's "very important", while a one is "unimportant". Or if you take quality, five would be "excellent" and one would be "very poor". I'm not going to read all of these to you, but they are some really good examples of what you can use directly for your questions and the answers users will have to give. If you use a Likert scale, it's important to remember not to use too many points, because if you have too many, the user is very likely to just take the one in the middle and not give you a straight answer. You also have to remember that users are less likely to choose the highest or the lowest points; they will only pick those if they're really, really disappointed or super happy. Additionally, you should offer an option like "no answer" or "haven't used it before", so they can somehow get around answering the question. 8.
Advanced concepts: One of the important things to keep in mind is the order of the questions. In general, you really want to start from broad questions and go to more specific ones, because the broad questions will help open the user up, and then you can get into the more specific details later. One of the things you also really want to avoid is biasing the user by putting them in a certain frame of mind, or framing them so that they start answering questions in the way they think they should. For example, if you put demographic questions like age and gender too early in the survey, there's research showing that this will actually convince people that they have to answer the following questions as that particular age or that particular gender; it forces them down a path that isn't true to them. So you always want to ask those questions towards the end. Another thing to keep in mind is the order of the answers that are presented. That can actually bias people to pick one from the top, or something toward the middle; sometimes the bottom ones get lost in the whole mix. A lot of these survey apps will let you randomize those; if that's an option, you should definitely turn it on so that you get the best answers possible. One thing you should think about, though it's probably one of the boring sides of doing surveys, is privacy. It varies according to the country. If you take Germany, for example, people are very strict here and not willing to give that much information. Quite the opposite is the USA: you can ask everything there, and users are very willing to give you answers. Since we're in Germany, we have to get used to more privacy, but as soon as you're back in America one day, you can go crazy. Another thing we almost forgot about is language handling.
If you have users with different languages and you want to do a survey in different languages, which can also happen across different countries, it can get very complicated, especially when you have to evaluate the results. My suggestion would be to use the Likert scale, because you still have the range of answers without needing to translate everything. If you want to stick to open answers, it's going to be kind of difficult to evaluate everything. Another advanced option you could potentially use is logic in your questions. So if you ask a question about how likely a user is to do something or recommend something, and they give a high score, that's great, but then you really don't need to ask them what should be improved, because they're already happy. If they give, say, a two or a one, you may want to ask a follow-up question like "What could be improved to make this better for you?" or something along those lines. But you wouldn't ask that of everybody, so you might want to add some extra logic to your survey to ask that question only for low answers. An example of that could be: how likely are you to recommend the product? If the user is kind of unhappy and wouldn't recommend it, you can ask why. So you'd have the first question, where you ask whether they'd recommend it, and the second one: why wouldn't you? 9. Building and sending the survey: Now that we've talked about how you set up a survey and what kind of users — or people, it's not always users — you're going to talk to, we're focusing on what tools you can use to do a survey and how you can do it. If you're pretty fresh at doing surveys, you'll probably go for in-app or email surveys, because you don't need to talk to people in person. If you talk to people in person, as in interviews or something similar, you have to get used to it, because otherwise it's very, very easy to bias answers.
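Before moving on to tools: the randomized answer order and the follow-up-only-on-low-scores logic described in the advanced-concepts section can be sketched like this. The scale labels, the follow-up wording, and the score threshold are illustrative, not prescribed by the class:

```python
import random

# Two ideas from the advanced-concepts section, sketched in code:
# 1) shuffle answer options so the first position isn't picked by default;
# 2) skip logic that only asks "why?" when the recommendation score is low.
# The labels and the threshold of 2 are illustrative assumptions.

LIKERT_LIKELIHOOD = {1: "Very unlikely", 2: "Unlikely", 3: "Neutral",
                     4: "Likely", 5: "Very likely"}

def presented_options(options, rng=random):
    """Return the answer options in a fresh random order for each respondent."""
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

def follow_up(recommend_score):
    """Skip logic: only unhappy users get asked what should improve."""
    if recommend_score <= 2:
        return "What could be improved to make this better for you?"
    return None  # happy users skip the follow-up entirely

assert follow_up(5) is None          # high score: no follow-up needed
assert follow_up(1) is not None      # low score: ask why
print(presented_options(list(LIKERT_LIKELIHOOD.values())))
```

Survey tools like Google Forms and Typeform expose both behaviors as settings (option shuffling and answer-based branching), so you normally configure rather than code this — the sketch just shows the logic.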
So if you're new, you'd better stick to some nice tools. The Internet offers a lot of them; the ones we prefer the most would be Google Forms or Typeform. They are both available for free; with Typeform, I think, you can upgrade to a premium account. The differences are basically that you can always edit them, but Typeform gives you more options, so you can really lay it out according to your branding. It's super modern and it offers nice interactions and animations; it's a really nice way of talking to users in a modern way. Google Forms is nice too: it offers you some predefined options, which Typeform does as well. So in the end, it's really all about your preferences. The next part is how to build the survey. You're going to have a lot of questions here, but you don't want to put them all on one big long sheet, and both Google Forms and Typeform allow you to break them up. With Google Forms, for example, you'll probably want to break them into multiple sheets, maybe three or four questions on each one, best if they're clustered somehow. If you're going to ask demographic questions at the end, make that all one section. This way it's not too big or too scary for the user, as it would be if they saw one huge, long page of questions. And once they get started, you also have something called the sunk cost effect that kicks in: they've already put some work into it, so they're more likely to continue going, as opposed to facing one big, long sheet that's all right in front of them. Now, after you have this all built and it's ready to go, you need to figure out when to send it, and that really depends on your product. Does your product have a set schedule? Is it based around certain events, where you want to gather information from people before the event happens, while it happens, and then after? Or is it something that is used every day or on a random occasion?
These are things you really have to keep in mind when you're going to send it out, because you really want to cover the entire space. So make sure that when you send it, and how long you leave it running, covers that time period, so you cover everything. Then, once you have that, how many results should you be looking for is probably the next question on your mind, and to be honest, we don't have a solid answer there. Obviously two or three is not very useful, while a few hundred can be really helpful. Once you start getting into the multiple thousands, which for some products is very easy to do, you start running into an issue where you're really not gaining much, and you're just forcing users to do your survey for no additional gain. Once you reach a certain point, things just start averaging out, and it's just a lot more work for you to evaluate all of it. So it's really about testing the water, trying it, going from there, and really seeing what number works best for you. If you don't manage to get a lot of users answering your survey, you just have to decide how much weight you give to it. It's always nice to get answers, but if you don't have enough people answering, I wouldn't put too much weight on it. 10. Analysing the results: Now that we've talked about who we're going to talk to, which was the screener, and what you're going to ask, which was the survey itself, you probably have some results. As we mentioned before, it takes a certain amount of time to get results, but as soon as you have them, it's time to analyze and evaluate them. At this point you should think about your quantitative data, which in the best case you already collected before. This could be analytics, Facebook data — anything that has numbers in it and is related to your product.
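We said earlier that a few hundred answers can be really helpful while multiple thousands add little. The class doesn't give a formula for this, but a common statistics rule of thumb — a survey's margin of error is roughly 1/√n — illustrates the same diminishing return:

```python
import math

# Illustration of diminishing returns on sample size: a rough rule of
# thumb puts a survey's margin of error near 1/sqrt(n). Going from a
# handful of answers to a few hundred helps enormously; going from
# thousands to tens of thousands barely moves the needle.
# (This is a textbook approximation, not a figure from the class.)

for n in (3, 30, 300, 3000, 30000):
    margin = 1 / math.sqrt(n)
    print(f"{n:>6} responses -> roughly ±{margin:.1%} margin of error")
```

With 300 responses the rough margin is already under ±6%, which is plenty for spotting core needs; a hundred times more responses only shaves off a few more points.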
When you start analyzing your results, in the best case you start with the default reports you get from the tools we mentioned before, and then dig deeper into some specific use cases. One of the interesting things, once you have the extra data from this survey plus the quantitative data from analytics or any other source you have — maybe Facebook, for example — is to see if they match up. So if users say they really like a certain section, do the numbers match that? If they do, great. If they don't, that might be something to ask about in interviews, like in the second class that we're going to put together for everybody. Another thing to look at is that you really want to focus on the high points. When I say high points, I don't mean happy; I mean positive or negative — the things that really stand out. Do you start seeing any core needs emerging when you're looking at those? Are there any roadblocks that you may have to address or look at? And, one of the really cool things: are there any surprises? Surprises are great, because you get to learn more about your product. You can also run into the other direction, where you have no surprises. That's pretty rare, but it also shows that your insights into your product were pretty good, and now you have data to back that up and show that you really did know what you were talking about. When you start looking at the data itself, you may see lots of numbers, and to get trends, you really need to graph those out, because if you give most people a block of numbers, they're just going to stare at it and smile and nod. So you really want to graph it out so they can see what's working and what isn't working. Another thing to keep in mind is the precision of the numbers that you're going to be talking about.
Showing something that's 32.35% from a survey like this is kind of useless. You're better off saying "one in three people", because it really is about 33%. Getting too precise with these numbers is just wrong. For you math nerds out there, and anybody in the sciences: just remember your significant figures and keep those in mind as you work forward. Don't take the precise numbers too seriously; just look at them as a rough number to start from. 11. Reviewing and presenting the results: Going back to the question of whether the results fit with your current ideas: this is a really important point to stress. If all your answers fit exactly what you were thinking, that's kind of a red flag to take a look at. You may want to go back and look at your questions. Were your questions too leading? Did they actually lead the users to give the answers that you thought you wanted to hear? Or did you not give them enough options to let them actually show what they truly thought? If so, that's a problem: your survey is tainted, and the results are not going to be very useful. However, if you went back and looked at those and that wasn't the case, then you have data, like I said before, that backs up what you were thinking, and that's really great. A possibility that could happen too — it has happened to me before when I analyzed results — is that I had an idea in mind that I wanted to see in the results. So sometimes it's better to loop in a person who wasn't involved from the beginning, because they won't be biased and won't already have a clear idea about how it should turn out in the end. Exactly. And one thing that is really important as well, which we haven't actually discussed yet, is that we often naturally put users into buckets. When you're looking at mindsets, try to remove that: don't think about people as being power users or average users or occasional users.
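Coming back to precision for a second: the "one in three rather than 32.35%" advice can be sketched as a tiny helper that snaps a survey percentage to the nearest simple fraction before you present it. This function is a hypothetical illustration, not part of the worksheet:

```python
from fractions import Fraction

# A sketch of the "one in three, not 32.35%" advice: round a survey
# percentage to the nearest simple fraction so the presented number
# doesn't suggest false precision. This helper is a hypothetical utility.

def rough_share(percentage, max_denominator=10):
    """Turn 32.35 into 'about 1 in 3' rather than a false-precision figure."""
    frac = Fraction(percentage / 100).limit_denominator(max_denominator)
    return f"about {frac.numerator} in {frac.denominator}"

print(rough_share(32.35))  # -> about 1 in 3
print(rough_share(48.9))   # -> about 1 in 2
```

Capping the denominator at something small like 10 is exactly the significant-figures mindset: the survey data simply doesn't support more digits than that.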
You're really trying to find the core needs for everybody. So if you get a bucket in your head, or someone brings up a bucket when you're talking about it, shake that off; that's a full step beyond what we're working on now. You're really trying to find the base to work from at the beginning, and once you have that, you can branch off in all those different directions later on. I'm really glad you mentioned that, because that's probably the biggest difference between the personas we mentioned at the beginning and user mindsets, which we'd love to see more often. Exactly. 12. Final thoughts: Okay, we're already at the final thoughts now. What are our learnings? Maybe we should start by quickly reviewing what we talked about. The first thing was the requirements, and maybe some sort of briefing you need to get a survey started. The second thing we talked about was who you are talking to, which was the screener. And the third one was the survey itself: what kinds of questions you can ask — which, in the best case, should not be leading — what options you have for answers, which tools you're going to use, and how you analyze it all. From there, you take the information and start sharing it throughout your organization, and you also use it as a base for later interviewing your real users in person and getting a really full picture. I think the really cool thing here is that you start getting data that you can use right away to start improving your product. And this is where we would love to see your results and what you come up with. Obviously, for privacy reasons, you don't want to share absolutely everything, but it would be really great to see what you have built and what you've done. And if you have any questions for us, definitely let us know. Definitely. And look at the worksheets we have there.
They will be the basis for everything you need. Exactly. This may sound daunting, but really, the worksheet will walk you through everything; it's just about sitting down, building these questions out, and sending them out to users. Most of the tools that we talked about will build a lot of this stuff by default. It's not going to be a super deep analysis, but at the same time, it's going to help you start and get a really good base for your product and your users. We both think that anybody can do this; it's just about sitting down, spending the time, and diving into it, and I'm sure you'll be able to do it without a problem. I have to admit, I love doing surveys, because they're always a great starting point for making a better product, and I'd love to see your results. Absolutely.