Questionnaire Design Boot Camp: An Introduction to Survey Writing | Jessica Broome | Skillshare


Jessica Broome, Opinion researcher and research evangelist

Lessons in This Class

10 Lessons (58m)
    • 1. Introduction (1:41)
    • 2. The Survey Lifecycle (5:12)
    • 3. Survey Mode (5:10)
    • 4. The Response Process Model (6:13)
    • 5. Asking About Facts (5:45)
    • 6. Asking About Attitudes (5:06)
    • 7. Asking About Sensitive Questions (5:59)
    • 8. Response Options (6:50)
    • 9. Scales (5:45)
    • 10. Survey Layout (10:21)

9 Students

-- Projects

About This Class

This class is for anyone who wants to know what other people think, feel, or do! Maybe you want to understand what design features your customers want from your product, why people are buying from your competition, or what your employees are saying about you.

Basic survey research is a common tool, but most people never learn the key principles. Survey design is an art and a science: better surveys get better data, which helps you make smarter decisions. This class will prepare students to write effective questionnaires.

Students should be prepared with an idea for research they want to conduct using a survey. The class will begin with understanding your research aims and writing questions to meet these aims. Specific modules offered include: asking about sensitive topics (drinking! drugs! mean thoughts!); picking the right scale (numbers, letters, labels?); and avoiding pitfalls (more common than most people realize). Lectures will include basic theory but will focus primarily on "real world" examples.

Meet Your Teacher


Jessica Broome

Opinion researcher and research evangelist


I've written over 20,000 survey questions in the past 20 years on topics ranging from shoe shopping to life in prison. Some of my clients include Pfizer, Ogilvy and Mather, the Michigan Department of Education, and Macy's.

I have a PhD in Survey Methodology from the University of Michigan, where I teach Questionnaire Design.

See my bio and some work examples at southpawinsights.com.




Transcripts

1. Introduction: I'm Jessica Broome, and I'm the instructor for this class. I am a survey methodologist and the founder of a research company called Southpaw Insights. I've written thousands of surveys about everything from dog food to health insurance to employee satisfaction. This course is for anyone who wants to know what other people think, feel, or do. Maybe you want to understand what design features your customers want from your product. Maybe you want to understand why people are buying from your competition, or what your employees are saying about you. Survey design is an art and a science. Better surveys get better data, and better data can help you make smarter decisions. This class will prepare you to write effective questionnaires. Coming into the class, you should be prepared with an idea for a survey you want to write. Your class project will involve going through the entire survey design process: determining your objectives, outlining your content, crafting questions, choosing the right scales and response options, and finally, putting it all together. After watching the nine videos and completing the tasks that make up the project, you should have a cohesive survey and be ready to go out and collect your data. I hope you're getting excited to learn about questionnaire design. We'll start with the first lesson, which covers the entire survey lifecycle. I'll see you there. 2. The Survey Lifecycle: One of the most common problems I see when people try to write questionnaires is their up-front approach: a lot of people will just sit down and start banging out questions. One of my commandments of questionnaire design is to start with the end in mind. Any survey you do probably has a research question or two, and an objective or several objectives: something you want to get out of it. I strongly encourage people to think about this and actually write it down before you write a single survey question. What is your big research question?
Why are you doing this survey? A helpful diagram of the survey lifecycle comes from the book Survey Methodology by Bob Groves. The key point here is that there's a lot you need to do and think about before you run out to start collecting data. Groves has as the first step: define research objectives. I think it's worth spending a bit of time here before you start trying to write survey questions. One reference I've used a lot around designing research objectives is a fairly old book, Designing and Conducting Health Surveys. The second chapter is called "Matching the Survey Design to Survey Objectives." The author, Lu Ann Aday, gives what I think is a very helpful formula for writing clear and concise research questions. Your question is probably going to include a what, who, where, when, and why. She gives some really good examples from different surveys. So: what are the knowledge, attitudes, or behaviors related to AIDS of adults 18 and over in the Chicago area in 1987 (I told you it was an old book), prior to an AIDS health education program intervention? The research question has a clear what, who, where, and when; the why is the "prior to an intervention" bit. The researcher wants to measure attitudes, knowledge, and behavior before exposing people to the education program. I could take this research question and design a survey that would map back to each of these components. I know who I'm talking to, and when, and why. With the results from the survey, I know I'll be able to talk about Chicago adults' knowledge, behaviors, and attitudes about AIDS. Aday outlines three types of objectives, and she recommends nailing these down before you start to write a survey. Usually a survey objective is to describe something, to compare different groups, or to analyze something. In the Chicago AIDS example, she lists two objectives.
One, to describe the knowledge, attitudes, and behaviors related to AIDS; and two, to compare the knowledge, attitudes, and behaviors related to AIDS by racial and ethnic groups. This example does not have an analytic objective. If yours does, I would encourage you to think thoroughly about the analyses you want to do and the statistical tests you want to run before you finalize your questionnaire. Another thing to do in this up-front stage is to come up with a hypothesis. I sometimes work on studies that are more exploratory and don't necessarily have a hypothesis that we're testing. But in that case, I try to write, or have clients write, their ideal findings. How do you want to be able to talk about these findings? In this case, Aday does not give a hypothesis, but if this was my client, I would say: here are three things we might be able to say based on this study. I'm making these numbers up: 80% of adults in Chicago can name two ways AIDS is transmitted. Black adults in Chicago have greater knowledge about AIDS transmission than white adults do. Hispanic adults in Chicago are more likely to engage in risky behaviors than Asian adults are. These findings all map back to my original survey objectives. I really like to give some conscious thought to this and get everyone on the same page about the research question, the objectives, and the sample findings before you start to write or administer a survey. This way, nobody can come back and look at the data and say, "Oh, I really wanted to know about men versus women and what they think about hepatitis." You should have said that from the beginning. The research question, the objectives, and the sample findings might change while you're working on a study; they're not set in stone. But I would say you should always be able to come back to these and align your questionnaire with these pieces. That's it for now; the next video will be about different modes of data collection. 3. Survey Mode: This is again Groves's diagram of the survey lifecycle. Defining research objectives can be a process with a lot of back and forth between different stakeholders, but it's important to get it nailed down and not start designing a survey with only a vague idea of why you're doing it and what you want to get out of it. Groves lays out the next steps as choosing a collection method and choosing a sampling frame. This is not a class on sampling, but the term is important. A sampling frame is the list you're selecting your sample, or your survey respondents, from. So if I want to survey students from the University of Michigan, I'm probably not going to do a census where I talk to everyone, but I'll select a sample from a list of all students. That list is my sampling frame. The information on your sampling frame is going to help you make the decision about your data collection method, or what we also call survey mode. Is it a phone survey, a mail survey, a face-to-face interview? If my list of University of Michigan students has all their names and e-mail addresses, I'm not going to do a phone survey. But if the list only has e-mails for a third of the students and snail mail, or postal, addresses for everyone, this will inform how I'm going to be able to collect my data. I want to talk a little about different considerations around survey modes and give an overview of some different modes. First, a survey is either self-administered or interviewer-administered. A respondent might sit in front of their computer by themselves or fill in a paper survey; that's self-administered. Interviewer-administered surveys can be done on the phone, by video, or in person. There are pros and cons to each of these. Hiring interviewers to call people and knock on doors is more expensive than just blasting out an email with a link to a web survey. But it's easy to delete a web survey.
It's a lot harder to close the door in someone's face when they're asking you to do an interview. People can look at paper surveys that come in the mail and get confused about what questions to answer, or what something means, and there's no interviewer there to help. Using interviewers can help with response rates and building rapport with respondents. But we also worry about things like acquiescence bias, where people want to be agreeable or say what they think the interviewer wants to hear, and social desirability bias, where people don't want to share things that they think will paint them in a bad light. Think about the technology involved in your survey. Is there some computerized component or not? If I send an interviewer out with an iPad to administer a survey and enter the responses, it can be quicker and neater, and I can export the data to Excel in two seconds. But what if she loses the iPad? What if something crashes? Do we lose all our data? Versus if I send an interviewer out with a stack of paper surveys to fill in: there's no technology, but it's a different process. The interviewer comes back with a hundred pages of surveys. Who's doing that analysis? How are we doing that data entry? Who's doing that? What format are they using? Are you paying them? So you want to think about the channel of presentation of your questions, what you're asking, and who your respondents are. Are you surveying a low-literacy group? Maybe you don't want to give them a paper-and-pencil survey to answer on their own. Are you asking about a very sensitive topic? People might not want to tell an interviewer all their secrets. Do you want people to look at pictures or videos or some kind of visual as part of your survey? In that case, a phone survey is probably not your best bet. When you think about choosing a mode, I want to stress that often this decision is made for you.
I might really want to do a face-to-face survey, but if I don't have any money to hire interviewers, I might just have to do a web survey. I might have really sensitive questions and think that a web survey will get the most accurate responses, but if all I have on my sampling frame are phone numbers, it might have to be a phone survey. You want to make the best and most well-informed choice for your particular survey, knowing that some trade-offs will be inevitable. 4. The Response Process Model: In general, I try to keep this class more practical and less theoretical. But this is something I think is really important and useful when you're writing questions. The response process model was developed by Roger Tourangeau and discussed in this book, which is a good text if you're interested in how people respond to survey questions. The response process model assumes that there are four stages people go through in answering questions, plus an additional pre-stage I call stage 0. Before they answer your question, respondents have to encode information. They have to know that something happened before they can report on it. So a survey might ask me how many milligrams of calcium I consumed at breakfast this morning. This is not a question I can answer. It's not that I don't understand the question: I know what milligrams are, and I know that calcium is a nutrient. I've just never paid attention to how much of it I'm consuming. If the information doesn't go into my brain, it can't come out in the form of a response to a survey question. I have not encoded this information. Back to the response process model: if encoding is stage 0, the first stage is comprehension. I have to understand all the words and what the question is asking me. You want all of your respondents to understand the terms you use in the same way, so they're all answering the same question.
One example of a question that can be problematic at the comprehension stage is: how safe do you feel in the area where you live? This is a pretty straightforward question with simple words, but the term "area" can mean different things to different people. Does this mean on my block, in my neighborhood, in the city I live in, in the part of the country I live in? You can see how different people could understand "area" very differently, and therefore different people might be answering different questions. After I've comprehended your question, I move on to the next stage: retrieval. In this stage, I'm recalling all the information and figuring out how to answer the question. Assuming I've understood that the question about how safe I feel is asking about my neighborhood, I'm coming up with my response: I feel pretty safe in the area where I live. If the question is asking me to recall something, this is the stage where I'll come up with an initial answer, maybe by counting, estimating, or just forming an impression. Then comes the judgment stage, where I'm assessing my initial answer. Is it accurate? Is it complete? Am I forgetting anything? I just remembered that someone was mugged on my block last month; maybe I don't actually feel safe. I feel pretty unsafe. Also at this stage: do I want to give this answer? I might edit my response here, because maybe I don't want to admit that I feel anything other than 100 percent safe. The final stage of the response process model is reporting: answering the question. I have to map my response back to the response options that are offered. I've decided that I feel pretty unsafe, but the options I have to choose from are very safe, somewhat safe, a little safe, and not safe at all. So this is the stage where I figure out which response to give. I've presented this as a very clear, linear, five-step model. But in reality, this is not how it works. Most respondents jump around in the model; they may skip a stage or double back.
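As a side note, the stage-by-stage review described here can be sketched as a simple checklist. This is my own illustration, not material from the class: the stage names follow the lecture, but the review prompts and the `review_prompts` helper are paraphrases I made up.

```python
# Illustrative sketch (not from the class): the lecture's "stage 0" of
# encoding plus Tourangeau's four response stages, turned into a review
# checklist for draft survey questions. Prompts are my own paraphrase.

STAGES = [
    ("Stage 0, Encoding", "Could respondents ever have taken this information in?"),
    ("Stage 1, Comprehension", "Will every respondent read each term the same way?"),
    ("Stage 2, Retrieval", "Is the recall task realistic for the reference period?"),
    ("Stage 3, Judgment", "Might respondents revise or censor their first answer?"),
    ("Stage 4, Reporting", "Do the response options fit the answers people will form?"),
]

def review_prompts(question: str) -> list[str]:
    """Return one review prompt per stage for a draft question."""
    return [f"[{question}] {name}: {prompt}" for name, prompt in STAGES]

# Walk one draft question through all five stages:
for line in review_prompts("How safe do you feel in the area where you live?"):
    print(line)
```

The point of the sketch is only that every question should be checked against every stage, which mirrors the "where could the model break down?" exercise the lecture recommends.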
Maybe I've formulated my answer, but then I realize I misunderstood the question, and I have to go back and reread the question and reconstruct my response. I want to give a few things to think about when you're writing questions through the lens of the response process model. Think about who's answering your question. Oftentimes I see medical professionals write surveys that other medical professionals could answer, but that use terminology a patient or a layperson might not understand. Just because you understand your questions doesn't mean your respondents will. Think about how realistic the task is. If you're asking me how many times I've purchased a soda in the past week, I could probably tell you. But if you asked me about the past year, I'd have no idea. If a question is hard for you to answer, it's going to be hard for your respondents as well. Think about how your respondents view the concepts you're asking about. One of my first jobs was interviewing people in an addiction treatment center, and I had to ask them to estimate the amount of different drugs they had consumed in the past 30 days. Most people would say $20 worth of weed, $50 worth of heroin, hundreds of dollars of crack. But the response options that were written into my survey, that I had to conform to, were in grams. Since I didn't know the street value of all these drugs, and these folks often didn't know a precise weight, we ended up at an impasse, and they weren't able to get to that last stage of reporting their answers. For all the questions that you write, think about these stages and ask yourself: where could the response process model break down? How can we avoid problems at each stage? That's it for this lecture. See you soon. 5. Asking About Facts: This lecture is about factual questions. So what are facts? Something theoretically verifiable by external observation or records.
So if I followed you around with a video camera every day of your life, I would be able to capture most of the factual information. In surveys, we ask a lot of factual questions. We almost always want to capture at least basic demographic questions like gender, age, race, ethnicity, and often education level or income. And a lot of surveys focus on behaviors: asking people, have you done A, B, or C? When's the last time you did X, Y, Z? How many times did you do X in time period Y? One key issue when you're asking about facts is comprehension of the question. Remember, this is the first stage in the response process model. People have to understand what you're asking to be able to provide an answer, and you want all your respondents to understand your questions in the same way. A big concern here is conceptual variability. I might want to know how often people exercise in a typical week. But if all my respondents don't have the same definition of exercise, we'll get responses that are all over the place. One person might be thinking exercise is walking their dog, and they do that twice a day, every day, while someone else is only defining exercise as going to the gym for an hour. Survey respondents will answer whatever you ask them. It's up to you as a questionnaire designer to write questions that clearly convey the concepts you're asking about. If we keep thinking about the response process model as a framework for how people answer questions, the next stage is retrieval. With factual questions, we're often asking people to recall some piece of information. So when's the last time you bought toothpaste? How much vitamin D did you consume yesterday? Some of the challenges people face when it comes to recall are, first, not having encoded the information. I just didn't pay attention, or I didn't know that something was happening. It's not that I don't remember drinking a glass of milk; it's just that I didn't encode the amount of vitamin D that was in it.
When you're asking me to recall something, I might only retrieve generic information about similar events. So I know that I buy toothpaste about once a month, but I couldn't tell you how many times I bought Crest and how many times I bought Colgate over the past year. Or I might only retrieve partial information. I think I buy toothpaste about once a month, but maybe there was a month where I didn't. But if I'm answering a survey, I might estimate 12 times in the past year, not remembering that month that I skipped buying toothpaste. I might just forget: as time goes by, memory fades, and other similar events interfere. I might remember an event that was highly emotional, something unique or out of the ordinary, maybe something I noticed in the moment or recounted after the fact, but not something mundane like buying toothpaste, especially as time passes. I want to talk about some questionnaire design considerations to improve recall. First, how far back are you asking people to remember? If you asked me how many cars I bought in the past five years, I'll probably be able to remember that. But if you're asking me how many times I bought toothpaste, five years is probably too long of a reference period. The reference period is the time frame you're asking people to think about when they're answering your question. A good rule of thumb when you're deciding what reference period to use is: the shorter, the better, as short as you can get away with. If your research question really hinges on asking people about their behaviors over the past many months or years, you might not have an option. But I would urge you to think about the reference period you're using and ask yourself: if you're asking about the past year, could you ask about the past month instead? One good question is: could you answer this question? If I can't, it's pretty likely my respondents won't be able to either. Another way to improve recall is to make your questions as simple as possible.
Here's an example of a question with a long definition that makes the ask clear but, I think, places a lot of cognitive burden on respondents. If I wanted to ask a question like this, I might decompose it into four separate questions: How many times in the past 12 months did you go to the doctor's office? How many times in the past 12 months did you go to an emergency room? And so on. At first you might think, why do I want to ask four questions when I could ask one? But unless you're paying by the question, I think four short, simple questions are preferable to one long, convoluted question. In addition to making the question as simple as possible, you want to make the task as simple as possible while still meeting your research objectives. 6. Asking About Attitudes: Our last video was about factual questions; this one is about attitudinal questions. Surveys ask a lot of factual questions, but also a lot of more subjective questions. When we talk about attitude questions, we're including opinions (what I like or don't like, how much I support or oppose an idea or position), feelings (how safe do you feel in the area where you live? how happy were you with the format of this class?), and predictions (how likely is it that you will buy a car in the next year?). In questionnaire design, we talk about context effects that can impact how people respond to questions. Context effects can come from a lot of things. Number one is the subject of your questionnaire. If you tell me in the introduction to your survey that I'm going to be answering some questions about illegal and risky behaviors,
I might go into it a little more wary than if you tell me we're going to talk about health and lifestyle in general. The more neutral you can be in the introduction, the better. When you think about your questionnaire, context effects can come from any pictures or visuals that you include in your instrument. As online surveys have become a mainstay over the past 15 or 20 years, it's really easy to change the colors or the format or upload a photo to keep the questionnaire interesting, so you should really be conscious about anything you put in your questionnaire and what purpose it serves. One study in the early days of web surveys asked people, how many times have you gone shopping in the last month? One group of respondents saw a picture of a clothing store, and the other saw a picture of a supermarket. The supermarket group reported a much higher number of shopping trips, and the inference is that they assumed the question about shopping included trips to the grocery store, whereas the group that saw a photo of a clothing store had context suggesting the question referred to a less frequent type of shopping. Respondents will make these inferences if they have an opportunity to, so it's up to the questionnaire designer to think about how every element of your questionnaire could possibly cause a context effect. Question wording can have an effect on how people answer questions. One classic study, I think from the 1950s, asked people how they felt about public speeches in favor of communism. They asked one group if they thought the US government should forbid speeches, and asked the other group if they thought the government should not allow speeches. The difference between "forbid" and "not allow" might seem minor, but there was a huge difference, nearly 20 percentage points, in agreement with these statements. In general, the recommendation with question wording is to use words that are as neutral as possible and to include both sides of the coin in your questions.
So instead of asking people "do you support," use the question to present both sides: do you support or not support? Question order is one of the biggest contributors to context effects, and it's pretty much impossible to get rid of unless your survey only has one question. Something has to come first and something has to come last. You want to give some thought to how different orders of your questions might impact responses. So if you ask me early in a questionnaire to rate how healthy my lifestyle is on a scale from one to 10, I might say, oh, eight, I'm pretty healthy. But if you first ask me, how often do you eat fast food? How's your stress level? How much do you sleep? And then ask me to rate how healthy my lifestyle is, I'm probably dropping down to about a five. I wouldn't say either of these orders is right or wrong. It's just a question of how you want people to assess their health. Is it a top-of-mind, minimal-context, clean rating? Or do you want to prime your respondents to think about fast food and sleep and stress, and then rate their health? Depending on your research question and objectives, I think either one could be appropriate, but it's something you just want to think about, asking yourself: how can you balance meeting your research objectives with question order effects, which are inevitable? So to sum up: context effects always exist. Sometimes they're completely out of your control, and sometimes you can make a decision about how to minimize their impact or how to balance them with considerations about the purpose of your questionnaire. 7. Asking About Sensitive Questions: This video is about sensitive questions. I've mentioned some of my commandments of questionnaire design, and one of the key ones is to put yourself in your respondents' shoes. One thing to be especially aware of is sensitive questions.
Some topics that can be sensitive in surveys include socially undesirable things like illegal activities, number of sexual partners, and drug and alcohol use, but also questions about socially desirable behaviors like voting and exercising. Less now, but in the past, church attendance was considered a very socially desirable behavior. And one question that's consistently shown to be sensitive, at least in the US, is income: whether people make a lot of money or a little, they often do not want to report how much they make. We really can't avoid putting some questions in surveys that at least some people are going to find sensitive. So it's important to understand a few design considerations that can help to lessen the impact of sensitive questions on responses. If questions are sensitive, not everyone is going to be willing to answer. What are we as questionnaire designers supposed to do? I'm going to talk about a few solutions, or suggestions, on how to increase accurate reporting for sensitive questions. First, think about the mode you're using for your questionnaire. We talked earlier about some of the pros and cons of interviewer-administered versus self-administered questionnaires. If the general topic of a questionnaire is sensitive, as opposed to just a few sensitive questions in an otherwise innocuous instrument, one general recommendation is to use a self-administered mode. This feels like common sense: it's easier to admit all my socially undesirable behaviors in an online survey than to a person who's asking me questions. A self-administered mode, though, is not always possible. We talked earlier about mode selection and how sometimes an interviewer-administered mode is the only option, or the most feasible option. So what can we do in the survey design phase to improve reporting on sensitive topics? I'm going to run through a few suggestions.
If you're asking about illegal activities, immigration status, or anything that could have real repercussions for a respondent, one general recommendation is to explain up front, as clearly as possible, how you as a researcher work to keep responses confidential. I usually remind people: we never connect your responses to your name; we always look at data in the aggregate; this is how we protect your information. Sometimes when you're approaching a sensitive question in the interview, you can include wording like, "Just a reminder: everything you tell me is confidential." Beyond assurances of confidentiality, when it comes to writing questions, you want to think about how you can increase your respondents' comfort level with giving a truthful answer. How can we do this? The first suggestion is to minimize cognitive burden on respondents. If you can get away with it vis-a-vis your research question and objectives, ask things as simply as possible. Second, use unfiltered questions. This means using question wording that assumes a behavior. So if you ask me, "Do you ever eat Doritos for dinner?", I want to say no. But if you assume the behavior, assume I have done this, and give me the impression that it's acceptable to do this, I might be more likely to admit it. So: how many times in the past week did you eat Doritos for dinner? I can say zero, but I can also admit that maybe it was two, without quite as much sensitivity as the "do you ever" question. So don't use a filter question to ask if someone has done something; assume the socially undesirable behavior. Finally, and in a similar vein, another recommendation is to use loaded wording in your question. So before you ask about my Doritos dinner, give a little lead-in like: most people say that sometimes they aren't able, or don't want, to eat a full meal. What about you? Have you ever, even once, eaten Doritos for dinner? The words I'm using convey that it's totally fine and normal to not eat a full meal.
I don't use words that have some inherent value judgment, like a "good" dinner. There's no judgment here. Most people say they do it. It's okay to eat Doritos for dinner. We are using loaded wording, in this case the "everybody does it" approach. What all of these have in common is that you're trying to create a safe space for your respondents to tell you the truth. So whether you're using loaded wording, or assuming an undesirable behavior, or asking a simple question rather than a really complicated one, you're telling your respondents: you're okay, your behavior is okay, there's no judgment here. We just want to understand your truth. I think conveying that message, directly or indirectly, can help you get accurate responses to sensitive questions. 8. Response Options: Up to this point in the course, I have talked a lot about survey questions and not a lot about responses. Scales and response options is a huge topic. I'm going to try to do it justice by first talking about three general types of questions: open-ended, closed-ended, and scaled. Open-ended questions are what we use most often in daily life. You ask a question, and someone answers in their own words, like, "How do you feel about this weather?" or "What's been your biggest concern during the pandemic?" I tend not to include more than one or two open-ended questions in a survey. They do have benefits. They are good ways to get at the why behind the numbers. You can get really rich and creative responses. People might bring up topics you hadn't even thought to include in your list of responses. And with an open question, you avoid bias caused by response options or scales. Open-ended questions can also serve as a good check on data quality, especially in self-administered surveys. If someone just writes gibberish for a response to an open-ended question, that's a red flag that the rest of their answers might be garbage as well. But open-ended questions are harder on respondents. They pose more of a cognitive burden.
They're harder to analyze. You have to decide how you want to code or categorize them, and it can be time-consuming and costly to code responses to open-ended questions. Another thing about open-ended questions is that you get what you get. Respondents might give you really rich, unique answers and shed light on things you hadn't thought about. Or they might answer something completely irrelevant that doesn't serve much purpose. If you're going to use open-ended questions in your survey, I'd urge you to be extremely careful with your wording. This is a place where you really need everyone to understand the question as you intended it. In closed-ended questions with lists that are nominal, meaning there's no inherent order to them, a very real concern is a response order effect. If everyone gets the same list of five response options in the same order, we tend to see different types of bias depending on the mode of the survey. In a visual mode like a paper-and-pencil or online survey, we worry about primacy bias, where a disproportionate number of people select the early options in the list. This is usually attributed to people not bothering to read the entire list before answering. In an oral mode like a phone survey, we see the opposite: recency bias. This is where people are more likely to pick later options in the list. This is usually attributed to people just not remembering all the options and parroting the last one they heard. In order to minimize the effects of order bias, I almost always recommend randomizing response lists, so each respondent sees the list in a different order. If you're doing a web or computerized survey, this is generally pretty easy; it's usually just clicking a button. In paper surveys, it's harder to implement, but not impossible, and really quite strongly recommended. I want to finish this session with a few cautions and considerations around scales and response options.
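If your survey tool doesn't randomize for you, the idea is simple to sketch. Here's a minimal illustration in Python; it isn't from the class, and the function name and option lists are my own. It shuffles a nominal list per respondent while keeping "other" and opt-out choices pinned at the bottom, as discussed above.

```python
import random

def randomized_options(options, pinned=("Other (please specify)", "Prefer not to answer")):
    """Shuffle nominal response options for each respondent, keeping
    'other'-style and opt-out choices fixed at the bottom of the list."""
    to_shuffle = [o for o in options if o not in pinned]
    fixed = [o for o in options if o in pinned]
    random.shuffle(to_shuffle)  # a fresh order for each respondent
    return to_shuffle + fixed

flavors = ["Chocolate", "Vanilla", "Strawberry", "Pistachio",
           "Other (please specify)", "Prefer not to answer"]
print(randomized_options(flavors))
```

The design choice here mirrors the advice in the lesson: only the substantive, unordered options are randomized, so every respondent still sees the escape hatches in the same familiar place.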
The first one, as I mentioned, is to randomize lists of response options where appropriate. This applies when your list of response options is nominal. So: what's your favorite flavor of ice cream? Chocolate, vanilla, strawberry, pistachio. These do not have an inherent order, so the recommendation here would be to randomize. If you want to include an "other (please specify)" option, so I can write in that my favorite ice cream is rocky road, or a "don't know" or "prefer not to answer" option, it's standard to not include those in your randomization, but to put those options at the bottom of your list. So: randomization on nominal lists. If you have an ordinal question, a list where the items are in a logical order, randomization is not recommended. If I'm asking you how old your youngest child is, it doesn't make sense for someone to see the response options out of order. In this case, though, an important consideration is to make sure everyone can place themselves in one and only one category. I've seen a lot of questionnaires with response options that overlap. So, how old is your youngest child? Under 2, 2 to 4, 4 to 6, 7 or older. What do I do if I have a four-year-old? You want to be careful that every respondent has only one place to put themselves in a single-response question. By the same token, you want to make sure that everyone has a place to put themselves. If my ranges for that question about the age of my youngest child are 1 to 3, 4 to 6, or 7 or older, and I have a two-month-old, how do I answer? Make sure your response options are all-inclusive: everyone has one and only one place to put themselves. My next cautionary tale around scales and response options is what we call mismatches. So my question is, "During the past 30 days, has your use of alcohol caused you any emotional problems?" When I read this question, I think my response is going to be either yes it has or no it hasn't. But then I see response options on a scale: not at all, somewhat, considerably, extremely.
This is a mismatch. It's not only grammatically incorrect, but it increases the cognitive burden on the respondent. It's not that I can't figure out which response option to choose, but it's a bit jarring if I was expecting to say yes or no and now I have to choose between these four options. You want to make sure that your response options match your questions. So to sum up here: when you're thinking about your response options, remember our commandment to put yourself in your respondents' shoes. Don't increase their cognitive burden. Make it as easy and as comfortable as possible for them to answer your questions. 9. Scales: So now we've talked about open-ended questions and closed-ended list questions. I've saved the best, or at least the biggest, topic for last, and that is scales. Scales are a pretty hot topic in the survey design world. There are a lot of things to think about when you're choosing a scale. How many scale points are you going to use? I've seen three-point scales, four points, five points, seven points, ten points, even 100 points. Is your scale bipolar or unipolar? This is the difference between a scale that goes from very satisfied to very dissatisfied, so two opposite poles, versus a scale that goes from very satisfied to not at all satisfied, which focuses more on the presence or absence of something, in this case, satisfaction. Do your scale points have numeric labels or verbal labels? Are you asking me to rate my health from one to ten, or is it excellent, very good, good, fair, poor? Are you including a midpoint on your scale? If I'm neutral or I don't really have an opinion, on a five- or seven-point scale I can answer in the middle; on a one-to-ten scale, there's no way for me to be neutral. Are you including a "don't know" option? What about a "prefer not to answer" option? You obviously don't want someone to say they prefer not to answer every single question.
But you might want to give someone a way to skip a question without quitting the whole survey. So there are a lot of questions around scales and, unfortunately, not many clear answers. I'm sorry to disappoint, but I'm not going to give you a formula or recipe for when to use different types of scales. There is some recent research that points to seven-point bipolar scales and five-point unipolar scales as being optimal on factors like reliability, validity, and differentiation. This is not to say that you should always use seven-point bipolar scales and five-point unipolar scales. You may need a lot more differentiation and want people to use a 100-point scale. Or you might really only care whether people like something, dislike it, or are neutral. I am really cautious with two-point scales like yes/no or like/dislike; they really back your respondents into a corner and don't give them much room to express their opinions. 10. Survey Layout: Up until now, we've talked mostly about individual questions.
And now I want to tell you about putting it all together to assemble a questionnaire that is a coherent, cohesive instrument, rather than just a list of questions on a page or on a screen. The first component of any questionnaire is the introduction. This will vary a bit depending on your mode, who your respondents are, and any administrative requirements. In general, you want an introduction that tells respondents who you are or who is sponsoring the study ("I'm a grad student in the School of Public Health," or "This study is being sponsored by the University of Michigan"), what the purpose of the study is, and some general idea of the topic ("We're doing this study to understand how people feel about current events"). You don't want people to know too much upfront, just to give them a general sense. How is this going to happen? What is going to be asked of them? How long is it going to take? Is someone asking them questions, or do they have to write something down or use an iPad? What's the benefit to them? Do they get an incentive? Will they get to see the results? Are they just doing this out of the goodness of their heart to help other people like them? Finally, include any information that might make people more comfortable with the experience: assurances of confidentiality, anything that makes the study seem credible and as pleasant, or at least as painless, as possible. This is the bare minimum. If you're conducting a study through a university or another organization that requires IRB approval, there will probably be a lot more that you have to cover before people can answer a single question. Once you get through the introduction, you're going to move on to your actual questionnaire. The first section is usually a screener. You want to make sure you're talking to the right people, even if that means just confirming that they are who you think they are.
If there are questions that would disqualify someone from participating in your survey, you're better off asking those upfront, not having someone waste their time going through an entire survey and then finding out that they are 25 years old and you only wanted to talk to people who are 18 to 24. By the same token, in the screener, don't tip your hand too much. You don't want people to know everything about the questionnaire or how they can qualify to be in the study. So for example, I did a study where we wanted to talk to people who shopped at Whole Foods at least once a month. I could have made the first question, "Do you shop at Whole Foods at least once a month?" But that reveals a lot about my questionnaire and who I want to talk to. Instead, we'd ask something like, "Which of the following grocery stores do you shop at at least once a month? Trader Joe's, Whole Foods, Aldi, Kroger." It's just as easy for respondents to answer, but they're not going to go into the survey knowing that it has something to do with Whole Foods. After the screener, we move into the main body of the questionnaire. A few general points here. You want your questions to be organized in logical modules; you want to put questions together that go together. Don't bounce around asking about different topics or different reference periods. Think about what's going to make sense to your respondents and what's going to be easiest for them. In terms of question ordering, we talked earlier about context effects and question order effects, and you do want to make a conscious decision about how to manage these. In general, I would say you want to start with easy questions first. You want your respondents to sort of warm up and get used to the process of answering questions before you ask them about more complex or sensitive topics. Sensitive questions I usually like to put later in a questionnaire.
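The grocery-store screener can be expressed as a small qualification check. This is a hypothetical sketch, not from the class; the store list and function name are mine. The respondent sees a neutral multi-select question, and qualification is decided quietly behind the scenes.

```python
# Hypothetical screener check for the grocery-store example:
# respondents pick every store they shop at monthly, and only
# those whose selections include Whole Foods qualify.
STORES = ["Trader Joe's", "Whole Foods", "Aldi", "Kroger"]

def qualifies(selected_stores):
    """Return True if the respondent shops at Whole Foods at least
    once a month, without the question having revealed that criterion."""
    return "Whole Foods" in selected_stores

print(qualifies(["Aldi", "Whole Foods"]))  # True
print(qualifies(["Kroger"]))               # False
```

Because the respondent only ever sees the full list of stores, nothing in the instrument tips your hand about which answer qualifies them.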
Demographics: we almost always collect some basic information. In my world, where we do a lot of consumer surveys, we want to balance by age, gender, race, and region. Sometimes you might have study-specific demographics that you're interested in: education, or occupation, or household composition. When you're writing demographic questions, give some thought to your analysis plan. If you want to be able to talk about 18-to-24-year-olds, make sure you capture that in your age question. You'll be pretty annoyed if you get to the analysis phase and realize you only know how many people are 18 to 29. That's an example from my life. And finally, at the end of your questionnaire, thank your respondents. They just did you a favor. Even just one line, "Thank you for completing the survey," is enough. If you have extra information you want to give them, or instructions you need to give to respondents, that's a good place to do it. But at a minimum, you want to thank them for participating. When it comes to navigating a questionnaire, there are some things to consider that are specific to particular modes and some things that are quite general. No matter the mode, you should use transitions in between modules or if you're switching from one subject to another. So if you've started with a series of questions about what the respondent did in the past week, and then you're switching gears to talk about their plans for the future, say that. Just a simple sentence like, "Now we're going to ask you a few questions about the future," or "Switching topics: I have some questions about your plans for next week." You want to keep the flow of the questionnaire and not make it a jarring experience for respondents. Especially in a paper-and-pencil survey, you want to be conscious not only of the actual length of your questionnaire, but how it looks to respondents. It's pretty overwhelming to be staring down the barrel of a 10-page, single-spaced paper-and-pencil survey.
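The demographics point above, capturing age finely enough for your analysis plan, often comes down to collecting exact age in the survey and deriving the buckets afterwards. A minimal sketch; the bucket boundaries here are just an example, not from the class.

```python
# Hypothetical recode: store exact age at collection time, then derive
# whatever analysis buckets the plan calls for (e.g. 18-24) later.
def age_bucket(age):
    """Map an exact age to an illustrative set of analysis buckets."""
    if age < 18:
        return "under 18"
    if age <= 24:
        return "18-24"
    if age <= 34:
        return "25-34"
    return "35+"

print(age_bucket(22))  # 18-24
print(age_bucket(29))  # 25-34
```

The point is the direction of the recode: you can always collapse exact ages into coarser categories later, but you can never split an 18-to-29 category into 18-24 and 25-29 after the fact.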
There are some easy things you can do that will make your questionnaires more efficient. One thing is using grid questions. If you're asking about satisfaction with five different things, instead of writing five questions, it's pretty standard to use a grid. Do be aware, though, of making your grids too long. Some research has shown that the longer the grid, the more likely respondents are to just straightline, or give the same answer to every question. If you can keep your grids to four to six rows, that's a good rule of thumb. Especially if you're doing an online survey, be conscious that a lot of people are probably responding on their phones, so you don't want a grid with 10 columns that people have to scroll across to see. Also on the topic of length, and conveying to your respondents how long a questionnaire is: there is some debate for online surveys about using progress indicators, where people can see a bar at the bottom of their screen telling them how far along they are. The best recommendation I've heard about progress indicators is that you should use them if they give your respondent good news, but not if they don't. So if I just answered ten questions and I see the progress indicator go from 2% done to 3% done, your questionnaire is probably too long and your respondents are going to get frustrated. But if you can set your survey up so people feel like they're making good progress, a progress indicator can help to keep them motivated. Some more issues about navigating questionnaires: number your questions and number the response options. Different disciplines and industries have different norms around letters or numbers, and I don't think there's a real right way to do it. But a questionnaire without numbers is hard if someone is reviewing it or if you're working as part of a team, and it can be unsettling for respondents not to know how far along they are. Whatever numbering you use, be consistent.
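The straightlining problem from the grid discussion is easy to check for once you have data. A minimal sketch, assuming each respondent's grid answers are stored as a simple list; the function name and data layout are my own, not from the class.

```python
def is_straightliner(grid_answers):
    """Flag a respondent who gave the identical answer to every row
    of a grid question, a common sign of low-effort responding."""
    return len(set(grid_answers)) == 1 and len(grid_answers) > 1

print(is_straightliner([4, 4, 4, 4, 4]))  # True: same answer on all five rows
print(is_straightliner([4, 2, 5, 3, 4]))  # False: answers vary across rows
```

In practice you'd treat a flag like this as a data-quality signal to review, alongside checks like gibberish in open-ended responses, rather than grounds for automatic removal.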
Consistency in formatting is important, again to reduce cognitive burden on your respondents. You don't want them to waste mental energy trying to figure out why the response options are in a vertical list here but a horizontal list on the next question. This extends to consistency in your scales. It may not be realistic to make every scale a seven-point scale or every scale a 10-point scale, but as much as you can, keep your scales consistent, and keep them running in the same direction. So if "extremely dissatisfied" is on the left or at the top of one scale, you don't want the next question to have "extremely unlikely" on the right or at the bottom. Your questionnaire may also have skip patterns. Say you told me in question 2 that you're likely to buy a car next year; then I want you to answer question 3 about what kind of car you want. But if you said you're not likely to buy a car, I want you to skip to question 4. In online surveys, these are easy to implement automatically. You just have to be very clear in your mind, when you're programming, about which question people should branch to in which circumstances. In a paper-and-pencil questionnaire, where respondents are figuring it out themselves, you have to be even more clear. Some people recommend using arrows, a different color, or capital letters to tell people: if you answered "not likely" in question 2, skip to question 4. Again, make it as easy and clear as possible for your respondents. Especially in self-administered questionnaires, good visual design and easy-to-navigate instruments are really important. A user-friendly questionnaire is easier to respond to. You'll have less unit nonresponse, where people refuse to participate or drop out; less item nonresponse, where people skip questions; and, in general, higher data quality.
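The car-purchase skip pattern can be sketched as a simple branching map, which is essentially what a web survey tool does when you program skip logic. This is a hypothetical illustration, not from the class; the question IDs and answer labels are mine.

```python
# Hypothetical skip logic for the car-purchase example:
# Q2 branches to Q3 ("what kind of car?") only for likely buyers;
# everyone else skips straight to Q4.
def next_question(current, answer):
    """Return the ID of the next question to show, given the
    current question and the respondent's answer."""
    if current == "Q2":
        return "Q3" if answer == "likely" else "Q4"
    if current == "Q3":
        return "Q4"  # after the follow-up, everyone rejoins at Q4
    return None  # end of this part of the questionnaire

print(next_question("Q2", "likely"))      # Q3
print(next_question("Q2", "not likely"))  # Q4
```

Writing the branching out explicitly like this, even on paper before you program anything, is a good way to be clear in your own mind about who should see which question in which circumstances.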