
Surveys 101 – Part 2

Updated: Jun 20, 2023

Are you looking to conduct a survey with Tailor Research but don’t know where to start? This is part 2 in a series of posts on how to get started. Read the following list of suggestions on how to design a good survey. If you have any further questions, don’t hesitate to write to us at info@tailorresearch.com.

Start simple: Make the first question(s) easy, qualitative questions that ease the respondent into the survey and put that person in a good, collaborative mindset for the rest of it.

Front-load the most important questions, because those are the questions respondents will spend the most time considering. Respondents get survey fatigue: the longer the survey, the more fatigued they become and the less thought they give each question.

When to Use Quantitative (Structured) Surveys versus Qualitative (Unstructured) Surveys. Use quantitative surveys when you want your data to be broadly applicable to a large number of people, and use qualitative surveys when you want unique insights from each respondent. Use a combination of the two when trying to accomplish both.

Multiple-choice questions are easier for respondents to answer, and they are also easier to analyze and tabulate than open-ended questions. Quantitative surveys rely mostly on multiple-choice question types (radio buttons, drop-down lists, selection boxes, etc.), whereas qualitative surveys tend to have more open-ended questions. Open-ended questions often offer more for research in terms of color and understanding of a subject (the who’s and why’s), but because each answer is often unique, it can be very difficult to compare answers. Quantitative surveys follow standard methods for randomly selecting a large number of participants (from a target group) and use statistical analysis to ensure that the results are statistically significant and representative of the whole population whenever possible.

A survey creator should be careful to analyze and act on the data. The type of questions asked has everything to do with the kind of analysis that can be made: multiple answers, single answers, open or closed sets, optional and required questions, ratings, rankings, and free-form answer fields are some of the choices open to you when deciding what kinds of answers to accept. Keep in mind what is ultimately needed and then work backward. For example, if a survey is intended to gather opinions and attitudes about a product or service, then open-ended questions can provide more detail and understanding of those opinions, whereas binary or multiple-choice questions can force a respondent into a choice that may lead the evaluator to a faulty conclusion. Multiple-choice answers also leave little room for responses that are not offered to the respondent but would be more representative. In addition, unstructured, open-ended questions let respondents express general attitudes and opinions that help the researcher interpret their responses to structured questions, so a hybrid of question types is often the best approach to survey design. That said, multiple-choice surveys are easier to administer, analyze, and get responses to (making them less expensive than surveys with many open-ended questions).

Keep the length of the survey to between 10 and 15 minutes, or 15 to 20 questions. Often survey questions are embedded among other questions in an attempt to make the survey seem shorter. The reality is that each sub-question is its own question (these should be separated; see below), and all of those questions individually should not go beyond 20. The risk of a survey with too many questions is that its quality will deteriorate to the point where the answers cannot be relied upon, which of course defeats the purpose of the survey. Extra questions reduce your response rate, decrease validity, and make all your results less statistically significant. It is better to break up a survey into multiple surveys than to use one long survey. Realistically estimate the time needed to complete the survey: the more open-ended questions and complex ranking exercises you ask people to do, the poorer the quality of the answers you will get.

Bold important parts of the survey question.

Randomize answer choices. Survey creators often keep the same order of response choices and/or follow a logical order. There is a benefit to this in that respondents are less confused and will answer what is intended. However, it can also introduce bias into the results. Making answer choices random, when it makes sense, will help avoid these biases.
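For illustration, here is a minimal Python sketch of per-respondent answer randomization; the question text and choices are hypothetical placeholders, and most survey platforms can do this for you.

```python
import random

question = "Which factor most influenced your purchase?"
choices = ["Price", "Quality", "Brand reputation", "Recommendation from a friend"]

def randomized_choices(choices, seed=None):
    """Return a shuffled copy of the answer choices for one respondent."""
    rng = random.Random(seed)   # separate RNG so each respondent's order can be reproduced
    shuffled = choices[:]       # copy, so the canonical order is preserved for analysis
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees the choices in a different, random order.
for respondent_id in range(3):
    print(respondent_id, randomized_choices(choices, seed=respondent_id))
```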

Avoid leading questions. For example, “How bad was the service you received from XYZ company?” By using the words “how bad,” you are leading the respondent to think that the service was bad. Instead, you can write a neutral question such as, “How was the service you received from XYZ company?”

Avoid loaded questions. These are questions that pre-qualify or presume that the respondent is qualified to answer. For example, “What kind of movies do you enjoy watching?” This question pre-qualifies the respondent as someone who watches movies, and not everyone actually does.

Avoid double-barreled questions. This is one of the most common mistakes made by survey creators. A double-barreled question is one that wraps two questions in one. From experience, clients often embed questions within questions within questions. It can confuse the respondent, skew the results, and generally reduce the quality of a survey. For example, “What do you like or dislike about the XYZ feature?” A much better way to approach this is to break it into two questions: “What do you like about the XYZ feature?” and “What do you dislike about the XYZ feature?”

Avoid absolutes. It is easy to use absolutes such as “never,” “always,” “all,” “every,” “ever,” etc. Sometimes absolutes are exactly the intent of the question, but more often they are not. For example, “Do you always drive to work? (Yes/No)”; a respondent who usually drives but occasionally takes the bus has no accurate way to answer. Instead, word the question, “What percentage of your work commute is by driving yourself to work?”

Avoid jargon; be clear and concise, and use examples to add clarity. Often, a survey creator knows what they are trying to say and knows the industry jargon, but the respondent does not. This leads to questions not being fully understood. It is important to avoid jargon and to write a survey question as if anyone could answer it. This is not always possible, but often it is. That means avoiding technical terms and acronyms, and using definitions and examples to add clarity. For example, instead of “How did Y/Y 3rd quarter sales go?”, the question could be written like this: “What percentage would you estimate that sales changed in July, August, and September of 2019 versus the same period a year earlier?”

Another example is instead of writing, “Do you buy legumes at the store?” you could write, “Do you buy legumes (beans) at the store, e.g. black beans, kidney beans, garbanzo beans, etc.?”. In general, use simple words and phrases rather than complex ones. A common problem with surveys is writers wanting to use the precise technical phrase or to be sophisticated at the expense of the respondents’ ability to clearly understand the question and answer choice.

Avoid Order or Position Bias. This is the respondents’ tendency to check an alternative merely because it occupies a certain position in a list. Alternatives that appear at the beginning and, to a lesser degree, at the end of a list tend to be selected most often. When questions relate to numeric values (quantities or prices), there is a tendency to select the central value on the list. Order bias can be controlled by preparing several forms of the questionnaire with changes in the order of the alternatives from form to form. Unless the alternatives represent ordered categories, each alternative should appear once in each of the extreme positions, once in the middle, and once somewhere in between.
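As an illustration of preparing multiple forms, the following Python sketch cyclically rotates a hypothetical set of unordered alternatives so that, across the forms, each alternative appears once in each list position. This is one simple balancing scheme, not the only one.

```python
def rotated_forms(choices):
    """Build one questionnaire form per cyclic rotation of the answer choices.

    With n alternatives you get n forms, and each alternative occupies each
    list position exactly once across the full set of forms.
    """
    n = len(choices)
    return [choices[i:] + choices[:i] for i in range(n)]

# Hypothetical, unordered alternatives; ordered categories (e.g., price ranges) should keep their natural order.
alternatives = ["Brand A", "Brand B", "Brand C", "Brand D"]
for form_number, form in enumerate(rotated_forms(alternatives), start=1):
    print(f"Form {form_number}: {form}")
```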

Odd or Even Number of Answer Choices: Many survey questions use Likert scales. A Likert scale is a rating scale that measures how people feel about something. Ideally, the number of choices is between four and eight; too many choices can confuse the respondent and lower the likelihood of a quality response. Whether to use an odd or an even number of answer choices in a rating scale is not always easy to determine. It often comes down to whether you want to include a neutral middle choice, which can introduce its own bias, or force the respondent to lean one way or the other.

If a midpoint is not present, the interpretation of the scale categories is left up to the respondents; that is, respondents tend to “judge” the answers as split (e.g., positively and negatively) around where the middle would be. With an odd number of categories, the middle scale position is generally designated as neutral or impartial. The midpoint may mean different things to different respondents, but the interpretation is not as entirely up to the respondents as in the other case. Other considerations: some respondents may find a forced choice unpleasant, so odd or even scales may affect completion rates. Also, some respondents may choose the midpoint as a way to avoid making a decision. On the other hand, some may be truly neutral, and if they are forced to take a side, they may switch from one mild directional response to another if the same questions are administered again. This can have consequences for the reliability and validity of the results.

An “other (please specify)” category should be included where appropriate. Multiple choice questions should include choices that cover the full range of possible alternatives. The alternatives should be mutually exclusive and collectively exhaustive.

Sampling Techniques

Developing the list to pull the sample from: Developing a list to sample from can be very challenging. Rarely is it possible to get a comprehensive list of the population, so building the sampling list becomes a big factor to consider. Take, for example, a survey of U.S.-based restaurant owners. Is there a comprehensive list of restaurant owners for the United States? Probably not, particularly one with contact information, demographic information, etc. Realistically, a survey designer must generate a list that represents the general population without bias. A survey designer must be careful to understand the population, which includes researching items such as: the percentage of restaurants in rural, suburban, and urban areas; the sizes of these restaurants in terms of employees, revenue, etc.; and the types of food the restaurants serve as a percentage of the overall population. All of these items make developing a list difficult. The more thorough and nuanced your research, the better the list will represent the overall population. Relying on ready-made approaches to developing a list often comes with hidden difficulties.
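One way to sanity-check a hand-built list is to compare its composition against known population benchmarks (for example, the rural/suburban/urban split). The sketch below is illustrative only; the field names and percentages are hypothetical placeholders.

```python
# Hypothetical benchmark shares for U.S. restaurants by area type (placeholder values).
population_benchmarks = {"rural": 0.20, "suburban": 0.35, "urban": 0.45}

# Each entry in the candidate list records the area type of one restaurant owner.
candidate_list = [
    {"name": "Restaurant 1", "area": "urban"},
    {"name": "Restaurant 2", "area": "urban"},
    {"name": "Restaurant 3", "area": "suburban"},
    {"name": "Restaurant 4", "area": "rural"},
]

def composition(records, key="area"):
    """Return the share of records falling into each category of `key`."""
    counts = {}
    for record in records:
        counts[record[key]] = counts.get(record[key], 0) + 1
    total = len(records)
    return {category: count / total for category, count in counts.items()}

list_shares = composition(candidate_list)
for category, benchmark in population_benchmarks.items():
    observed = list_shares.get(category, 0.0)
    print(f"{category}: list {observed:.0%} vs population {benchmark:.0%}")
```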

Simple Random Sample (SRS): This is the simplest type of random sampling: develop a way to sample from the population so that each person has an equal chance of being chosen, for example by numbering a list and applying a random number generator to pick each potential respondent. As noted earlier, this can still introduce bias if the list itself is skewed relative to the actual population, despite the random nature of the sampling.
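A minimal sketch of that approach, assuming a hypothetical numbered list of contacts:

```python
import random

# Hypothetical sampling frame: a numbered list of potential respondents.
frame = [f"respondent_{i}" for i in range(1, 1001)]

sample_size = 50
srs = random.sample(frame, k=sample_size)  # each entry has an equal chance of selection
print(srs[:5])
```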

Systematic Sample: A systematic approach to extracting a sample of respondents from a list. The equiprobability method uses a random starting point for selecting the first respondent, then a fixed sampling interval thereafter, wrapping around to the beginning when the end of the list is reached.
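A sketch of the equiprobability method described above, again with a hypothetical frame: pick a random starting point, then take every k-th entry, wrapping around at the end of the list.

```python
import random

def systematic_sample(frame, sample_size):
    """Random start, fixed interval thereafter, wrapping around the end of the list."""
    interval = len(frame) // sample_size          # fixed sampling interval k
    start = random.randrange(len(frame))          # random starting respondent
    return [frame[(start + i * interval) % len(frame)] for i in range(sample_size)]

frame = [f"respondent_{i}" for i in range(1, 1001)]
print(systematic_sample(frame, sample_size=10))
```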

Stratified Random Sample: A method of sampling that involves dividing the population into smaller groups known as strata. The strata are formed based on shared attributes or characteristics. A stratified sample can provide greater precision than a simple random sample and often requires a smaller sample for the same results. It also guards against the unrepresentative samples discussed earlier.
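A minimal sketch of proportional stratified sampling, where the frame is divided into strata and each stratum is sampled in proportion to its size; the strata and field names here are hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(frame, stratum_key, sample_size):
    """Sample from each stratum in proportion to its share of the frame."""
    strata = defaultdict(list)
    for record in frame:
        strata[record[stratum_key]].append(record)

    sample = []
    for members in strata.values():
        n = round(sample_size * len(members) / len(frame))  # proportional allocation
        sample.extend(random.sample(members, k=min(n, len(members))))
    return sample

# Hypothetical frame of restaurant owners stratified by area type.
frame = [{"id": i, "area": random.choice(["rural", "suburban", "urban"])} for i in range(1000)]
print(len(stratified_sample(frame, "area", sample_size=100)))
```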

Cluster Sample: A method of sampling where the target population is divided into clusters by some method, and then some of these clusters are selected randomly for sampling. Depending on the number of steps followed to create the desired sample, cluster sampling uses single-stage, two-stage, or multistage techniques. The advantage of this method is cost reduction, at the expense of accuracy (sampling error is increased).
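A sketch of single-stage cluster sampling under a hypothetical frame where each respondent belongs to one city (the cluster): a few clusters are chosen at random and everyone in them is surveyed.

```python
import random
from collections import defaultdict

def cluster_sample(frame, cluster_key, clusters_to_pick):
    """Single-stage cluster sampling: randomly select whole clusters, keep all their members."""
    clusters = defaultdict(list)
    for record in frame:
        clusters[record[cluster_key]].append(record)

    chosen = random.sample(list(clusters), k=clusters_to_pick)
    return [record for name in chosen for record in clusters[name]]

# Hypothetical frame where each respondent belongs to one city (the cluster).
frame = [{"id": i, "city": f"city_{i % 20}"} for i in range(1000)]
print(len(cluster_sample(frame, "city", clusters_to_pick=3)))
```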

Multistage Sample: A method of taking samples in stages, using smaller and smaller samples at each stage (this still divides the population into clusters). The advantage is simplicity and flexibility, at the expense of accuracy.
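A sketch of a two-stage version of the same idea: first select clusters at random, then draw a smaller random sample within each selected cluster rather than taking every member.

```python
import random
from collections import defaultdict

def multistage_sample(frame, cluster_key, clusters_to_pick, per_cluster):
    """Stage 1: randomly select clusters. Stage 2: random sample within each selected cluster."""
    clusters = defaultdict(list)
    for record in frame:
        clusters[record[cluster_key]].append(record)

    chosen = random.sample(list(clusters), k=clusters_to_pick)
    sample = []
    for name in chosen:
        members = clusters[name]
        sample.extend(random.sample(members, k=min(per_cluster, len(members))))
    return sample

frame = [{"id": i, "city": f"city_{i % 20}"} for i in range(1000)]
print(len(multistage_sample(frame, "city", clusters_to_pick=3, per_cluster=10)))
```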

Determining Sample Size. How many people are in your population? Only you know who you are targeting as the population of eligible respondents, but take the time to carefully define that population, then research to determine its size. A great resource we often use for U.S.-based surveys is https://www.census.gov, but the places to look are limitless. Tailor Research can help with your internet research needs.

Determine the confidence level and margin of error you want in your survey. There’s a saying: people who say you can’t buy happiness just don’t know where to shop. While that is a joke, it holds true when determining the sample size, confidence level, and margin of error for a survey. There are many “confidence-level calculators” available online. However, the real challenge in setting these parameters is the trade-offs: the higher the confidence level, the higher the cost; the lower the margin of error, the higher the cost. For a fuller explanation of these terms and their use, please see this link.
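The calculators mentioned above typically implement the standard sample-size formula for a proportion, with a finite population correction. The sketch below assumes the common defaults (p = 0.5 and z = 1.96 for 95% confidence).

```python
import math

def required_sample_size(population_size, confidence_z=1.96, margin_of_error=0.05, p=0.5):
    """Sample size for estimating a proportion, with finite population correction.

    confidence_z: z-score for the desired confidence level (1.96 ~ 95%, 2.576 ~ 99%)
    margin_of_error: half-width of the confidence interval (e.g., 0.05 = +/-5%)
    p: assumed proportion; 0.5 is the most conservative choice
    """
    n_infinite = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n_adjusted = n_infinite / (1 + (n_infinite - 1) / population_size)
    return math.ceil(n_adjusted)

# Trade-offs: tighter margins and higher confidence levels require larger (costlier) samples.
print(required_sample_size(50_000, margin_of_error=0.05))  # about 382
print(required_sample_size(50_000, margin_of_error=0.03))  # about 1,045
```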

The higher the variance in the population, the larger the sample size needed. Look out for skewed populations (rather than normally distributed ones) and populations with fat tails (kurtosis). As a general rule, if skewness is less than -1 or greater than 1, the distribution is highly skewed; if skewness is between -1 and -0.5 or between 0.5 and 1, the distribution is moderately skewed; and if skewness is between -0.5 and 0.5, the distribution is approximately symmetric. What is kurtosis? It is a measure of the “tailedness” of a probability distribution.

Similar to skewness, kurtosis is a descriptor of the shape of a probability distribution, and, just like skewness, there are different ways of quantifying it and corresponding ways of estimating it from a sample of a population. Whereas skewness differentiates extreme values in one tail versus the other, kurtosis measures extreme values in both tails. Excess kurtosis means the distribution of outcomes has many instances of outlier results, causing “fat tails” on the bell-shaped distribution curve. A simple way to think about skewness versus kurtosis: a skewed distribution has a long tail on one side and much less on the other, whereas a high-kurtosis distribution is less centrally concentrated, meaning more of the population sits in the tails than in a normal distribution.

a. Skewness calculation: Skewness = [ (1/n) Σ (Xi - X̄)³ ] / s³, where Xi = ith random variable (observation), X̄ = sample mean, s = sample standard deviation, and n = sample size.

b. Kurtosis calculation: Kurtosis = [ (1/n) Σ (Xi - X̄)⁴ ] / s⁴ (the normal distribution has kurtosis 3; excess kurtosis subtracts 3).

c. For a story of the dangers of Kurtosis, read this.
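For reference, these calculations can be checked in Python. The sketch below uses scipy.stats, whose skew and kurtosis functions implement the moment-based definitions above (scipy reports excess kurtosis, i.e., kurtosis minus 3, by default); the data here are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.75, size=10_000)  # a right-skewed, fat-tailed example

print("skewness:", skew(sample))             # > 1 indicates a highly skewed distribution
print("excess kurtosis:", kurtosis(sample))  # > 0 indicates fatter tails than the normal
```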
