It’s spring, the time of year when making changes is top of mind for everyone, not just people who change things for a living. In the best of cases, change is informed by knowledge, and surveys are an often-used method of collecting the data that, when integrated, becomes just that – actionable change.
Writing a survey that actually collects the information needed is difficult. Very difficult. The happiest survey writers are the newest; they plow cheerfully forward with questions they’ve dreamed up, undeterred by practical experiences indicating just how dangerous it is to act on bad data.
Fortunately, for everyone writing survey questions and acting on the resulting analyses, Sage Publications has published important texts on the surveying process. Floyd Fowler’s books on survey questions themselves, for example, are classics.
This is the first in a series of articles on stellar surveying tactics.
It’s important to keep in mind that people answering survey questions are supplying information within the context of the questions and responses provided to them. Sure, there are often open-ended questions, for which survey takers can write mini-essays, but just as often there are limited resources available to make sense of the material in these short statements. To survey successfully, it is important to get to know survey takers and the situations you’re investigating upfront, so that questions posed and response options provided are meaningful and the information collected is on target to answer questions that need to be resolved for effective design.
Have a finely tuned, straightforward objective for your entire survey in mind, before you write a single question. Then, make sure what’s in your survey aligns with it.
This seems easy and obvious, but many a survey meanders along for question after question, collecting information that doesn’t actually have any bearing on solutions that can be crafted. Edit. Having survey takers spend even 15 minutes answering questions is often not desirable; respondent boredom/exhaustion sets in well before that time.
Other important considerations, which Fowler elaborates clearly, abound. Be clear in every survey question, defining potentially confusing terms again each time they appear in the survey text, because your respondents aren’t as interested in remembering the definitions you provide as you think they are. Terms that you might think are basic need to be carefully spelled out. For example, how do you define “workspace” and “workplace”? Probably not the same way as the person who’ll answer your questions. Similarly, questions should use the same terms that survey takers use in their everyday conversations about whatever issue is being studied.
Misunderstandings about the meanings of terms cause trouble, as do leading questions. Questions can prejudice responses either overtly or covertly. A covert form of prejudice is at work when, for example, all of the response options for a question about a particular situation are subtly positive. When survey questions are carefully read, this sort of “direction” is often clear. Asking someone how satisfied they are with something implies that they are indeed satisfied to some extent. People need to be asked, instead, how satisfied or dissatisfied they are.
It’s pointless to ask people questions that they’re not willing or able to answer accurately. Willingness to answer accurately seems pretty clear. Some difficulties with accurate response are more subtle; for example, it’s hard to answer questions about normal situations when daily events don’t follow regular patterns.
People are also regularly asked survey questions about potential environments, and, to be frank, few of us are prophets. Hypotheticals, in general, are problematic.
In introductions to surveys and their subsections, it’s important not to prejudice responses. Telling people to answer honestly, etc., can alienate respondents because it implies that they might otherwise have been dishonest, for example. Also, survey writers sometimes spell out how data will be used in a way that indicates management’s position on a potential environmental change, which can color responses.
We’ll also talk in a future article about how to word responses that indicate things such as degree of agreement or disagreement with a statement. Right now, ask yourself, generally, whether you want people to be able to be neutral on an issue; neutrality requires a mid-point on a scale, and therefore an odd number of options along a continuum (for example, a five-point scale from “Strongly disagree” to “Strongly agree” with “Neither agree nor disagree” in the middle), whereas omitting the mid-point leads to an even number of options.
When you’ve written a survey, it’s important to test your questions ahead of time. After you think you’re done writing the survey, sit down with someone who’s not been involved in developing it, and ask them to answer its questions. Ask them to narrate their experiences with your survey out loud as they go along (e.g., providing comments such as “Here, you’re asking about X and since Y and Z, I’m answering A”). Do not interrupt the survey taker, and accept their interpretations of your questions; then revise the survey as necessary. Repeat this step as often as necessary until your questions are working as intended.
Also, analyze the data from the first few other respondents immediately and carefully. It may point out other problems with the survey that haven’t come up in the walkthroughs. For example, if your analyses indicate that lots of older men are using onsite lactation rooms, take note. Is something wrong with the question you’ve written? Conversely, the question might be well done, and it might actually be that people are seeking refuge in these more private spots. Whatever the explanation, you need to check out the situation before you have thousands of potentially screwed up responses on your hands.
It’s important to understand the types of analyses that are possible with the data obtained from particular types of questions. If you plan to use information collected for analyses more complicated than counts of the frequency with which an option was chosen, for example, it’s time to get out a statistics book to make sure the data you’ll gather is up to the task. More on this point will follow in a future article.
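As a minimal, hypothetical sketch of this point: frequency counts of a single-choice question are always possible for categorical data, but an analysis such as an average only makes sense once the labels are mapped onto a numeric scale. The question options and responses below are invented for illustration.

```python
from collections import Counter

# Hypothetical responses to a single-choice satisfaction question.
# Note the option labels name both poles (satisfied and dissatisfied),
# per the advice on non-leading response options above.
responses = [
    "Very satisfied", "Somewhat satisfied", "Somewhat dissatisfied",
    "Very satisfied", "Somewhat satisfied", "Very dissatisfied",
]

# Simple frequency count: possible for any categorical question.
counts = Counter(responses)
for option, n in counts.most_common():
    print(f"{option}: {n}")

# A mean requires first mapping labels onto numbers, and is only
# meaningful if the scale points are treated as roughly evenly spaced.
scale = {
    "Very dissatisfied": 1, "Somewhat dissatisfied": 2,
    "Somewhat satisfied": 3, "Very satisfied": 4,
}
mean_score = sum(scale[r] for r in responses) / len(responses)
print(f"Mean score: {mean_score:.2f}")  # 2.83 for this toy data
```

If you plan analyses beyond counts, as the paragraph above suggests, it’s worth deciding before fielding the survey whether your response options will support the numeric assumptions those analyses require.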
More on complex survey issues, from wording tricky response options to how to order questions, to how many people should complete a survey for useful analyses, will follow in future articles here.
Sally Augustin, PhD, a cognitive scientist, is the editor of Research Design Connections (www.researchdesignconnections.com), a monthly subscription newsletter and free daily blog, where recent and classic research in the social, design, and physical sciences that can inform designers’ work is presented in straightforward language. Readers learn about the latest research findings immediately, before they’re available elsewhere. Sally, who is a Fellow of the American Psychological Association, is also the author of Place Advantage: Applied Psychology for Interior Architecture (Wiley, 2009) and, with Cindy Coleman, The Designer’s Guide to Doing Research: Applying Knowledge to Inform Design (Wiley, 2012). She is a principal at Design With Science (www.designwithscience.com) and can be reached at sallyaugustin@designwithscience.com.