A survey sets out to describe, compare, explain, or predict a condition, behavior, or outcome. It provides trainers and managers the information they need to make decisions about programs, projects, people, and initiatives. How much you should invest in a survey depends on the value of the information derived from it.
Regardless of the type of survey instrument you plan to employ, a good survey must have certain characteristics:
- measurable survey objectives
- sound research design
- effective survey question design
- sound sampling strategy, when needed
- effective survey response strategy
- meaningful data summary
- effective data display and reporting.
Survey objectives are the foundation of everything about the survey. They represent the need for the questions as well as the measures to be taken through the survey instrument. By reading the survey objectives, a surveyor should be able to identify the measures (or variables) as well as how best to collect the data. Good survey objectives also provide insight into the research design.
Survey objectives come in three forms: 1) a statement, 2) a question, or 3) a hypothesis. Because many surveys are used for descriptive purposes, the statement is the most common survey objective. However, there are times when a research question is an appropriate survey objective, particularly when the survey is intended to identify key issues that will ultimately form the basis for a larger survey. Hypotheses are special-purpose objectives; technically, they are used only when the theory the survey is testing rests on enough evidence to justify hypothesis testing. That said, specific, measurable, achievable, relevant, and time-bound (SMART) program objectives set for learning and development initiatives are written much like hypotheses.
Research design refers to how the survey will be administered in terms of targeted groups, comparisons of data across multiple groups, and frequency of survey administration. Many survey projects are cross-sectional studies. In a cross-sectional design, a survey is administered to a group at a defined time. For example, you may decide to measure your employees’ overall satisfaction with their jobs. This measurement of satisfaction for the group at this particular time is a cross-sectional survey.
On the other hand, you may want to compare the change in behavior, as measured by a 360-degree feedback survey, between one group involved in a program and another group not involved. This comparison of two groups falls into the experimental (randomly selected participants) or quasi-experimental (nonrandomly selected participants) designs. Occasionally you will not know the specific questions to ask on a self-administered questionnaire. If that is the case, you can use a focus group (qualitative survey) to gather preliminary information that will inform the questionnaire. Or, you may administer a broad-based survey to capture data on key issues, then use those data to guide questions asked during a focus group. These mixed-methods research designs are growing in popularity and provide a robust foundation for collecting relevant data.
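As a rough illustration of the quasi-experimental comparison described above, the sketch below compares mean behavior change between a program group and a comparison group using Welch's t statistic. The change scores are entirely hypothetical, invented for illustration; they are not data from the book.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical 360-degree feedback change scores (post minus pre),
# one list per group. These numbers are made up for illustration.
program_group = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4]
comparison_group = [0.3, 0.5, 0.2, 0.6, 0.4, 0.1]

def welch_t(a, b):
    """Welch's t statistic for two independent groups."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

diff = mean(program_group) - mean(comparison_group)
t = welch_t(program_group, comparison_group)
print(f"Mean difference: {diff:.2f}, Welch's t: {t:.2f}")
```

In a real evaluation you would also compute a p-value (for example with `scipy.stats.ttest_ind(..., equal_var=False)`) before drawing conclusions; the point here is only that the group comparison itself is simple arithmetic.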
Survey Question Design
A quote by Ernst Cassirer, a Jewish German historian and philosopher, reads: “Are we to be disgusted with science because it has not fulfilled our hopes or redeemed its promises? And are we, for this reason, to announce the ‘bankruptcy’ of science, as is so often and so flippantly done? But this is rash and foolish; for we can hardly blame science just because we have not asked the right questions.”
In a few brief words, Cassirer captures the essence of survey question design. All too often we make decisions based on results derived from the wrong questions. And even the right questions, if poorly written, produce the same outcome: decisions based on bad questions.
Survey question design is the heart of survey research. Asking the right questions the right way to the right people, in the context of an appropriate research framework, generates relevant, usable information. But how do we know which are the right questions? We refer to the survey objectives. How do we know we are asking them the right way? Design questions for the audience, not for yourself.
Sampling is a process for reducing the cost of surveying an entire population while still allowing inferences to be made about that population, including those who are never surveyed. While it is common practice in large general population studies, marketing research, and opinion polling, its use is limited within the organizational setting. This is particularly true when evaluating learning and development programs, human resources initiatives, and large meetings or events. But when sampling is needed, a sound strategy is imperative to reduce error when making inferences.
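When a sampling strategy is needed, the required sample size is often estimated with Cochran's formula plus a finite population correction. The sketch below assumes a 95% confidence level and a 5% margin of error; the function name and default values are illustrative choices, not prescriptions from the book.

```python
from math import ceil

def sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Cochran's sample-size formula with finite population correction.

    z: z-score for the desired confidence level (1.96 for 95%)
    p: assumed population proportion (0.5 is the most conservative)
    margin: acceptable margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return ceil(n)

# How many respondents are needed from a population of 1,000 employees?
n = sample_size(1000)
print(n)
```

Note that the result is the number of completed responses needed, so the number of surveys actually sent must be grossed up for the expected response rate.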
An effective survey administration strategy will help ensure you receive an acceptable quantity and quality of responses. Research describes a variety of incentives and processes available to increase the chances of a good response rate.
Data “summary” is a less intimidating way of referring to data “analysis.” If you collect survey data, whether through a statistical survey or an interview, you will analyze the data. But fear not: it does not have to be difficult. Many of the surveys used in learning and development, human resources, and meetings and events lend themselves to simple descriptive statistics. While many organizations are advancing their capability in more complex analytics, most survey data captured for needs assessments and program evaluations can be summarized using basic statistical procedures. Credible qualitative analysis can be done by simply categorizing words into themes.
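As an illustration of how simple such a summary can be, the sketch below computes the basic descriptive statistics most survey reports rely on. The 5-point satisfaction ratings are hypothetical, invented for illustration.

```python
from statistics import mean, median
from collections import Counter

# Hypothetical responses to "I am satisfied with my job" on a
# 5-point scale (1 = strongly disagree, 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]

freq = Counter(responses)
print(f"n = {len(responses)}")
print(f"mean = {mean(responses):.2f}, median = {median(responses)}")
for rating in sorted(freq):
    pct = 100 * freq[rating] / len(responses)
    print(f"  {rating}: {freq[rating]:2d} responses ({pct:.0f}%)")
```

A frequency table like this is often more useful to stakeholders than the mean alone, because it shows how opinion is distributed across the scale.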
Data Display and Reporting
A final characteristic of a good survey is that the results are reported in such a way that stakeholders immediately “get it.” Reporting results requires written words, oral presentations, and effective graphical displays.
Reference: Phillips, P., Phillips, J., and Aaron, B. (2013). Survey Basics. ASTD Press.