
A survey sets out to describe, compare, explain, or predict a condition, behavior, or outcome. It provides trainers and managers the information they need to make decisions about programs, projects, people, and initiatives. How much you should invest in a survey depends on the value of the information derived from it.

Regardless of the type of survey instrument you plan to employ, every survey must have certain characteristics:

  • measurable survey objectives
  • sound research design
  • effective survey question design
  • sound sampling strategy, when needed
  • effective survey response strategy
  • meaningful data summary
  • effective data display and reporting.

Survey Objectives
Survey objectives are the foundation of the entire survey. They represent the need for the questions as well as the measures to be taken through the survey instrument. By reading the survey objectives, a surveyor should be able to identify the measures (or variables) and how best to collect the data. Good survey objectives also provide insight into the research design.

Survey objectives come in three forms: 1) a statement, 2) a question, or 3) a hypothesis. Because many surveys are used for descriptive purposes, the statement is the most common survey objective. However, there are times when a research question is an appropriate survey objective, particularly when the survey is intended to identify key issues that will ultimately form the basis for a larger survey. Hypotheses are special-purpose objectives; technically, they are used only when the theory the survey is testing rests on enough evidence to justify hypothesis testing. That said, specific, measurable, achievable, relevant, and time-bound (SMART) program objectives set for learning and development initiatives are written much like hypotheses.

Research Design
Research design refers to how the survey will be administered in terms of targeted groups, comparisons of data to multiple groups, and frequency of survey administration. Many survey projects represent cross-sectional studies. In a cross-sectional design, a survey is administered to a group at a defined time. For example, you may decide to measure your employees' overall satisfaction with their jobs. This measurement of satisfaction for the group at this particular time is a cross-sectional survey.

On the other hand, you may want to compare the change in behavior as measured by a 360-degree feedback survey between one group involved in a program and another group not involved in a program. This comparison of two groups falls into the experimental (randomly selected participants) or quasi-experimental (nonrandomly
selected participants) designs. Occasionally you will not know the specific questions to ask on a self-administered questionnaire. If that is the case, you can use a focus group (qualitative survey) to gather preliminary information that will inform the questionnaire. Or, you may administer a broad-based survey to capture data on key issues, but you use those data to guide questions asked during a focus group. These mixed method research designs are increasing in popularity and provide a robust foundation for collecting relevant data.

Survey Question Design
The German philosopher Ernst Cassirer wrote: "Are we to be disgusted with science because it has not fulfilled our hopes or redeemed its promises? And are we, for this reason, to announce the 'bankruptcy' of science, as is so often and so flippantly done? But this is rash and foolish; for we can hardly blame science just because we have not asked the right questions."

In a few brief words, Cassirer captures the essence of survey question design. All too often we make decisions based on results derived from the wrong questions. And even when the questions are the right ones, if they are poorly written the outcome is the same: decisions based on bad questions.

Survey question design is the heart of survey research. Asking the right questions, the right way, of the right people, in the context of an appropriate research framework, generates relevant, usable information. But how do we know what the right questions are? We refer to the survey objectives. How do we know we are asking them the right way? Design questions for the audience, not for yourself.

Sampling Strategy
Sampling reduces the cost of surveying every member of a population while still allowing inferences to be made about those who were not surveyed. While it is common practice in large general-population studies, marketing research, and opinion polling, its use is limited within the organizational setting. This is particularly true when evaluating learning and development programs, human resources initiatives, and large meetings or events. But, when needed, a sound sampling strategy is imperative in order to reduce error when making inferences.
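When a sampling strategy is needed, the required sample size can be estimated before the survey goes out. The sketch below uses the standard formula for estimating a proportion (commonly attributed to Cochran) with a finite population correction; the 95% confidence level, ±5% margin of error, and population size are illustrative assumptions, not values from the text.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Estimate the sample size needed to infer a proportion.

    population -- number of people in the target group
    margin     -- acceptable margin of error (0.05 = +/-5%)
    z          -- z-score for the confidence level (1.96 = 95%)
    p          -- expected proportion; 0.5 is the most conservative
    """
    # Cochran's formula for an (effectively) infinite population
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction shrinks n for small populations
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Surveying 1,000 employees at 95% confidence with a +/-5% margin
print(sample_size(1000))  # 278 responses needed
```

Note how the correction matters in organizational settings: for a population of 1,000 employees roughly 278 responses are needed, not the 385 a general-population poll would require.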

Survey Response
An effective survey administration strategy will help ensure you receive an acceptable quantity and quality of responses. Research describes a variety of incentives and processes available to us to increase our chances of getting a good response rate. 

Data Summary
Data "summary" is a less intimidating way of referring to data "analysis." Whenever you collect survey data, whether through a statistical survey or an interview, you will analyze the data. But fear not: it does not have to be difficult. Many of the surveys used in learning and development, human resources, and meetings and events lend themselves to simple descriptive statistics. While many organizations are advancing their capability in more complex analytics, most survey data captured for needs assessments and program evaluations can be summarized using basic statistical procedures. Credible qualitative analysis can be done by simply categorizing words into themes.
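For the simple descriptive statistics mentioned above, nothing beyond a standard library is required. The sketch below summarizes a set of hypothetical 5-point satisfaction ratings; the ratings themselves are invented for illustration.

```python
import statistics
from collections import Counter

# Hypothetical responses to "I am satisfied with my job" (1-5 scale)
ratings = [4, 5, 3, 4, 5, 4, 2, 4, 3, 5, 4, 4]

summary = {
    "n": len(ratings),
    "mean": round(statistics.mean(ratings), 2),
    "median": statistics.median(ratings),
    "mode": statistics.mode(ratings),
    "stdev": round(statistics.stdev(ratings), 2),
    # Frequency table: how many respondents chose each rating
    "frequencies": dict(sorted(Counter(ratings).items())),
}

print(summary)
```

The frequency table doubles as the raw material for a bar chart in the report, which is often more persuasive to stakeholders than the mean alone.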

Data Display and Reporting
A final characteristic of a good survey is one for which the final results are reported in such a way that stakeholders immediately "get it." Reporting results requires written words, oral presentations, and effective graphical displays.

Reference: Phillips, P., Phillips, J., and Aaron, B., Survey Basics, ASTD Press, 2013.



Today's executives must show accountability for their investments. Many executives have found that measuring the return on investment of a few selected, significant, high-profile programs is an excellent way to show fiscal responsibility for key projects and initiatives. Our best guess is that about 30 to 40% of executives are using ROI as a tool to evaluate non-capital investments. Yet, according to the Corporate Executive Board's major benchmarking efforts, almost 80% of organizations want to use ROI in the future. This gap between actual and desired use underscores the misunderstandings and misconceptions about ROI as a legitimate part of the measurement mix for C-suite executives.

For two decades we have been assisting organizations with this important issue. In the last ten years we have kept track of the many questions that are often asked about ROI in conferences, workshops, and consulting assignments.

Here are the 25 most frequently asked questions about ROI:

  1. How does ROI in non-capital investments such as human resources initiatives differ from the ROI used by the financial staff? The classic definition of return on investment is earnings divided by the investment, no matter what the application. In the context of calculating the return on investment in human resources, the earnings become the net benefits from the program (monetary benefits minus the costs), and the investment is the actual program cost. The difficulty lies in developing the actual monetary benefits in a credible way.
  2. Do I have to learn finance and accounting principles to understand the ROI Methodology™? No! Many of the basic principles of finance and accounting are not needed to develop the return on investment in non-capital investments. However, it is important to understand issues such as revenue, profit, and cost. Ultimately, the payoff of learning and development or human resources will be based on either direct cost savings or additional profit generated. It is helpful to understand the nature and types of costs and the different types of profits and profit margins.
  3. Do I have to know statistics to understand ROI? No! The very basic statistical processes are all that are necessary to develop most ROI impact studies. It is rare for statistics to be needed beyond simple averages, variance, and the standard deviation. Sometimes hypothesis testing and correlations are necessary. These are very simple concepts and, by design, are simplified as much as possible.
  4. Is ROI just one single number? No! How can you communicate a program's value with a single number? The ROI Methodology (or the ROI process) develops six types of data, with the actual ROI calculation being only one of them. The six types of data are:
    1. Reaction, satisfaction, and planned action
    2. Learning
    3. Application and implementation
    4. Business impact
    5. ROI
    6. Intangibles
  5. Aren't the levels of evaluation out of date and no longer applicable? No! The original four steps or levels, promoted by Don Kirkpatrick in 1959, showed how the data for a learning and development program must be categorized. The ROI Institute has modified and updated that context significantly. As shown above, the data are arranged in a chain of impact that must exist if the learning is to have business impact, which ultimately becomes business value. The chain of impact can be broken at any point, so correlations do not always exist between the levels; there can be barriers to success at any level. Although a few researchers take issue with the four levels, this is still the most widely used foundation for evaluation. ROI becomes the fifth level and is the consequence of the program expressed in monetary terms. A special paper comparing the ROI Methodology with the Kirkpatrick levels is available from the ROI Institute.
  6. Isn't ROI based on nothing but estimates that can be very subjective? No! Estimates are usually used in four areas:
    1. Estimating the amount of improvement when records are not readily available to show it or, in a forecast situation, when it is not yet known;
    2. Isolating the effects of a program;
    3. Converting data to monetary values; and
    4. Calculating the costs (an acceptable finance practice).
    Estimates are used only when other methods are not readily available or become too time consuming or expensive. When estimates are taken, they are adjusted for the error of the estimate to improve their credibility; in essence, results are understated. In every case, there are alternatives to estimates, and they are often recommended. Estimates are used routinely in some situations because they are the preferred method, accepted by stakeholders, or they may be the only way to obtain the data that are needed.
  7. Isn't ROI too complicated for most non-financial professionals? No! The ROI calculation itself is a very simple ratio: net benefits divided by costs. The processes needed to arrive at the benefits follow a methodical, step-by-step sequence with guiding principles used along the way, and the costs are developed using guidelines and principles as well. What makes it more complicated are the many options in each step of the process. The options are critical because of the many different situations, programs, and projects that need to be evaluated and the different environments and settings in which they occur.
  8. Doesn't ROI cost too much? No! The cost of a study taken all the way through to ROI may represent as much as 5-10% of the entire project, although this number varies considerably; for large, expensive projects the percentage is much smaller. It is also important to note that in most organizations every program is evaluated at some level. The total cost of all evaluation, including selected ROI studies, is usually in the range of 3-5% of the total budget for the function.
  9. Is it always possible to isolate the effects of my program from other factors? Yes! This is the most difficult and challenging issue, but it is always possible, even if estimates are used. Some of the most sophisticated and credible processes involve control groups, trend line analysis, and forecasting models. Other, less sophisticated techniques are used such as expert estimation and customer input. Always strive to carve out the amount of data directly related to the program or project. When estimates are used, the data should be adjusted for the error of the estimate.
  10. Is it true that the ROI process does not reveal program weaknesses or strengths? No! The ROI Methodology captures six types of data. At Levels 1, 2, and 3, data always capture deficiencies or weaknesses in the process. At Level 3, the process requires collecting data about the barriers (which inhibit success) and enablers (which help success).
  11. Is it true that the ROI process does not result in recommendations for improvement? No! Each impact study using the ROI Methodology contains a section for recommendations for improvement. It is essential that this tool be utilized, first and foremost, as a process improvement tool. Recommendations for changes are always appropriate, even when studies have reflected a very successful project.
  12. Is it appropriate to do ROI for every program? No! Only a few select programs should be subjected to evaluation all the way through to the fifth level of evaluation (ROI). Ideal targets include programs that are very expensive, strategic, operationally focused, highly visible, and those that involve large target audiences and have management attention in terms of their accountability. In most organizations using this methodology, only about 5-10% of the programs are selected for ROI analysis each year.
  13. Which programs are not suited for ROI analysis (but may still achieve a positive ROI)? Certain programs need not be evaluated all the way through to ROI. The following programs are usually not appropriate for ROI: mandatory programs, compliance programs, legally-required initiatives, very specific operational job-related programs, brief programs, information-sharing programs, entry-level programs, new-to-the-job issues, and programs intended to align the individual with the organization.
  14. Who is using the ROI Methodology? Practically all types of organizations in the USA and around the world are using the ROI Methodology. To date, over 5,000 organizations have formally implemented ROI through skill building and ROI Certification, including government agencies, non-profit organizations, NGOs, and over half of the Fortune 500 in the USA. Over 20 governments have endorsed the process, including the US federal government and the Canadian, Mexican, and British governments; the United Nations has endorsed the methodology as well. In addition, thousands of other organizations are using the ROI Methodology informally in various parts of their organizations. Almost 30,000 specialists and managers have taken a one-day or two-day ROI workshop, and over 7,000 individuals have participated in a five-day comprehensive certification workshop.
  15. What types of applications are typical for ROI analysis? Applications vary and include:
    1. Human Resources/Human Capital
    2. Training/Learning/Development
    3. Leadership/Coaching/Mentoring
    4. Diversity and Inclusion
    5. Knowledge Management
    6. Organization Consulting/Development
    7. Policies/Procedures/Processes
    8. Recognition/Incentives/Engagement
    9. Change Management
    10. Technology/Systems/IT
    11. Green Projects/Sustainability Projects
    12. Safety and Health Projects
    13. Talent Retention Solutions
    14. Project Management Solutions
    15. Quality/Six Sigma/Lean Engineering
    16. Meetings/Events
    17. Marketing/Advertising
    18. Communications/Publications
    19. Public Policy/Social Programs
    20. Risk Management/Ethics/Compliance
    21. Healthcare Initiatives
    22. Wellness and Fitness Programs
  16. How can I learn more about ROI? There are many options available for learning about ROI. Dozens of books, case studies, and templates have been published, many of them available from the publishers. The recommended way to learn the ROI Methodology is through a workshop, either conducted internally or in a public presentation. Insources offers a two-day workshop and, in partnership with the ROI Institute, offers the five-day certification workshop, now in Australia. On-site consulting and coaching are also options.
  17. Can ROI be used on the front end of the project in a forecasting mode? Yes! ROI forecasting is an important part of the ROI Methodology. This process uses credible data and expert input and involves estimating the improvement (projected benefits) that will occur when a program is implemented. Projected benefits are compared to projected costs to develop the forecasted ROI.
  18. How does ROI compare to a balanced scorecard? The ROI process generates six types of data (reaction, learning, application, business impact, ROI, and intangibles), which, in itself, is a balanced scorecard. The balanced scorecard process developed by Kaplan and Norton (1996) suggests four categories of data (learning and growth, internal business processes, financial, and customer). The data generated with the ROI Methodology may be grouped into these four categories. In addition, the ROI process adds two capabilities not normally contained in balanced scorecard methodology: it provides a technique to isolate the effects of a program, and it shows the costs versus the benefits of a particular program or initiative. Sometimes a measure on a balanced scorecard needs to improve, and a project is implemented to improve it; the ROI Methodology measures the success of that project. Thus, the ROI Methodology complements the balanced scorecard process.
  19. How can I secure support for ROI in my organization? Building support for the ROI Methodology is an important issue. Top executives will usually support the process when they realize the types of data that will be generated. Most of the resistance comes from those directly involved in programs, because they do not understand ROI and how it is used in the organization. When they are involved in implementing the methodology and the data are properly used to drive improvements, resistance is lowered. The effort required to implement any major change program also applies to implementing the ROI Methodology.
  20. How can I minimize staff resistance to this methodology? Most internal staff will have some resistance to ROI unless they see the value it can bring to their work. Involvement, education, and process improvement are key issues. It is often the fear of ROI that generates resistance: a fear based on misunderstandings about the process and how the data will be used. The ROI Methodology should be implemented as a process improvement tool, not as a performance evaluation tool for staff; no one wants to develop a tool that will reflect unfavorably on their own performance review. Resistance is also minimized when steps are taken to ensure that the data are communicated properly, improvements are generated, and the data are not abused or misused.
  21. Should I conduct an ROI study on my own program? If possible, the person evaluating the program should be independent of the program. It is important for the stakeholders to understand that the person conducting the study is objective and removed from certain parts of the study, such as the data collection and the initial analysis. Sometimes these issues can be addressed through a partnering role or by outsourcing parts of the study, whether data collection or analysis. In other situations, the issue must be addressed directly, and the audience must understand that steps were taken to ensure that the data were collected objectively, analyzed correctly, and reported completely.
  22. Are there any standards for ROI? The ROI Methodology, as developed by Jack and Patti Phillips and their associates, contains standards labeled "Guiding Principles." These provide consistency for the analysis with a conservative approach. The conservative approach builds credibility with the stakeholders.
  23. What type of background is necessary for learning the ROI Methodology? It is helpful for the individual to understand the business in which the studies will be developed. Knowledge of operations, products, and financial information is very helpful. Also, the individual should not have a fear of numbers; although the ROI Methodology does not involve much statistical analysis, it does involve data analysis. Excellent communication skills are needed to develop the various documents describing results and to present those results to a variety of stakeholder groups. Finally, the ability to partner with many individuals is extremely important. This requires focus, contact, and collaboration with the client; this is a very client-focused methodology. The individual must be willing to meet with the key sponsors of programs and build the relationships necessary to capture the data and communicate the results to them.
  24. How is the ROI on e-learning developed? Applying the ROI Methodology to e-learning is the same as for any other process, program, or solution: the monetary value of the benefits from the e-learning is compared to the cost of the e-learning. Many individuals assume that the benefits of training remain the same and that only the cost of providing the training changes; in other words, they assume e-learning is more cost effective at larger scale. An ROI study should be conducted to show the actual value of the training. When an instructor-led program is compared to an e-learning program, the impact data from both programs are compared with the respective costs of each. A higher ROI indicates a more successful program in terms of providing value that exceeds costs.
  25. How do you calculate the ROI on the ROI? This is a very good question to raise about the payoff of using this methodology. The important issue is the value of implementing the methodology itself. While hundreds of organizations report benefits and successes, it is helpful to understand the internal payoff in your own organization. The improvements and changes resulting from impact studies are tallied from one study to another and compared to the actual cost of implementation. This, in essence, generates the return on investment for using the process. This approach is recommended for most major implementations. A special paper on this issue is available from the ROI Institute.
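Several of the answers above (questions 1, 6, and 7) come down to a few lines of arithmetic. The sketch below computes the benefit-cost ratio and the ROI percentage, applying the conservative adjustment described in question 6: an estimated benefit is discounted both by the portion attributed to the program (isolation) and by the estimator's confidence. All monetary figures are invented for illustration.

```python
def adjusted_benefit(raw_value, isolation_pct, confidence_pct):
    """Discount an estimated benefit by the isolation and confidence
    percentages, deliberately understating the result."""
    return raw_value * isolation_pct * confidence_pct

def bcr(benefits, costs):
    """Benefit-cost ratio: total benefits divided by total costs."""
    return benefits / costs

def roi_pct(benefits, costs):
    """ROI %: net benefits (benefits minus costs) divided by costs."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $750,000 in claimed annual improvement,
# 80% attributed to the program, estimated with 85% confidence
benefits = adjusted_benefit(750_000, 0.80, 0.85)  # about $510,000
costs = 300_000  # fully loaded program cost

print(f"BCR: {bcr(benefits, costs):.2f}")       # BCR: 1.70
print(f"ROI: {roi_pct(benefits, costs):.1f}%")  # ROI: 70.0%
```

The distinction in question 1 is visible here: the BCR uses total benefits, while the ROI percentage uses net benefits, which is why a BCR of 1.70 corresponds to an ROI of 70%.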


The ROI Pyramid
The reality is that training "events" don't work. As vocational education and learning professionals, it's time we stopped delivering training events and started delivering performance improvement processes that have learning as a key component. You don't go to a class and find that, the next week, everything has changed.

Companies invest in training and development to enhance capability and improve performance. However, to improve performance, employees have to do things differently and better when they return to work. If they don't change their behavior back on the job, performance won't change, no matter how much they learned. As Einstein supposedly quipped: "One definition of insanity is to continue doing the same thing and expect a different result."

People don't change their behavior just because they learned something new. People need to apply what they learned once they return to work if training is to produce business value. Whether or not they do so depends on how they answer two questions: "Can I?" and "Will I?" Unless they answer "yes" to both, they will slide back into old habits and the training will go to waste. Getting value from training requires both great learning and great learning transfer. That is why we have to think beyond events.

The answer to the "Can I?" question is strongly influenced by the training itself. To say, "Yes, I can," employees need to feel confident that they can competently perform the skills they were taught. The instructional design needs to include the right amount of content, and especially, adequate practice with feedback. However, the post-training environment also affects trainees' belief that they can; their manager has to give them the opportunity, and they need job aids and other forms of performance support to boost their confidence.

Even if employees can perform in a new and better way, they need to be motivated to make the effort; they have to also say, "Yes, I will." Many factors affect participants' willingness to apply what they have learned; almost all of these factors occur in the post-training "transfer climate." Support from the participant's manager is key.

If you want your training efforts to earn greater respect and produce even greater impact, think "process" not "event." Plan for and influence the whole process by which training becomes performance.

If you want your training efforts to be rewarded with even greater impact, start with why. To learn more, join us at one of our "Evaluate Training Programs Using the ROI Methodology" workshops.




© 2019 - 2020 by Insources Group Pty Ltd. All rights reserved.