INSOURCES BLOG


By Jack J. Phillips, Ph.D.

Learning and Development professionals often must evaluate their key learning programs, collecting several types of data: reaction, learning, application, impact, intangibles, and perhaps even return on investment (ROI).

What if the evaluation produces disappointing results? Suppose application and impact were less than desired, and the ROI calculation negative. This prospect causes some learning executives to steer clear of this level of accountability altogether.

For some L&D professionals, negative results are the ultimate fear. Immediately, they begin to think, "Will this reflect unfavorably on me? On the program? On the function? Will budgets disappear? Will support diminish?" These are all legitimate questions, but most of these fears are unfounded. In fact, negative results reveal the potential to improve programs. Here are 11 ways to address negative results and use them to facilitate positive transformations:

1. Recognize the Power of a Negative Study
When the study results are negative, there is always an abundance of data indicating what went wrong. Was it an adverse reaction? Was there a lack of learning? Was there a failure to implement or apply what was learned? Did major barriers prevent success? Or was there a misalignment in the beginning? These are legitimate questions about lack of success, and the answers are always obtained in a comprehensive evaluation study.

2. Look for Red Flags
Indications of problems often pop up in the first stages of initiation—after reaction and learning data have been collected. Many signals can provide insight into the program's success or lack of success, such as participants perceiving that the program is not relevant to their jobs. Perhaps they would not recommend it to others or do not intend to use it on the job. These responses can indicate a lack of utilization, which usually translates into negative results. Connecting this information requires analyzing data beyond overall satisfaction with the program, the instructor and the learning environment. While important, these types of ratings may not reveal the value of the content and its potential use. Also, if an evaluation study is conducted on a program as it is being implemented, low ratings for reaction and learning may signal the need for adjustments before any additional evaluation is conducted.

3. Lower Outcome Expectations
When there is a signal that the study may be negative, or it appears that there could be a danger of less-than-desired success, the expectations of the outcome should be lowered. The "under-promise and over-deliver" approach is best applied here. Containing your enthusiasm for the results early in the process is important. This is not to suggest that a gloom-and-doom approach throughout the study is appropriate, but that expectations should be managed and kept on the low side.

4. Look for Data Everywhere
Evaluators are challenged to uncover all the data connected to the program—both positive and negative. To that end, it is critical to look everywhere for data that shows value (or the lack of it). This thorough approach will ensure that nothing is left undiscovered—the fear harbored by many individuals when facing negative results.

5. Never Alter the Standards
When the results are less than desired, it is tempting to lower the standards—to change the assumptions about collecting, processing, analyzing and reporting the data. This is not a time to change the standards. Changing the standards to make the data more positive renders the study virtually worthless. Without standards, there is no credibility.

6. Remain Objective Throughout
Ideally, the evaluator should be completely objective or independent of the program. This objectivity provides an arm's-length evaluation of its success. It is important not only to enter the project from an objective standpoint, but also to remain objective throughout the process. Never become an advocate for or against the program. This helps alleviate the concern that the results may be biased.

7. Prepare the Team for the Bad News
As red flags pop up and expectations are lowered, it appears that a less-than-desired outcome will be realized. It is best to prepare the team for this bad news early in the process. Part of the preparation is to make sure that they don't reveal or discuss the outcome of the program with others. Even when early results are positive, it is best to keep the data confidential until all are collected. Also, when it appears that the results are going to be negative, an early meeting will help develop a strategy to deal with the outcome. This preparation may address how the data will be communicated, the actions needed to improve the program and, of course, explanations as to what caused the lack of success.

8. Consider Different Scenarios
Standards connected with the ROI methodology are conservative for a reason: The conservative approach adds credibility. Consequently, there is a buy-in of the data and the results. However, sometimes it may be helpful to examine what the result might be if the conservative standards were not used. Other scenarios may actually show positive results. In this case, the standards are not changed, but the presentation shows how different the data would be if other assumptions were made. This approach allows the audience to see how conservative the standards are. For example, on the cost side, including all costs sometimes drives the project to a negative ROI. If other assumptions could be made about the costs, the value could be changed and a different ROI calculation might be made. On the benefit side, lack of data from a particular group sometimes drives a study into negative territory because of the "no data, no improvement" standard. However, another assumption could be made about the missing data to calculate an alternative ROI. It is important for these other scenarios to be offered to educate the audience about the value of what is obtained and to underscore the conservative approach. It should be clear that the standards are not changed and that the comparisons with other studies would be based on the standards in the original calculation.
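
To make this concrete, here is a minimal sketch, using entirely hypothetical figures, of how a conservative ROI calculation might be presented alongside an alternative scenario. The reported result always stays with the conservative standards; the alternative is shown only to illustrate how conservative those standards are.

```python
# Hypothetical illustration of presenting alternative scenarios alongside the
# conservative ROI calculation. All figures below are invented for the example.

def roi_percent(benefits, costs):
    """Standard ROI formula: net benefits divided by fully loaded costs, as a percentage."""
    return (benefits - costs) / costs * 100

# Conservative calculation: every cost item is included, and missing data count as
# zero improvement ("no data, no improvement").
conservative_benefits = 95_000
fully_loaded_costs = 110_000

# Alternative scenario: the standards are NOT changed, but for discussion purposes the
# missing respondents are assumed to have improved at half the group average, and one-off
# development costs are prorated over future deliveries of the program.
alternative_benefits = 128_000
prorated_costs = 90_000

print(f"Reported (conservative) ROI: {roi_percent(conservative_benefits, fully_loaded_costs):.0f}%")
print(f"Alternative-scenario ROI:    {roi_percent(alternative_benefits, prorated_costs):.0f}%")
```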

9. Find Out What Went Wrong
With disappointing results, the first question usually asked is, "What went wrong?" It is important to uncover the reasons for the lack of success. As the process unfolds, there is often an abundance of data to indicate what went wrong. The follow-up evaluation will contain specific questions about impediments and inhibitors. In addition, asking for suggestions for improvements often underscores how things could be changed to make a difference. Even when collecting enablers and enhancers, there may be clues as to what could be changed to make it much better. In most situations, there is little doubt as to what went wrong and what can be changed. In worst-case scenarios, if the program cannot be modified or enhanced to add value, it may mean that it should be discontinued.

10. Adjust the Story Line
When communicating data, negative results indicate that the story line needs to change. Instead of saying, "Let's celebrate—we've got great results for this program," the story reads, "Now we have data that show how to make this program more successful." The audience must understand that the lack of success may have existed previously, but no data were available to know what needed to be changed. Now, the data exist. In an odd sort of way, this becomes a positive spin on less-than-positive data.

11. Drive Improvement
Evaluation data are virtually useless unless used to improve processes. In a negative study, there are usually many items that could be changed to make it more successful. It is important that a commitment is secured to make needed adjustments so that the program will be successful in the future. Until those actions are approved and implemented, the work is not complete. In worst-case scenarios, if the program cannot be changed to add value, it should be terminated and the important lessons should be communicated to others. This last step underscores that the comprehensive evaluation is used for process improvement and not for performance evaluation of the staff.

Negative study results do not have to be bad news. Negative results contain data that can be used not only to explain what happened, but also to adapt and improve in the future. It is important to consider the potential of a negative study and adjust expectations and strategies throughout the process so that negative results do not come as a surprise. The worst-case situation is one in which negative data surprise the key sponsor at the time of presentation.


Proving ROI in 2014

It is difficult to imagine a world of learning and development without technology, and investment in technology continues to grow at astonishing rates. Its growth is inevitable and its use is predestined. But these investments attract attention from executives who often want to know if they're working properly.

Does it make a difference? How does it connect to the business? Does it really add the value that we anticipated? Is it as effective as facilitator-led learning? These are some of the questions training and education professionals must answer to show the impact learning technologies have on talent development.

A fundamental change
Learning technologies have been used in the workplace for more than 20 years, but it is only in the past few years that their impact could be described as a "fundamental change." More recent evolutions of learning technology bring significant change in how we grow and develop our current and future employees. These include:

  • mobile learning
  • game-based learning
  • bring your own device (BYOD) programs
  • open educational resources
  • massive open online courses (MOOCs)
  • flipped classrooms.

Technology, with its many forms and features, is here to stay. However, some concerns must be addressed about the accountability and success of technology-based learning.

The need for business results
Most would agree that any large expenditure in an organization should in some way be connected to business success. Even in non-business settings, large investments should connect to organizational measures of output, quality, cost, and time—classic measurement categories of hard data that exist in any type of organization.
In Learning Everywhere: How Mobile Content Strategies Are Transforming Training, author Chad Udell makes the case for connecting mobile learning to business measures. He starts by listing the important measures that are connected to the business, including:

  • decreased product returns
  • increased productivity
  • fewer mistakes
  • increased sales
  • fewer accidents
  • fewer compliance discrepancies
  • increased shipments
  • reduced operating cost
  • fewer customer complaints.

Udell goes on to say that mobile learning should connect to any of those measures, and he walks through several of them step by step to show how, in practical and logical terms, a mobile learning solution can drive any or all of these measures. He concludes by suggesting that if an organization is investing in mobile learning or any other type of learning, it needs to connect to these business measures. Otherwise, it shouldn't be pursued. This dramatic call for accountability is not that unusual.

Credible connections
Those who fund budgets are adamant about seeing the connection between investing in learning technologies and business results. These executives realize that employees must learn through technology, often using mobile devices. And they know that employees must be actively involved and engaged in the process and learn the content. But more importantly, employees must use what they have learned and have an impact on the business.

Unfortunately, the majority of results presented in technology case studies are devoid of measurement at the levels needed by executives. Only occasionally are application data presented (measuring what individuals do with what they learn), and rarely do they report a credible connection to the business. Even rarer is the ROI calculation.
Evaluation of technology-based learning rests on six levels of data. Level 0 represents inputs to the process, and Levels 1-5 encompass outcomes from the process.
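
As a quick reference, here is a minimal sketch of those six levels; the level names follow the labels used in the case study later in this article, and the short descriptions in parentheses are illustrative.

```python
# Reference sketch of the six levels of data. Level 0 captures inputs to the process;
# Levels 1-5 capture outcomes. Descriptions in parentheses are illustrative examples.
EVALUATION_LEVELS = {
    0: "Input (participants, hours, costs)",
    1: "Reaction (relevance, importance, intent to use)",
    2: "Learning (knowledge, skills, confidence)",
    3: "Application (use on the job, barriers, enablers)",
    4: "Impact (business measures such as output, quality, cost, time)",
    5: "ROI (monetary benefits compared with fully loaded costs)",
}

for level, description in EVALUATION_LEVELS.items():
    print(f"Level {level}: {description}")
```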

In a recent review of award-winning e-learning and mobile learning case studies published by several prestigious organizations, not one project was evaluated at the ROI level, where the monetary value of the impact is compared with the program's cost. Instead, the studies used the concept of ROI to mean any value or benefit from the program.

Mislabeling or misusing ROI creates some concerns among executives, who are accustomed to seeing ROI calculated in a very precise way by the finance and accounting team. Only two or three were evaluated on the cost savings of technology-based learning compared with facilitator-led learning. This may not be a credible evaluation.
Credible connections to the business were rare. Only one study attempted to show the impact of mobile learning using comparison groups. Even there, the details about how the groups were set up and the actual differences were left out. When the data are vague or missing, it raises a red flag.

Reasons for lack of data
In our analysis of technology-based learning programs, several major barriers emerged.

These obstacles keep the proponents from developing metrics to the levels desired by executives:

  • Fear of results. Although few will admit it, individuals who design, develop, or own a particular program are concerned that if the results are not good, the program may be discontinued and it will affect their reputation and performance.
  • Belief that measurement should not be necessary. Some designers and developers suggest that investments in technology-based learning should be taken on faith that they will make a difference. After all, technology is absolutely necessary.
  • Measuring at this level is not planned. When capturing the business impact and developing the ROI, the process starts from the beginning, at the conception of the project or program. Unfortunately, evaluation is not given serious consideration until after the project is implemented, which is too late for an effective evaluation.
  • Measurement is too difficult. Some feel it is too difficult to capture the data or that it's impossible to secure quality information.
  • Measurement is not the fun part of the process. Technology-based learning is amazing, awesome, impressive, and fun. Gamification is taking hold. People love games. They're fun. However, measuring application, impact, and ROI is usually not fun (but it can be).
  • Not knowing which programs to evaluate at this level. Some technology proponents think that if they go down the ROI path, executives will want to see the ROI in every project and program. The challenge is to select particular projects or programs that will need to be evaluated at this level.
  • Not prepared for this. The preparation for designers, developers, implementers, owners, and project managers does not usually include courses in metrics, evaluation, and analytics.

Because these barriers are perceived to be real, they inhibit evaluation at the levels desired by executives. But they are myths for the most part. Yes, evaluation will take more time and there will be a need for more planning. But the step-by-step process of the ROI methodology is logical.

Case study
This case study highlights the key issues in calculating the impact and ROI of a mobile learning solution on business results.

Summary. This project involves a mobile learning application for sales associates of a large software firm specializing in software solutions for the trucking industry. Sales associates were provided a mobile learning solution for their iPads that was designed to teach them to describe and sell an upgrade to its most popular software product, ProfitPro.

Measuring results. The first 25 people signed up within three days of the program's announcement. Level 1 reaction data were collected at the end of the fifth module. Reactions were as expected, averaging 4.3 on a five-point scale.

Level 2 learning seemed appropriate, and quiz scores were above targets. The average score was 20.8 out of a possible 25.

Level 3 application data seemed to be on track. The skills of identifying pricing options and explaining implementation and support were off a little, but overall the objectives were met. As expected, there were some barriers and enablers to success. The barriers were minimal. However, there was a concern that 9 percent of sales associates were not encouraged by their managers to use the program. The number one enabler was management encouragement.

The Level 4 impact data comparing the experimental group of 25 sales associates with the control group of 22 revealed that almost all of the control group members (19 of 22) were selling the upgrade even though they did not participate in the program. But the difference between the two groups was impressive.

For the control group, the average amount of monthly sales per associate was $3,700 and the average time to first sale was 21 days. Conversely, the experimental group members took an average of only 11 days to make their first sale and sold an average of $7,500 per month, a difference of $3,800. The difference was then annualized, producing an improvement of $1.14 million.

Next, the fully loaded costs were included to make the ROI calculation credible. The benefit-cost ratio and ROI were then calculated. ROI was measured as 311 percent, which exceeded the objective.
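
The arithmetic behind these figures can be reproduced directly. Note that the fully loaded program cost is not reported in the case study, so the cost used in the sketch below is a hypothetical value back-calculated from the stated 311 percent ROI.

```python
# Reproducing the case study arithmetic. The program cost is NOT given in the article;
# the value below is hypothetical, chosen so that the result matches the reported 311% ROI.

participants = 25                     # experimental group of sales associates
monthly_sales_experimental = 7_500    # average monthly upgrade sales per associate
monthly_sales_control = 3_700         # average monthly upgrade sales per associate (control)

monthly_difference = monthly_sales_experimental - monthly_sales_control   # $3,800
annual_improvement = monthly_difference * 12 * participants               # $1,140,000

assumed_fully_loaded_cost = 277_400   # hypothetical; back-calculated from the 311% ROI

bcr = annual_improvement / assumed_fully_loaded_cost
roi = (annual_improvement - assumed_fully_loaded_cost) / assumed_fully_loaded_cost * 100

print(f"Monthly difference per associate: ${monthly_difference:,}")
print(f"Annualized improvement:           ${annual_improvement:,}")
print(f"Benefit-cost ratio: {bcr:.2f}:1   ROI: {roi:.0f}%")
```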

In addition to the tangible sales increase converted to money, several intangibles were connected to the program:

  • made the first sale, on average, in 11 days
  • customer satisfaction
  • brand awareness for ProfitPro
  • job satisfaction of sales associates
  • stress reduction for sales associates
  • reputation of company.

Accountability
The appropriate level of evaluation is usually achievable within budget and feasible to accomplish; it is a relatively simple process. The challenge is to take the initiative and be proactive: do not wait for executives to force the issue.

Owners and developers must build in accountability, measure successes, report results to the proper audiences, and make adjustments and improvements. This brings technology-based learning to the same level of accountability that IT faces in the implementation of its major systems and software packages. IT executives must show the impact, and often the ROI, of those implementations. Technology-based learning should not escape this level of accountability.

By Tamar Elkeles, Patti Phillips, and Jack Phillips


By Dr Patti Phillips

Today's business community, and particularly the talent development community, is relying on analytics to make decisions about programs and projects more than ever before. Yet many people still ask: How do we ensure the right measures are taken and that the connection between investment in our people and the results achieved is clear?

The answer is clear: through the process of business alignment.

Business alignment ensures that investment drives relevant business results. Achieving business alignment requires that programs, projects, and initiatives be positioned for success and then evaluated accordingly. There are three phases to business alignment:

  • Clarify stakeholder needs
  • Develop measurable objectives
  • Evaluate accordingly.

Clarify stakeholder needs
Initial alignment occurs when stakeholder needs are identified, which typically begins with the potential payoff of an opportunity or problem. These payoff opportunities represent an organization's chance to make money, save money, avoid costs, or do some greater good.

Some payoff opportunities are obvious, such as a $1.5 million cost due to unwanted employee turnover. Other payoff opportunities are not so obvious, like the desire to build a great workplace.

Once clear about the potential payoff opportunities, the next step is to identify the specific business measures that, if improved, will position your organization to take advantage of the payoff opportunity.

For example, let's say your company executives want the organization to become a green organization. The next step is to identify the business measures that need to improve to position your company as green. One relevant business measure may be the number of kilowatt-hours used per month.

Next, you will need to identify the performance needs, which are those behaviors or actions that, if changed, will improve the business measures. Using the green example, a performance need associated with kilowatt-hours might be employees leaving their computers on after they leave work.

Then you will need to identify what employees need to know to change their behavior. In our example, one learning need may be that employees need to know the environmental impact associated with leaving computers on overnight.

Finally, identify the program, project, or initiative that will address stakeholder needs. By getting clear on the ultimate payoff first and working through the process—from business need, performance need, and learning need, to the solution—you've increased your chances of identifying the right investment opportunity given the goal at hand. More important, you've also established a basis for the measures to be taken.
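
One simple way to keep this chain visible before design begins is to record it explicitly. The sketch below captures the green-organization example as a small alignment plan; the field names are illustrative rather than formal terminology.

```python
# Illustrative record of the alignment chain for the green-organization example.
# Field names are informal labels, not official methodology terms.
alignment_plan = {
    "payoff_need":      "Save money and support the goal of becoming a green organization",
    "business_need":    "Reduce kilowatt-hours used per month",
    "performance_need": "Employees turn off their computers before leaving work",
    "learning_need":    "Employees understand the impact of leaving computers on overnight",
    "solution":         "Green awareness campaign",
}

for step, need in alignment_plan.items():
    print(f"{step}: {need}")
```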

Develop objectives
The second phase of alignment is through the development of objectives. Objectives correspond to stakeholder needs. Developing powerful program objectives at multiple levels positions the investment for success.

Objectives describe to designers and developers the intent of an initiative, giving them direction concerning which components to include. They also provide facilitators, team leaders, and program managers direction as they assist participants in preparing to change behavior, apply knowledge, and drive business outcomes.

Objectives representing stakeholder needs communicate the "what and why" of a program or project to participants. They keep impact objectives in constant focus, reminding participants of the ultimate reason for investing in a program.

Developing objectives that reflect stakeholder needs communicates to stakeholders that the program owner "gets it": that he or she has paid attention to what is needed to support the organization's success.

Objectives also set the stage for program evaluation, ensuring the right measures are taken and that results important to stakeholders are developed. Developing powerful objectives that reflect stakeholder needs is imperative to achieving business alignment.

Evaluate the program
The last phase in the alignment process is evaluation, where data are collected and analyzed. The final result is a report of the chain of impact that occurs as people become involved in a program or project.

For example, you may have decided to implement a green campaign. Based on the information provided through the campaign, employees report agreement that going green is relevant and important. They indicate they know their role, why turning off their computer is important, and how much money they can save the company.

Three months after the campaign, you find that 90 percent of employees report they turn off their computer before leaving work. Upon checking the electric power bill, your organization's facilities manager reports an average decrease of 5 percent in kilowatt-hours used per month based on three months of data. These findings are directly aligned with the stakeholder needs described earlier.

But the question remains: How much of that decrease in kilowatt-hours used is really due to the green campaign?

Your decision to answer, or not answer, this question determines the credibility of your results. While some people argue that you can't isolate the effects of a program on results, we, along with others, argue that you can't afford not to.

There are a variety of ways to isolate the effects of investments made in talent, from a control group arrangement and trend line analysis to forecasting models and estimates adjusted for error. Jack Phillips was the first to introduce the practice of isolating the effects of programs on results to the human resources industry in the early 1980s. Today, some consulting practices build their branding campaigns around this one crucial step.
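
As a simple illustration of the last of those techniques, the sketch below shows how estimates adjusted for error might work: each estimator states how much of the improvement they attribute to the program and how confident they are in that estimate, and the observed improvement is discounted by both. All figures are hypothetical.

```python
# Hypothetical sketch of "estimates adjusted for error". The observed improvement is
# discounted by each estimator's attribution percentage and confidence level, and the
# adjusted fractions are averaged to isolate the program's share of the result.

observed_monthly_savings = 4_000   # hypothetical dollar value of the measured improvement

estimates = [
    {"attribution": 0.60, "confidence": 0.80},
    {"attribution": 0.50, "confidence": 0.70},
    {"attribution": 0.75, "confidence": 0.90},
]

adjusted = [e["attribution"] * e["confidence"] for e in estimates]
average_adjusted = sum(adjusted) / len(adjusted)

isolated_savings = observed_monthly_savings * average_adjusted
print(f"Improvement attributed to the program: ${isolated_savings:,.0f} per month")
```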

If you want to ensure that you take the right measures, analyze the right data, and make a clear connection to the business results you purport, follow the three phases of business alignment. Not only will you generate meaningful data, but you will also improve your chances of a positive ROI.

If you want to learn more about business alignment and how to isolate the effects of your program on improved business measures, join us for Evaluate Training Programs using the ROI Methodology, or the ROI Certification Program.
