It is difficult to imagine a world of learning and development without technology, and investment in technology continues to grow at astonishing rates. Its continued growth and use are all but certain. But these investments attract attention from executives, who want to know whether they are paying off.
Does it make a difference? How does it connect to the business? Does it really add the value that we anticipated? Is it as effective as facilitator-led learning? These are some of the questions training and education professionals must answer to show the impact learning technologies have on talent development.
A fundamental change
Learning technologies have been used in the workplace for more than 20 years, but it is only in the past few years that their impact could be described as a “fundamental change.” More recent evolutions of learning technology bring significant change in how we grow and develop our current and future employees. These include:
- mobile learning
- game-based learning
- bring your own device (BYOD) programs
- open educational resources
- massive open online courses (MOOCs)
- flipped classrooms.
Technology, with its many forms and features, is here to stay. However, some concerns must be addressed about the accountability and success of technology-based learning.
The need for business results
Most would agree that any large expenditure in an organization should in some way be connected to business success. Even in non- business settings, large investments should connect to organizational measures of output, quality, cost, and time—classic measurement categories of hard data that exist in any type of organization.
In Learning Everywhere: How Mobile Content Strategies Are Transforming Training, author Chad Udell makes the case for connecting mobile learning to business measures. He starts by listing the important measures that are connected to the business, including:
- decreased product returns
- increased productivity
- fewer mistakes
- increased sales
- fewer accidents
- fewer compliance discrepancies
- increased shipments
- reduced operating cost
- fewer customer complaints.
Udell goes on to say that mobile learning should connect to any of those measures, and he takes several of them step by step to show, in practical and logical terms, how a mobile learning solution can drive any or all of them. He concludes by suggesting that if an organization is investing in mobile learning or any other type of learning, it needs to connect to these business measures; otherwise, it shouldn’t be pursued. This dramatic call for accountability is not that unusual.
Credible connections
Those who fund budgets are adamant about seeing the connection between investing in learning technologies and business results. These executives realize that employees must learn through technology, often using mobile devices. And they know that employees must be actively involved and engaged in the process and learn the content. But more importantly, employees must use what they have learned and have an impact on the business.
Unfortunately, the majority of results presented in technology case studies are devoid of measurement at the levels executives need. Only occasionally are application data presented—measuring what individuals do with what they learn—and rarely do they report a credible connection to the business. Even rarer is the ROI calculation.
Evaluation of technology-based learning rests on six levels of data. Level 0 represents inputs to the process, and Levels 1-5 encompass outcomes from the process.
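A minimal sketch of that framework, expressed in Python, is below. The one-line descriptions are shorthand drawn from how each level is used later in this article, not formal definitions.

```python
# Six levels of evaluation data for technology-based learning.
# Level 0 describes inputs; Levels 1-5 describe outcomes.
EVALUATION_LEVELS = {
    0: "Input - inputs to the process",
    1: "Reaction - how participants react to the program",
    2: "Learning - knowledge and skills acquired",
    3: "Application - what individuals do with what they learn",
    4: "Impact - business measures influenced by the program",
    5: "ROI - monetary value of the impact compared with program cost",
}

for level, description in EVALUATION_LEVELS.items():
    print(f"Level {level}: {description}")
```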
In a recent review of award-winning e-learning and mobile learning case studies published by several prestigious organizations, not one project was evaluated at the ROI level, where the monetary value of the impact is compared with the program’s cost. Instead, the studies used the concept of ROI to mean any value or benefit from the program.
Mislabeling or misusing ROI creates concern among executives, who are accustomed to seeing ROI calculated in a very precise way by the finance and accounting team. Of the studies reviewed, only two or three were evaluated on the cost savings of technology-based learning compared with facilitator-led learning, and a cost comparison alone may not be a credible evaluation.
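For reference, the precise finance formulation compares program benefits with fully loaded program costs. A standard way to express the two measures (stated here for clarity; the reviewed case studies did not report it) is:

```latex
\text{BCR} = \frac{\text{program benefits}}{\text{program costs}},
\qquad
\text{ROI}\,(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
```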
Credible connections to the business were rare. Only one study attempted to show the impact of mobile learning using comparison groups. Even there, the details about how the groups were set up and the actual differences were left out. When the data are vague or missing, it raises a red flag.
Reasons for lack of data
In our analysis of technology-based learning programs, several major barriers emerged.
These obstacles keep the proponents from developing metrics to the levels desired by executives:
- Fear of results. Although few will admit it, individuals who design, develop, or own a particular program are concerned that if the results are not good, the program may be discontinued and it will affect their reputation and performance.
- Measurement should not be necessary. Some designers and developers suggest that investments in technology-based learning should be accepted on faith that they will make a difference. After all, technology is absolutely necessary.
- Measuring at this level is not planned. When capturing the business impact and developing the ROI, the process starts from the beginning, at the conception of the project or program. Unfortunately, evaluation is not given serious consideration until after the project is implemented, which is too late for an effective evaluation.
- Measurement is too difficult. Some feel it is too difficult to capture the data or that it’s impossible to secure quality information.
- Measurement is not the fun part of the process. Technology-based learning is amazing, awesome, impressive, and fun. Gamification is taking hold. People love games. They’re fun. However, measuring application, impact, and ROI is usually not fun (but it can be).
- Not knowing which programs to evaluate at this level. Some technology proponents think that if they go down the ROI path, executives will want to see the ROI in every project and program. The challenge is to select particular projects or programs that will need to be evaluated at this level.
- Not prepared for this. The preparation for designers, developers, implementers, owners, and project managers does not usually include courses in metrics, evaluation, and analytics.
Because these barriers are perceived to be real, they inhibit evaluation at the levels desired by executives. But they are myths for the most part. Yes, evaluation will take more time and there will be a need for more planning. But the step-by-step process of the ROI methodology is logical.
Case study
This case study highlights the key issues in calculating the impact and ROI of a mobile learning solution on business results.
Summary. This project involves a mobile learning application for sales associates of a large firm specializing in software solutions for the trucking industry. Sales associates were provided a mobile learning solution for their iPads, designed to teach them to describe and sell an upgrade to the company’s most popular software product, ProfitPro.
Measuring results. The first 25 people signed up within three days of the program’s announcement. Level 1 reaction data were collected at the end of the fifth module. Reactions were as expected, averaging 4.3 on a five-point scale.
Level 2 learning seemed appropriate, and quiz scores were above targets. The average score was 20.8 out of a possible 25.
Level 3 application data seemed to be on track. The skills of identifying pricing options and explaining implementation and support were off a little, but overall the objectives were met. As expected, there were some barriers and enablers to success. The barriers were minimal. However, there was a concern that 9 percent of sales associates were not encouraged by their managers to use the program. The number one enabler was management encouragement.
The Level 4 impact data comparing the experimental group of 25 sales associates with the control group of 22 revealed that almost all of the control group members (19 of 22) were selling the upgrade even though they did not participate in the program. But the difference between the two groups was impressive.
For the control group, the average amount of monthly sales per associate was $3,700 and the average time to first sale was 21 days. Conversely, the experimental group members took an average of only 11 days to make their first sale and sold on average $7,500 per month—a difference of $3,800. The difference was then annualized, producing an improvement of $1.14 million.
Next, the fully loaded costs were included to make the ROI calculation credible. The benefit-cost ratio and ROI were then calculated. ROI was measured as 311 percent, which exceeded the objective.
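The arithmetic behind these figures is straightforward, and the short Python sketch below reproduces it. The fully loaded cost is not reported in the case study, so the figure used here is only an assumption, chosen to be roughly consistent with the published 311 percent ROI.

```python
# Worked illustration of the case study arithmetic.
# The fully loaded cost is an ASSUMED figure; the article does not report it.
associates            = 25       # experimental group size
monthly_sales_control = 3_700    # average monthly sales per control-group associate ($)
monthly_sales_mobile  = 7_500    # average monthly sales per experimental-group associate ($)
fully_loaded_cost     = 277_000  # assumed, roughly consistent with the reported 311% ROI

monthly_lift   = monthly_sales_mobile - monthly_sales_control   # $3,800 per associate
annual_benefit = monthly_lift * 12 * associates                 # $1,140,000 annualized

bcr     = annual_benefit / fully_loaded_cost                               # benefit-cost ratio
roi_pct = (annual_benefit - fully_loaded_cost) / fully_loaded_cost * 100   # ROI as a percentage

print(f"Annualized benefit: ${annual_benefit:,}")
print(f"BCR: {bcr:.2f}   ROI: {roi_pct:.0f}%")  # about 4.12 and 312% with the assumed cost
```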
In addition to the tangible sales increase converted to money, several intangibles were connected to the program:
- made the first sale, on average, in 11 days
- customer satisfaction
- brand awareness for ProfitPro
- job satisfaction of sales associates
- stress reduction for sales associates
- reputation of company.
Accountability
The appropriate level of evaluation is usually achievable within the budget and feasible to accomplish; it is a relatively simple process. The challenge is to take the initiative and be proactive—do not wait for executives to force the issue.
Owners and developers must build in accountability, measure successes, report results to the proper audiences, and make adjustments and improvements. This brings technology-based learning to the same level of accountability that IT faces in the implementation of its major systems and software packages. IT executives must show the impact, and often the ROI, of those implementations. Technology-based learning should not escape this level of accountability.
By Tamar Elkeles, Patti Phillips, and Jack Phillips