INSOURCES BLOG


5 Characteristics of successful virtual training

When you look at a half-filled glass of water, what do you see? Is it half-empty or half-full? While either one can be a correct answer, how you respond depends on your experience and your current perspective.

In a similar vein, when you consider virtual training, do you see it as a chore to complete ("Ugh, another boring online meeting to attend...") or as a refreshing way to connect with others and learn? Again, either answer can be correct. Your perspective is probably based on your experience and your expectations.

Sadly, many people—myself included—have experienced bad virtual training. Perhaps you've heard a monotone presenter who doesn't engage the audience, or you've attended a session beset with technology problems. But these experiences don't mean that all virtual training is bad. Virtual training can be an engaging way to learn when it is intentionally planned and executed.

The most successful virtual training includes five common characteristics.

Intentional selection
What's the reason your organization chose a virtual training solution? Hopefully, virtual training is chosen because it's the best way to learn a certain topic or to reach a particular audience, and not simply as a way to slash travel costs. Select virtual training as a delivery method because it makes sense.

Appropriate technology
There are many online platforms available on the market, yet not all of them are conducive to virtual training. Select a virtual classroom platform that includes all the features you need for your learning to be successful. If you need small group interaction, choose one that includes breakout rooms. Or, if you need to show detailed live demonstrations, choose one that has high-definition video streaming.

Thoughtful design
A successful virtual training class engages participants, invites them to connect with each other, and helps them learn and apply a new skill. Good design is more than just posting slides online and clicking through them while someone talks. It's about creating a high-quality online learning experience.

Skilled facilitators
The quality of a facilitator can make or break a virtual training class. The best facilitators build rapport with learners, create a safe learning environment, and are extremely comfortable with the technology. They can engage a remote audience, similar to a radio host who creates a community of active listeners. In addition, using a co-facilitator or producer is a must for smooth and seamless virtual training.

Prepared participants
Participant preparation is an often-overlooked key ingredient to virtual training success. In order for participants to get the most out of an online class, they need

  • an appropriate space in which to learn, with minimal distractions
  • adequate equipment to participate, such as a telephone headset so they don't have to cradle a phone against their ear for 90 minutes
  • aligned expectations, knowing that they are attending a training class and not just another meeting or conference call.

Through careful planning and deliberate execution, your virtual training can be a smashing success. Your participants will appreciate it, and perhaps have a changed perspective about online learning.

Source: ATD Community of Practice, by Cindy Huggett, author of The Virtual Training Guidebook (ASTD Press).



How this unique system compares with competing models and theories.

The ROI Methodology had its beginnings in the early 1970s, when the first ROI study was conducted to evaluate a cooperative education program at Lockheed Martin. The study was developed at the request of executives, and it translated into funding, improvement, and support for the program. From this first study, the model was refined over a period of years and presented at conferences throughout the 1970s and 1980s. The first book describing the methodology, Handbook of Training Evaluation and Measurement Methods (Gulf Publishing), was published in 1983 and was the first book on training evaluation published in the USA. Since then, over 50 other books on evaluating learning and development have been published, including Don Kirkpatrick's book in 1994. What is unique about the development of this process is that it was published early and offered to others to use and refine. At the same time, the developer, Dr. Jack Phillips, continued to make improvements and adjustments along the way.

The Development

The philosophy behind the development is that it had to satisfy three important groups to be successful. The first group, the users, must see it as user-friendly and free of complicated mathematics and difficult processes. The second group, and perhaps the most important, is the executives who fund projects; they have to see the results as credible, reflecting data that is important to them. Finally, the third group is the researchers, evaluators, and professors, who need a process built on a sound foundation that is valid and reliable. Here is more detail on these three groups.

User Compatibility
This is the most widely used evaluation system in the world, adopted by over 5,000 organizations. Over 30,000 individuals have attended two-day workshops to build critical skills, and over 7,000 individuals have been involved in ROI Certification, leading to the designation of Certified ROI Professional. A 5,000-member global network has been established, and ROI networks are operating in over 25 countries and various states and territories of the USA. At least one ROI conference is held globally each year, and in some years several ROI conferences are conducted. Users of the methodology have published over 400 case studies in a variety of major publications. No other evaluation system compares in its usage, adoption, and documentation.

Executive Compatibility: CEO and CFO Friendly
This evaluation system meets the requirements of executives who want to see programs evaluated with data they appreciate and understand. This methodology collects and reports impact data, the number one measure that executives want to see from projects. It also includes an operating standard to always isolate the effects of the program on the data, which is a critical credibility issue with executives. This step determines the amount of improvement in a business measure that is connected to a specific project or program. Finally, it generates ROI using very conservative standards on both the monetary benefits and the cost. ROI is the number two measure sought by executives when evaluating projects and programs.

Most of the 5,000 studies conducted each year by users of the ROI Methodology are presented to C-suite executives. In the ROI Certification workshop, users are prepared to make this briefing to executives, position the data correctly, and secure executive buy-in. Many top executives have endorsed the methodology and the books. CFOs speak highly of the process; some have actually shared the podium in ROI presentations. In a growing number of organizations, the CFO is the point person for implementing ROI.

Over half of the Fortune 500 companies in the USA are using this methodology and most of those connections are through C-suite executives. Governments in the USA, UK, Ireland, Singapore, Italy, Chile, Spain, Canada, and Mexico (to name a few) have endorsed the methodology and have adopted it as their evaluation system of choice. It has also been adopted by the UN as their evaluation system of choice to implement throughout the network of agencies and departments. The ROI Methodology appears regularly in the business press. This methodology has been featured in the Wall Street Journal, CNBC, Business Week Magazine, Fortune Magazine, and on CNN. In addition, it has been featured in TV and radio shows in over a dozen other countries outside the USA.

Researchers' Compatibility
The methodology meets the criteria for a theoretically sound and logically based process that is both reliable and valid. The first ROI study was published in a peer-reviewed journal in 1975 (Journal of Cooperative Education). Since then, the methodology has appeared in over 200 articles, including several peer-reviewed journal articles. In 1995, when ROI Certification began, professors were routinely invited to provide feedback on how the process could be adjusted to improve its validity and reliability. They made it a better process.

The system of data collection and analysis follows a logical chain of impact and a logical framework (logframe). The methodology is often referred to as the enhanced logic model, and this is how it is labeled within the United Nations network of evaluators. The enhancement to the logframe is based on three additions:

  1. It always includes a method to isolate the effects of the program, addressing the attribution issue that is critical for the credibility of a study.
  2. It always has the potential to develop a cost-benefit analysis (ROI).
  3. It captures five categories (levels) of outcome data: reaction, learning, application, impact, and ROI. Together, these make the ROI Methodology a much-improved version of the logframe.

ROI books have been adopted for use in over 100 universities, 50 of those in the USA. ROI Certification is a regular course in many universities' master's or Ph.D. programs. It meets all of the standards required by the International Society for Performance Improvement (ISPI) for a human performance technology (HPT) model.

The ROI Methodology is the most documented evaluation system in the world, with over 70 books now supporting the process, translated into 38 languages. Case studies have been published from over 30 countries, and professors routinely collaborate with ROI Institute founders on presentations, articles, and books. UN Women, a United Nations agency, nominated this methodology for a Nobel Prize in 2012, based on its contributions to the results framework and its success at the UN.

In summary, this is a process that has been accepted by academic circles, research groups, evaluation specialists, and others who demand that a model have a sound basis in its development.

Comparison with Other Models

Perhaps the best way to evaluate this methodology is to compare it with other models and theories. For the most part, the other systems of measuring and evaluating projects fall short of providing the proper system for accountability, process improvement, and results generation. As we examine the ways in which programs are evaluated, ten requirements surface. Table 1 lists each problem or issue and presents what is needed for improvement. It also shows how the ROI Methodology addresses all ten of these areas. Here is more detail.

Table 1. Evaluation issues and how the ROI Methodology addresses them

Topic | Problem or Issue | What is Needed | ROI Methodology
----- | ---------------- | -------------- | ---------------
Focus of use | Audit focus; punitive slant; surprise nature | Process improvement focus | The number-one use for the ROI Methodology
Standards | Few, if any, standards exist | Standards needed for consistency and credibility | Twelve standards accepted by users
Types of data | Only one or two data types | A balanced set of data | Six types of data representing quantitative, qualitative, financial, and nonfinancial data
Dynamic adjustments | Not dynamic; does not allow for adjustments early in the project cycle | A dynamic process with adjustments made early and often | Adjusts for improvement at four levels and at different time frames
Connectivity | Not respectful of the chain of impact that must exist to achieve a positive impact | Data collected at each stage of the chain | Every stage has data collection and a method to isolate the project's contribution
Approach | Activity based | Results based | Twelve areas for results-based processes
Conservative nature | Analysis not very conservative | A conservative approach is needed for buy-in | Very conservative; CFO- and CEO-friendly
Simplicity | Not user-friendly; too complex | User-friendly, simple steps | Ten logical steps
Theoretical foundation | Not based on sound principles | Should be based on a theoretical framework | Endorsed by hundreds of professors and researchers; grounded in research and practice
Acceptance | Not adopted by many organizations | Should be used by many | More than 5,000 organizations using the ROI Methodology

Focus of Use
Sometimes evaluation looks like auditing. Usually during a surprise visit, someone checks to see whether the program is working as planned, and a report is generated (usually too late) to indicate that a problem exists.

Evaluation of many capital expenditures, for example, is often implemented this way. The project is approved by the board, and after it is completed, a board-mandated follow-up report is produced by internal auditors and presented to the board. This report points out how things are working and/or not working, often at a point that is too late to make any changes. Even in government, social sciences, and education, the evaluations are often structured in a similar way. For example, our friends in the British government tell us that when new projects are approved and implemented, funds are set aside for evaluation. When the project is completed, an evaluation is conducted and a detailed report is sent to appropriate government authorities. Unfortunately, these reports usually reveal that the program is not working, and it is too late to do anything about it. Even worse, the people who implemented the project are either no longer there or no longer care. When accountability issues are involved, the evaluation reports usually serve as punitive information to blame the usual suspects or serve as the basis for performance review of those involved.

It is not surprising that auditing with a punitive twist does not work for process improvement. Project evaluations must be approached with a sense of process improvement—not performance evaluation. If the project is not working, then changes must take place for it to be successful in the future.

Standards
Unfortunately, many of the approaches to evaluate projects lack standards unless the project is a capital expenditure, in which case the evaluation process is covered by Generally Accepted Accounting Principles (GAAP). However, most programs or projects are not capital expenditures. In these instances, standards must be employed to ensure consistent application and reliable results. Overall, the standards should provide consistency, conservatism, and cost savings as the program is implemented. Use of standards allows the results of one program to be compared to those of another and the project results to be perceived as credible.

Types of Data
The types of data that must be collected vary. Unfortunately, many programs focus on impact measures alone, such as cost savings, less waste, improved productivity, or improved customer satisfaction: the measures expected to change when the program is implemented. The types of measures also include intangibles.
What is needed is a balanced set of data that contains financial and non-financial measures as well as qualitative and quantitative data. Multiple types of data not only show results of investing in programs or projects, but help explain how the results evolved and how to improve them over time. To effectively capture the return on investment, six types of data are needed: reaction, learning, application, impact, ROI, and intangible benefits.
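As an illustration of this balanced set, the six data types could be collected into a single record per program. This is only a sketch; the field names and example values below are hypothetical, not an official ROI Methodology schema.

```python
from dataclasses import dataclass, field

@dataclass
class ProgramEvaluation:
    """Illustrative record of the six types of data listed above.
    Field names and contents are hypothetical, not an official schema."""
    reaction: dict        # participant reaction, e.g. relevance ratings
    learning: dict        # learning measures, e.g. test scores
    application: dict     # on-the-job application of skills
    impact: dict          # business impact measures (cost, quality, time, output)
    roi_percent: float    # financial return on investment
    intangibles: list = field(default_factory=list)  # benefits not converted to money

evaluation = ProgramEvaluation(
    reaction={"relevance_rating": 4.5},
    learning={"post_test_score": 88},
    application={"skill_use": "weekly"},
    impact={"cost_savings_usd": 120_000},
    roi_percent=25.0,
    intangibles=["improved teamwork", "higher engagement"],
)
```

Keeping the financial and nonfinancial measures side by side in one record makes it harder to report impact alone while ignoring the rest of the chain.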

Dynamic Adjustments
As mentioned earlier, a comprehensive measurement system must allow opportunities to collect data throughout project implementation rather than waiting until it has been fully completed (perhaps only to find out it never worked from the beginning). Reaction and learning data must be captured early. Application data must be captured when project participants are applying knowledge, skills, and information routinely. All these data should be used to make adjustments in the project to ensure success, not just to report post-program outcomes at a point that is too late to make a difference. Impact data are collected after routine application has occurred and represent the consequences of implementation. These data should be connected to the project and must be monitored and reviewed in conjunction with the other levels of data. When the connection is made between impact and the project, a credible ROI is calculated.

Connectivity
For many measurement schemes, such as the balanced scorecard, it is difficult to see the connection between a project and the results. It is often a mystery as to how much of the reported improvement is connected to the project or even whether a connection exists.
Data need to be collected throughout the process so that the chain of impact is validated. In addition, when the business measure improves, a method is necessary to isolate the effects of the project on the data to validate the connection to the measure.

Approach
Too often, the measurement schemes are focused on activities. People are busy. They are involved. Things are happening. Activity is everywhere. However, activities sometimes are not connected to impact. The project must be based on achieving results at the impact and ROI levels. The project should not only track monetary results; the steps and processes along the way should also focus on results. Driving improvement should be inherent to the measurement process. By having a measurement process in place, the likelihood of positive results increases. A complete focus on results versus activity improves the chances that people will react positively, change their attitude, and apply necessary actions, which lead to a positive impact on immediate and long-term outcomes.

Conservative Nature
Many assumptions are made during the collection and analysis of data. If these assumptions are not conservative, the numbers are overstated and unbelievable, which decreases the likelihood of accuracy and buy-in. The results, including ROI, should be CFO- and CEO-friendly.

Simplicity
Too often, measurement systems are complex and confusing for practical use, which leaves users skeptical and reluctant to embrace them. The process must be user-friendly, with simple, logical, and sequential steps, and free of sophisticated statistical analysis and complicated financial information, at least for projects whose participants lack statistical expertise. It must be accessible even to those without statistical or financial backgrounds.

Theoretical Foundation
Sometimes measurement systems are not based on sound principles. They use catchy terms and processes that make some researchers and professors skeptical. A measurement system must be based on sound principles and theoretical frameworks. Ideally, it uses accepted processes as it is implemented, and it should be supported by professors and researchers who have used the process with a goal of making it better.

Acceptance
A measurement system must be used by practitioners in all types of organizations. Too often, a measurement scheme is presented as theoretical but lacks evidence of widespread use. The ROI Methodology, first described in publications in the 1970s and 1980s (with an entire book devoted to it in 1997), is now used by more than 5,000 organizations. It is applied in all types of projects and programs, including technology, quality, marketing, and human resources, among others. In recent years it has been adopted for green projects and sustainability efforts.

The success of the ROI Methodology lies in its comprehensive process, which meets the important needs and challenges of those striving for successful projects.

The Elusive ROI

Without a doubt, the concept of ROI has entered every field. In recent literature it is mentioned regularly, often with a lot of passion, but some issues accompany its usage. Sometimes individuals and executives use the term ROI to reflect a benefit or value instead of the financial definition of ROI. In other words, they are using cost effectiveness to show that if they lower costs, they have a positive ROI. In other cases, it is treated as cost recovery, which may fit the ROI definition, but sometimes does not. Sometimes terms such as return on expectation or return on inspiration (ROE/ROI) are used, which have dramatically different meanings for finance and accounting executives than they do for those who coin such acronyms.

ROI Basics

Profits can be generated through increased revenue or cost savings. In practice, more opportunities can be found for cost savings than for increased revenue. Cost savings can be realized when improvements in productivity, quality, efficiency, cycle time, or actual cost reduction occur. In a review of almost 500 studies, approximately 85 percent used a payoff based on cost savings from output, quality, efficiency, time, or a variety of soft data measures; the others used a payoff based on revenue increases, where the earnings were derived from the profit margin. Cost savings are especially important for nonprofits and public-sector organizations, where opportunities for profit are often unavailable. Most of their projects or programs are connected directly to cost savings, and ROI can still be developed in these settings.

The formula should be used consistently throughout an organization. Deviations from, or misuse of, the formula can create confusion, not only among users but also among finance and accounting staff. The chief financial officer (CFO) and the finance and accounting staff should become partners when evaluating programs for ROI. The staff must use the same financial terms as those used and expected by the CFO. Without the support, involvement, and commitment of these individuals, widespread use of ROI will be unlikely.
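The financial definition behind this consistency is the standard one: net program benefits (monetary benefits minus fully loaded costs) divided by costs, expressed as a percentage, with the benefit-cost ratio as a companion measure. A minimal sketch, using illustrative figures:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """ROI (%) = (net program benefits / program costs) x 100."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """BCR = program benefits / program costs."""
    return benefits / costs

# Illustrative program: $750,000 in monetary benefits, $500,000 in fully loaded costs.
print(roi_percent(750_000, 500_000))         # 50.0 -> a 50% ROI
print(benefit_cost_ratio(750_000, 500_000))  # 1.5 -> $1.50 in benefits per $1 invested
```

Note the two measures answer different questions: a BCR of 1.5 means each dollar invested returns $1.50 in benefits, while the 50 percent ROI expresses the net gain of 50 cents per dollar after costs are recovered.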

ROI Misuse
Table 2 shows some financial terms that are misused in the literature. The word return is a finance and accounting term. Terms such as return on intelligence (or information), abbreviated as ROI, do nothing but confuse the CFO, who assumes that ROI refers to the return on investment described earlier. Sometimes return on expectations (ROE), return on anticipation (ROA), and return on client expectations (ROCE) are used, also confusing the CFO, who assumes the abbreviations refer to return on equity, return on assets, and return on capital employed, respectively. The use of these terms in the payback calculation of a project will also confuse, and perhaps lose the support of, the finance and accounting staff. Other terms are often used with almost no consistency in financial calculations. The bottom line: don't confuse the CFO. Consider this person an ally, and use the same terminology, processes, and concepts when applying financial returns for projects.

Table 2. Misused financial terms

Term | Misuse | CFO Definition
---- | ------ | --------------
ROI | Return on information, inspiration, intelligence, or involvement | Return on investment
ROE | Return on expectation, events, or engagement | Return on equity
ROA | Return on anticipation | Return on assets
ROCE | Return on client expectation | Return on capital employed
ROV | Return on value | (none)
ROP | Return on people | (none)
ROR | Return on resources | (none)
ROT | Return on technology | (none)
ROL | Return on luck | (none)
ROW | Return on web | (none)
ROM | Return on marketing | (none)
ROO | Return on objectives | (none)
ROQ | Return on quality | (none)

Sometimes particular terms gain attention in practical use and need more explanation. One of these is return on expectation (ROE), where the expectation of a particular program or project is normally defined by some client group. In reality, the expectation is actually an objective that is set for the project. The good news is that the ROI Methodology allows objectives to be set at five levels (reaction, learning, application, impact, and ROI); any expectation agreed with a stakeholder will fit into those categories. There is no need for a new term that creates confusion. The problem with return on expectation is that it often creates an illusion of impact data, and it quickly loses credibility outside the department where it is created.

Some people use the term return on value (ROV), but it usually has no calculated value and no definition that cannot be explained within the five levels. Value will always be defined in one or more of the categories of the ROI Methodology, essentially with objectives set for each of these levels. Meeting those objectives accomplishes the same thing without a new term. Again, the word return creates the impression that there is more value than meets the eye.

Others use the concept of return on objectives (ROO), suggesting a different model. Obviously, the creators of these "new models" are not aware that the basis for the ROI Methodology is its five levels of objectives, which are set with various stakeholders and clients throughout the process. ROE, ROV, and ROO add nothing and are essentially already part of the ROI Methodology.

Other terms, such as return on luck, return on inspiration, return on training, return on technology, return on events, and return on engagement, create nothing but confusion with key clients, who often have a completely different view of those terms or no understanding at all of what they mean. The rule: keep it simple, and use terms that are acceptable to C-suite executives, who are often very familiar with the appropriate ROI terminology.



What's the ROI on ROI?

When an organization implements the ROI Methodology, the value or payoff of the approach itself becomes an issue. This process, when used properly, moves a function from activity-based to results-based. The activity-based approach focuses on developing programs, counting people involved, and reporting data about activities. The results-based approach requires that programs begin with the end in mind, with very specific business measures. It also involves a continuous focus on results throughout the process and creating expectations among all involved to deliver the results. These results include data that are CEO- and CFO-friendly.

The paradigm shift from an activity-based process to a results-based process is shown below.

Activity Based – Characterized by: | Results Based – Characterized by:
---------------------------------- | ---------------------------------
No business need for the program | Program linked to specific business needs
No assessment of performance issues | Assessment of performance effectiveness
No specific measurable objectives | Specific objectives for application and business impact
No effort to prepare program participants to achieve results | Results expectations communicated to participants
No effort to prepare the work environment to support the program | Environment prepared to support the program
No efforts to build partnerships with key managers | Partnerships established with key managers and clients
No measurement of results or cost-benefit analysis | Measurement of results and cost-benefit analysis (ROI)
Reporting on programs is input focused | Reporting on programs is outcome focused

This shift is often met with resistance from those who are involved in the process. The approach requires training program designers, developers, facilitators, and coordinators to think about accountability early and often, and it shifts the way in which programs are deployed. For example, it requires them to develop objectives at the application and business impact levels. Because of this, the investment in ROI implementation becomes significant, not only in direct costs but also in time and effort. Sometimes this is perceived as extra work by the learning and development team, instead of an opportunity to show the connection to business value. So the obvious question is, "Is this worth it?" Most executives suggest that this approach is necessary and that the "ROI on the ROI" is not needed; however, it is a helpful and recommended exercise.

The Benefits

The clients who have been using this methodology for several years have reported many benefits. These are usually in six categories.

Improve Projects. The number-one benefit of using the ROI Methodology is that the data drive improvement in projects and programs. This is the principal focus of the methodology; data are collected to show how the project should change to increase success. When projects are not delivering the value needed, i.e., a negative ROI, the data indicate what needs to change to deliver the proper business value. Some users report their application of the process has led to the removal of unnecessary programs. When programs are successful, data are collected to show how new improvements can be made to make them more successful.

Secure Funding. Additional funds are often attributed directly to the use of the ROI Methodology. Some budgets have increased significantly, even in the face of budget reductions in other parts of the organization. One tool and small appliance maker reported a twofold increase in the learning and development budget based on the use of ROI. A large, well-known insurance company quadrupled its learning and development budget in two years with the ROI Methodology. Some users have been able to secure funding on a pre-program basis, with an ROI forecast. Others use the ROI Methodology to justify next year's budget.

Implement New Projects. Some users evaluate a pilot program to determine if that program should be implemented in other areas. Capturing six types of data creates a much better database for decision making. When executives and sponsors are convinced that the program is adding value, particularly with a positive ROI, this same project can then be implemented in many other areas of the business. For example, one of the world's largest retailers uses the ROI Methodology to show the value of projects before they are implemented throughout all of its stores. Using a pilot group of 20-25 stores, the company compares the results with a similar group and makes the decision to implement the program based on its complete profile of success, including ROI. This lowers the risk associated with the decision to implement.
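The pilot-versus-comparison-group logic described above reduces to a simple calculation: only the difference between the pilot group's result and the comparison group's result is attributed to the program. The store figures below are invented for illustration.

```python
def isolated_impact(pilot_result: float, comparison_result: float) -> float:
    """Attribute only the pilot-vs-comparison difference to the program
    (a control-group method of isolating the program's effects)."""
    return pilot_result - comparison_result

# Hypothetical average monthly sales per store after the pilot program:
pilot_avg = 412_000.0       # stores that ran the program
comparison_avg = 398_000.0  # similar stores that did not
print(isolated_impact(pilot_avg, comparison_avg))  # 14000.0 attributed to the program
```

Because the comparison group absorbs seasonal trends and other outside influences, the residual difference is a far more credible estimate of the program's contribution than the pilot stores' raw improvement.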

Build Support. Support of projects and programs is an area of concern for most project leaders and program directors. Additional support is almost always needed, particularly from middle-level managers. When the ROI Methodology is used, these managers have more data about the success of programs. When programs and projects drive impact and ROI data, managers will support the effort. ROI users report examples of a dramatic increase in support from middle-level managers.

Enhance Relationships. Collecting data to show the value of projects and programs is one of the best ways to enhance relationships and earn a "seat at the table." To be effective in an organization, users must work with a variety of clients and stakeholders. Productive relationships with key managers must be developed. Many users of the process indicate that relationships with business partners have improved. As one manager in a brewery in Europe stated, "Presenting an ROI impact study was the first time I had an intelligent business discussion with the CEO, and it made a tremendous difference in our relationship going forward."

Improve Image. When data reveal the success of various projects and programs at the impact and ROI levels, the image begins to change. Some organizational functions have a reputation for not contributing value (human resources, learning and development, communications, change management, public relations, and ethics and compliance are often viewed this way). Critics question the value of projects and programs in these areas. While this change does not occur overnight, users report that the image of the function has been enhanced considerably with the use of ROI, graduating from the perception of an activity-based cost center to a results-based investment center.

 

Monetary Benefits

The good news is that the specific benefits of this implementation can be credibly converted to money. The monetary impact can be derived from the following areas.

Improving Program Effectiveness. When impact and ROI studies are conducted, there are almost always improvements to be made, and these can easily be converted into money. For example, when a program delivers a positive ROI, the study usually includes recommendations for increasing that ROI. Those recommendations are implemented, and the ROI is calculated again. When steps are taken to improve the program, the impact increases, and so does the corresponding monetary value of that impact. Because monetary value is already calculated as part of the ROI Methodology, it is simply a matter of showing the improvement in money connected to the revised program. The difference in monetary benefits (the numerator of the two ROI calculations) shows the monetary value of the revised program, caused directly by the use of ROI evaluation.

This benefit is even more dramatic when a program is evaluated and has a negative ROI. Changes are often made to turn it positive, and the program is evaluated again. The monetary benefit of the change is the difference between the numerators of the two calculations. These benefits can be substantial, particularly for a program implemented across an entire organization.
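The numerator comparison described above can be illustrated with a short sketch. The figures below are hypothetical, and the formula is the standard net-benefits-over-costs ROI calculation; only the arithmetic, not any specific program data, comes from the article.

```python
def roi_percent(monetary_benefits, program_costs):
    """Standard ROI formula: net program benefits divided by program costs,
    expressed as a percentage."""
    return (monetary_benefits - program_costs) * 100.0 / program_costs

# Original program (hypothetical): $180,000 in monetary benefits
# against $150,000 in fully loaded program costs.
before = roi_percent(180_000, 150_000)   # 20.0 (%)

# Revised program after implementing the study's recommendations:
# benefits rise to $240,000 while costs stay the same.
after = roi_percent(240_000, 150_000)    # 60.0 (%)

# The value created by the revision is the change in the numerator,
# i.e. the difference in monetary benefits between the two studies.
benefit_of_revision = 240_000 - 180_000  # $60,000
```

The same arithmetic applies to the negative-ROI case: if the first study's benefits fall short of costs, the difference between the two numerators still measures the money added by the redesign.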

Improving Program Efficiencies. Often, when recommendations are made to improve a program, there are action items to make it more efficient. This may involve reducing session time while delivering the same level of effectiveness, for example, providing the same content in one week that would usually take two. Another example is using e-learning or blended learning to replace some of the instructor-led sessions: a five-day program becomes three days of live meetings, with a few hours of e-learning in advance and post-program coaching provided by a manager. These efficiencies translate into a cost reduction for the revised approach. It is a matter of showing the cost difference in a subsequent follow-up, the difference in the denominator of the two calculations.
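Where the effectiveness case changes the numerator, the efficiency case changes the denominator. A minimal sketch, again with hypothetical figures:

```python
def roi_percent(monetary_benefits, program_costs):
    """Standard ROI formula: net program benefits divided by program costs,
    expressed as a percentage."""
    return (monetary_benefits - program_costs) * 100.0 / program_costs

# Five-day instructor-led program (hypothetical figures):
# $300,000 in monetary benefits against $250,000 in delivery costs.
original = roi_percent(300_000, 250_000)  # 20.0 (%)

# Blended redesign delivers the same benefits at $200,000.
revised = roi_percent(300_000, 200_000)   # 50.0 (%)

# The efficiency gain is the change in the denominator:
# the cost reduction per delivery of the program.
cost_saving = 250_000 - 200_000           # $50,000
```

Note that the benefits are unchanged; the ROI improves purely because the same outcome is delivered at lower cost.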

Expanding Successful Programs. When the ROI Methodology is used to evaluate a pilot of a new program, tremendous benefits can surface. There are two options. First, a pilot program is evaluated and the results are positive: it can then be implemented across the organization with confidence that it will deliver the same success. The value added by this rollout is attributable to the methodology; in other words, the program would not have been implemented without this process.

The second option is the case in which a particular program is requested, but the evaluation of the pilot, using this methodology, shows that it does not deliver monetary value and cannot do so even with adjustments. Consequently, the program is not implemented, and the savings can be tremendous. The monetary benefit is a matter of calculating the costs that would normally have been incurred without this analysis: typically, the program would have been implemented and a tremendous amount of money wasted on it. If this situation occurs in just one program, it can pay for the complete implementation of the ROI Methodology.
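The avoided-cost reasoning above reduces to simple subtraction. The figures here are invented for illustration; the point is only that the saving is the rollout spend that never happens, net of what the pilot evaluation itself cost.

```python
# Hypothetical figures for a pilot that stops an organization-wide rollout.
pilot_cost = 50_000            # cost of running and evaluating the pilot
full_rollout_cost = 2_000_000  # what implementing across all locations would cost

# The pilot shows a negative ROI that adjustments cannot fix,
# so the rollout is cancelled. The saving is the avoided spend,
# less the money already invested in the pilot evaluation.
avoided_cost = full_rollout_cost - pilot_cost  # $1,950,000
```

Even with generous assumptions about evaluation cost, a single cancelled rollout of this size can fund the entire ROI Methodology implementation, which is the article's claim.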

Discontinuing Ineffective Programs. Sometimes an ongoing program is evaluated and the data show that it is not adding value, and that a redesign or redeployment will not make it successful. Such a program will never add business value, regardless of how it is changed, and needs to be removed, saving the organization a tremendous amount of money. In one UN organization, for example, the ROI Methodology was used to show that an existing, ongoing program did not add business value. It was discontinued, saving the organization almost half of its learning and development budget, a monetary windfall available only because the methodology was in use.

The Intangibles

Many intangible benefits are derived from the use of ROI; these are the benefits that are not converted to money. One is the effective use of data at lower evaluation levels that do not necessarily translate into impact and ROI, which is important when building an evaluation system. Others include an improved image for the learning and development function, stronger business-partner relationships, and more efficient use of learning and development resources. Together, these position learning and development as more of a strategic function within the organization.

Source: ROI Institute, Inc.

© 2019 - 2020 by Insources Group Pty Ltd. All rights reserved.
