How this unique system compares with competing models and theories.
The ROI Methodology had its beginnings in the early 1970s, when the first ROI study was conducted to evaluate a cooperative education program at Lockheed Martin. The study was developed at the request of executives, and it translated into funding, improvement, and support for the program. From this first study, the model was refined over a period of years and presented at conferences throughout the 1970s and 1980s. The first book describing the methodology, Handbook of Training Evaluation and Measurement Methods (Gulf Publishing), was published in 1983 and was the first book on training evaluation in the USA. Since then, over 50 other books on evaluating learning and development have been published, including Don Kirkpatrick’s book, published in 1994. What is unique about the development of this process is that it was published early and offered to others to use and refine, while the developer, Dr. Jack Phillips, continued to make improvements and adjustments along the way.
The philosophy behind its development is that the methodology had to satisfy three important groups to be successful. The first group, the users, must see it as user friendly, free of complicated mathematics and difficult processes. The second group, and perhaps the most important, is the executives who fund projects; they have to see the results as credible, reflecting data that is important to them. Finally, the third group is the researchers, evaluators, and professors, who need a process built on a sound foundation that is valid and reliable. Here is more detail on these three groups.
This evaluation system is the most used evaluation system in the world, adopted by over 5,000 organizations. Over 30,000 individuals have attended two-day workshops to build critical skills, and over 7,000 individuals have been involved in ROI Certification leading to the designation of Certified ROI Professional. A 5,000-member global network has been established. ROI networks are operating in over 25 countries and various states and territories of the USA. At least one ROI conference is held globally each year, and in some years several ROI conferences are conducted. Users of the methodology have published over 400 case studies in a variety of major publications. No other evaluation system compares in its usage, adoption, and documentation.
Executive Compatibility: CEO and CFO Friendly
This evaluation system meets the requirements of executives who want to see programs evaluated with data they appreciate and understand. This methodology collects and reports impact data, the number one measure that executives want to see from projects. It also includes an operating standard to always isolate the effects of the program on the data, which is a critical credibility issue with executives. This step determines the amount of improvement in a business measure that is connected to a specific project or program. Finally, it generates ROI using very conservative standards on both the monetary benefits and the cost. ROI is the number two measure sought by executives when evaluating projects and programs.
Most of the 5,000 studies conducted each year by users of the ROI Methodology are presented to C-suite executives. In the ROI Certification workshop, users are prepared to deliver this briefing to executives, positioning the data correctly and securing executive buy-in. Many top executives have endorsed the methodology and the books. CFOs speak highly of the process; some have actually shared the podium in ROI presentations. In a growing number of organizations, the CFO is the point person for implementing ROI.
Over half of the Fortune 500 companies in the USA are using this methodology, and most of those connections are through C-suite executives. Governments in the USA, UK, Ireland, Singapore, Italy, Chile, Spain, Canada, and Mexico (to name a few) have endorsed the methodology and adopted it as their evaluation system of choice. It has also been adopted by the UN as its evaluation system of choice to implement throughout the network of agencies and departments. The ROI Methodology appears regularly in the business press. It has been featured in the Wall Street Journal, on CNBC, in Business Week and Fortune magazines, and on CNN. In addition, it has been featured on TV and radio shows in over a dozen other countries outside the USA.
The methodology meets the criteria for a theoretically sound and logically based process that is both reliable and valid. The first ROI study was published in a peer-reviewed journal in 1975 (Journal of Cooperative Education). Since then, the methodology has been described in over 200 articles, including some in peer-reviewed journals. In 1995, when ROI Certification began, professors were routinely invited to provide feedback on how the process could be adjusted to improve its validity and reliability. They made it a better process.
The system of data collection and analysis follows a logical chain of impact and a logical framework (logframe). The methodology is often referred to as the enhanced logic model, and this is the way it is labeled within the United Nations network of evaluators. The enhancement to the logframe is based on three additions:
- It always includes a method to isolate the effects of the program, addressing the attribution issue that is critical to the credibility of a study.
- It always has the potential to develop a cost-benefit analysis (ROI).
- There are five categories (levels) of outcome data: reaction, learning, application, impact, and ROI.

Together, these additions make the ROI Methodology a much improved version of the logframe.
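To make the isolation addition concrete, one common adjustment approach scales an observed improvement by an estimated attribution percentage and then discounts it by a confidence percentage. The sketch below is a hypothetical illustration, not a prescribed implementation; the function name and figures are invented for this example:

```python
def isolated_improvement(total_improvement: float,
                         attribution_pct: float,
                         confidence_pct: float) -> float:
    """Portion of a business-measure improvement credited to the program.

    total_improvement -- observed change in the measure (monetary units)
    attribution_pct   -- estimated share caused by the program (0-100)
    confidence_pct    -- confidence in that estimate (0-100), applied as an
                         error adjustment to keep the claim conservative
    """
    return total_improvement * (attribution_pct / 100) * (confidence_pct / 100)

# Hypothetical: a $100,000 improvement, 60% attributed to the program,
# with 80% confidence in the estimate -> $48,000 credited to the program.
print(isolated_improvement(100_000, 60, 80))  # 48000.0
```

Discounting by the confidence estimate keeps the credited improvement conservative, which is consistent with the methodology's emphasis on credibility with executives.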
ROI books have been adopted for use in over 100 universities, with 50 of those in the USA. ROI Certification is a regular course in many universities for master's or Ph.D. programs. It meets all of the standards required by the International Society for Performance Improvement (ISPI) for a human performance technology (HPT) model.
The ROI Methodology is the most documented evaluation system in the world, with over 70 books now supporting the process, translated into 38 languages. Case studies have been published from over 30 countries, and professors routinely connect with ROI Institute founders in presentations, articles, and books. UN Women, a United Nations agency, nominated this methodology for a Nobel Prize in 2012, based on its contributions to the results framework and its success at the UN.
In summary, this is a process that has been accepted by academics, research groups, evaluation specialists, and others who demand that a model have a sound basis in its development.
Comparison with Other Models
Perhaps the best way to evaluate this methodology is to compare it with other models and theories. For the most part, the other systems of measuring and evaluating projects fall short of providing the proper system for accountability, process improvement, and results generation. As we examine the ways in which programs are evaluated, ten requirements surface. Table 1 lists each problem or issue and presents what is needed for improvement. It also shows how the ROI Methodology addresses all ten of these areas. Here is more detail.
| Problem or Issue | What Is Needed | How the ROI Methodology Responds |
| --- | --- | --- |
| Audit focus; punitive slant; surprise nature | Process improvement focus | The number one use for the ROI Methodology |
| Few, if any, standards exist | Standards needed for consistency and credibility | Twelve standards accepted by users |
| Only one or two data types | Need a balanced set of data | Six types of data representing quantitative, qualitative, financial, and nonfinancial data |
| Not dynamic; does not allow for adjustments early in the project cycle | A dynamic process with adjustments made early and often | Adjusts for improvement at four levels and at different time frames |
| Not respectful of the chain of impact that must exist to achieve a positive impact | Data collected at each stage of the chain | Every stage has data collection and a method to isolate the project’s contribution |
| Focused on activity rather than results | A results-based focus | Twelve areas for results-based processes |
| Analysis not very conservative | A conservative approach is needed for buy-in | Very conservative; CFO and CEO friendly |
| Not user friendly; too complex | User-friendly, simple steps | Ten logical steps |
| Not based on sound principles | Should be based on a theoretical framework | Endorsed by hundreds of professors and researchers; grounded in research and practice |
| Not adopted by many organizations | Should be used by many | More than 5,000 organizations using the ROI Methodology |
Focus of Use
Sometimes evaluation looks like auditing. Usually during a surprise visit, someone checks to see whether the program is working as planned, and a report is generated (usually too late) to indicate that a problem exists.
Evaluation of many capital expenditures, for example, is often implemented this way. The project is approved by the board, and after it is completed, a board-mandated follow-up report is produced by internal auditors and presented to the board. This report points out how things are working and/or not working, often at a point that is too late to make any changes. Even in government, social sciences, and education, the evaluations are often structured in a similar way. For example, our friends in the British government tell us that when new projects are approved and implemented, funds are set aside for evaluation. When the project is completed, an evaluation is conducted and a detailed report is sent to appropriate government authorities. Unfortunately, these reports usually reveal that the program is not working, and it is too late to do anything about it. Even worse, the people who implemented the project are either no longer there or no longer care. When accountability issues are involved, the evaluation reports usually serve as punitive information to blame the usual suspects or serve as the basis for performance review of those involved.
It is not surprising that auditing with a punitive twist does not work for process improvement. Project evaluations must be approached with a sense of process improvement—not performance evaluation. If the project is not working, then changes must take place for it to be successful in the future.
Unfortunately, many of the approaches to evaluate projects lack standards unless the project is a capital expenditure, in which case the evaluation process is covered by Generally Accepted Accounting Principles (GAAP). However, most programs or projects are not capital expenditures. In these instances, standards must be employed to ensure consistent application and reliable results. Overall, the standards should provide consistency, conservatism, and cost savings as the program is implemented. Use of standards allows the results of one program to be compared to those of another and the project results to be perceived as credible.
Types of Data
The types of data that must be collected vary. Unfortunately, many programs focus on impact measures alone, showing cost savings, less waste, improved productivity, or improved customer satisfaction. These are the measures that should change when the program is implemented, but they are not the full picture; the types of measures also include intangibles.
What is needed is a balanced set of data that contains financial and non-financial measures as well as qualitative and quantitative data. Multiple types of data not only show results of investing in programs or projects, but help explain how the results evolved and how to improve them over time. To effectively capture the return on investment, six types of data are needed: reaction, learning, application, impact, ROI, and intangible benefits.
As mentioned earlier, a comprehensive measurement system must allow opportunities to collect data throughout project implementation rather than waiting until the project has been fully completed (perhaps only to find out it never worked from the beginning). Reaction and learning data must be captured early. Application data must be captured when project participants are applying knowledge, skills, and information routinely. All these data should be used to make adjustments in the project to ensure success, not just to report post-program outcomes at a point that is too late to make a difference. Impact data are collected after routine application has occurred and represent the consequences of implementation. These data should be connected to the project and must be monitored and reviewed in conjunction with the other levels of data. When the connection is made between impact and the project, a credible ROI can be calculated.
For many measurement schemes, such as the balanced scorecard, it is difficult to see the connection between a project and the results. It is often a mystery as to how much of the reported improvement is connected to the project or even whether a connection exists.
Data need to be collected throughout the process so that the chain of impact is validated. In addition, when the business measure improves, a method is necessary to isolate the effects of the project on the data to validate the connection to the measure.
Too often, measurement schemes are focused on activities. People are busy. They are involved. Things are happening. Activity is everywhere. However, activities sometimes are not connected to impact. The project must be based on achieving results at the impact and ROI levels. Not only should the project track monetary results, but the steps and processes along the way should also focus on results. Driving improvement should be inherent to the measurement process. By having a measurement process in place, the likelihood of positive results increases. A complete focus on results versus activity improves the chances that people will react positively, change their attitude, and apply necessary actions, which lead to a positive impact on immediate and long-term outcomes.
Many assumptions are made during the collection and analysis of data. If these assumptions are not conservative, the numbers are overstated and unbelievable, which decreases the likelihood of accuracy and buy-in. The results, including ROI, should be CFO and CEO friendly.
Too often, measurement systems are complex and confusing for practical use, which leaves users skeptical and reluctant to embrace them. The process must be user-friendly, with simple, logical, and sequential steps. It must be free of sophisticated statistical analysis and complicated financial information, at least for projects involving participants who lack statistical expertise, so that even those without statistical or financial backgrounds can use it.
Sometimes measurement systems are not based on sound principles. They rely on catchy terms and ad hoc processes that make some researchers and professors skeptical. A measurement system must be based on sound principles and theoretical frameworks. Ideally, it must use accepted processes as it is implemented. The process should be supported by professors and researchers who have used it with the goal of making it better.
A measurement system must be used by practitioners in all types of organizations. Too often, a measurement scheme is presented as theoretical but lacks evidence of widespread use. The ROI Methodology, first described in publications in the 1970s and 1980s (with an entire book devoted to it in 1997), is now used by more than 5,000 organizations. It is applied to all types of projects and programs, including technology, quality, marketing, and human resources. In recent years it has been adopted for green projects and sustainability efforts.
The success of the ROI Methodology lies in its comprehensive process, which meets the important needs and challenges of those striving for successful projects.
The Elusive ROI
Without a doubt, the concept of ROI has entered every field. In recent literature it is mentioned regularly, and often with a lot of passion, but some issues accompany its use. Sometimes individuals and executives use the term ROI to reflect a benefit or value instead of the financial definition of ROI. In other words, they are using cost effectiveness to show that if they lower costs, they have a positive ROI. In other cases, it is treated as cost recovery, which may fit the ROI definition but sometimes does not. Sometimes terms such as return on expectation or return on inspiration (ROE/ROI) are used, which have dramatically different meanings for finance and accounting executives than they do for those who coin such acronyms.
Profits can be generated through increased revenue or cost savings. In practice, more opportunities can be found for cost savings than for increased revenue. Cost savings can be realized when improvements in productivity, quality, efficiency, cycle time, or actual cost reduction occur. In a review of almost 500 studies, approximately 85 percent used a payoff based on cost savings from output, quality, efficiency, time, or a variety of soft data measures. The others used a payoff based on revenue increases, where the earnings were derived from the profit margin. Cost savings are especially important for nonprofits and public-sector organizations, where opportunities for profit are often unavailable; most projects or programs there are connected directly to cost savings, and ROI can still be developed in these settings.
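The two measures at the heart of these studies reduce to simple formulas: the benefit-cost ratio (BCR) is monetary benefits divided by fully loaded program costs, and ROI (%) is net benefits divided by fully loaded costs, times 100. A minimal sketch, with hypothetical cost-savings figures:

```python
def bcr(benefits: float, costs: float) -> float:
    """Benefit-cost ratio: monetary benefits divided by fully loaded costs."""
    return benefits / costs

def roi_pct(benefits: float, costs: float) -> float:
    """ROI (%): net benefits divided by fully loaded costs, times 100."""
    return (benefits - costs) / costs * 100

# Hypothetical cost-savings program: $750,000 in annual savings attributed
# to the program, against $300,000 in fully loaded program costs.
benefits, costs = 750_000.0, 300_000.0
print(f"BCR = {bcr(benefits, costs):.2f}")       # BCR = 2.50
print(f"ROI = {roi_pct(benefits, costs):.0f}%")  # ROI = 150%
```

Note that the two measures answer different questions: a BCR of 2.50 says $2.50 is returned for every dollar spent, while an ROI of 150% reports the gain after costs are recovered.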
The formula should be used consistently throughout an organization. Deviations from, or misuse of, the formula can create confusion, not only among users but also among finance and accounting staff. The chief financial officer (CFO) and the finance and accounting staff should become partners when evaluating programs for ROI. The staff must use the same financial terms as those used and expected by the CFO. Without the support, involvement, and commitment of these individuals, widespread use of ROI will be unlikely.
Table 2 shows some financial terms that are misused in the literature. The word return is a finance and accounting term. Terms such as return on intelligence (or information), abbreviated as ROI, do nothing but confuse the CFO, who assumes that ROI refers to the return on investment described earlier. Sometimes return on expectations (ROE), return on anticipation (ROA), and return on client expectations (ROCE) are used, also confusing the CFO, who assumes the abbreviations refer to return on equity, return on assets, and return on capital employed, respectively. The use of these terms in the payback calculation of a project will also confuse, and perhaps lose the support of, the finance and accounting staff. Other terms are often used with almost no consistency in terms of financial calculations. The bottom line: don't confuse the CFO. Consider this person an ally, and use the same terminology, processes, and concepts when applying financial returns for projects.
| Term | Misuse in the Literature | CFO Definition |
| --- | --- | --- |
| ROI | Return on information; return on inspiration; return on intelligence; return on involvement | Return on investment |
| ROE | Return on expectation; return on events; return on engagement | Return on equity |
| ROA | Return on anticipation | Return on assets |
| ROCE | Return on client expectation | Return on capital employed |
| Other | Return on value; return on people; return on resources; return on technology; return on luck; return on web; return on marketing; return on objectives; return on quality | No standard financial definition |
Sometimes particular terms gain attention in practical use and need more explanation. One of these is return on expectation (ROE), where the expectation for a particular program or project is normally defined by some client group. In reality, the expectation is actually an objective that is set for the project. The good news is that the ROI Methodology allows objectives to be set at five different levels (reaction, learning, application, impact, and ROI); any expectation that can be created with a stakeholder will fit into those categories. There is no need for a new term that creates confusion. The problem with return on expectation is that it often creates the illusion of impact data, and it quickly loses credibility outside the department where it is created.
Some people use the term return on value (ROV), but it usually has no calculated value, and it has no definition that cannot be explained within the five levels. Value will always be defined in one or more of the categories of the ROI Methodology, essentially with objectives set for each of these levels. Meeting those objectives accomplishes the same thing without a new term. Again, the word return creates an impression that there is more value than meets the eye.
Others use the concept of return on objectives (ROO), suggesting a different model. Obviously, the creators of these “new models” are not aware that the basis for the ROI Methodology is its five levels of objectives, which are set with various stakeholders and clients throughout the process. ROE, ROV, and ROO add nothing and are essentially part of the ROI Methodology.
Other terms, such as return on luck, return on inspiration, return on training, return on technology, return on events, and return on engagement, create nothing but confusion with key clients, who often have a completely different view of those terms or no understanding at all of what they mean. The rule is to keep it simple and use terms acceptable to C-suite executives, who are often very familiar with the appropriate ROI terminology.