** Module 4 **


** module 4 > lesson 1 **

** Introduction **

** Context **
With continuing budget cuts and limited resources, instructional and performance technologists are being held more accountable for training and other performance interventions than ever before. To acquire resources, justify expenditures, and remain a valued resource within an organization, technologists must be prepared to answer such questions as:
 * Did the intervention work?
 * What were the results?
 * Was the intervention worth the time and money?
 * What was or will be the return on investment?
Such questions are not easily answered. It is important for instructional and performance technologists to understand methods for evaluating performance and determining return on investment.

** Objectives **
The purpose of this activity is to increase your ability to measure and evaluate performance, as well as calculate return on investment. Given a performance intervention, describe how you would apply one method for:
 * Evaluating the effectiveness of the intervention
 * Tracking performance related to the intervention
 * Calculating the return on investment in the intervention

** Resources **
 * Rossett: Chapters 8-13
 * Mager and Pipe: Chapter 13
 * HPT: Chapters 13, 14, and 51

** module 4 > lesson 2 **

** Evaluation **
Evaluations provide information about worth, value, or meaning to guide decision making. The object of an evaluation (the evaluand) can be almost anything (e.g., a program, a performance intervention, an individual's competence). In the systematic instructional design process, evaluation is typically depicted as the last step (as noted in Figure 1.2). Such representations, however, can be misleading. Formative evaluations may be completed during the Development phase of the systematic process, and organizations can also measure and/or evaluate the Analysis and/or Design phases.

//Figure 1.2. ADDIE Model//

A system may contain inputs, processes, outputs, and feedback loops. For example, Figure 2.2 depicts a human performance system for an individual playing golf. Inputs may consist of a golf course, golf shoes, a golf ball, golf clubs, etc. An individual taking a golf swing may be considered a key process. Then, if you play golf like I do, the output may be the golf ball hitting a tree, with feedback indicating that changes must be made to the swing (or that I should quit golfing). An objective-oriented evaluation would determine whether outputs meet intended objectives, and evaluation data would be gathered and fed back into the system for improvement.

//Figure 2.2. Basic Elements of a Human Performance System//
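As a purely illustrative sketch, the basic elements above can be modeled in a few lines of Python. The class, its fields, and the stubbed behavior are our own invention for this example (borrowing the golf scenario above), not something from the course texts:

```python
from dataclasses import dataclass

@dataclass
class PerformanceSystem:
    """Toy model of the basic elements in Figure 2.2."""
    inputs: list[str]   # e.g., golf course, shoes, ball, clubs
    objective: str      # what the output should accomplish

    def process(self, inputs: list[str]) -> str:
        # Key process: the golf swing (stubbed for illustration).
        return "ball hits a tree"

    def evaluate(self, output: str) -> str:
        # Objective-oriented evaluation: does the output meet the objective?
        if output == self.objective:
            return "objective met; no change needed"
        return "feedback: adjust the swing"  # fed back into the system

system = PerformanceSystem(
    inputs=["golf course", "golf shoes", "golf ball", "golf clubs"],
    objective="ball lands on the green",
)
output = system.process(system.inputs)
print(system.evaluate(output))  # -> feedback: adjust the swing
```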



** Brief History **
Evaluation has its roots in measurement theory, research, and practice. In education, as well as in business and industry, we have a long history of measuring physical, and later cognitive, individual differences (e.g., IQ). Before the 1930's, the terms "measurement" and "evaluation" were used synonymously. In the 1930's, Tyler posited the concept of behavioral objectives, which led to the delineation of program evaluations (used to determine whether learners achieved the specified objectives). During this time, evaluation studies tended to be experimental in design. Then, in the early 1960's, the purpose of evaluations broadened, focusing on the use of data, and evaluation became a distinct profession. Following the Soviet Union's launch of Sputnik, the U.S. government invested millions of dollars in the development of math and science curricula. Unfortunately, students did not perform well on related national math and science examinations. Teachers blamed poor performance on the students, and students blamed their teachers. However, the primary cause of the problem was found to be the design of the instructional materials. Cronbach and Scriven then coined the term "formative evaluation" to refer to evaluations completed during the design and development process to improve the effectiveness and quality of instructional materials. Table 2.1 elaborates on the differences between formative and summative evaluation methods in terms of purpose, scope, methods, and process.

Table 2.1. Comparison of key features associated with formative and summative evaluations


||  || **Formative Evaluation** || **Summative Evaluation** ||
|| **Purpose** || To improve processes, products, and outcomes, and to improve quality by reducing variance in educational products and services || To determine value, worth, or outcomes to inform decision making regarding program continuation, discontinuation, and/or revision, and individual placement or certification ||
|| **Scope** || Smaller in scope || Larger in scope ||
|| **Methods** || Alpha (internal) and beta (learner) tests; expert reviews, one-to-one and small-group evaluations, and field trials; usability tests || Objective-oriented, management-oriented, consumer-oriented, expertise-oriented, adversary-oriented, and naturalistic-oriented evaluations ||
|| **Process** || Planning, instrument development, data collection, data analysis, and reporting || Planning, instrument development, data collection, data analysis, and reporting ||

In the late 1970's, many criticized summative evaluation methods, arguing that they relied too heavily on quantitative and experimental approaches that required the //a priori// (advance) specification of the key questions to be answered by the evaluation, and that they served the interests of managers and other key decision makers (who determined the nature of the questions). Critics argued for more naturalistic evaluation methods that allowed key issues and questions to emerge during the evaluation, and that included data from the perspectives of all stakeholders (including, but not limited to, learners and instructional designers). The 1980's saw continued debate between those who advocated quantitative versus qualitative evaluation methods (paralleling similar debates among researchers who were also considering alternative research designs). One positive outcome of this debate was that it pushed people to consider the political nature of evaluations, as well as the misuse and nonuse of evaluation data. The "naturalists" gained legitimacy, and by the 1990's naturalistic evaluation approaches had gained the respect of most evaluators. Today there continues to be a proliferation of evaluation models that are still mired in ideological and conceptual differences. Furthermore, like needs assessments, evaluations are often neglected. For the purposes of this course, it is important and interesting to note the similarities between needs assessment and evaluation. As you study needs assessment (throughout the course) and evaluation (in this unit), you will see that both apply similar tools and processes, the primary difference being that needs assessments are conducted prior to the design and development of an intervention, while summative evaluations are completed after an intervention has been implemented.

** Approaches to Summative Evaluation **
In general, evaluations based on a positivist epistemology tend to apply quantitative methods (e.g., controlled experiments with treatment and control groups). Measured outcomes focus on dependent variables that are clearly defined and operationalized. The validity and reliability of methods and measurements are of utmost importance, and analyses concentrate on statistical comparisons. In contrast, evaluations based on an interpretivist epistemology tend to apply qualitative methods and be naturalistic in nature, relying on observations, interviews, and document analysis. The validity and reliability of methods and measurements are still important, but the focus is placed on triangulating information to establish validity and reliability (rather than on statistical approaches).

Figure 2.3. Comparison of alternative summative evaluation methods

Details on each basic approach are left to an Evaluation course. For the purposes of this course, keep in mind that there are a number of basic approaches to summative evaluation that differ in terms of their epistemological foundations and applied methods. Kirkpatrick's model (along with others) is discussed in further detail in Chapter 14 of the HPT course textbook. You should gain a good understanding of Kirkpatrick's model. Even though it was first delineated in 1959, it is still often used in business and industry, as well as the military, demonstrating the power and usefulness of the related concepts. Kirkpatrick's four levels of training evaluation (a specific objective-oriented evaluation approach) are:
 * Level 1 – Reactions
 * Level 2 – Learning
 * Level 3 – Behavior
 * Level 4 – Outcome

** Evaluation Process **
Shrock and Geis (1999) describe three basic stages associated with the evaluation process. If you haven't done so already, be sure to read and review Shrock and Geis' description of the evaluation process, as well as their discussion of follow-up issues and caveats.

Early Stage (Determining Evaluability):
 * 1) Identify players
 * 2) Identify purpose and use
 * 3) Identify objectives
 * 4) Secure collaboration
 * 5) Assess importance of evaluation
 * 6) Become familiar with program and players
 * 7) Consider criteria and standards
 * 8) Choose models and methods
 * 9) Conclude and make evaluability decision
Middle Stage:
 * 1) Design evaluation
 * 2) Prepare required materials
 * 3) Conduct pilot study
 * 4) Implement design
 * 5) Inform clients and stakeholders
Concluding Stage:
 * 1) Analyze data
 * 2) Consider ethical implications of results
 * 3) Report results

** module 4 > lesson 3 **

** Performance **
** Performance Tracking **
Over the past decade, there has been increased interest in human performance and business results, rather than in training and individual learning. Table 2.2 illustrates this change in interest, reporting the percentage of the more than 1,000 companies surveyed by Training Magazine that typically measure various levels of training outcomes.

Table 2.2. Percentage of companies reporting evaluations at four levels


|| **Level** || **1940's - 1990's** || **1990's** ||
|| I - Reactions || 100% || 86% ||
|| II - Learning || 75% || 51% ||
|| III - Behavior || 50% || 50% ||
|| IV - Impact || 10% || 44% ||

The move from measuring to monitoring to performance tracking is further illustrated by identifying key concepts associated with each approach:

** Measuring **
 * Before, occasionally during, and usually after the performance intervention
 * Episodic
 * Typically tests, followed by punishment (negatively viewed)

** Monitoring **
 * External to the performance system; an external recording system is added to the performance change system (obtrusive)
 * Continuous (collects data as performance is happening)

** Tracking **
 * A counter within the performing system records performance without interfering with the system's performance (non-obtrusive)
 * Continuous (collects data as performance is happening)

To facilitate the move from measurement to tracking, practitioners such as Lindsley (1999) note the importance of keeping the tracking system simple, shifting from sophisticated, academic measurements to practical means (e.g., from Likert scale averages to numbers of checks). Lindsley uses KISSING as an acronym to remind others of the key features thought necessary for establishing a powerful tracking system:
 * **K**eep
 * **I**t
 * **S**imple
 * **S**tandard
 * **I**mpactful
 * **N**atural
 * **G**raphic

** module 4 > lesson 4 **

** Return on Investment (ROI) **

** Context **
According to Swanson (1999), performance interventions are often viewed as a:
 * Major business process (something that an organization must do to succeed);
 * Value added activity (something that is potentially worth doing);
 * Optional activity (something that is nice to do); or
 * Waste of business resources (something that has costs exceeding its benefits).
Furthermore, some view Kirkpatrick's Level IV evaluation of business impact as flawed and functionally impotent because it is devoid of economic theory and of principle-referenced processes or tools. Such views, combined with an increased emphasis on accountability, have led to the concept of Return on Investment (ROI). In short, ROI methods analyze the actual benefits, or forecast the potential benefits, of specified performance interventions. They measure the performance value of a program and the cost of the program in order to calculate the benefits resulting from the program (Performance Value - Costs = Benefits). Swanson (1999) describes a number of cases associated with (a) classic ROI studies, (b) ROI studies conducted prior to development to make investment decisions, and (c) studies of new ROI theories and tools. I strongly encourage you to read and review these case studies if you haven't done so already.

** Calculating ROI **
Swanson (1999) also describes a method for determining ROI that includes three separate, but interrelated, analyses (a minimal sketch of how they combine arithmetically follows the list):
 * Performance Value Analysis;
 * Cost Analysis; and
 * Benefit Analysis.
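To show how the three analyses fit together arithmetically, here is a minimal Python sketch of Swanson's basic relation (Performance Value - Costs = Benefits). The dollar figures and function name are hypothetical, chosen only for illustration:

```python
def benefit_analysis(performance_value: float, total_cost: float) -> float:
    """Swanson's basic relation: Performance Value - Costs = Benefits."""
    return performance_value - total_cost

# Hypothetical outputs of the first two analyses (illustration only).
performance_value = 50_000.0  # result of the Performance Value Analysis
total_cost = 20_000.0         # result of the Cost Analysis

print(benefit_analysis(performance_value, total_cost))  # -> 30000.0
```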
I recommend that you review these analyses, along with the ROI case study Swanson presents to further illustrate their application.

** Relating ROI with Needs Assessments and Evaluations **
While considering ROI, it is important to note that others, such as, but not limited to, Phillips (2002), posit alternative methods for calculating ROI. The following is a brief overview of the ROI methods described by Phillips. Those interested in ROI are encouraged to acquire and read Phillips' book on ROI, published by CEP Press (http://www.cepworldwide.com) in collaboration with the International Society for Performance Improvement (http://www.ispi.org), along with other related references found in the course textbook. To begin, it is interesting to note how Phillips relates ROI to needs assessment, evaluation, and program objectives. As noted in Table 2.3, Phillips views ROI as a fifth and higher level of evaluation than Kirkpatrick's Level 4 training evaluation. He also distinguishes five levels of needs assessment. How would you compare Phillips' five levels of needs assessment with what you've learned about Kaufman's OEM and Rossett's TNA?

Table 2.3. Linking ROI to Needs Assessment, Evaluation, and Program Objectives


||= Levels ||= Needs Assessment ||= Program Objectives ||= Evaluation ||
|| 5 || Potential Payoffs || ROI || ROI ||
|| 4 || Business Needs || Impact Objectives || Business Impact ||
|| 3 || Job Performance || Performance Objectives || Behavior/Transfer ||
|| 2 || Skills & Knowledge || Learning Objectives || Learning ||
|| 1 || Preferences || Satisfaction Objectives || Reactions ||

Phillips (2002) goes on to describe a four-stage process for calculating ROI:
 * Stage 1: Evaluation Planning
 * Stage 2: Data Collection
 * Stage 3: Data Analysis
 * Stage 4: Communicate Results
Phillips' method for calculating ROI differs from the method discussed in the course textbook. Like the course textbook, Phillips notes that ROI is a function of monetary benefits and costs. However, Phillips posits that ROI is equal to the net program benefits divided by program costs, multiplied by 100 (as depicted in Figure 2.4).

** Evaluation Process Leading to ROI Estimates **
 * Develop Program Objectives
 * Develop Evaluation Plan
 * Collect Data During Program Implementation (Level 1 Reaction & Level 2 Learning)
 * Collect Data After Program Implementation (Level 3 Behavior & Level 4 Impact)
 * Isolate Effects of Program
 * Convert Data to Monetary Values
 * Capture Program Costs
 * Identify Intangible Benefits
 * Calculate ROI
 * Implement Communication Process
** Phillips' ROI Formulas **

ROI = (Net Program Benefits / Program Costs) * 100
//Figure 2.4. Formula for Calculating ROI//

To further clarify, Phillips indicates that Net Program Benefits equal Program Benefits minus Program Costs. Thus, the formula for calculating ROI may also be depicted as shown in Figure 2.5.

ROI = [(Program Benefits - Program Costs) / Program Costs] * 100
//Figure 2.5. Formula for Calculating ROI//

Phillips also compares ROI to more traditional cost-benefit analyses. He notes that Benefit Cost Ratios are often calculated using the formula illustrated in Figure 2.6.

BCR = Program Benefits / Program Costs
//Figure 2.6. Formula for Calculating Benefit Cost Ratio//
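To tie the three formulas together, here is a minimal Python sketch that computes BCR, Net Program Benefits, and ROI. The dollar figures match the worked example in the next paragraph; the function names are ours, not Phillips':

```python
def bcr(benefits: float, costs: float) -> float:
    """Benefit Cost Ratio (Figure 2.6): Program Benefits / Program Costs."""
    return benefits / costs

def roi_percent(benefits: float, costs: float) -> float:
    """ROI (Figures 2.4 and 2.5): net benefits over costs, as a percentage."""
    net_benefits = benefits - costs  # Net Program Benefits
    return (net_benefits / costs) * 100

# Worked example from the text: $581,000 in benefits, $229,000 in costs.
benefits, costs = 581_000.0, 229_000.0
print(round(bcr(benefits, costs), 2))       # -> 2.54 (about 2.5:1)
print(round(roi_percent(benefits, costs)))  # -> 154 (percent)

# Note how the two views relate: ROI% = (BCR - 1) * 100.
```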

Phillips then suggests that BCR and ROI present the same basic information, but from different perspectives. For example, let's say an effective training program saved a company $581,000 (Program Benefits) and cost $229,000 to develop and deliver (Program Costs). Using the formulas above, the BCR would equal $581,000 divided by $229,000 (= 2.54, or about 2.5:1). In comparison, the Net Program Benefits would equal $581,000 minus $229,000 (= $352,000), which means the ROI would equal $352,000 divided by $229,000, multiplied by 100 (= 154%). In terms of cost-benefit, this means that for every $1 invested in the training program, the program returned approximately $2.50 in monetary benefits. In terms of ROI, it means that each $1 invested in the program returned approximately $1.50 in net benefits after costs were covered.

** Challenges **
Whatever method you choose to calculate ROI, the primary challenge lies in putting a monetary value on intangible results, such as:
 * Improved public image
 * Increased job satisfaction
 * Increased organizational commitment
 * Enhanced leadership
 * Reduced stress
 * Improved teamwork, or
 * Improved customer service
One tactic used to address intangible results is to use only tangible, monetary benefits and program costs to calculate ROI, including intangible benefits as a discussion item when communicating ROI, rather than as part of the actual ROI calculations. Others attempt to put reasonable monetary values on intangible results and include such measures in their calculation of ROI. It is important to note that the credibility of reported ROI findings may be lost if the conversion of intangible results to monetary values is too subjective or inaccurate. Whatever the case may be, keep in mind that for some programs the intangible, difficult-to-measure, non-monetary benefits may be just as important as, or more important than, the tangible, monetary benefits. As such, a direct and concerted effort must be placed on labeling and explaining all variables included in ROI calculations.