Organizations that have reached higher measurement levels blend the various frameworks and find ways to customize evaluation.
by Frank Kalman
August 23, 2013
Most rely on quantitative as well as qualitative measures, ensuring human intuition and analysis are part of the process.
This is the case at Qualcomm Inc., a maker of wireless communication equipment. The company organizes its learning evaluation data using Metrics That Matter, a KnowledgeAdvisors tool that connects directly to its learning management system.
Jaime Maas, manager of organizational development at Qualcomm, said its program managers are assigned to specific learning topics. Managers input progress measures into the LMS, which feeds the data to Metrics That Matter.
Then, she said, the tool automates evaluation at levels 1 through 5, including the Phillips ROI model, to help learning leaders report and analyze the data.
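For context, those five levels are the four Kirkpatrick levels plus the fifth that Phillips added. A quick reference, with illustrative example measures that are not Qualcomm's actual metrics:

```python
# The five evaluation levels: Kirkpatrick's four plus Phillips' fifth, ROI.
# Example measures are illustrative only, not Qualcomm's actual metrics.
EVALUATION_LEVELS = {
    1: ("Reaction", "post-course satisfaction surveys"),
    2: ("Learning", "pre- and post-course knowledge assessments"),
    3: ("Behavior", "on-the-job application, observed or self-reported"),
    4: ("Results",  "business outcomes such as performance or turnover"),
    5: ("ROI",      "net program benefits relative to program costs"),
}

for level, (name, example) in EVALUATION_LEVELS.items():
    print(f"Level {level} ({name}): e.g., {example}")
```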
Maas said Qualcomm ties participation rates in learning to outcomes such as performance, promotion and turnover. The company also calculates ROI, she said, but the measure is analyzed in aggregate alongside qualitative measures.
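Qualcomm's exact calculation isn't described here, but Phillips' standard formula expresses ROI as net program benefits over fully loaded program costs:

\[
\mathrm{ROI}\,(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

A program that returns $150,000 in measured benefits on $100,000 of costs, for example, would report an ROI of 50 percent.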
Many of the companies interviewed for this report said they use Metrics That Matter, but Kent Barnett, founder and CEO of KnowledgeAdvisors, said the tool competes with similar technologies such as SAP BusinessObjects and IBM Cognos.
In addition to real-time analysis, some companies use isolated impact studies after a program to evaluate learning. Tom Evans, chief learning officer at professional services firm PricewaterhouseCoopers LLP, said the firm conducts these studies three to six months after a program. Impact study data are collected through surveys, focus groups and field visits so the team can see how learning has been applied.
Larger organizations might consider dedicating learning team members exclusively to individual business functions for learning evaluation. Lew Walker, vice president of learning services at AT&T, said this helps ensure each department's learning needs are matched with customized evaluation. “People that are doing the design and delivery … become very intimate to not only how that organization operates, but the jobs those people are doing that we are training them on.”
Walker said AT&T, which uses the Kirkpatrick model in addition to Phillips’ ROI model, employs another mechanism to strengthen evaluation: Program instructors have all done the jobs they train others on.
As a result, they enter the instructional role with a deep understanding of the skills required to perform. This gives them a sharper perspective on a program's qualitative measures and helps the firm reach more sophisticated levels of measurement.
Telus’ Pontefract initiated a measurement model called AUGER, or access, usage, grading, evaluation and return. The company measures interactions and site visits on its online learning portal, time on site, coaching interactions and, where applicable, grades.
These measures are aggregated and then paired with quarterly surveys asking employees whether they have learned in formal, informal and social ways, and how learning has contributed to a change in performance, which Telus calls return on performance. Pontefract said financial gains show up indirectly, through cost savings from reduced turnover and productivity gains from higher employee engagement.
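Pontefract doesn't detail the mechanics of the rollup, but a minimal sketch of how AUGER-style measures might be aggregated and paired with the survey result, with every field name assumed rather than taken from Telus, could look like this:

```python
# Hypothetical AUGER-style rollup; Telus' actual implementation is not
# described here, and every field name below is invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LearnerRecord:
    portal_visits: int               # access: visits to the learning portal
    minutes_on_site: float           # usage: time on site
    coaching_sessions: int           # usage: coaching interactions
    grade: Optional[float]           # grading: course grade, if one exists
    reported_learning: bool          # evaluation: learned formally/informally/socially
    reported_performance_gain: bool  # return: self-reported performance change

def return_on_performance(records: list[LearnerRecord]) -> float:
    """Percentage of surveyed employees reporting a change in performance."""
    gains = sum(r.reported_performance_gain for r in records)
    return 100.0 * gains / len(records)

records = [
    LearnerRecord(12, 340.0, 2, 88.0, True, True),
    LearnerRecord(5, 95.5, 0, None, True, False),
    LearnerRecord(20, 610.0, 4, 92.5, True, True),
]
print(f"Return on performance: {return_on_performance(records):.0f}%")  # -> 67%
```

Business metrics such as attrition and customer satisfaction would then be read alongside this figure rather than folded into it, consistent with Pontefract's indirect view of financial gains.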
“We’ve gone from a 62 percent return on performance three years ago to 74 percent as of current day,” he said. “… And when you look at our business metrics, our attrition is down, our stock doubled and then split, our customer satisfaction went up 12 points.”