by Michael, Ph.D.
July 30, 2008
Overwhelmingly, surveys of learning leaders identify the measurement of learning’s business impact as a high priority for the learning profession. Yet, those same surveys consistently show that few companies actually do the measurement. This incongruity is probably one reason that few chief executives say learning has a strategic impact on their organizations.
The bottom line: we talk about a seat at the table, but we don’t walk the talk. Our senior management is asking, “Where’s the beef?” We know the question is fair, yet little progress is being made. What stops us from reaching what we ourselves say the profession needs: the measurement of business impact?
I am going to be so bold as to suggest that what stops us has less to do with our internal learning-community discussions and more to do with bigger cultural issues around innovation and risk taking. In the end, the obstacle is not the tired internal debate about levels and ROI; it is the fact that we and our management are stuck. What we need is less talk and more experimentation — with a variety of measurement approaches.
In the face of severe scrutiny on quarterly earnings, our senior management has been reluctant to volunteer funds to experiment. At the same time, those same leaders are demanding more innovation from all parts of the organization.
Lest we become too smug in concluding that measuring business impact is all about “them” and not about those of us in the learning profession, I want to point out that we have not been strong advocates for innovation and experimentation. The onus is on learning professionals to champion the change that could get us out of the current rut.
The list of things that block us from measuring business impact is actually relatively short. An eLearning Guild survey identified two important factors. The first is the expertise required to conduct good scientific experimentation. This expertise spans gaining access to business data despite the messiness of our information structures, establishing control groups for comparing results, and applying sophisticated statistical methods to separate the influence of other factors from the impact of the learning on business outcomes. Few in our profession have the expertise to do this type of work. That does not mean the profession cannot obtain it; it merely means it does not currently have it.
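At its simplest, the control-group comparison described above can be sketched in a few lines of code. The figures, group sizes and metric below are invented purely for illustration; a real business-impact study would need far more data and a proper significance test to rule out chance and confounding factors.

```python
# Hypothetical sketch: compare a business outcome (e.g., monthly sales
# per rep) for a trained group against an untrained control group.
# All numbers are invented for illustration only.
from statistics import mean, stdev

trained = [112, 98, 105, 120, 101, 117]   # completed the program
control = [96, 91, 104, 99, 88, 102]      # did not

# Raw uplift: difference in group means.
uplift = mean(trained) - mean(control)

# A crude effect-size estimate (Cohen's d with a pooled standard
# deviation) hints at whether the uplift is large relative to the
# normal variation within each group.
pooled_sd = ((stdev(trained) ** 2 + stdev(control) ** 2) / 2) ** 0.5
cohens_d = uplift / pooled_sd

print(f"Mean uplift: {uplift:.1f}")
print(f"Effect size (Cohen's d): {cohens_d:.2f}")
```

Even this toy version makes the point in the survey: someone has to pick the control group, pull the data out of messy business systems, and know which statistic answers the question management is asking.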
The other blocking factor revealed by the eLearning Guild survey is the lack of resources — the budget required to design and conduct quality experiments. It is important to recognize that this budget is not the cash required to develop and deploy the learning intervention itself. It is the budget needed to design the experiment, capture the data, run the statistical analysis and, from the results, create reports that senior management can understand.
When it comes to budgets for experimentation, we are not talking about a lot of money. In a recent business impact study conducted under the guidance of Bellevue University’s Human Capital Lab, the resources required to analyze the impact of Chrysler’s sales consulting training program amounted to a mere 1.6 percent of the total cost to deploy the training to 49 percent of the Chrysler dealer network.
Chrysler is not alone in this effort to innovate. Learning leaders in several large organizations are looking to deploy these methods in their operations. The issue is more about innovation and risk management — on the part of both learning executives and legal staff — than it is about the actual cash required to do the work.
The resources required to measure business impact are peanuts compared to the potential impact on the organization. What’s missing is not the capital to do the work, but the will to strongly advocate the innovation that experimentation and evaluation represent.