Value is pretty straightforward when it comes to capital assets. When it comes to human capital, measuring value becomes more difficult.
by Michael Ph.D.
August 27, 2008
Still, it can be done in a way that makes sense for business leaders.
Most senior manager discussions about learning eventually boil down to cost. Why? What is it about cost that makes it the center of attention in training and learning decisions? It’s time to move beyond costs and introduce value into these management decisions.
The value journey begins with costs. The first reason cost dominates is that training and learning expenditures are accounted for as an expense on the income statement. Senior managers care a great deal about the income statement: increasing the top line (revenue) while decreasing the costs that siphon away resources on the way from the top line to the bottom line (profits).
The second reason costs dominate is that we agree on how to measure them. Our financial systems have rules for assigning a hard number to costs, and accountants measure and manage costs with great enthusiasm.
Neither of these characteristics exists on the value side of the equation. The value created by training and learning expenditures is not recorded anywhere. One reason there is no record is that there is no agreement within the learning community about how to measure value in the first place. The value conversation is being held hostage by the hardened sides that have polarized the ROI debate. It is time to move on.
Managing the value creation equation is important for several reasons. First, the spending itself is enormous: ASTD estimated U.S. organizations spent $129.6 billion on employee learning and development in 2006.
We all have a stake in the value being created by these learning expenditures. In the 21st-century global economy, intangible assets such as know-how create the majority of value. This is a major shift from the 20th century, when investments in plant and equipment were the primary engine of value creation. The good news is that this shift puts learning leaders at the heart of the value creation process. The bad news is there is little agreement about how to measure the value side of the equation.
One symptom of this impasse can be seen on corporate balance sheets. In a February BusinessWeek article, Moody’s Investor Service reported a record $1.6 trillion in cash on the balance sheets of nonfinancial U.S. companies, $600 billion more than five years before. By itself, this data means little.
When the cash position is combined with what learning leaders say they want, a totally different picture emerges. A Bersin & Associates study documents that 72 percent of surveyed learning executives stated that measurement of business impact is most important. In the same survey, when it comes to measuring business impact, 82 percent of organizations would like to spend more on measurement and 31 percent would like to spend much more.
The only possible conclusion is that when it comes to measuring value created, it is not motivation, prioritization or even the availability of cash that blocks the effort. The issue is how to measure the value created.
The Bellevue University Human Capital Lab is funding research on the development of management models, language and data: knowledge designed to represent training and education expenditures as strategic investments rather than current-period expenses. The lab is supporting measurement research at a growing list of companies including Chrysler, Citi, ACS, Sun Microsystems and others. The work on value creation with these and other organizations has surfaced three key elements related to the “how” of the value measurement issue. They are:
1. What outcome should we measure?
2. How do we know the measured outcomes are directly related to the learning intervention and not the result of all the other things going on in our rapidly changing world?
3. What will it cost to measure?
Let us begin with the easiest of the three: cost, meaning the cost to measure the value created. At the lab, we have documented studies in which the cost to measure ranged from 1 to 5 percent of the cost of the learning intervention.
To put this in perspective, let’s apply these ratios to the ASTD-reported expenditures for 2006. Had the learning industry spent between 1 and 5 percent on measuring value, it would have expended between $1.296 billion and $6.48 billion to measure the value created by the $129.6 billion actually spent. The fact is, nowhere near $1 billion was spent on measuring value in 2006. Why should we be surprised that we do not have measurements of the value created? We have not committed the resources to do the analysis.
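As a quick sanity check, the 1-to-5 percent range applied to the 2006 figure works out as follows (a trivial sketch using only the article’s reported numbers):

```python
# ASTD-reported 2006 U.S. learning spend (figure from the article).
learning_spend = 129.6e9  # dollars

# The lab's documented cost-to-measure range: 1 to 5 percent.
low, high = 0.01 * learning_spend, 0.05 * learning_spend
print(f"measurement budget: ${low / 1e9:.3f}B to ${high / 1e9:.2f}B")
```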
Items 1 and 2 are more complex, with 2 being the greatest departure from current practices. Once again, let’s begin with the easier of the two: defining the outcomes to be measured.
When it comes to measuring outcomes, metrics should revolve around outcomes critical to the executive suite. Surveys of senior executives define what turns out to be a rather short list, a kind of 80-20 structure in which 20 percent of the outcomes address 80 percent of executive priorities. These include:
• Growth in revenue.
• Employee engagement.
• Retention.
• Productivity.
• Innovation.
• Customer satisfaction.
• Cash flow.
The striking thing about this list is that, with the exception of innovation, most organizations already measure these outcomes. The primary variation from organization to organization is in how the items are prioritized. If the measurements are already taking place, what is missing? The linkage of these outcomes to learning interventions.
The first thing that must happen is to get senior operating management to agree on the value of changes in the desired outcomes. Revenue is easy. A $1 increase in revenue is valued at $1. While we are on the easiest outcome to measure value, let’s introduce the contentious issue, namely the finance manager’s pushback in some form of this question: “How do you know the $1 increase in revenue resulted from your learning program and not something else?”
We will return to this linkage issue in a moment. In the meantime, let’s take up another item on the executive’s outcome list: retention.
The issue is not whether the business measures retention. Almost every organization does, and any that doesn’t should, given the inevitable retirement of the baby boomers. The real issue is management alignment around the value of a 1 percent reduction in voluntary turnover, that is, turnover at the initiative of the employee.
Here, as with every item on the list, the key is not to identify another organization’s best practice, but to develop a consensus with your senior management about the value of a 1-point improvement in retention. The same value-consensus process must take place for employee engagement, productivity and customer satisfaction if these are agreed-upon outcome priorities for your organization. Many learning leaders freeze at the prospect of this management conversation, particularly in anticipation of item 2.
The only way to answer the CFO’s head-scratcher is to show that the analysis takes “everything else” into consideration in deriving the relationship between the learning intervention and the value created. A brief statistical discussion is required at this point.
In its most basic form, the value equation is represented by this simple equation:
Y = b*X + ε
Here Y is the dependent variable, the value being created. In the simplest case, Y could be the dollars of revenue or the value of improved retention or the incremental automobiles sold. X is the independent variable, the learning intervention. (Note: There can be multiple Ys and multiple Xs.)
In the case of Chrysler’s sales consultant training, the X variable was 0 or 1 (i.e., either sales consultants received the training or they did not). The coefficient b is the unit value created by the learning intervention. In the Chrysler study, the value of b was 15.6 incremental vehicles sold per year for those sales consultants who received the training relative to those who did not. In the study, 21,300 sales consultants received the training while 13,700 did not.
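As a back-of-envelope scale check (my arithmetic from the figures above, not a result reported by the study itself), the effect implies:

```python
# Figures from the article; the total is implied arithmetic, not
# a number the Chrysler study itself reports.
trained_consultants = 21_300
incremental_per_consultant = 15.6  # vehicles per year

implied_vehicles = trained_consultants * incremental_per_consultant
print(f"implied impact: {implied_vehicles:,.0f} vehicles per year")
```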
The final term in the equation is ε, or what is known in statistics as the “error term.” The important thing about the error term — and what makes a statistical analysis so critical to measuring value — is that it contains the total effect of “everything else.” While we may not know everything contributing to the error term, what we can measure is the cumulative, or aggregate, effect of all that other “stuff” going on. It is all captured in the error term.
In statistical analyses, conclusions about the strength of the relationship between the intervention X and the dependent variable are judged relative to the size of the error term. In some cases, when the error term is especially large, it is impossible to say that the value created was a result of the learning intervention and not something else.
In the Chrysler study, there was 99 percent certainty that the 15.6 incremental-vehicle impact was related to the training and not something else. This is a very powerful conclusion, one that empowers the learning organization and senior management to invest the needed resources to create the potential value.
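The mechanics above can be sketched with simulated data. The numbers below are illustrative, not Chrysler’s actual records: we assume a baseline of 80 vehicles per consultant per year with a spread of 20, build in the article’s 15.6-vehicle gap, then recover b as the difference in group means (which, for a 0/1 intervention with an intercept, equals the OLS slope) and weigh it against the error term with a two-sample t statistic.

```python
import random
import statistics

random.seed(7)

# Illustrative data only: per-consultant annual vehicle sales.
# The baseline mean (80) and spread (20) are assumptions; the
# built-in +15.6 gap mirrors the article's reported effect.
untrained = [random.gauss(80.0, 20.0) for _ in range(1370)]
trained = [random.gauss(95.6, 20.0) for _ in range(2130)]

# For a 0/1 intervention X (with an intercept), the OLS slope b
# in Y = b*X + e is simply the difference in group means.
b = statistics.mean(trained) - statistics.mean(untrained)

# The t statistic weighs b against the error term: the larger t,
# the harder it is to attribute the gain to "something else."
se = (statistics.variance(trained) / len(trained)
      + statistics.variance(untrained) / len(untrained)) ** 0.5
t = b / se
print(f"estimated b = {b:.1f} vehicles, t = {t:.1f}")
```

With thousands of observations per group, t lands far above the roughly 2.6 threshold for 99 percent confidence, which is the kind of result the Chrysler study reports.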
The last item on the executive priority list is innovation, an outcome critical to the survival of organizations in the 21st-century global economy.
Wikipedia offers an interesting take on innovation. The key concept in its definition is experimentation: “The essence of an experiment is to introduce a change in a system (the independent variable) and to study the effect of this change (the dependent variable).”
All of this brings us to the goal of the discussion, namely the progression from costs to value. The essence of the conversation is that measuring value created is not only desirable, but possible. What it will take is innovation on the part of learning leaders. That innovation needs to take the form of committing between 1 and 5 percent of learning resources to design and conduct experiments that address items 1 and 2 above: deriving a management consensus on outcomes and their value, and analyzing the results of the experiments in terms of the statistical significance of the intervention relative to everything else as captured in the error term.
If we are able to accomplish this as a learning community, we will not only move beyond costs to value but also fulfill the charter to innovate. What could conceivably be more valuable in the end?