Measuring Both the Journey and the Destination

Aaron Pomerantz, PhD

Measurement

Many people in human development claim to value measurement, just like many people claim to believe in evidence-based practice (see our last Leadership Lab!). However, the fact is that many developmental programs aren’t meaningfully assessed or evaluated, at least not in a way that allows for any degree of rigorous, empirical examination.

This lack of measurement isn’t necessarily malicious! It’s not like people are passionate about not measuring their outcomes effectively.

However, when it comes to improving measurement efforts, many people simply don’t know where to start. In such cases, it’s not helpful to say something like “just measure better!” Pointing out problems is easy. Solving them is harder, but far more useful.

Of course, there are numerous practical measurement concerns that could overwhelm even the most seasoned practitioner: bipolar or Likert scales? Self-report or behavioral coding? Performance task or survey? These are often questions for measurement professionals, and not something the average human development professional needs or even has time to worry about.

But even if the average human development professional doesn’t need to worry about psychometrics, that doesn’t mean they don’t struggle with a deeper question: What exactly should we be measuring?

As with all elements of evidence-based practice, the specific answer to that question will be highly contextual and depend on a variety of factors. But that doesn’t mean there aren’t broad truths to apply. In this case, there are two broad categories of measurement that should be considered, each very different from the other, but both essential to ensuring effective leader development: process measures and outcome measures.

Process Measures

Process measures (sometimes called formative, reaction, or engagement measures) relate to how the actual process of a program—its administration and experience—is perceived and received. In other words, process measures capture whether people enjoy what you’re doing, if they’re engaged with it, and whether they want to come back.

In practice, this can include things like participants’ ratings of their satisfaction or enjoyment, their willingness to recommend the program, the perceived value of a program, or rates of attendance, completion, and attrition. Many leader development and other human development programs rely heavily on process measures in their assessment.

Now, it’s worth noting that there’s nothing wrong with process measures. It’s important, fundamental even, to understand the degree to which a leader development program is enjoyable and perceived as valuable; if it isn’t, no one will come and the program will fail, no matter how powerful its content might otherwise be.

However, while process measures are important for developmental programs to use, they are not actual measures of development itself. That means it’s problematic when leader development programs use process measures as evidence of impact.

To illustrate why this is a problem, imagine hosting a pizza-and-movie night, something that’s pretty common in college environments like the ones where we at the Doerr Institute do much of our work. If you work in a more upscale environment, imagine hosting a fine wine and cheese party instead. If we asked attendees of either party how much they enjoyed it, whether they’d attend again, or whether they’d recommend it to a friend, the ratings would probably be sky-high (depending on the quality of the pizza, or the cheese and wine). Those ratings would probably be higher than (or at least quite close to) the answers we’d receive to the same questions after even the most transformative, masterful executive coaching engagement. Based on process measures alone, then, the party would appear to be a more successful developmental experience than coaching, simply because we’d only be measuring enjoyment.

That’s why process measures are insufficient for impact assessment—i.e., if you’re only using process measures, you’re not meaningfully measuring the impact of your program. Again, that doesn’t mean process measures aren’t important. Engagement matters; if people don’t enjoy themselves, they won’t show up, which means process measures are necessary for evaluating satisfaction, interest, coach-client fit, or how engaging an intervention is in a particular context.

Despite their necessity, however, they’re still not measures of developmental outcomes. Don’t neglect them, but don’t confuse them with evidence of people’s growth. For that kind of evidence, we need true outcome measures.

Outcome Measures

Outcome measures (also known as summative, learning, or transfer measures) are what allow us to assess whether people have been meaningfully developed, including, but certainly not limited to, outcomes like:

  • An increased sense of themselves as a leader
  • Greater self-efficacy to perform leadership tasks
  • A stronger ability and willingness to think systematically about their leadership practices

Outcome measures come in multiple forms, including self-reports, behavioral assessments, knowledge tests, or even observational reports like 360 or peer evaluations. The key is that they meaningfully assess and quantify the effects a developmental program claims to produce.

Selecting the right outcome measures can be difficult for numerous reasons. Of course, there are psychometric concerns like reliability and validity, but even more difficult can be the issue of context. Just as developmental efforts, from coaching to mentorship to guided self-reflection, must be contextually shaped in order to be effective, so too must measures be suited to their context of use. This can be especially problematic when working with specific sub-populations. At the Doerr Institute, more traditional, organizationally focused measures are less applicable to the college students we work with. For example, most college students don’t have “direct reports,” and might not even be aware of the term, so any leadership scale that references direct reports has limited utility with our population.

Adapting or identifying measures that fit your developmental context takes effort, even with strong resources and expertise. Nevertheless, without context-appropriate outcome measures, you can’t make genuine developmental claims. You might have an extraordinary program and never know it, or be on the verge of a breakthrough and remain unaware because you’re not capturing the change you’re creating.

Thus, using the right outcome measures is essential not only for demonstrating a program’s effectiveness, but also for programmatic accountability and improvement. Outcome measures allow us to quantify development, refine our approaches, and communicate our impact with increased confidence, making our work more trustworthy and, ultimately, more marketable. When we can point to, and elaborate on, the specific changes a program measurably produces, we differentiate ourselves in a powerful way.

Bringing It Together

At the end of the day, human development needs both process and outcome measures. By capturing both, we not only create a better developmental process, but also produce better developmental outcomes.

Process measures help us refine the developmental experience to keep people engaged. Outcome measures help us ensure that our efforts are meaningful and impactful. We needn’t choose between them, but nor should we confuse or neglect either of them. Ultimately, measuring both is what it truly means to measure what matters in leader development.