Both in and beyond leader development, there’s a natural tension between the people who run programs and the people who measure them. For those who design and lead developmental efforts, from coaching initiatives to training programs, the belief that their work makes a difference is what motivates them to invest the enormous time and energy that development requires.
However, those of us tasked with assessing developmental impact start from the opposite place. We don’t assume a program works and then look for confirmation — if anything, we assume it doesn’t work and then look for evidence to contradict that assumption.
This apparently “negative” orientation originates in a core scientific principle—null hypothesis testing—but it can create a lot of friction. In fact, the first time I suggested taking such an approach, I was told that developing leaders was a matter of “personal, intellectual, and spiritual formation,” and that my scientific approach was “cynical.” Others who have tried to bring empirical measurement into developmental efforts report similar experiences of being labeled pessimistic or obstructive, or even accused of undermining the very programs they’re trying to help.
Doubt isn’t cynicism; it’s an essential part of evidence-based practice, a manifestation of humility even.
Empirical measurement exists to provide an objective counterweight to bias, to operate beyond enthusiasm or social pressure. It’s not something evaluators can simply “turn off,” because it’s how we ensure that good intentions are matched by real impact.
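To make the null-hypothesis orientation concrete, here is a minimal sketch of a permutation test, one common way to formalize “assume no effect, then look for evidence to contradict that assumption.” The scores and group sizes below are invented purely for illustration, not drawn from any real program.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical post-program scores (invented for illustration).
treated = [74, 81, 69, 88, 77, 83, 72, 79]
control = [70, 68, 75, 66, 73, 71, 69, 74]

observed_diff = mean(treated) - mean(control)

# Null hypothesis: the program has no effect, so group labels are
# interchangeable. Shuffle the labels many times and count how often
# a difference at least as large as the observed one arises by chance.
pooled = treated + control
n_treated = len(treated)
n_permutations = 10_000
extreme = 0
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = mean(pooled[:n_treated]) - mean(pooled[n_treated:])
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / n_permutations
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.4f}")
```

The evaluator’s stance lives in the structure of the test itself: the burden of proof sits on the data to overturn the assumption of no effect, not on the skeptic to justify their doubt.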
That said, not all doubt is created equal. Some forms of doubt can fundamentally impede, if not actively harm, developmental efforts. Distinguishing the “good” kind of doubt from the “bad” requires understanding the line between skepticism (the kind of doubt that fuels good science) and cynicism (the kind that kills it).
Toxic Doubt: Cynicism
Cynicism is the belief that people are primarily motivated by self-interest; it is often accompanied by emotions like contempt and anger, and it tends to produce antagonistic interactions with other people (1). Cynicism doesn’t just allow for corruption or selfishness to exist; it assumes they’re the default. In leader development contexts, cynicism might show up as the assumption that a program is intentionally overpromising, manipulating results, or indulging in “wishful thinking” — no matter what the evidence might actually say.
Far from enabling evidence-based practice and effective measurement, cynicism undermines it.
It kills curiosity, science’s fundamental motivator, by assuming that people are acting in bad faith or that failure is a foregone conclusion.
This stops us from seeking nuanced or complex explanations, leaving important questions unasked, such as “what mechanisms drive change?” or “which participants might benefit most from this intervention?”
Worse, cynicism can actively distort how we interpret data. It becomes its own form of confirmation bias, leading us to cherry-pick results, draw overly broad conclusions, and narrow our inquiries until a study that could have uncovered real impact instead settles for “I told you it wouldn’t work.”
From an assessment perspective, cynicism isn’t rigor; it’s paralysis. It kills curiosity, alienates collaborators, poisons the empirical process, and all but guarantees results that validate its own pessimism.
Healthy Doubt: Skepticism
Unlike cynicism, skepticism isn’t driven by negativity or pessimism. Instead, it’s a form of doubt driven by what we might term rational vigilance. Skepticism is willing to question claims without rejecting the possibility that such claims are true (3).
Thus, skepticism isn’t blindly optimistic, but it is curious — which makes it essential to meaningful assessment. We don’t assume that no effect exists because we’re rooting against a program. Our initial doubt is what allows us to be more certain, upon finding evidence that a program or intervention does work, that the result isn’t simply an artifact of interpretive bias, contextual demands, or random chance.
This balance between doubt and curiosity creates a productive tension. Like the practitioners we work with, assessment professionals want to see positive outcomes. We aren’t rooting against the developmental process or hoping that people can’t grow into more effective, confident, and self-aware individuals. However, we also want to be sure that any evidence we find is both meaningful and useful. That tension makes skepticism incredibly powerful, which is why it’s such an essential part of the assessment process.
This tension also distinguishes skepticism from cynicism. While cynicism is global, assuming that everyone is corrupt or self-interested, skepticism is contextual and takes different circumstances and challenges on their own terms. Cynicism asserts that people can’t change; skepticism questions a specific program to be sure that any data or conclusions from it are appropriately interpreted (2). It is targeted and falsifiable, and the skeptic’s only goal is to learn, not to achieve any specific outcome.
At its heart, skepticism is an act of respect, honoring both the effort behind development and the evidence required to validate it. Skepticism’s questions aren’t dismissals — they’re taking the work seriously enough to test it rigorously.
Applying Healthy Skepticism in Impact Assessment
Although skepticism and cynicism are different, they’re easily confused, especially in leader-development contexts. When someone has poured themselves, their time, and their effort into a program, even principled questioning can feel like a personal attack, particularly if skepticism comes across as judgment rather than curiosity. However, science, and therefore evidence-based practice, requires skepticism, and we must embrace it if we want to meaningfully claim that something “works.”
For practitioners, skepticism isn’t an accusation of failure or lack of rigor. It’s an opportunity to demonstrate that your programs are succeeding, and to strengthen them where they’re not. When assessment teams question outcomes or assumptions, they’re not undermining the work; they’re taking it seriously enough to test it honestly.
For evaluators, healthy skepticism also demands discipline. We must be transparent about our motives for doubt, clear in our reasoning, and respectful in our tone. That means explaining why we’re asking hard questions and how those questions serve learning rather than fueling cynicism. It means checking whether we’re truly open to being proven wrong, ensuring that our inquiry targets evidence, not people.
Again, when properly applied, skepticism is an act of respect. It treats both the people who build programs and the evidence that evaluates them as worthy of honesty. Practiced well, it transforms tension into trust and keeps development both caring and credible.
Embracing Doubt
Embracing doubt isn’t easy. It’s uncomfortable by design. Skepticism requires willingly confronting the possibility we might be wrong, and that can be a very difficult position to take. However, it’s also the only tenable position for evidence-based practice. The alternative — pretending certainty where none exists — is just wishful thinking.
Genuine data-driven development means accepting the discomfort that comes with evidence. Healthy skepticism doesn’t undermine growth; it makes growth measurable, meaningful, and real. In other words, skepticism is the only way to honor both the science of leader development and the people behind that important work.
Keywords: Evidence-Based Practice; Impact Assessment; Measurement; Evaluation; Empiricism; Skepticism
References
1. Neumann, E., & Zaki, J. (2023). Toward a social psychology of cynicism. Trends in Cognitive Sciences, 27(1), 1–3. https://doi.org/10.1016/j.tics.2022.09.009
2. Rutjens, B. T., Sutton, R. M., & van der Lee, R. (2018). Not all skepticism is equal: Exploring the ideological antecedents of science acceptance and rejection. Personality and Social Psychology Bulletin, 44(3), 384–405. https://doi.org/10.1177/0146167217741314
3. Stojanov, A., & Halberstadt, J. (2019). The conspiracy mentality scale: Distinguishing between irrational and rational suspicion. Social Psychology, 50(4), 215–232. https://doi.org/10.1027/1864-9335/a000381
