For many, the holidays are a time of reflection, and the Doerr Institute is no exception. Every December (this year, literally the day before this Leadership Lab goes out), we have what we call our Data Party—a standing tradition where we take time, as an institute, to examine and reflect on what the data tell us about the past semester.
Like all traditions, there are elements of whimsy. Ryan Brown, our Managing Director for Measurement, always wears his red blazer. We eat apple fritters—a tradition of unclear origin, but unquestioned importance to the Data Party experience. We intentionally try to keep the experience engaging.
But the core of Data Party—the thing that actually makes it fun—is the opportunity it provides to live our values of being data-driven, engaging in evidence-based practice, and continually seeking opportunities to grow and improve.
That doesn’t mean all of it is comfortable, or that we spend the entire time in fritter-fueled euphoria. Like many holiday traditions, there can be uncomfortable moments. However, that discomfort is part of the point. It’s a signal that we care about what we’re doing and that we take the work of leader development seriously.
Over time, Data Party has become so embedded in our annual rhythms at the Doerr Institute that we don’t question it. At the same time, it’s one of the things other leader developers—even here at Rice University—often find most remarkable about us. As we’ve connected with colleagues on and beyond our campus, we’ve helped others think through how to host their own data parties in their own contexts.
And so, for our final Leadership Lab of 2025, I want to share three essentials for having your own Data Party. Even without red blazers or apple fritters (both of which we recommend), the experience of sitting with your data, examining it systematically, and determining how to change, learn, and grow as a result is an invaluable tradition, well worth adopting in the new year.
1: Examine the Process
The first part of our Data Party focuses on process. For us, that means looking at how people engage with our leader development programs and what the data tell us about their experience.
One way we do this is by examining who we’re reaching. That includes engagement across class years (first-year students through seniors), as well as how graduate students, students from nontraditional backgrounds, and professionals engage with our programs. Evidence-based practice doesn’t occur in a vacuum; you have to understand who it is you’re actually engaging with.
This is where process measures (previously discussed here) come into play. At Data Party, we primarily focus on two categories of process data. The first is behavior, especially attrition. After all, you can have the best program in the world, but if nobody is showing up or staying, you’re not really changing anyone.
Process measures like attrition can be deceptively complex—even defining them can be a challenge. At the Doerr Institute, we track attrition in two ways:
- students who sign up for a program but never show up, and
- students who attend a first session but do not complete the program.
We track both numbers, but for practical purposes, we care more about the second. That doesn’t make the first number useless! The first number helps us estimate something like a “conversion rate,” which informs recruitment and outreach efforts. But when it comes to understanding a program itself, the second number matters much more, because to us, it represents students “voting with their feet,” signaling how engaging and valuable they find the leader development process once it begins.
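To make these two numbers concrete, here's a minimal sketch of how they might be computed from a simple program roster. The fields and the sample data are purely hypothetical, not a description of our actual records or systems.

```python
# Minimal sketch: computing the two attrition numbers from a hypothetical roster.
# Field names and sample data are illustrative, not our actual records.
from dataclasses import dataclass

@dataclass
class Participant:
    signed_up: bool        # registered for the program
    attended_first: bool   # showed up to the first session
    completed: bool        # finished the program

roster = [
    Participant(True, True, True),
    Participant(True, True, False),
    Participant(True, False, False),
    Participant(True, True, True),
    Participant(True, False, False),
]

signed_up = sum(p.signed_up for p in roster)
attended = sum(p.attended_first for p in roster)
completed = sum(p.completed for p in roster)

# First number: signed up but never showed (feeds a rough "conversion rate").
no_shows = signed_up - attended
conversion_rate = attended / signed_up

# Second number: started but didn't finish (students "voting with their feet").
mid_program_attrition = (attended - completed) / attended

print(f"No-shows: {no_shows} of {signed_up} (conversion rate {conversion_rate:.0%})")
print(f"Mid-program attrition: {mid_program_attrition:.0%}")
```

However you define them, keeping the two numbers separate is the key: one informs outreach, the other speaks to the program itself.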
Our second category of process data focuses on perceived value. For every program, we use a perceived value index (PVI), asking participants questions like "Would you recommend this program to a friend?" and "Do you believe this was a good use of your time?" to examine how positively they view their engagement with our programs.
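As a rough illustration, a PVI can be as simple as the average of a few post-program ratings. The item names and the 1-to-5 scale below are assumptions made for the example, not our actual instrument.

```python
# Minimal sketch: a perceived value index (PVI) as the mean of a few survey items.
# Item names and the 1-5 agreement scale are illustrative assumptions.
from statistics import mean

responses = [
    {"recommend_to_friend": 5, "good_use_of_time": 4},
    {"recommend_to_friend": 4, "good_use_of_time": 4},
    {"recommend_to_friend": 3, "good_use_of_time": 2},
]

def pvi(response: dict[str, int]) -> float:
    """Average one participant's ratings into a single perceived value score."""
    return mean(response.values())

program_pvi = mean(pvi(r) for r in responses)
print(f"Program PVI: {program_pvi:.2f} out of 5")
```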
What “works” for measuring engagement might look different depending on your context. You might define attrition differently, or simply not track it at all. You might need more detailed PVIs than we use. Or you might rely more heavily on behavioral indicators like repeat participation, referrals, or completion of optional activities. Regardless of the specifics, the broader point remains the same: before evaluating whether a program worked, it’s essential to understand what the data tell you about the process of leader development itself. If the process isn’t working—if people aren’t showing up, staying engaged, and finding value—then impact assessment alone won’t give you the full picture.
2: Impact Assessment
After we examine our process measures, we turn our attention to the topic everyone cares most about: do our programs work?
This is where our validated measures come in—tools we’ve intentionally chosen (or developed, refined, and validated) to fit our context, from self-reports to peer evaluations to more objective behavioral indicators. Whatever we can ethically and responsibly collect, we collect, and the results of this process are presented at Data Party.
However, the point isn’t just to celebrate programmatic success or bemoan programmatic failure. In both cases, the real value comes from the question, “What do we do now?” When a program works—meaning we obtain suitable statistical evidence that participants improve on relevant outcomes from pre to post—we discuss whether we’re happy leaving the program as-is or whether there’s more we want to accomplish with it.
Similarly, when a program doesn’t work, we don’t just wring our hands or “go back to the drawing board.” Instead, we ask the deeper, and sometimes more painful, questions about why the program didn’t show the results we hoped for. Sometimes it is a programmatic issue. Perhaps we need to revise what materials are taught or how developmental processes are structured. Other times, the “failure” falls just as much on the research and evaluation team. Sometimes our tools didn’t fit the context or the material being taught. In either case, this outcome can only be meaningfully contextualized—and turned into actionable next steps—through the dialogue and deep analysis of the Data Party, where practitioners and independent evaluators come together to make sense of the results.
The importance of actionability can’t be overstated. Determining whether something “works” is just the beginning; the real question is what we do about it. That doesn’t mean the answers are always satisfying. Sometimes the best answer we can give is, “We don’t know,” whether due to insufficient data or previously unconsidered issues. Sometimes the answer is as simple as, “This program doesn’t work.” In every case, what matters is the shared agreement that we are here for a common purpose—to see and abide by what the data say—and that we will be calm, rational, and purposeful in how we respond.
That kind of buy-in matters, and it must be intentional. Which brings me to the final element of Data Party.
3: Data Party as a Shared Leadership Practice
At the end of the day, the most important Data Party tradition at the Doerr Institute isn’t a color scheme or a pastry; it’s making sure that everyone who helps lead our leader development efforts is present and has a voice. This isn’t something that can be siloed in the Research and Evaluation team.
That doesn’t mean we make everyone suffer through interpretations of statistical models or data cleaning practices. However, what matters is that everyone understands the data at a conceptual level, so that we can all meaningfully discuss what they’re saying, what they’re not saying, and what we want to do with them.
That’s because assessment is only one step in the larger work of interpretation. This doesn’t mean it’s not important! Indeed, without empirical assessment, evidence-based practice becomes impossible and the term “data-driven” nonsensical. However, once we have the data, interpreting what the numbers mean and deciding what we should do with them requires people who are plugged in, open to feedback, and willing to have difficult conversations.
What this creates, over time, is a shared leadership practice rather than a technical reporting exercise. Data Party isn’t just about presenting findings; it’s about collectively owning them. When program staff, communications teams, and directors are all in the room engaging with the same data, evidence-based practice stops being something “the R&E team does” and becomes something the organization does together. The data no longer belong to a single function or role, but instead become part of our shared institutional understanding of our mission, goals, and constraints. That shared understanding is what allows data to meaningfully inform leadership decisions, rather than sitting in a slide deck or report that no one feels responsible for acting on.
This is why, even though Data Party can sound simple, it’s one of the most impactful parts of our institutional culture. It’s a tradition with a purpose, and it’s part of what keeps us going.
A Tradition Worth Sharing
Taken together, these three elements—examining process, assessing impact, and sharing interpretation—are what make Data Party more than just another meeting or a holiday party masquerading as institutional business. These three elements turn data into something that can be shared and used to gain organizational clarity, effectiveness, and, ultimately, purpose.
As 2025 ends, my hope isn’t just that you replicate our choice of pastries or clothing, but that you consider what it might look like to build your own version of this tradition. Whether that means convening a team, carving out protected time, or simply sitting down on your own to engage honestly with your data, the practice matters. In a year ahead that will almost certainly demand hard decisions, having a deliberate, shared way of seeing clearly and responding thoughtfully may be one of the most valuable leadership practices you can adopt.
