In each of our DMC newsletters, we will use the R&E Spotlight section to cover a measurement-related subject that is both novel and of potential practical value to our community. For this inaugural newsletter, I thought I would share a project I am currently working on with my colleagues, Aaron Pomerantz and Stefanie Johnson. The genesis of this project was the need to assess the impact of a professional coach training course that we were adapting for leaders and supervisors who want to develop basic coaching skills but do not wish to become professional coaches.
The question we were asking was, “What else, besides observable coaching skills, might we assess to capture the potential impact of this training?” As I reflected on the content of this skills-focused course, it occurred to me that a potential benefit of completing it might be a shift in how participants think about their roles as leaders. More specifically, participants might grow in their tendency to see their leadership role in terms of developing other people, along with their sense of efficacy in developing others and their motivation to do so. As we examined the literature, we found several studies measuring the extent to which leaders actually engage in or support the development of their subordinates, but we could not find any measures of a leader’s mindset or self-concept around follower development. That gap in the literature spelled opportunity. Thus was born our project to create and validate a measure we are calling the “developmental self-concept” scale, or DeSC.
Using the concepts of a leader’s “developmental self-definition,” “developmental efficacy,” and “developmental motivation” as a scaffold, we drafted a set of fifteen initial items, which we subsequently refined in a variety of ways to make them simple, clear, and broadly applicable (e.g., not requiring that a respondent have a large number of direct reports). We pilot-tested these items with about eighty MBA students at Rice University who were enrolled in a non-credit training course similar to the one the Doerr Institute was developing for a broader audience, and we used this initial sample to examine the internal consistency of each of the three DeSC subscales (all reliability estimates, using Cronbach’s alpha, were well above .80).
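For readers who want to see the mechanics, here is a minimal sketch of how Cronbach’s alpha can be computed for each subscale from item-level pilot data. The file name, column names, and the five-items-per-subscale layout are illustrative assumptions for the example only, not the actual DeSC items or data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item-level pilot responses; file and column names are illustrative only.
df = pd.read_csv("desc_pilot.csv")
subscales = {
    "self_definition": ["sd_1", "sd_2", "sd_3", "sd_4", "sd_5"],
    "efficacy":        ["ef_1", "ef_2", "ef_3", "ef_4", "ef_5"],
    "motivation":      ["mo_1", "mo_2", "mo_3", "mo_4", "mo_5"],
}

for name, cols in subscales.items():
    print(f"{name}: alpha = {cronbach_alpha(df[cols]):.2f}")
```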
Following many of the principles that we discuss throughout the Demystifying Measurement certification course, we have begun to gather data to examine the convergent and discriminant validity of the DeSC. Our plan is to gather data from a second, larger sample of managers and supervisors later this spring, which will allow us to conduct a formal analysis of the measure’s underlying statistical structure and to examine which outcomes the DeSC subscales can predict. We also have a study in progress in which we randomly assigned MBA students who signed up for a coach training course to either a training group or a waitlist (control) group. We will compare the DeSC scores of these two groups before the course begins and after it concludes, which will allow us to determine whether such a course can shift participants’ developmental self-concepts in addition to imparting observable coaching skills.
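As an illustration of how the training-versus-waitlist comparison might be analyzed, the sketch below compares pre-to-post change in DeSC scores across the two randomly assigned groups. The file name, column names, and the simple change-score t-test are assumptions made for the example, not a description of our actual analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical wide-format data: one row per participant.
# Assumed columns: group ("training" or "waitlist"), desc_pre, desc_post.
df = pd.read_csv("desc_pre_post.csv")

# Pre-to-post change in DeSC scores for each participant.
df["change"] = df["desc_post"] - df["desc_pre"]

# Because assignment was random, comparing change scores across groups is one
# simple way to ask whether the course shifted developmental self-concepts.
training = df.loc[df["group"] == "training", "change"]
waitlist = df.loc[df["group"] == "waitlist", "change"]
t_stat, p_value = stats.ttest_ind(training, waitlist, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```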
Our hope is to submit this project to a peer-reviewed academic journal later in 2024. If you would like to learn more about the DeSC project, or if you think you might have an interesting sample of people to whom we might administer this measure, feel free to reach out to me directly (rpb7@rice.edu).