Friday, April 13
Measuring Learning: The Theory of Change
Last week, we kicked off Measurement Month with a discussion of why measurement matters. This week, we’ll dig into connecting measurement and design so that metrics are built in from the beginning.
Throughout this post I’ll use an example from a training program I worked with earlier in my career to illustrate how the measurement models can be put to work in training.
The Alaska Example: Job Training for Teens
In my last job, I worked with Anchorage Youth Employment in Parks (YEP), a program hiring Anchorage teens to complete park improvement projects while learning job skills in trail building, construction, and habitat restoration. In partnership with a variety of community organizations, this program accomplished two key goals: completing park improvements and training teens in in-demand job skills.
As program partners secured public job training funds for this program, it was essential to ensure YEP was successfully developing employment-ready teens. After the program’s pilot year, program partners worked with a University of Alaska sociologist to create a data model to measure the YEP program’s effectiveness and continually improve the program.
YEP used the Theory of Change model to guide the development of program metrics and evaluation. The Theory of Change model offers program designers a helpful guide for planning and measuring learning and development. The Theory of Change looks like this (from “How does the Theory of Change Process Work?”):
Step One: Identify Goals
Start your program design with the basics: Goals. Ask yourself and your stakeholders straightforward questions: What skills must participants have when they finish the program? What new responsibilities will they be qualified to take on next? To develop specific and measurable goals, conduct stakeholder interviews and focus groups. As you design your program goals, make sure you have a thorough understanding of your target audience so you can design initiatives that meet both their needs and the professional growth they want for themselves.
In the YEP example, the goals for the students were to develop competency in several professional outdoor skills: Forest maintenance, trail building and streambank restoration. Participating teens were required to demonstrate technical competence in these professional skills as well as leadership and teamwork abilities.
Step Two: Connect the Preconditions and Identify the Changes
Each program goal will have a chain of needed preconditions, which in turn require changes from the baseline data. The program design phase is an exploration of each step needed to achieve the preconditions necessary for the end goal.
In the YEP example, program managers worked with experts in each of the subject areas (trail-building, forestry, and watershed restoration) to assess the competencies necessary to reach the desired end goals for the teens. We developed activities (both training programs and work projects) that would enable participating teens to explore different skills and apply them to real work environments. We made sure our summer-long program connected all of the dots between participating teens’ beginning qualifications and our desired end goals.
Step Three: Develop Indicators
The next step is to create a data framework for the “before and after” states achieved through a training program. In the Anchorage example, this meant a case study of the target audience of our training program. We identified their likely background, education, and their desired outcomes from the program, and created scales to measure where an individual would stand within that range at the beginning of the program and the end.
We included measurement of both teens’ soft skills, like leadership, communication, and cooperation, and their specific job skills, in trail-building, forestry, and watershed restoration. We developed detailed assessments for teens to complete at the beginning and end of the program and a shorter, simpler questionnaire that they filled out on a weekly basis. Teens were asked for both self-assessment and for qualitative feedback, to which we later assigned data values. We also asked teens for feedback about the program itself.
In addition to asking the teens to assess themselves and the program, we asked both crew leaders (team leaders) and the program supervisor to complete detailed assessments of the teens’ skills and abilities at the beginning and end of the programs. All of these assessments were used to create our data framework.
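To make the before-and-after framework concrete, here’s a minimal sketch of how pre- and post-program scores might be combined and compared. Everything here is invented for illustration: the skill names, the 1–5 scale, the simple averaging of self- and leader ratings, and the scores themselves. YEP’s actual assessments were more detailed.

```python
# Hypothetical sketch of a pre/post indicator framework.
# Scores use an invented 1-5 competency scale; a participant's score for
# each skill is the average of their self-assessment and a crew leader's
# rating, and "improvement" is the pre-to-post change per skill.

def combined_score(self_scores, leader_scores):
    """Average a teen's self-assessment with a crew leader's rating."""
    return {skill: (self_scores[skill] + leader_scores[skill]) / 2
            for skill in self_scores}

def improvement(pre, post):
    """Change in each skill between pre- and post-program scores."""
    return {skill: post[skill] - pre[skill] for skill in pre}

# One participant's (invented) beginning-of-program assessments.
pre = combined_score(
    {"trail_building": 1, "forestry": 2, "restoration": 1},   # self
    {"trail_building": 2, "forestry": 2, "restoration": 1},   # crew leader
)

# The same participant's (invented) end-of-program assessments.
post = combined_score(
    {"trail_building": 4, "forestry": 4, "restoration": 3},   # self
    {"trail_building": 4, "forestry": 3, "restoration": 4},   # crew leader
)

gains = improvement(pre, post)
# e.g. trail_building rises from 1.5 to 4.0, a gain of 2.5
```

Aggregating gains like these across all participants is what turns individual assessments into program-level indicators you can track from one cohort to the next.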
Finally, we created long-term goals and measurements for the teens’ success after the program, including employment in our target sectors, college attendance, career readiness, and advancement within the program for returning participants. We designed annual surveys to check in with program alumni and assess their long-term progress.
Step Four: Write the Narrative
The final step in the program design phase of the Theory of Change model involves creating a narrative: A model describing how your initiative will create change to achieve your program goals. This is more than just a pretty story: This is your opportunity to test your logic in plain English. Your narrative is a document you will share with your stakeholders to make sure that the steps you outline make sense, and that the data measures you identify connect accurately to the program goals. (In the next post, I’ll share some sample narratives to help guide your program design process.)
Step Five: Implement, Iterate, Improve
This last step is where the really good stuff happens. Once you’ve designed a program, it’s time to implement it, collect data, and use data to make adjustments and improvements.
In the Youth Employment in Parks example, program managers used the Theory of Change model to provide both structure for the program design and metrics to measure its effectiveness. We found the data invaluable for continually improving the program: it showed us where to add training and which unneeded components to scale back or remove. We ultimately reduced the time spent on content study while achieving greater mastery of the content. This created a positive feedback loop: participant feedback enabled us to iterate and improve, continuously raising our results.
Next week we’ll examine the process of building your measurement model.