The Language of Impact: Five Terms You Need to Know
Last month, our resident metrics gurus Andrew Niklaus + Nick Arevalo took their turn in The Hot Seat, a regular staff gathering designed to excavate and share knowledge across the team.
We talked about how Tipping Point defines impact, and how our support for grantees is rooted in our growing understanding of the non-profit organizational lifecycle. We have impact if our grantees are effective at the poverty-fighting work they do, and once a grantee can demonstrate efficacy, we can help scale that impact to affect the lives of more people in need.
In order to unpack this aspect of our work even further, we started a glossary of metrics-related terms we often use. Below are five concepts you need to know to understand how Tipping Point approaches impact and measures grantees’ efforts in the fight against poverty:
1. Theory of Change
Theory of change is shorthand for how our field talks about managing the performance of programs. Theory of change starts with mission, the most common articulation of why an organization exists in the first place, and then asks a series of basic questions: Who do you target? Where do you work geographically? What do you do? How often does your intervention occur? What outcomes are you working toward? Theories of change map out an organization’s structure to show whether its work indeed fulfills its mission.
Tipping Point grantees work with disenfranchised populations, so theories of change are especially critical. For example, if an organization works with youth who have recently graduated from 8th grade as the first in their families to do so, with the goal of someday supporting them to graduate from college, client engagement unfolds in many stages over a long period of time. Some interventions will be based in proven methods, and some will test promising approaches based on experience and progress made to date.
2. Long-term Outcomes
We support our grantees to drive toward long-term outcomes, or indicators that put their clients on a path out of poverty. These include things like high school and college graduation rates, housing retention and job retention. In our data requests, we hold our groups to a level of rigor with their intended outcomes, while also taking into account the elements that make each program unique. If we were to fully standardize, we would lose some of the important nuance, especially with regard to the different populations targeted by seemingly similar programs.
An organization like grantee Larkin Street Youth Services that works with homeless and runaway youth in San Francisco will have much different time horizons and placement rates than one like grantee SHELTER, Inc. that provides housing for families in Martinez. Because the motivations, needs and behaviors of these two sub-groups are very different, success is measured differently as well.
Over the past two years, we looked at best-in-class examples of tracking long-term outcomes. We worked with our groups to determine a set of core metrics and a technology solution to capture them. We’ve gone from collecting over 1,000 metrics for our 46 grantees, down to about 135. By refining our asks, we intend to lessen the reporting burden on grantees and more clearly identify our progress through key poverty reduction indicators.
3. Performance Management
We talk about performance management as a way to assess the impact of an organization overall. At its center is a relational database, which captures the efforts of staff in correlation with impact on clients. Along with client outcomes, the database should also track the leading process indicators needed to get there. Every step a client takes — from the intake assessment to program completion — should be logged. When that data exists, we can pull it out of the system to create a feedback loop and determine what’s working, and what’s not. We want organizations to be able to promote successes and limit failures by frequently reviewing and leveraging data.
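To make that feedback loop concrete, here is a minimal sketch of what such a database could look like. All table names, field names, and sample rows are hypothetical illustrations, not Tipping Point's or any grantee's actual system:

```python
import sqlite3

# Hypothetical schema: every step a client takes, from intake
# to program completion, is logged as a row in `steps`.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE steps (
    client_id INTEGER REFERENCES clients(id),
    step      TEXT,      -- e.g. 'intake', 'assessment', 'completion'
    logged_at TEXT
);
""")

conn.executemany("INSERT INTO clients VALUES (?, ?)",
                 [(1, "A"), (2, "B"), (3, "C")])
conn.executemany("INSERT INTO steps VALUES (?, ?, ?)", [
    (1, "intake", "2015-01-05"), (1, "completion", "2015-06-01"),
    (2, "intake", "2015-02-10"),                  # no completion logged
    (3, "intake", "2015-03-12"), (3, "completion", "2015-09-15"),
])

# The feedback loop: pull the data back out to see what's working.
completed, total = conn.execute("""
    SELECT COUNT(DISTINCT CASE WHEN step = 'completion'
                               THEN client_id END),
           COUNT(DISTINCT client_id)
    FROM steps
""").fetchone()
print(f"completion rate: {completed}/{total}")  # completion rate: 2/3
```

Because every step is logged with a timestamp, the same data can answer process questions (how long does intake-to-completion take?) as well as outcome questions.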
Performance management also means holding staff accountable. For example, every case manager should have a set of outcomes for their clients—likely housing and employment related—which roll up the org chart, all the way to the Executive Director.
Tipping Point’s model has always been to provide general operating support and build up staff. Initially, supporting a strong performance management system meant simply funding the development and implementation of a database, but our thinking has evolved over the years. We learned that a database administrator is a necessity, and that it is critical to have an internal champion beyond the Executive Director or CEO (usually the COO, Associate Director or Director of Programs) to drive the success of the whole impact framework. For real results, an organization must be committed to putting the necessary structure in place, building support at all levels of the organization, and investing substantial time, money and people.
4. Comprehensively vs. Intensively Served Clients
Last year, our grantees served 133,000 total clients. That number represents a wide range: it’s both those individuals who experienced the whole suite of services available at each grantee, as well as those who simply came in for a conversation or maybe a cup of coffee. If we look only at those who were intensively served — about 88,000 people last year across our 46 grantees — those individuals received an entire set of services at the fully prescribed dosage.
There are other ways that we can go even deeper with the data. Another step down from the intensively served number is one that captures people who received full services and have reached milestones in the program, but only achieved certain sub-outcomes, didn’t maintain the overall outcome at follow-up, or for whom the program just didn’t take at that time. The final layer of client data is calculating the number of people who retain and demonstrate a palpable level of post-program impact. These are people who — a year or two after participating in the grantee program — are still in school, still employed at a living wage, still healthy, or still housed.
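The layers described above form a simple funnel, from everyone who walked in the door down to those with sustained post-program impact. As a rough illustration only (the field names and counts below are hypothetical, not Tipping Point's actual data), counting clients at each layer might look like:

```python
# Hypothetical client records: flags mark which layer each person
# reached, mirroring the layers described above.
clients = [
    {"id": 1, "intensive": True,  "milestones": True,  "outcome_at_followup": True},
    {"id": 2, "intensive": True,  "milestones": True,  "outcome_at_followup": False},
    {"id": 3, "intensive": True,  "milestones": False, "outcome_at_followup": False},
    {"id": 4, "intensive": False, "milestones": False, "outcome_at_followup": False},
]

# Layer 1: everyone served, even a single conversation.
total_served = len(clients)
# Layer 2: received the full set of services at the prescribed dosage.
intensively_served = sum(c["intensive"] for c in clients)
# Layer 3: full services plus program milestones reached.
reached_milestones = sum(c["intensive"] and c["milestones"] for c in clients)
# Layer 4: still housed/employed/in school a year or two later.
sustained_impact = sum(c["outcome_at_followup"] for c in clients)

print(total_served, intensively_served, reached_milestones, sustained_impact)
# 4 3 2 1
```

Each layer is a subset of the one above it, which is why the intensively served count (88,000 last year) is smaller than the total served (133,000), and why the final follow-up layer is the hardest and most expensive to measure.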
Even with big data, this depth of knowledge is very difficult to achieve. It can cost several thousand dollars per client to follow up post-program, and right now, we as a field put a premium on helping people with immediate needs. While that will always be a priority, our goal with Tipping Point’s core metrics is to begin shifting our investments toward the systems grantees need to capture longer-term data — data that shows how clients are doing up to two years after exiting our programs.
5. Summative + Formative Evaluation
While some groups will always remain highly effective regional interventions, those that scale, or that want to shape policy or influence changes in legislation, will eventually need to complete a formal evaluation. There are two primary types of evaluations in the social sector: summative and formative. Summative evaluations are about impact, while formative evaluations are about process. Summative studies define a set of questions about a program, the answers to which reveal whether the program’s outcomes can be attributed to the services the program delivers; a randomized controlled trial is the industry standard here. Formative evaluations ask: Are you doing the same things at the same time across the organization? Does staff know what to do and when to do it? Is the program model running with fidelity? These studies can take anywhere from one year to several years, and can cost anywhere from $500,000 to $10M to execute. Here are a few examples of evaluation findings from Tipping Point grantees over the past few years: