Grant Reports – Part 2: Let Grantees Define Outcomes!

This is Part 2 of 3, continuing my previous blog post, which began to describe our process for selecting grantee metrics.

As you know, it’s much easier to track and measure outputs than outcomes. If you’re wondering what the difference is: in brief, outputs quantify the activities that the grantee undertook; they are counts of the products of a program’s activities, or units of service. A program’s outputs are intended to produce desired outcomes for the program’s participants. Outcomes are benefits experienced by participants during or after their involvement with the program, and may relate to knowledge, skills, attitudes, values, behavior, condition, or status.

So how do you collect outcome data without driving your grantees crazy or resorting to a randomized controlled trial that will cost five times your grant amount? We also need to acknowledge that in many cases, outcomes may take a long time to manifest, and that it can be challenging to attribute changes in an outcome to a particular program. Think about the impact of tutoring programs on academic achievement: if a student demonstrates improvement, it’s hard to know what part was due to the tutoring program, school, home, or support from another source, short of running a randomized controlled trial or finding a really robust comparison group.

There’s no magic bullet, but I can tell you what we did for the hope & grace fund; our client and advisory board seemed satisfied with the result.

We looked at what outcomes grantees collected, and they were all over the map, since our grantees addressed women’s mental health and well-being through a wide range of programs. But we took a big-tent approach and came up with a higher-level outcome that encompassed all of these specific outcomes. In our case, to capture a project’s impact on individuals’ well-being, we asked grantees to count the number of individuals who have positive changes in attitudes and behaviors related to their well-being after receiving services from the project, a measure that is intentionally vague and broad. We then asked our grantees to define what “well-being” meant for their own programs and to decide how they’d collect this data (which could include simple grantee-designed surveys where participants self-report whether they observed an improvement in attitude or affect as a result of the program); there’s a sketch of what such a report might look like after the list below.

So well-being could take on any one or more of numerous definitions, as decided by the grantee, including any of these indicators (provided as examples):

  • improved family functioning
  • increase in feelings of personal safety / reduced exposure to violence
  • improved self-sufficiency (including employment / financial stability)
  • improved support networks
  • decrease in perception of internal/external stigma related to mental health
  • positive change in attitude/behavior toward seeking help for mental health and using available mental health resources
  • reduced alcohol or drug use
  • entry into a substance use disorder treatment program
  • increased sense of emotional support
  • increased sense of resilience
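To make this concrete, here is a minimal sketch of what a single grantee’s report might look like as a data record. The schema and field names are hypothetical, invented for illustration; they are not the actual hope & grace reporting template.

```python
from dataclasses import dataclass

@dataclass
class GranteeReport:
    """One grantee's report row (hypothetical schema, for illustration).

    Each grantee picks its own definition of well-being (one or more of
    the indicators above) and its own collection method, but everyone
    reports the same two counts so results can roll up across the portfolio.
    """
    grantee: str                           # grantee organization name
    wellbeing_definition: str              # the indicator(s) the grantee chose
    collection_method: str                 # e.g., a simple self-report survey
    individuals_served: int                # total participants reached
    individuals_with_positive_change: int  # participants reporting improvement
```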

So, after we aggregated grantee reports, we were able to report that X% of the total Y individuals served by our grantees experienced some positive change in their well-being as a result of programs funded by hope & grace. This was sufficient to give us a directional sense of whether people were benefiting overall from our grantmaking, and to establish X% as our baseline.
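The roll-up behind that “X% of Y” statement is simple arithmetic: sum the two counts across grantees and divide. Here’s a sketch using the hypothetical GranteeReport record above and made-up numbers:

```python
def aggregate(reports: list[GranteeReport]) -> tuple[int, float]:
    """Return (Y, X): total individuals served across all grantees, and
    the percentage of them who reported a positive change in well-being."""
    total_served = sum(r.individuals_served for r in reports)
    total_changed = sum(r.individuals_with_positive_change for r in reports)
    pct_changed = 100.0 * total_changed / total_served if total_served else 0.0
    return total_served, pct_changed

# Made-up example data, not real grantee results:
reports = [
    GranteeReport("Org A", "improved support networks",
                  "post-workshop survey", 200, 140),
    GranteeReport("Org B", "reduced alcohol or drug use",
                  "intake vs. exit questionnaire", 80, 52),
]

y, x = aggregate(reports)
print(f"{x:.0f}% of the {y} individuals served reported a positive change")
```

Note that this treats each individual equally regardless of which grantee served them, which is what makes the baseline comparable from year to year even as the grantee mix changes.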

More thoughts on DEI and grantee reporting to be shared in Part 3 next week.

We invite you to share your thoughts and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.