Grant Reports – Part 2: Let Grantees Define Outcomes!

This is Part 2 of 3, continuing my previous blog post, which began to describe our process for selecting grantee metrics.

As you know, outputs are much easier to track and measure than outcomes. If you’re wondering what the difference between outputs and outcomes is: in brief, outputs quantify the activities that the grantee undertook — they are counts of the products of a program’s activities, or units of service. A program’s outputs are intended to produce desired outcomes for the program’s participants. Outcomes are benefits experienced by participants during or after their involvement with the program, and may relate to knowledge, skills, attitudes, values, behavior, condition, or status.

So how do you collect outcome data without driving your grantees crazy or resorting to a randomized controlled trial that will cost 5x your grant amount? We also need to acknowledge that in many cases outcomes may take a long time to manifest, AND that it can be challenging to attribute changes in an outcome to a particular program (e.g., think about the impact of tutoring programs on academic achievement — if a student demonstrates improvement, it’s hard to know how much was due to the tutoring program versus school, home, or support from another source, short of running a randomized controlled trial and/or finding a really robust comparison group).

There’s no magic bullet, but I can tell you what we did for the hope & grace fund, and our client and advisory board seemed satisfied with the result.

We looked at what outcomes grantees were collecting, and they were all over the map, since we had grantees who addressed women’s mental health and well-being through a wide range of programs. But we took a big-tent approach and came up with a higher-level outcome that encompassed all these specific outcomes. In our case, to capture a project’s impact on individuals’ well-being, we asked grantees to count the number of individuals who experienced positive changes in attitudes and behaviors related to their well-being after receiving services from the project. This is intentionally vague and broad. We then asked our grantees to define what “well-being” meant in the context of their programs and to decide how they’d collect this data (which could include simple surveys designed by the grantee where participants self-report whether they observed an improvement in attitude/affect as a result of the program).

So well-being could take on one or more of numerous definitions, as decided by the grantee, including any of the following indicators, which are provided as examples:

  • improved family functioning
  • increase in feelings of personal safety / reduced exposure to violence
  • improved self-sufficiency (including employment / financial stability)
  • improved support networks
  • decrease in perception of internal/external stigma related to mental health
  • positive change in attitude/behavior toward seeking help for mental health and using available mental health resources
  • reduced alcohol or drug use
  • entry into a substance use disorder treatment program
  • increased sense of emotional support
  • increased sense of resilience

So, after we aggregated grantee reports, we were able to report that X% of the total Y individuals served by our grantees experienced some positive change in their well-being as a result of programs funded by hope & grace. This was sufficient to give us a directional sense of whether people were benefiting overall from our grantmaking and establish X as our baseline.
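
If it helps to see the mechanics of that roll-up, here is a minimal sketch in Python. The grantee names and numbers are made up for illustration; each grantee simply reports how many individuals they served and how many showed a positive change, using the grantee’s own definition of well-being:

    # Hypothetical end-of-grant reports: individuals served and individuals
    # with a positive change in well-being (as defined by each grantee)
    grantee_reports = {
        "Grantee 1": {"served": 250, "positive_change": 180},
        "Grantee 2": {"served": 90, "positive_change": 55},
        "Grantee 3": {"served": 400, "positive_change": 310},
    }

    # Y = total individuals served across all grantees
    total_served = sum(r["served"] for r in grantee_reports.values())
    # X = the share of those individuals who reported a positive change
    total_positive = sum(r["positive_change"] for r in grantee_reports.values())
    percent_positive = 100 * total_positive / total_served

    print(f"{total_served} individuals served; "
          f"{percent_positive:.0f}% experienced a positive change in well-being")

The same arithmetic works just as well in a spreadsheet; the point is that each grantee’s self-defined outcome rolls up into a single fund-level number.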

More thoughts on DEI and grantee reporting to be shared in Part 3 next week.

We invite you to share your thoughts and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.

Grant Reports – Part 1: Metrics that Won’t Make Your Grantees Want to Poke Their Eyes Out

In my previous blog post, I mentioned how we had developed what we hoped would be a streamlined reporting process for our hope & grace fund grantees. At the end of their grant (grant amounts ranged from $20K to $100K for a one-year grant period), we asked them to:

  • Write a 1- to 2-page executive summary of the results of their grant and lessons learned / challenges encountered / possible next steps
  • Fill out a spreadsheet template with some basic metrics about the people they served (outputs — # of women, race/ethnicity, age group, etc.) and a couple of extremely high-level outcomes.

In terms of the actual development and selection of the original set of grantee metrics, our goal was to minimize the burden on our grantees, especially the smaller ones who did not have dedicated evaluation staff and probably had more limited data collection capabilities.

The underlying reason for collecting these metrics was to be able to demonstrate the aggregate impact of our grantmaking to our internal and external stakeholders — our client was interested in sharing the results with the people who bought their skincare products and thereby donated 1% of their purchases to the hope & grace fund.

Here’s what we did:

  1. In the grant application, we asked applicants how they currently measured success and what metrics they were already collecting. That way, we knew what would be minimally burdensome for them to report to us (since they were already collecting it), and we could align our data collection with theirs.
  2. We compiled all of our grantees’ metrics in a large spreadsheet / matrix, with the metrics on one axis and the grantee organizations on the other. We then counted which metrics were most common across all the applicants. This took about 2-3 hours, to give you a sense of the level of effort. A simplified example of the matrix I created is below:


|                                          | Metric 1 | Metric 2 | Metric 3 | Metric 4 | Metric 5 |
| Grantee 1                                |    1     |    1     |    1     |          |    1     |
| Grantee 2                                |          |    1     |          |    1     |    1     |
| Grantee 3                                |          |    1     |    1     |          |          |
| Grantee 4                                |    1     |    1     |          |    1     |    1     |
| Grantee 5                                |    1     |    1     |    1     |          |    1     |
| Total # of Grantees that Use This Metric |    3     |    5     |    3     |    2     |    4     |
| % of Grantees that Use This Metric       |   60%    |   100%   |   60%    |   40%    |   80%    |

In this very simplistic matrix, I’d choose Metric 2 and Metric 5, because the large majority of grantees (100% and 80%, respectively) already collect them. Also be sure to reflect on whether you really need Metric 2 and Metric 5 — just because grantees already collect this data doesn’t mean it is useful to your grantmaking; it could just add noise to your own analysis. Less is more!
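
If you’d rather script this tally than do it by hand in a spreadsheet, here is a rough sketch in Python that mirrors the example matrix above. The grantee-to-metric assignments come from the matrix, and the 80% cutoff for flagging candidate metrics is just an illustrative choice, not a rule:

    # Which metrics does each grantee already collect? (mirrors the matrix above)
    matrix = {
        "Grantee 1": {"Metric 1", "Metric 2", "Metric 3", "Metric 5"},
        "Grantee 2": {"Metric 2", "Metric 4", "Metric 5"},
        "Grantee 3": {"Metric 2", "Metric 3"},
        "Grantee 4": {"Metric 1", "Metric 2", "Metric 4", "Metric 5"},
        "Grantee 5": {"Metric 1", "Metric 2", "Metric 3", "Metric 5"},
    }

    metrics = [f"Metric {i}" for i in range(1, 6)]
    n_grantees = len(matrix)

    for metric in metrics:
        # Count how many grantees already collect this metric
        count = sum(1 for collected in matrix.values() if metric in collected)
        share = 100 * count / n_grantees
        flag = "  <-- candidate to keep" if share >= 80 else ""
        print(f"{metric}: {count} of {n_grantees} grantees ({share:.0f}%){flag}")

However you do the tally, the output is the same: a short list of metrics that most grantees already collect.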

Also, if a grantee could not collect Metric 5 without a lot of additional work, we would have a conversation with them and consider simply exempting them from collecting that data.

For Metrics 1, 3 and 4, you could still ask grantees to collect that data, BUT just go through the reflective exercise of whether you really need it and what you plan to do with it (e.g., what practical use do you have for this data? Will it inform your grantmaking strategies / priorities? Is this data critical to your stakeholders? Is this a need-to-have vs. a nice-to-have?).

This blog is to be continued in Part 2 and Part 3.

We invite you to share your thoughts, experiences, and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.

Why It’s Important to Track Grantee Time to Complete Grant Reports

When we were managing the hope & grace fund, we thought we had developed a streamlined reporting process for our grantees. At the end of their grant, we asked them to:

  • Write a 1- to 2-page executive summary of the results of their grant and lessons learned / challenges encountered / possible next steps
  • Fill out a spreadsheet template with some basic metrics about the people they served (outputs — # of women, race/ethnicity, age group, etc.) and a couple of extremely high-level outcomes.

One thing we initially neglected to do was to explicitly ask our grantees how much time they spent on reporting. We allowed them to allocate up to 10 percent of their budget to reporting (with the assumption that they’d spend no more than 10 hours on it). But in one case, we found that our grantee had spent 40+ hours tracking data to fill out the metrics spreadsheet (and for a $25,000 grant, that meant they could have budgeted at most $2,500 for this task). We ended up giving them an additional $2,000 to compensate them for their time.

Lessons learned:

  • Ask grantees to keep track of how much time it takes to fulfill reporting requirements, and make sure they are reasonably compensated for that time.
    • Potentially track this metric for large vs. small grantees and large vs. small grants
    • I’d suggest tracking this metric as part of your DEI grantmaking dashboard (a rough sketch of what this check could look like follows this list)
  • Let grantees know in advance how much time it should take on average to complete the reporting requirements, so that they can budget accordingly
  • If the reporting is taking much longer than expected, help the grantee troubleshoot why. Is it an issue with the reporting tool / metrics not being a good fit for their grant project? Is it a matter of the grantee lacking the capacity (which points to the need for further investment to build that capacity)?
  • Compensate for additional effort when necessary, especially for smaller organizations that cannot absorb the costs.
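
To make the first lesson concrete, here is a minimal sketch in Python of the kind of check you could fold into a grantmaking dashboard. The grantee names, grant amounts, hours, and the follow-up threshold (twice the estimate) are all made up for illustration; only the 10-hour estimate comes from our experience above:

    EXPECTED_HOURS = 10  # the reporting-time estimate communicated to grantees up front

    # (grantee, grant amount in dollars, hours actually spent on reporting)
    reporting_log = [
        ("Grantee A", 25_000, 42),
        ("Grantee B", 100_000, 9),
        ("Grantee C", 20_000, 14),
    ]

    for name, grant, hours in reporting_log:
        if hours > 2 * EXPECTED_HOURS:
            # Far over the estimate: troubleshoot with the grantee and
            # consider additional compensation, especially on small grants
            print(f"{name}: {hours} hrs on a ${grant:,} grant -- follow up")
        elif hours > EXPECTED_HOURS:
            print(f"{name}: {hours} hrs -- somewhat over the {EXPECTED_HOURS}-hr estimate")
        else:
            print(f"{name}: {hours} hrs -- within the {EXPECTED_HOURS}-hr estimate")

A simple check like this would have flagged our 40+ hour case much sooner.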

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.