Grant Reports – Part 1: Metrics that Won’t Make Your Grantees Want to Poke Their Eyes Out

In my previous blog post, I mentioned how we had developed what we hoped would be a streamlined reporting process for our hope & grace fund grantees. At the end of their grant (grants ranged from $20K to $100K for a one-year grant period), we asked them to:

  • Write a 1- to 2-page executive summary of the results of their grant, lessons learned, challenges encountered, and possible next steps
  • Fill out a spreadsheet template with some basic metrics about the people they served (outputs such as # of women, race/ethnicity, and age group) and a couple of extremely high-level outcomes. A hypothetical sketch of such a template follows this list.
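
For concreteness, here is a minimal sketch of how a template like that could be laid out. This is not the actual hope & grace fund template; every column name below is a hypothetical illustration.

```python
# Minimal sketch of a grantee reporting template, written out as a CSV header.
# All column names are hypothetical illustrations, not the actual
# hope & grace fund template.
import csv

columns = [
    "organization",                      # grantee name
    "num_women_served",                  # output metric
    "race_ethnicity_breakdown",          # output metric, e.g., counts per category
    "age_group_breakdown",               # output metric, e.g., counts per age bracket
    "pct_reporting_improved_wellbeing",  # one very high-level outcome
]

with open("grantee_metrics_template.csv", "w", newline="") as f:
    csv.writer(f).writerow(columns)
```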

In developing and selecting the original set of grantee metrics, our goal was to minimize the burden on our grantees, especially the smaller ones that did not have dedicated evaluation staff and probably had more limited data collection capabilities.

The underlying reason for collecting these metrics was to be able to demonstrate the aggregate impact of our grantmaking to our internal and external stakeholders. Our client was interested in sharing the results with the customers who bought their skincare products and thereby donated 1% of their purchases to the hope & grace fund.

Here’s what we did:

  1. In the grant application, we asked applicants how they currently measure success and what metrics they already collect. That told us what would be minimally burdensome for them to report to us (since they were already collecting it), and it helped align our data collection with theirs.
  2. We compiled all of our grantees’ metrics into a large spreadsheet / matrix, with the metrics on one axis and the grantee organizations on the other. We then counted which metrics were most common across all the applicants. This took about 2-3 hours, to give you a sense of the level of effort. A simplified example of the matrix I created is below, followed by a short script that automates the tally:


|                                           | Metric 1 | Metric 2 | Metric 3 | Metric 4 | Metric 5 |
| ----------------------------------------- | -------- | -------- | -------- | -------- | -------- |
| Grantee 1                                 | 1        | 1        | 1        |          | 1        |
| Grantee 2                                 |          | 1        |          | 1        | 1        |
| Grantee 3                                 |          | 1        | 1        |          |          |
| Grantee 4                                 | 1        | 1        |          | 1        | 1        |
| Grantee 5                                 | 1        | 1        | 1        |          | 1        |
| Total # of grantees that use this metric  | 3        | 5        | 3        | 2        | 4        |
| % of grantees that use this metric        | 60%      | 100%     | 60%      | 40%      | 80%      |
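
If you have more than a handful of grantees, a short script can do this tally for you. Here is a minimal sketch in Python; the grantee names, metric labels, and the 80% cutoff are hypothetical placeholders, and the input data mirrors the simplified matrix above.

```python
# Minimal sketch of the metric tally, assuming you have already pulled each
# grantee's metrics from their applications. All names are hypothetical.
from collections import Counter

# Hypothetical input: grantee -> set of metrics they already collect.
grantee_metrics = {
    "Grantee 1": {"Metric 1", "Metric 2", "Metric 3", "Metric 5"},
    "Grantee 2": {"Metric 2", "Metric 4", "Metric 5"},
    "Grantee 3": {"Metric 2", "Metric 3"},
    "Grantee 4": {"Metric 1", "Metric 2", "Metric 4", "Metric 5"},
    "Grantee 5": {"Metric 1", "Metric 2", "Metric 3", "Metric 5"},
}

# Count how many grantees collect each metric.
counts = Counter(m for metrics in grantee_metrics.values() for m in metrics)
n_grantees = len(grantee_metrics)

# Report each metric's usage and flag the ones most grantees already collect.
THRESHOLD = 0.8  # hypothetical cutoff; pick whatever "majority" means to you

for metric, count in sorted(counts.items()):
    share = count / n_grantees
    flag = "  <- candidate" if share >= THRESHOLD else ""
    print(f"{metric}: {count}/{n_grantees} ({share:.0%}){flag}")
```

On the example data, this flags Metric 2 (100%) and Metric 5 (80%) as candidates, matching the choice discussed next.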

In this very simplistic matrix, I’d choose Metric 2 and Metric 5 because nearly all of the grantees (100% and 80%, respectively) already collect them. Also be sure to reflect on whether you really need Metric 2 and Metric 5: just because grantees already collect this data doesn’t mean it is useful to your grantmaking, and it could just add noise to your own analysis. Less is more!

Also, if a grantee could not collect Metric 5 without a lot of additional work, we would have a conversation with them and consider simply exempting them from collecting that data.

For Metrics 1, 3, and 4, you could still ask grantees to collect that data, BUT first go through the reflective exercise of whether you really need it and what you plan to do with it: What practical use do you have for this data? Will it inform your grantmaking strategies / priorities? Is it critical to your stakeholders? Is it a need-to-have or a nice-to-have?

This post is continued in Part 2 and Part 3.

We invite you to share your thoughts, experiences, and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.