
Are Phone Calls Less Burdensome than Grant Reports?

I recently attended a webinar hosted by Philanthropy California and the Trust-Based Philanthropy Project on their Simplify and Streamline practice.

One foundation commented that, as part of its trust-based philanthropy practice, it had moved from requiring grantees to submit written grant reports to holding phone calls instead.

My question is whether, and to what extent, phone calls are truly more equitable and less burdensome for grantees. I'd be curious to hear about your experiences. Here are a few thoughts:

  1. A grant report might actually be more efficient if the grantee can submit a report they had already drafted for another funder. If all funders stopped requiring reports and switched to phone calls, then that would be a different story.
  2. Depending on how the phone call is framed, a grantee might feel intimidated. A grantee might feel compelled to spend significant time preparing for the call, and may even write out their responses ahead of time (which would seem as burdensome as writing a grant report). I think of myself when I'm interviewing for a job: if I think it's a high-stakes interview and I'm going to be judged on my performance, I will spend hours preparing written responses ahead of time.
  3. A grantee organization may also have numerous staff participate in the phone call, just to ensure that all funder questions can be answered. A 1-hour call with 5 grantee staff members amounts to 5 person-hours; if someone could draft a grant report in less than 5 hours, the phone call is actually more time intensive (see the rough sketch after this list).
  4. Depending on personality type, some people may simply prefer to communicate in writing rather than verbally.
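To make the person-hours comparison concrete, here is a minimal back-of-the-envelope sketch in Python. All figures are hypothetical assumptions, not data from any actual grantee:

```python
# Rough comparison of grantee time cost: a 1-hour phone call with several
# staff vs. one person drafting a written report. All figures hypothetical.
call_length_hours = 1.0
staff_on_call = 5
prep_hours_per_person = 0.5   # time each person spends preparing beforehand

call_person_hours = staff_on_call * (call_length_hours + prep_hours_per_person)

report_person_hours = 5.0     # one author, possibly reusing another funder's report

print(f"Phone call:     {call_person_hours:.1f} person-hours")
print(f"Written report: {report_person_hours:.1f} person-hours")
```

Under these assumptions the call costs more grantee time (7.5 person-hours) than the report (5); with fewer staff on the call or less prep, the comparison flips.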

I'm curious to know how funders have asked grantees for phone calls in a way that keeps them low pressure and low burden. How long are these calls typically, and how frequently do they occur?

I could see a shorter phone call (e.g., 15 minutes) signaling that this is a more low-stakes conversation that doesn't require significant advance preparation — but then is the grantmaker gathering enough detail from the call about how things are going? Thoughts?

Grant Reports – Part 3: Less Is More – When to NOT Collect Data

This is Part 3 of 3, continuing my previous two blog posts (Part 1 and Part 2), which began to describe our process of selecting grantee metrics.

Once we had our draft set of grantee metrics, we shared them with our client, advisory board, other experts, and grantees for their feedback. Grantee feedback is particularly important: we wanted to make sure that our grantees would be comfortable collecting the data, and we asked them to flag any items that would be difficult to collect. We also wanted to make sure that grantees realistically believed they could achieve impacts within the grant's time frame — for example, if the grant period is one year and it takes at least 5 years to observe a given outcome, then it makes no sense at this point to collect that outcome data.

We also allowed grantees to omit certain metrics if they were too cumbersome to collect — as long as we discussed and understood why it would be too challenging for them to collect the information. In those cases, if it made sense, we’d ask the grantee to provide proxy data items that they’d be able to collect easily.

Once we had our final set of metrics, we created a graphically appealing one-page dashboard which we refresh on an annual basis as a means to demonstrate to our stakeholders the impact of the fund.

Taking a step back and thinking about reporting and metrics more generally, I think we need to think carefully about why we are collecting this grantee information and whether we really need it. If it's mainly to hold our grantee accountable for doing their work, would a site visit suffice instead? Or a phone call? I think a lot about the Whitman Institute's trust-based philanthropy model.

If there’s a need to present metrics to our board of directors, can we streamline the number of metrics that are really needed? And have an honest conversation with the board about the burdens placed on grantees? 

As an evaluator, I have seen people go overboard with data and metrics — in some cases, more is not necessarily better! We might collect 100 metrics but only really use 5 of them to inform our decision making. Or, to put it another way: if we make a $20K grant and it costs $10K to collect the desired data, would society have been better off if we had just used that additional $10K to serve more people?

If we apply an equity lens to monitoring and reporting, we need to acknowledge that data is NOT free and that, if we want it, we should compensate our grantees fairly for it. For example, as mentioned in a previous post, I found out that one of my grantees had spent 40+ hours collecting data for a $25K grant (for a grant of that size, it should be more on the order of 10 hours or less), and we ended up giving them another $2K to compensate them for their time.

We invite you to share your thoughts and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.

Grant Reports – Part 2: Let Grantees Define Outcomes!

This is Part 2 of 3, continuing my previous blog post, which began to describe our process of selecting grantee metrics.

As you know, it's much easier to track and measure outputs than outcomes. If you're wondering about the difference between outputs and outcomes: in brief, outputs quantify the activities that the grantee undertook — they are counts of the products of a program's activities, or units of service. A program's outputs are intended to produce desired outcomes for the program's participants. Outcomes are benefits experienced by participants during or after their involvement with the program, and may relate to knowledge, skills, attitudes, values, behavior, condition, or status.

So how do you collect outcome data without driving your grantee crazy or resorting to a randomized controlled trial that will cost 5x your grant amount? We also need to acknowledge that, in many cases, outcomes may take a long time to manifest, AND that it can be challenging to attribute changes in an outcome to a particular program (e.g., think about the impact of tutoring programs on academic achievement — if a student demonstrates improvement, it's hard to know what part was due to the tutoring program, school, home, or support from another source, short of doing a randomized controlled trial and/or finding a really robust comparison group).

There’s no magic bullet, but I can tell you what we did for the hope & grace fund, and our client and advisory board seemed satisfied with this outcome.

We looked at what outcomes grantees collected, and they were all over the map, since we had grantees who addressed women's mental health and well-being through a wide range of programs. But we took a big-tent approach and came up with a higher-level outcome that encompassed all these specific outcomes. In our case, to capture a project's impact on individuals' well-being, we asked grantees to count the number of individuals who have positive changes in attitudes and behaviors related to their well-being after receiving services from the project — a measure that is intentionally vague and broad. We then asked our grantees to define what "well-being" meant in accordance with their programs and to decide how they'd collect this data (which could include simple surveys designed by the grantee, where participants self-report whether they observed an improvement in attitude/affect as a result of the program).

So well-being could take on any one or more of numerous definitions, as decided by the grantee, including any of these indicators, which are provided as examples:

  • improved family functioning
  • increase in feelings of personal safety / reduced exposure to violence
  • improved self-sufficiency (including employment / financial stability)
  • improved support networks
  • decrease in perception of internal/external stigma related to mental health
  • positive change in attitude/behavior toward seeking help for mental health and using available mental health resources
  • reduced alcohol or drug use
  • entry into a substance use disorder treatment program
  • increased sense of emotional support
  • increased sense of resilience

So, after we aggregated grantee reports, we were able to report that X% of the total Y individuals served by our grantees experienced some positive change in their well-being as a result of programs funded by hope & grace. This was sufficient to give us a directional sense of whether people were benefiting overall from our grantmaking and establish X as our baseline.
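The roll-up itself is simple arithmetic once each grantee reports its two counts. Here is a minimal sketch in Python, with hypothetical grantee names and numbers (not actual hope & grace data):

```python
# Each grantee reports (a) individuals served and (b) individuals with a
# positive change in well-being, as that grantee defined it.
# Grantee names and numbers below are hypothetical.
reports = [
    {"grantee": "Org A", "served": 250, "improved": 180},
    {"grantee": "Org B", "served": 90,  "improved": 60},
    {"grantee": "Org C", "served": 400, "improved": 310},
]

total_served = sum(r["served"] for r in reports)       # Y in the text
total_improved = sum(r["improved"] for r in reports)
share_improved = total_improved / total_served         # X in the text

print(f"{share_improved:.0%} of the {total_served} individuals served "
      f"experienced a positive change in well-being.")
```

Because each grantee defines "well-being" for itself, the aggregate is directional rather than strictly comparable across programs — which was exactly the trade-off we accepted.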

More thoughts on DEI and grantee reporting to be shared in Part 3 next week.

We invite you to share your thoughts and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.

Grant Reports – Part 1: Metrics that Won’t Make Your Grantees Want to Poke Their Eyes Out

In my previous blog post, I mentioned how we had developed what we hoped would be a streamlined reporting process for our hope & grace fund grantees — at the end of their grant (which ranged from $20K to $100K for a one-year grant period), we asked them to:

  • Write a 1- to 2-page executive summary of the results of their grant and lessons learned / challenges encountered / possible next steps
  • Fill out a spreadsheet template with some basic metrics about the people they served (outputs — # of women, race/ethnicity, age group, etc.) and a couple of extremely high-level outcomes.

In terms of the actual development and selection of the original set of grantee metrics, our goal was to minimize the burden on our grantees, especially the smaller ones who did not have dedicated evaluation staff and probably had more limited data collection capabilities.

The underlying reason for collecting these metrics was to be able to demonstrate to our internal and external stakeholders the aggregate impact of our grantmaking — our client was interested in sharing the results with the people who bought their skincare products and donated 1% of their purchases to the hope & grace fund.

Here’s what we did:

  1. In the grant application, we asked applicants how they currently measure success and what metrics they already collect. That way, we understood what would be minimally burdensome for them to report to us (since they were already collecting it). This was very helpful in informing which metrics we decided to collect from grantees, and helped align our data collection with theirs.
  2. We compiled all of our grantees' metrics by creating a large spreadsheet / matrix with the metrics on one axis and the grantee organizations on the other axis. We then counted which metrics were most common across all the applicants. It took about 2-3 hours to do this, to give you a sense of the level of effort. A simplified example of the matrix I created is below:


|                                          | Metric 1 | Metric 2 | Metric 3 | Metric 4 | Metric 5 |
|------------------------------------------|----------|----------|----------|----------|----------|
| Grantee 1                                | 1        | 1        | 1        |          | 1        |
| Grantee 2                                |          | 1        |          | 1        | 1        |
| Grantee 3                                |          | 1        | 1        |          |          |
| Grantee 4                                | 1        | 1        |          | 1        | 1        |
| Grantee 5                                | 1        | 1        | 1        |          | 1        |
| Total # of grantees that use this metric | 3        | 5        | 3        | 2        | 4        |
| % of grantees that use this metric       | 60%      | 100%     | 60%      | 40%      | 80%      |

In this very simplistic matrix, I'd choose Metric 2 and Metric 5 because a large majority of the grantees already collect them. Also be sure to reflect on whether you really need Metric 2 and Metric 5 — just because grantees already collect this data doesn't mean it is useful to your grantmaking; it could just add noise to your own analysis. Less is more!
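If you have more than a handful of grantees, a short script can do the tallying for you. Here's a minimal sketch in Python; the grantee-to-metric mapping mirrors the simplified matrix above, and the 75% threshold is my own illustrative cutoff, not a rule from our process:

```python
from collections import Counter

# Which metrics each grantee already collects (mirrors the matrix above;
# names and assignments are illustrative).
grantee_metrics = {
    "Grantee 1": {"Metric 1", "Metric 2", "Metric 3", "Metric 5"},
    "Grantee 2": {"Metric 2", "Metric 4", "Metric 5"},
    "Grantee 3": {"Metric 2", "Metric 3"},
    "Grantee 4": {"Metric 1", "Metric 2", "Metric 4", "Metric 5"},
    "Grantee 5": {"Metric 1", "Metric 2", "Metric 3", "Metric 5"},
}

THRESHOLD = 0.75  # keep metrics that at least 75% of grantees already collect

counts = Counter(m for metrics in grantee_metrics.values() for m in metrics)
n = len(grantee_metrics)

for metric in sorted(counts):
    share = counts[metric] / n
    verdict = "keep" if share >= THRESHOLD else "reconsider"
    print(f"{metric}: {counts[metric]}/{n} grantees ({share:.0%}) -> {verdict}")
```

With this cutoff, only Metric 2 (100%) and Metric 5 (80%) make the cut — and the same "do we really need it?" reflection still applies to whatever survives the tally.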

Also, if a grantee could not collect Metric 5 without a lot of additional work, we would have a conversation with them and consider simply exempting them from collecting that data.

For Metrics 1, 3, and 4, you could still ask grantees to collect that data, BUT first go through the reflective exercise of deciding whether you really need it and what you plan to do with it (e.g., what practical use do you have for this data? Will it inform your grantmaking strategies / priorities? Is this data critical to your stakeholders? Is this a need-to-have vs. a nice-to-have?).

This post will be continued in Part 2 and Part 3.

We invite you to share your thoughts, experiences, and ideas in the comments section below.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.

Why It’s Important to Track Grantee Time to Complete Grant Reports

When we were managing the hope & grace fund, we thought we had developed a streamlined reporting process for our grantees — at the end of their grant, we asked them to:

  • Write a 1- to 2-page executive summary of the results of their grant and lessons learned / challenges encountered / possible next steps
  • Fill out a spreadsheet template with some basic metrics about the people they served (outputs — # of women, race/ethnicity, age group, etc.) and a couple of extremely high-level outcomes.

One thing we initially neglected to do was explicitly ask our grantees how much time they spent on reporting. We allowed them to allocate up to 10 percent of their budget for reporting (with the assumption that they'd spend no more than 10 hours on it). But in one case, we found that our grantee had spent 40+ hours tracking data to fill out the metrics spreadsheet (and for a $25,000 grant, that means they had allocated at most $2,500 to this task). As such, we ended up giving them an additional $2,000 to compensate them for their time.
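To make that arithmetic explicit, here is a small sketch; the hourly rate is a hypothetical assumption I'm adding for illustration, not a figure from the actual grant:

```python
# Back-of-the-envelope check of reporting burden against budget, using the
# figures from the example above. The hourly rate is an assumption.
grant_amount = 25_000
reporting_cap = 0.10                     # up to 10% of budget for reporting
budgeted = grant_amount * reporting_cap  # $2,500 at most

hours_spent = 40
assumed_hourly_rate = 100                # hypothetical loaded staff rate
actual_cost = hours_spent * assumed_hourly_rate  # $4,000

shortfall = max(0, actual_cost - budgeted)
print(f"Budgeted: ${budgeted:,.0f}; actual: ${actual_cost:,.0f}; "
      f"uncompensated: ${shortfall:,.0f}")
```

Any reasonable rate you plug in shows the same thing: the grantee was absorbing a real cost that the budget cap never anticipated.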

Lessons learned:

  • Ask grantees to keep track of how much time it takes to fulfill reporting requirements and make sure that they are being compensated in a reasonable way for this. 
    • Potentially track this metric for large vs. small grantees and large vs. small grants
    • I’d suggest tracking this metric as part of your DEI grantmaking dashboard
  • Let grantees know in advance how much time it should take on average to complete the reporting requirements, so that they can budget accordingly
  • If the reporting is taking much longer than expected, help the grantee troubleshoot why this is occurring. Is it an issue with the reporting tool / metrics not being a good fit for their grant project? Does the grantee lack the capacity (which points to the need for further investment to build that capacity)?
  • Compensate for additional efforts when necessary, especially for smaller organizations which cannot absorb the costs.

For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.

Unicorns Unite! How Nonprofits and Foundations can Build EPIC Partnerships

One of my favorite bloggers, Vu Le, along with Jessamyn Shams-Lau and Jane Leu (who are also pretty awesome!), has written a new book, Unicorns Unite. Based on the illustrations alone, I already love this book!! Get a preview of the book via this Medium blog post.

Basically they diagnose why and how nonprofits and grantmakers relate in dysfunctional ways (which I try to address and counteract via my checklist of recommendations for incorporating DEI in grantmaking) — it really boils down to power dynamics, lack of trust, and double standards. They then propose a framework for creating healthy dynamics to achieve the greatest good. This is the EPIC Partnership Framework:

E = Equally value all inputs, especially time and money. Partners do not allow any input to eclipse all others in importance, power, or prestige.

P = Prioritize needs of those we serve. Nonprofits put needs of clients, communities, and change first. Foundations put needs of nonprofits and communities first.

I = Increase trust and empathy. Partners identify as peers, trusted colleagues, teammates, and equals who learn from and challenge each other and the field to excel.

C = Commit to big, bold, and better. Partners think big, act boldly, and produce better results — through all-in teamwork.

Buy this book!!!!

Recap of Nov 16 Panel: Diversity, Equity, and Inclusion: Are We Getting It Right?

(I wrote this blog post and the original version is on the AAPIP blog here.)

The Northern California Grantmakers’ Peninsula Philanthropy Network and AAPIP recently co-sponsored an animated panel discussion where funders reflected on changes they have made and hope to make with regards to diversity, equity, and inclusion (DEI). Foundations represented on the panel comprised the Blue Shield of California Foundation, The James Irvine Foundation, The San Francisco Foundation, and the Tides Foundation (see below for details on the speakers).

The bottom line is that, while some funders have been galvanized to action by recent changes in the political climate, there is still much work to do, both internally and externally, with respect to DEI. Key funder recommendations emerged from our discussion, including:

We should take more risks, move more quickly, and redefine our view of failure. Rapid-response grants are a good start, but we need to do more.

In response to the changing policy environment, the funders on our panel created rapid-response funds in the past year to fund advocacy efforts and other activities, using streamlined application processes (e.g., the grant application consists of one to five questions with a low character limit; grant decisions are made on a weekly basis; grants are awarded on a rolling basis within 30 days of the application submission, etc.). There was some hesitancy around these rapid-response processes, stemming from the tendency of many foundations to conduct due diligence at a much slower pace. To address concerns, one foundation implemented "guard rails," which included establishing a set of criteria to accept/reject proposals and a small committee to quickly vet rapid-response applications. As a way to mitigate risk, this foundation also chose to fund networks of smaller grassroots organizations, rather than funding these organizations directly. Another foundation has a set of pre-approved grantee organizations to which it can rapidly deploy funding, as the need arises.

When asked about the outcomes of taking risks to fund these rapid response grants, one panelist replied, “We learned the world didn’t end.” Other panelists agreed that they had no regrets with any of the rapid response grants they have awarded. Funders should also take lessons from rapid-response grantmaking to streamline their traditional grantmaking processes. However, the panelists stressed the importance of using traditional grants to support grantees’ core long-term strategies, while using rapid-response grants for short-term needs.

We also discussed how investments in the private sector are made on a much faster timeline than in the philanthropic sector, carry much more risk, and involve significantly larger sums of money, while still incorporating rigorous due diligence processes. In fact, a venture capital portfolio is considered too conservative if the “failure” rate of its investments is too low. In the philanthropic sector, we need to redefine how we view failure, especially if we want to find solutions that work.

We need to embed community voices into our work. Consider doing a listening tour of community stakeholders, and consider conducting greater outreach to connect grassroots organizations to funding opportunities.

The James Irvine Foundation conducted 14 listening sessions across the state of California with more than 400 people. The purpose of these sessions was to better understand the challenges and dreams of Californians who are working but struggling with poverty, and learn about solutions that could improve their lives. The feedback collected is helping to inform the foundation’s strategic focus and grantmaking to expand opportunity for these working Californians. Learn more about Irvine’s Community Listening Sessions here, and watch the “sizzle reel” summarizing this initiative here.

We should walk the talk, and examine how our organizations incorporate DEI values internally as well as externally.

All panelists stressed the importance of internalizing DEI values and of engaging with coworkers deeply in such conversations. One foundation is intentionally educating its board about DEI issues, including sharing with the board a recommended reading list on a quarterly basis. Another foundation actively considers how to incorporate the lived experiences of its own staff in its grantmaking strategy. A panelist also suggested that funders should screen and select their vendors, suppliers, and other contractors using a DEI lens.

Panelists cited several resources that have been useful as they have sought to incorporate DEI internally.

If you know of other resources, please share them in the comments below.

The panel of speakers comprised:

  • Nancy Chan (moderator), Director of Community Partnerships at Catalyte.io, and formerly Director of Consulting Services at Arabella Advisors

  • JC De Vera, Nurturing Equity Movements Fellow at The San Francisco Foundation

  • Kelley D. Gulley, Senior Program Officer at The James Irvine Foundation

  • Carolyn Wang Kong, Senior Program Officer at the Blue Shield of California Foundation

  • Edward Wang, Director of Corporate Philanthropy at the Tides Foundation

Nancy Chan is the Director of Community Partnerships at Catalyte.io, a tech company which uses predictive analytics to identify people from nontraditional backgrounds with high potential to become software developers. She was formerly a director at philanthropy consulting firm, Arabella Advisors, where she led its work related to DEI and grantmaking practice.

GrantAdvisor: Website Facilitating Dialogue Between Grantmakers and Grantees

The GrantAdvisor.org web service was just launched in California and Minnesota, and will be spreading to other locations:

GrantAdvisor is a web service that facilitates open dialogue between nonprofits and grantmakers by collecting authentic, real-time reviews and comments on grantseekers’ experiences working with funders to encourage more productive philanthropy.

Feedback is shared anonymously and once a funder receives 5 reviews, the data on that funder will be posted publicly and funders will have the opportunity to respond to reviews.

In addition to aggregating feedback on the grant application process, GrantAdvisor will provide a forum to share views on how funders influence their field. This allows funders to understand how they are perceived as leaders and influencers, not simply as grantmakers, and (not that the analogy isn’t obvious enough):

If you’re going on a trip, you check TripAdvisor; if you’re going to apply for a grant, you check GrantAdvisor! If you’re a hotel, you check TripAdvisor to learn valuable customer feedback you can’t get any other way!

Equitable Grant Agreement Language

A foundation's standard grant agreement language gave the funder the unilateral right to terminate the agreement and cut off funding to the grantee. One grantee raised issues with how the funder approached ownership of the work product and with its termination provisions.

Through extensive dialogue between the grantmaker and grantee, they restructured the language (see excerpt below) to present the agreement as an equitable partnership rather than a top-down relationship between the funder and the grantee, as well as to protect the grantee’s ownership rights over the work product.

This Grant Agreement may be terminated, in whole or in part, prior to the completion of the contract project activities when both parties agree that continuation is not feasible or would not produce beneficial results commensurate with the further expenditure of funds. The parties must agree on the termination conditions, including effective date and the portion to be terminated. The Organization shall not incur new obligations for the terminated portion after the effective date, and shall cancel as many outstanding obligations as possible. The Foundation shall make funds available to the Organization to pay for allowable expenses incurred before the effective date of termination.

This story shows how a willingness to be collaborative and positive with grantees — approaching them from a posture of seeking to understand their context and needs — led to the best outcome for all.

We hope this will be helpful to you in your work.

Inequity of time

Below is an excerpt from a recent Nonprofit with Balls blog post, "Time inequity: What it is and why it's no-good, very-bad," which accurately captures the motivation for developing our checklist of recommendations to eliminate implicit bias in grantmaking processes. This blog is great, and its author, Vu Le, will be doing the closing plenary of the PEAK Grantmaking (formerly Grants Managers Network) conference this Wednesday at 10:30 am–12 pm in Hollywood, CA. (Kelly Brown, director of the D5 Coalition, and I will also be doing two back-to-back short talks on DEI in grantmaking at the PEAK conference, at 3:45 pm and 4:35 pm tomorrow, Tuesday — come join us!)

Funders: We have an unfortunate joke in our sector that the smaller a grant is, the more irritating and time-consuming the application is. This is hilarious, until we realize that many organizations led by marginalized communities can only access these small grants. I mentioned a while ago about how one of my Executive Director colleagues of color had been on the verge of tears because she had spent over 40 hours writing and rewriting a grant proposal and getting it rejected for the second time. It was for $5,000.

Funders need to be aware that time is not distributed equitably. Many of my colleagues of color who run nonprofits are getting paid part-time but are doing way more work than they’re paid. They often have other jobs. As community leaders, many have community obligations and crises that the rest of us simply don’t have to worry about. Do not waste their limited time. If your grant is less than 10K, it honestly should not be more than a 3-page narrative and one or two attachments. Or even better, just accept a grant proposal that they already spent 30 hours writing for another foundation. Save people the time, and allow them to use it to implement programs and services.

…organizations led by marginalized communities will have less hours in the day—because they tend to have fewer staff and more community obligations—to research your foundation’s priorities, seek support, write the proposal, and rehearse for the site visits. They may not be able to study and play the funding game as well as an organization that has more time in the form of a development team or contract grant-writer. If we want to address injustice, we have to focus on community needs, not simply reward whoever has the most time and resources to prepare the best application.

Donors: …please be aware that many smaller, grassroots organizations, a significant number of which are led by marginalized communities, do not have a development team or even a half-time development person. Which means that they may not be able to send acknowledgements for your gifts as fast, or be able to focus on cultivating a relationship with you as effectively as other organizations. …since leaders of color, leaders with disabilities, etc., may have less time because they have all sorts of other stuff to deal with, please try to be understanding and supportive.

Emphases above are mine.