Nonprofit Effectiveness - What Impact Do You Have? (April 2006)


BBB Charity Effectiveness Symposium
April 10, 2006
Summary


On April 10, 2006, more than 150 nonprofit leaders, funders, and other practitioners gathered to discuss evaluation and measuring organizational effectiveness at the conference "Nonprofit Effectiveness - What Impact Do You Have," hosted by The Better Business Bureau and The New School, with support from The New York Community Trust. Additional partners included The Council of Community Services for New York State, The Foundation Center, the New York Regional Association of Grantmakers, NYCharities.org and the Nonprofit Coordinating Committee.

The conference opened with a keynote address by Stephanie Strom of the New York Times, followed by two panels: one with speakers from grantmaking organizations and a second with speakers from nonprofit organizations and charities, providing conference attendees with varied perspectives on evaluation and outcomes measurement.

Nonprofit organizations and grantmaking institutions face pressure from myriad sources to demonstrate accountability and program effectiveness. Foundations strive to show they are using their funds wisely and having an impact on specific societal problems. Charities face pressure to demonstrate that their programs have both short-term and long-term impact. One source of that pressure is the foundations that support them, which are increasingly looking for measurable outcomes. Another source is internal: nonprofits are identifying ways that systematic and thoughtful evaluation can strengthen their programs and help them accomplish their missions.

However, despite the pressure to evaluate programs and the corresponding interest in doing so, many questions remain about how to create and implement evaluation programs. Many of the speakers highlighted the difficulty and expense of creating good evaluation methods and programs, and the challenges organizations face as they work to build an organizational culture of evaluation. In addition, most nonprofits do not receive financial support from funders for program evaluation. Several panelists encouraged nonprofit organizations to focus on the ultimate goal of using evaluation results to help both funders and the general public gain a better understanding of the societal impacts of their programs.

Another focus for some of the speakers was the issue of evaluating short-term versus long-term results. Many charities face the difficulty of showing significant impact within the confines of a one-year grant funding cycle, which is still the most common foundation timeline. Given that only a few foundations regularly make grants over longer periods of time, nonprofit organizations and funders alike struggle to identify appropriate measures that demonstrate success over a relatively brief timeframe. Some foundations do analyze long-term impact, based on both their grantees' reports and other available research. However, even those tend not to share their findings widely with grantees and others, using the results instead to educate their own staff and boards. Presenters also offered concrete suggestions for demonstrating longer-term impact and talked about changing organizational culture to support outcomes measurement and evaluation.

Summary of each Panel and Workshop:

9:00-9:30: Keynote Speaker
Stephanie Strom, New York Times Reporter on Philanthropic Activity

Ms. Strom opened her discussion by emphasizing that, given the recent focus on accountability in the nonprofit sector, measuring effectiveness can be a powerful tool enabling charities to achieve accountability in many ways. Measuring effectiveness is also a better way to approach accountability if the alternative is more regulation. The so-called "new money philanthropists," such as Michael and Susan Dell and Bill and Melinda Gates, are increasingly concerned with effectiveness, and are willing to pay to ensure that their money is having the impact they want. At the same time, she acknowledged that for some organizations, the impact of their work won't be seen for decades, and still others are having impact that is impossible to measure. Measurement is harder than many funders realize, and many organizations work on issues that are not easily described by statistics.

Assessment can be an important tool for demonstrating effectiveness and it is critical for nonprofit organizations to help funders understand the costs of evaluation, especially when funders ask for a level of evaluation beyond the scope of information that is regularly collected.

Evaluation and measurement add to the cost of doing business and alter organizational cost ratios. Because different organizations face different challenges depending on their work, donor commitment to evaluation is critical. The social service sector and advocacy organizations face some of the largest evaluation challenges because their work is often inherently difficult to measure, and there will always be a need to support the neediest in society.

That is not to say that donors flee from failure or that nonprofits should fear assessment. One has only to look at the hundreds of millions of dollars aimed at addressing failures in education: people keep putting money into the system and into organizations that work to improve education.

Several themes emerged from the question-and-answer portion of the keynote session. Audience members expressed concern that evaluation and measurement require an increase in resources. Beyond educating funders about the real costs associated with evaluation, expertise is needed to help organizations determine the best methods of evaluation. Ms. Strom suggested tapping into some of the resources available to charities through consulting firms and nonprofits, including Bridgespan, the Nonprofit Coordinating Committee and the Minnesota Council of Nonprofits. Additionally, funders already committed to evaluation, including the Dells and the Gateses, often share their findings and resources with other organizations through their websites.

To some extent, the nonprofit sector itself is responsible for the public's focus on administrative and fundraising costs as an indicator of organizational effectiveness. An opportunity was lost after 9/11 to explain to the public more about the costs of administering programs beyond budgetary percentages, and it is imperative that nonprofit organizations continually question whether they are evaluating the correct measurements as a way to better explain their impact to the public. Anecdotally, the amount of donor dollars going to organizations appears to be stagnant, while at the same time the nonprofit sector is exploding. The IRS adds more than 50,000 new nonprofit organizations every year, and comparing organizations' effectiveness is therefore one way that donors decide which organizations to fund.

Ms. Strom noted that a fundamental concern with the focus on demonstrating outcomes and measuring effectiveness is that it will change our notion of what the best charities should accomplish and, therefore, who receives funding. Will programs and services that aim to serve the neediest in society continue to receive adequate funding when they cannot solve the larger problems of homelessness, hunger and addiction? In short, will funders abandon the needy because organizations do not, or cannot, demonstrate that their programs are effective?

9:35-10:35: Funders: Views and Opinions Panel
Moderator: Charles Hamilton, The Clark Foundation
Panelists: Joyce Bove, The New York Community Trust
Edward Pauly, The Wallace Foundation
Michael Weinstein, Robin Hood Foundation

Mr. Hamilton started the panelists off with a discussion about the motivation behind their interest in evaluation and a practical question about how each foundation determined what measures it would require from a grantee. Joyce Bove pointed out that evaluation should be pragmatic, and decisions about what to evaluate at the NYCT are the result of the grantmaker and grant recipient working together to identify appropriate benchmarks. Ed Pauly used a recent survey to answer several of the fundamental questions about foundations' interest in and use of evaluation. The survey, done by The Urban Institute and Grantmakers for Effective Organizations, found that foundations are increasingly looking for specific measures of effectiveness from their grantees and that the principal mode of assessment used by foundations is reporting. Survey respondents identified several reasons for evaluation, including determining if the original objective was achieved, identifying outcomes, strengthening future grantmaking, learning about implementation, contributing to greater knowledge in the field, and strengthening public policy.

Funding institutions each have a different focus. The Wallace Foundation funds innovation in systems and therefore must recognize that innovation often fails and not punish the risk-takers when their evaluation shows failure. The Robin Hood Foundation considers evaluation to be a diagnostic measure, a way for donors and staff to improve methodology, a tool to demonstrate transparency, and a way to determine which groups' programs most effectively fight poverty. Mr. Weinstein urged charities to use evaluation as a tool for themselves to learn how to improve their programs.

Mr. Hamilton then turned the conversation to funding. The level to which grantmakers fund evaluation varies widely. The New York Community Trust usually considers the cost of evaluation to be included in the administrative overhead portion of a grant, although it will provide specific technical assistance grants when appropriate. The Robin Hood Foundation pays 100% of the cost of evaluation for all of its grantees, including the use of outside evaluators who work with grantees to create the report Robin Hood wants as well as additional reports sought by the grantees. The Wallace Foundation also funds its grantees' outcome requirements. Mr. Pauly noted a critical challenge to the sector's integrity: funders seeking unrealistic outcomes from grantees who are reluctant to rock the boat for fear of losing funding.

Panelists agreed that it is important for organizations to look at not only what their funders want to know, but also at what measures will help them become more effective overall as an organization. Charities were urged to decide what they needed to know as managers, and to accept that they need to make that decision for themselves, irrespective of their funders' requests for specific data or results. At the same time, some of the panelists acknowledged that it is difficult for organizations to mediate between trying to provide the data that foundations want and the general lack of resources (time, money) needed to measure and improve organizational effectiveness.

Long-term impact is another area of concern. Yearly funding cycles often don't provide enough time for a program to meet pre-set outcome goals. To address this, the Wallace Foundation generally funds in five-year cycles, and recognizes that even their extended grant cycle is just the beginning for sustainable impact. The New York Community Trust makes one-year grants, with short-term benchmarking, but often funds organizations over longer periods of time, even 15 to 20 years in some cases. To estimate long-term impact, the Robin Hood Foundation uses "daisy chains" of research to build statistical bridges between what they can observe over a grant period and the long-term expected impact.

From the foundation point of view, trends in measuring effectiveness include investment in public distribution of information and findings, and in pragmatic tools for evaluation. The Robin Hood Foundation, like other foundations, attempts to compare the ultimate value of different types of programs in accomplishing their goals. For example, Robin Hood attempts to assess which is more effective in fighting poverty: a soup kitchen, a microlending program, a charter school or a job training program.

During the question and answer period, several audience questions highlighted the tension between charities wanting foundations to provide sector-wide knowledge and best-practice information and foundations' reluctance, as individual foundations, to tell charities what constitutes best practices, since they do not believe they are in a position to make those judgments. The panelists agreed that it is not the role of the foundation world to establish or dictate "best practices" for nonprofits to use, in part because of the great number of variables involved in a successful program, as well as the relatively limited knowledge any individual foundation obtains from its grantees. Although the panelists agreed that it is difficult for funders to provide specific feedback to nonprofit organizations for improving effectiveness, they also acknowledged that organizations would benefit from more research synthesizing program results across the nonprofit sector. The funders pointed charities to different resources in their search for knowledge about best practices and evaluation, such as the funder and charity affinity groups organized by issue or subsector. Mr. Hamilton concluded the wide-ranging discussion by noting a critical area of agreement: it is imperative that charities and foundations measure performance to ensure a high level of program accomplishment, and that they find productive ways to communicate the findings in order to improve the nonprofit sector as a whole.

10:40 - 11:40 Charities: Views and Opinion Panel
Moderator: Harry Hatry, The Urban Institute
Panelists: Michael Clark, Nonprofit Coordinating Committee of New York
Robert Egger, DC Central Kitchen
Karen Hopkins, Brooklyn Academy of Music
Barbara Turk, YWCA of Brooklyn

The organizations represented on this panel use a range of models for evaluating different types of programs. At the Brooklyn Academy of Music (BAM), the only arts organization on the panel, outcome measurement is often informal and includes demographic analysis of the audience at performances, ticket sales, critical response and other common-sense metrics, along with the more formal evaluation of programs required by federal grants. The Brooklyn YWCA developed specific and complex outcome indicators around the three key goals in the YWCA's recent strategic plan; it is currently developing baselines for each of the indicators. Ms. Turk noted that their ultimate goal is to understand how and whether they are transforming people's lives. She encouraged groups to follow the Robin Hood philosophy of knowing the research and literature in their field, which can help them determine which of their short-term metrics can serve as indicators of long-term impact.

The DC Central Kitchen measures outcomes and communicates on two levels. First, like others in the "anti-hunger" field, it measures all the obvious: numbers of people served, cost per meal, pounds of food rescued and provided, numbers of volunteers, etc. But Mr. Egger stated that the real measure is how well they succeed in decreasing the need for a food kitchen. He is therefore more interested in finding ways to measure their impact on the larger issues related to hunger and on decreasing the long-term need for their services.

The Nonprofit Coordinating Committee (NPCC) provides support for small and medium-sized nonprofit organizations to track outcomes, because most organizations will not receive funding for outcome tracking from their donors. The primary question is how we change our field. To that end, NPCC is creating practical tools for small and mid-size nonprofits to learn how to incorporate outcome tracking into their organizations for less than $10,000. One of NPCC's goals is to take away some of the fear and uncertainty surrounding "evaluation" by showing nonprofits that their staff and clients often know what they are trying to accomplish, and it encourages groups to start the process with brainstorming sessions - a practical step every organization has the capacity to take.

Several panelists noted a few caveats, agreeing especially that many of the things that are easy to measure are not necessarily the most important. Concerns were expressed from panelists and audience members about funders (government and foundations) seeking data that is ultimately irrelevant.

Mr. Egger encouraged organizations to use their measurements as one way to draw the public into thinking about and supporting their causes. DC Central Kitchen touches a nerve when it talks about saving food and what it accomplishes through food recycling programs. People feel guilty about wasting food, and when shown the impact they can have by turning unused food over to a charity, they want to help. On the other hand, he also commented on the public's exasperation with the reality that most of the big social ills are just as prevalent today as they were forty years ago when this country began the War on Poverty. He believes that frustration is partly responsible for donors' desire to understand the impact of their dollars. Given how much money and how many resources are put into the nonprofit sector - over 1.6 trillion dollars - there should be far more regular coverage of and communication about it by the media. Because of the lack of regular and systematic coverage, most of the public has little understanding of how the sector functions or what it accomplishes.

Mr. Clark added that we don't know the impact for most of the sector's work, because most organizations are small and therefore have not yet recorded and tracked their impact themselves, much less created the kind of data from which we could see the sector's impact as a whole.

Ms. Hopkins emphasized that most programs need more time than most funders are willing to provide before they can really be assessed. Our sector needs time to experiment, get it wrong and then get it right. Several other panelists also commented on the contradiction of using short-term evaluation cycles while trying to assess long-term impact.

Harry Hatry suggested we move from talking about outcomes measurement to outcomes management, because measurement is an ongoing process, and, fundamentally, we should be using our analysis to help us manage more effectively. He provided some specific practical tips about measuring programs and differentiating between groups of clients to more fully understand the strengths and weaknesses of human service programs. The use of the word "effective" and its emphasis on a causal relationship was discussed with Mr. Hatry concluding that "outcome" was a better choice.

All of the panelists discussed the specific sets of quantitative measurements they use at their organizations before the conversation moved, again, to the need for the sector to market its causes more. There was some discussion about which was more important - marketing or measuring - with final comments that the two work together: marketing has the most impact when there are measurements and outcomes that can be touted as part of the message. Ms. Hopkins noted that the reality of fundraising in this country, in some ways, perpetually pushes nonprofits to engage their communities and ultimately creates stronger charities. In response to an audience question, the panelists also discussed the possibility of valuable qualitative analysis and what that would look like.

Some of the panelists cautioned against setting unrealistic goals, either for measurements or for the sector as a whole and noted that having some impact on a problem must constitute success.

12:00 - 1:00: Technical Assistance Workshop Sessions

Workshop I: Connecting Activities with Impact: Challenges for Short-Term Services

Leader: JuWon Choi, Vice President for Educational Services, The Foundation Center

In a climate in which "outcomes" and "impact" are the buzzwords of the moment, the Foundation Center provides a prime example of the challenge organizations undertake when they seek to demonstrate the impact of short-term services.

Tens of thousands of people participate each year in the Foundation Center's one-hour training sessions on various fundraising topics, with an overwhelming number reporting high levels of satisfaction with the content as well as the approach. Beyond the routine output data, the Foundation Center captures information which indicates whether the session addressed the stated objective, whether attendees took away something meaningful and useful for their work, and whether they found the session valuable enough to recommend to others. In a follow-up survey, the Center also asks whether the attendees applied the lessons they learned. All the responses are put into a database to which Foundation Center trainers have access.

With the composite data, and using the "daisy chain" concept, the Foundation Center extrapolates program impacts for its short-term services. Participants shared several other examples: New York Cares recently completed its strategic planning process, which designated volunteers as the organization's primary clients. Having identified its primary client base, the organization now focuses its efforts on tracking benefits to and from the volunteers, as opposed to the agencies where they are placed. To make the connection between activities and impacts for its program, The Make-A-Wish Foundation does four kinds of surveys: 1) a staff satisfaction survey, 2) a survey of its board participation (focused on involvement and composition), 3) a volunteer survey (administered to over 600 volunteers) and 4) a survey about the experience of each "Wish" family.

The final area of discussion during the workshop concentrated on how outcomes affect the planning and management of nonprofit services. It is crucial to create an organization-wide culture focused on outcomes and to ensure that management takes the lead by modeling outcome thinking. Not only does management need to adopt this approach, it must also encourage staff buy-in to the process.


Workshop II. Developing the Capacity of Agencies to Collect and Monitor Outcomes

Leader: Carmen Price, FITA, Medical and Health Association of New York City, Inc.

Logic models are planning tools for organizing a program or service that provide concrete steps toward measurable outcomes related to an ultimate end goal. Aspects to include are:

  • Target Population
      - Numbers
      - Demographics
      - Ethnicity
      - Language
      - Race
  • Activities/Outputs
      - Group sessions
      - Individual sessions
      - Community events
  • Outcomes
      - Immediate
      - Intermediate
      - Long-term/Impact

  • External Influences - factors bearing on the problems you are working to solve; can be positive or negative (e.g., community support, availability of funding)
  • Assumptions - your best educated guess about the problem you'd like to solve, the people affected by it and the proposed solution

External Influences and Assumptions should be considered the foundation of the logic model because they ultimately impact the other components of it.

Examples of outputs are the number of activities done and the number of participants in activities. Outcomes measure changes in behavior, knowledge, and attitudes. Levels of outcomes:

  • Immediate - Knowledge and attitude after a session. (Kids come away thinking condoms are cool.)
  • Intermediate - Behavior that people achieve. This is a secondary change. (Kids actually go out and buy the condoms.)
  • Long Term, Impact - Behavior change over time. (Kids actually use condoms.)

Especially for community-level interventions, long-term follow-up may not be possible, but organizations can ask participants to fill in and return survey cards. Establishing achievable goals and outcomes is a key component of evaluation. Outcomes should be SMART: Specific, Measurable, Appropriate, Realistic, Time-limited.
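The logic-model components described above lend themselves to a simple structured representation. The following sketch is illustrative only - the class and field names are our own, and the example values are adapted loosely from the workshop's condom-education example, not taken from any actual program:

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    level: str        # "immediate", "intermediate", or "long-term"
    description: str  # the change in knowledge, attitude, or behavior
    indicator: str    # how the change will be measured

@dataclass
class LogicModel:
    # Who the program serves (numbers, demographics, language, etc.)
    target_population: dict
    # Activities/outputs: group sessions, individual sessions, community events
    activities: list
    outcomes: list = field(default_factory=list)
    # External influences and assumptions underpin the rest of the model
    external_influences: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)

# Hypothetical example in the spirit of the workshop's three outcome levels
model = LogicModel(
    target_population={"numbers": 200, "demographic": "teens", "language": "English"},
    activities=["group sessions", "community events"],
    outcomes=[
        Outcome("immediate", "attitudes shift after a session", "post-session survey"),
        Outcome("intermediate", "participants obtain condoms", "follow-up survey"),
        Outcome("long-term", "participants consistently use condoms", "long-term survey cards"),
    ],
    external_influences=["community support", "availability of funding"],
    assumptions=["participants can be reached for follow-up"],
)
```

Writing the model down this way makes it easy to check, for instance, that every outcome has an indicator attached before the program starts collecting data.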


Workshop III: Creating an Evaluation Culture

Leaders: Leslie Graham, Nonprofit Connection
Carolina Grynbal, Nonprofit Connection

How does organizational culture impact the implementation of evaluation systems? Human service providers care more about serving people than tracking data. It is important to work with people's fears about evaluation. These fears include the idea that numbers depersonalize clients, and that evaluation is about auditing.

The group's discussion used the work of William Trochim of Cornell University as a springboard to look at the characteristic elements of evaluation cultures. Participants had the opportunity to assess the real-world opportunities and challenges that organizations face as they seek to implement effective evaluation processes.

An evaluation culture is action-oriented, seeks solutions to problems, is forward-looking, and emphasizes evaluation in the everyday thinking of all members of the staff. This means that people from all levels and areas of an organization should be involved in creating indicators and assessment tools, and organizations should consider involving community members and clients in the process as well. The process must be fair, open, and democratic, while stressing accountability. Buy-in at all levels is critical, since staff or board members can get in the way of a successful process.

Suggestions for real-world implementation include:

  • Focusing on program impact, not just funders' requirements
  • Starting with basic, relevant questions: what do you want to know?
  • Using an outside consultant as facilitator
  • Using existing data you are already collecting for other purposes

Workshop IV: Outcome Thinking and Management: Shifting Focus from Activities to Results

Leader: William Phillips, The Rensselaerville Institute

First and foremost, outcome thinking and management offer tools to help organizations become more successful, rather than tools for external accountability. Outcome thinking often requires a significant shift in perspective away from a "problem perspective," which focuses on why things are the way they are and who is responsible, or an "activity perspective," which asks what we are going to do next in response to the situation. The "outcome perspective" looks at a successful future and asks how to get there. With an outcome mindset, it is easier to chart the course rather than getting caught up in day-to-day activities. The difficulty is balancing doing the work that makes a difference in people's lives with getting the funding to pay for it.

Participants brought up specific issues in their own organizations, from teaching critical thinking to children to board development. Outcomes in both situations include increasing what participants know and changing the way they think. Organizations should take responsibility for defining their own success (to funders and others), while at the same time being open to suggestions for improvement.