A Charity Effectiveness Symposium
Presented by the
Education and Research Foundation of the
Better Business Bureau of Metropolitan New York
At Baruch College
February 28, 2008
About 230 leaders from nonprofits and foundations, as well as consultants, academics, and other practitioners from the philanthropy field, signed up to attend the BBB Education and Research Foundation’s second major Charity Effectiveness Symposium on February 28, 2008. “Building Capacity for Maximum Impact” was presented by the BBB Foundation with generous financial support and program committee participation from The New York Community Trust and United Way of New York City. The program was prepared with the support of the New York Regional Association of Grantmakers, and hosted by The Center for Nonprofit Strategy and Management at the Baruch College School of Public Affairs; these organizations also participated in the BBB Foundation’s program committee for the event. Additional event partners included: The Council of Community Services for New York State; The Foundation Center; NYCharities.org; the Nonprofit Coordinating Committee; the Association of Fundraising Professionals-Greater New York Chapter; and Women in Development, New York.
Program Agenda:
Keynote: Gara LaMarche, President and CEO of The Atlantic Philanthropies, opened the conference with a keynote address. His remarks were followed by two panels.
Panel I: The first panel, “What Can We Learn About Our Impact?”, was moderated by Gordon J. Campbell, President and CEO of United Way of New York City. Panelists included: Peter York, Vice President and Director of Evaluation, TCC Group; Lillian Rodríguez-López, President, Hispanic Federation; and Marilyn Gelber, Executive Director, Independence Community Foundation.
Panel II: The second panel was entitled “What’s Next? Planning for Future Impact.” Maria Mottola, Executive Director of the New York Foundation, moderated the panel discussion. Her panelists were: Jack Krauskopf, Director of the Center for Nonprofit Strategy and Management, Baruch College School of Public Affairs and Distinguished Lecturer; Barbara Blumenthal, Director of Applied Research and Senior Consultant, Community Resource Exchange; and Carolyn McLaughlin, Executive Director, the Citizens Advice Bureau.
Workshops: Following the general sessions, attendees participated in three “Leader to Leader” workshop sessions:
- Julie Floch, Director of Not-for-Profit Services from Eisner LLP, presented “Accountability News: The New 990 and Sarbanes Oxley Tips.”
- Patricia Swann, Program Officer of The New York Community Trust, led a breakout session discussion about “How Small Nonprofits Can Demonstrate Effectiveness to Funders.”
- Yvonne L. Moore, Executive Director of the Daphne Foundation, led a workshop discussion entitled “When Measurement Is a Challenge: Tips on Evaluating Hard to Quantify Activities.”
Claire Rosenzweig welcomed attendees to the event and thanked them for their participation.
Summary of each Panel and Workshop:
8:40-9:30 am: Keynote Speaker
Gara LaMarche, President and CEO, The Atlantic Philanthropies
Ronna Brown, President of the New York Regional Association of Grantmakers, introduced Gara LaMarche, and moderated questions from the audience during the question and answer session that followed.
Gara LaMarche presented observations and recommendations about evaluative learning for nonprofits and grant-makers. At The Atlantic Philanthropies, this discipline is known as “SLAE,” or Strategic Learning and Evaluation, and it has its own in-house staff and budget. The Atlantic Philanthropies is known for making very substantial, multi-year grants for operating support, in order to give its grantees “running room” to build capacity and impact. The foundation often supports difficult-to-measure causes, such as advocacy programs. Over the next few decades, The Atlantic Philanthropies will spend down its assets and ultimately go out of business. These circumstances make it especially important for Mr. LaMarche’s foundation to build impact evaluation strategies into its work with grantees from the outset.
“You don’t know how you’re doing if you don’t have some form of measurement,” Mr. LaMarche said. Having a way of assessing impact is a vital best practice for all organizations. Rather than looking at measurement as a tool for rewarding or punishing grantees, however, Mr. LaMarche pointed to its importance as a tool that can help grantees and grant-makers to learn. He argued forcefully that “strategic learning” should be the main aim of charity effectiveness evaluation, and said that foundations should strive to be learning centers.
Many things go into that learning, and formal evaluation is just one part of it. Mr. LaMarche stated that one’s own professional judgment, informed by all the observation, experience, and data that can be obtained, is the base upon which other kinds of evaluation must rest. Nonprofit executives need to be able to examine information about impact evaluation in a supportive atmosphere. This can happen when grant-makers, senior managers, and board members reward candor and self-examination, and look on setbacks not as things to be hidden or punished, but as learning opportunities to be studied and built upon.
“What is useful to learn?” According to Mr. LaMarche, this question is the fundamental issue that grantees should discuss with grant-makers in regular, honest conversations. The Atlantic Philanthropies uses an array of evaluation approaches with its grantees to pursue useful learning.
Comprehensive Evaluation System: A number of direct service grantee assessment efforts employ a comprehensive, integrated evaluation program, combining an internal evaluation system focused on quality with an external system focused on effectiveness. This provides a range of analysis that permits staff to look at trends, as well as at targeted questions about how to overcome barriers. For example, for a child-focused organization, having good data about why children did or did not attend a program could help a nonprofit to refocus its strategies for greater success.
Embedded Outside Evaluator: Another approach is to use an “embedded” outside evaluator, someone trusted by both the funder and the grantee, who reports on an initiative over a period of time. Such an evaluator might attend meetings, provide feedback, and participate directly in the nonprofit’s learning experiences. An “embedded” evaluator is free to say things to the grant-maker about effectiveness that the grantee might never dare to say – such as critical evaluation of funder-driven program designs, and observations about problems with partnerships that are driven primarily by funding opportunities rather than a natural need to form an alliance. The important thing is for grantees to learn from such evaluations in order to do better next time.
Case Studies: The Atlantic Philanthropies has an unusual capacity to support policy advocacy initiatives. There is less documented information about which policy change strategies work best in particular situations. The foundation’s view is that case studies can provide models for others and buttress the case for other funders to join in this kind of grant-making. A library of case studies might provide a kind of “checklist” for strategic learning. Among the lessons learned through case studies to date: to succeed in policy change work, it may be necessary to draw together unlikely partners – strange bedfellows – into unusual alliances.
Intensive Data Collection: The Atlantic has supported grantee efforts to step up data collection efforts, often aimed at improving quality and enabling an organization to increase in scale and reach.
Assessment of Organizational Capacity: The Atlantic is interested in helping its grantees evaluate and strengthen their capacity to address critical issues. For example, this may involve developing a stronger infrastructure to expand work, making others more aware of it, or making changes to operate more efficiently.
Cluster Evaluations: Sometimes building a field requires assessments of communities or “clusters” of grantees working towards a common objective. The foundation’s evaluators meet with teams of grantees several times a year, and encourage the nonprofits to gather together, exchange information, and share learning.
In the future, The Atlantic Philanthropies plans to share the lessons it has learned in publications and through its website.
From a funder’s perspective, Mr. LaMarche offered “a few things to keep in mind about evaluation.”
Shared understanding: First and foremost, evaluation should be based on shared understanding of what is important to measure and learn. Ideally, the organization should be asked to state what it thinks constitutes success, in stages or in total, how it plans to measure it, and what it needs to get it done. The funder should be the grantee’s partner in that process.
Tool for learning and not punishment: “Evaluation is a learning tool for the organization and the funder, not a stick to beat grantees with,” Mr. LaMarche commented, and “If evaluation is coupled with punishment, fear will overwhelm learning.”
Evaluation takes money: Funders should support grantee efforts to learn. Evaluation should not be an unfunded mandate. Nonprofits should not have to choose between providing services and devoting resources to assessment.
Measure only what is important: Data should never be collected for its own sake; more is not necessarily better. Funders should not commit the sin of making grantees jump through hoops, distracting them from their core mission, for the sake of unnecessary paperwork or reporting on trivial things.
Match expertise to context: Make sure that whoever conducts evaluation understands the context in which the nonprofit is working. Different fields require different evaluation expertise.
Use your own judgment: Nonprofits should not use evaluation to outsource their own judgment. They should use it to inform that judgment, and then stand behind their decisions. Evaluation is a tool for learning, not a “magic 8 ball” to tell you what to do.
Cause and effect: Funders and nonprofits should have a reasonable sense of humility about cause and effect. No significant change is brought about by one organization working alone. The tendency of organizations to claim disproportionate impact is often driven by a need to impress funders or to make a publicly visible splash. It is more important to understand a group’s actual role and impact, which may be more effective behind the scenes. “When a neighborhood turns around or climate change is reversed, your $50,000 grant will have played a role, but not likely a role that can be isolated with scientific precision,” Mr. LaMarche noted.
Finally, Mr. LaMarche put forward his opinion that “logic models” and “theories of change” – schematic outlines used in evaluation – are too “reductionist” to be of much help in the evaluative learning process.
In closing, Mr. LaMarche addressed comments to donors who wish to step up their philanthropy, but who are also concerned about making a demonstrable impact.
- First, he recommends that grant-makers start with what they believe: worry about what you care about first, and about how you measure it second.
- Other foundations have expertise about what works. There is little need to recreate due diligence research that has already been done. Check around to see what valuable research findings may be available.
- Many emerging philanthropists too quickly assume that nonprofits need to become more like businesses to succeed. Social investments can’t be measured only in dollars and cents, and the bottom line has many components.
- Funders should think about supporting policy and advocacy work, as well as service organizations. Advocacy can help shape the flow of the government’s massive investments.
- Funding advocacy takes staying power and a tolerance for gains over the long haul. Failure to reach a big goal may actually produce important incremental gains.
Mr. LaMarche concluded his formal remarks by citing a quote from Nietzsche, to illustrate the creative power of evaluation: “Evaluation is creation. Hear it, you creators! Evaluating is itself the most valuable treasure of all that we value. It is only through evaluation that value exists. Without evaluation, the nut of existence would be hollow. Hear it, you creators!”
Gara LaMarche then addressed a number of questions from the audience. Questions and responses are summarized below.
Q: How do you deal with reporting about the ways in which your project failed, and how do you as a funder look at that information?
A: In the real world, it is very hard to create cultures where that information is shared. It’s important to create a “safe place” to talk about failure; but given the risks, you do have to work at getting candor. There should be no penalty – up to a point – for failure, although unrelenting failure is clearly not what we’re striving for. If you take risks, you’re bound to fail sometimes. Admitting to failure can sometimes help build credibility. But the public sharing of failure is naturally very sensitive. There’s an element of shaming, which makes it harder to be transparent in a way that isn’t hurtful.
Q: What’s going to happen in the future with all of the case studies and knowledge that Atlantic has developed, when you go out of business?
A: It’s quite unusual for a foundation of this scale to go out of business. In the near future, Atlantic will be developing its exit strategy or “end of life” plan. What does the end look like? There are a lot of ways to go. The foundation has a serious obligation to be careful about how this is done. For example, in seven of the countries where Atlantic operates, it is the largest funder.
Q: How can you help support a greater trend in the field towards general operating support and what are your thoughts about evaluating that?
A: More funders should think about the benefits of providing general operating support, and bear in mind how difficult the fundraising job is for nonprofits. The nonprofit’s job is to convince the grant-maker that what it does aligns with the funder’s strategy. On the other hand, it is upsetting when nonprofits distort their work to make it appear to fit a foundation’s giving parameters. Grant-makers need to recall that they are enablers of the work of others. The best kind of partnership with grantees is one in which there is an alignment, but the grantees are trusted to carry out the work. In the evaluation area, greater collaboration among foundations might be beneficial, to share due diligence and eliminate duplication. But grant-makers often have the “disease of not-invented-here.” It is not always necessary for a funder to “own” a strategy to pursue it effectively with grant dollars, and foundations could do much more to model collaborative behavior.
Q: Is there a dichotomy between foundations wanting good metrics and the desire of foundations to fund what is new and less proven?
A: The important thing about metrics is the “dance of grant-making”: it should be an exchange between people who want money and the foundation, a shared understanding about what they are trying to achieve, not something imposed on the grantee. Doing something new can often be justified, but very often there is a fetish for newness that doesn’t serve the field. Foundations can be seen as capricious when exiting one field in order to address another. But at the same time, funders must respond to new realities and should not become “set in their ways.” Realistically, people get bored funding the same things all the time. But perhaps grant-makers should reconsider the common “three-years-and-out” time limit on funding the same organizations. It’s important to maintain a balance between stability and flexibility.
9:40-10:30 am: What Can We Learn About Our Impact?
Moderator: Gordon J. Campbell, President and CEO of United Way of New York City
Panelists:
- Peter York, Vice President and Director of Evaluation, TCC Group
- Lillian Rodríguez-López, President, Hispanic Federation
- Marilyn Gelber, Executive Director, Independence Community Foundation
Gordon J. Campbell was introduced by Patricia Swann, Program Officer of The New York Community Trust. In turn, he introduced the members of his panel.
Mr. Campbell observed that we are all concerned about how to get at the root causes of problems that we face. No one organization can do that alone. Building partnerships is an important way of creating measurable, lasting change.
Mr. Campbell posed three key questions to his panel:
- What leads to impact?
- How can we measure impact?
- What can we learn about impact?
Questions, responses, and panel discussions are summarized below.
Campbell: What can we learn about impact? What do we mean by that word?
York: It’s an important question. It matters “who” is defining it. In the for-profit sector, impact gets measured in short-term outcomes, and the focus is on consumer behavior. In the nonprofit sector, we talk about the community impact of the work that we do. The challenge for nonprofits is to measure the impact on the end users – the people that we serve. Nonprofits might benefit by directing evaluation inquiries at understanding more about their short-term impact.
Gelber: Impact is not a static measurement: impact is movement, impact is change, and impact is influence. If you really want to make change, there are a lot of sectors, such as government and business, which have a part in that. It’s interesting that nonprofits are often held to broad change-the-world goals when in fact the work they do will have immediate impact – but it’s also complicated because there are so many factors at work. Foundations and grantees should agree up front on what they expect an impact will be. Sometimes it is statistical, sometimes it’s about pushing for change in a broader way. For example: her foundation developed a long, multi-year relationship with Habitat New York City, with lots of money up front, in order to help the grantee create an urban Habitat model. As a result, the project created housing ownership opportunities in a way that didn’t exist before in New York City. You need both “street” and “cerebral” approaches to making an impact: you need to understand the big issues and the environmental context in which you are working, and agree up front about how you are going to evaluate that later on.
Rodríguez-López: The Hispanic Federation is a capacity builder, so people are constantly asking them about metrics. In her world, she has to look at scale and see what is distinctive about what organizations are doing that makes them special and different. She considers how budgets relate to outcomes and impact in their particular area of focus, the Hispanic community. For example, they recently launched a program on college prep and financing. Many people are already doing that work, but they are not reaching the Hispanic community. The differences are the language of service (all Spanish); client familiarity with and trust in the organization; and cultural appropriateness, so that less sophisticated clients won’t feel dumb. Successful impact might occur when a few parents in a room of only 30 parents suddenly realize that counseling is available for their children. When negotiating with an organization, they set up the baseline metrics at the beginning. You may not get a quantum leap, but you should be able to move the needle and see change. To count as impact, the change needs to be sustainable.
Campbell: More and more, funders want to see an in-depth, long-term kind of impact on end users. For example, the focus could be after school education, but what’s happening to the kids’ test scores? In terms of homelessness, not just was that family safe, but also what have you done to get them permanent housing and a job? In terms of seniors, not just meals, but also what’s happening from a health and safety perspective?
York: A simple, direct impact that leads to change might not be “modest”. There’s a names-on-buildings tendency in philanthropy, where people want to own an impact. But making genuine, effective change happen might take a lot of collaboration that is hard to credit to any one player. Many kinds of desirable social goods are affected by huge numbers of variables. We need to get closer to the point of intervention and look at smaller increments of change with our outcomes measurements, to understand what works and what can be replicated.
Gelber: At NYRAG, there is something called the City Connect Committee which is focused on bringing philanthropy and public policy closer together. “Modest success” with a nonprofit may provide an important learning moment, which could then open the door to advocacy and policy work. This is complicated stuff. When possible, funders and nonprofits should take what’s learned on the street, elevate it a bit, and apply it to efforts to make systemic change – that’s a great way to increase the impact of philanthropy.
Rodríguez-López: It’s really about scale. Funders can work with grantees to help them develop measurements and outcomes that are realistic, based on the project model, what the organization is attempting to do, the change it wants to make, and the nonprofit’s infrastructure. When funders and grantees do not have this conversation, the result can be a disconnect and problems.
Campbell: What should a nonprofit do when seeking funding for a project that will meet a serious need – but when it also knows it won’t get enough dollars to do an in-depth evaluation, and has questions about whether it can hit the target outcomes?
York: We need to get better at supporting the capacity to do “product research” – to borrow best practices from the for-profit side. We sometimes don’t like to unpack our programs: we just say they are good. Doing “product research” means measuring the outcomes that are achievable, making sure that the bar is set high enough. Do the “arrow work” between strategy and outcome: measure program quality and quantity in detail, and understand which program elements result in achievable, successful outcomes. Nonprofits should divorce themselves from the idea that their programs are unique. It is valuable to bring groups that are tackling common problems together to develop common metric systems. We really don’t spend enough time understanding what works: doing that means we need to ask better questions.
Gelber: Thinking about this specific situation of an organization being asked to perform based on standards for which there’s no money to measure: you can do some quantitative analysis, but you can also tell the story about what happened. It’s important to understand the context and environment, and you’ll need to agree with the funder on sensible measurements. Being able to tell a story effectively is especially critical for human service nonprofits.
Rodríguez-López: This is a conversation we need to have as a sector. It’s beginning: we have all been hearing much more from funders about end user outcomes. For example, it’s fairly easy to count the number of people who use a conference center, but much harder to understand the impact on people of having a conference center. It is a dialogue, and it will take some time to get this way of thinking and measuring into the culture of our sector.
Gelber: It’s very important to get your funders to see what you see, in some way. They need to visit and see what you are doing up close, so they can get a personal sense of the impact.
Campbell: How do we honestly assess when a program needs to be concluded? How can nonprofits take ownership of that decision?
Rodríguez-López: That’s a scary question for most people. People often want to exist in perpetuity. It’s hard to look at communities you are caring for and see how they have changed, grown, or shifted. Don’t just look back at infrastructure; step back and look at the big picture. Where can we have consolidations, mergers, collaborations, alliances? Some organizations never “reach scale” and in those cases, nonprofits may not be able to provide the quality of services that we’d all like to see.
Gelber: Employ fact-based philanthropy. You need to look at data continuously, look at issues, and consider problems that have not been addressed. Sometimes an organization has done its job - and so perhaps it is time for the funder to look somewhere else. Sometimes an organization has to change with changes in its landscape: the mission work may remain, but the nonprofit may need to change the focus of its efforts and adjust to shifts in its environment.
York: He is a fan of program service learning. No one program works or doesn’t work. It’s not at the program level that we should be asking whether to keep it or pull the plug. We have to be willing to dissect our programs – to look at the “elemental level.” What elements matter in getting us to achievable impact? For example, we can ask: if we did lose funding, where would we cut? Ideally, it should not be a decision to cut the program, especially if the need for services is great. Instead, ask: what are the core elements that are absolutely necessary to make this work? – and use the answers to guide you in your next steps.
Campbell: Why rely on nonprofits to measure impact? Of course they’ll be biased. Why not create mechanisms to ask the end users who is effective? (audience question)
Gelber: She likes the idea of asking end users for feedback. At the same time, there is no substitute for asking a nonprofit that deeply knows its work to evaluate that work. It doesn’t have to be an “either-or” choice.
York: The question is – who do you get data from? In his work, they get information from the users. We’re all flawed in our assumptions and interpretations of cause and effect. Nonprofits need to “own the space” of developing and getting metrics so that they can influence what questions are asked and answered. There is a movement to look at the nonprofit sector as a community investment model; nonprofits need to be empowered to enter that discussion, and to be in a position to respond to this “investment” movement with data and information.
Rodríguez-López: The misconception is that nonprofits aren’t looking at assessment data – we are. But funders need to pay for the cost of measurement, and work with nonprofits on asking the right questions. Measurement is all very good, but it costs money and adds to the bottom line of a budget. Most groups don’t have budgets for that for every program that they run.
Campbell: How do funders measure incremental progress in changing public policy? What are or what should be the methods? (audience question)
York: It is a challenge to evaluate the policy win. There are so many variables; it is very difficult to gauge impact. We can look at who we are trying to change, and ask the question - at the immediate effect level - who are we changing? You can measure shifts in attitudes and thinking in specific, key target audiences that can have an impact on policy. But you must also acknowledge that there are many other forces involved in achieving a policy win.
Gelber: Sometimes you think you have an impact – and the ultimate result is not what you thought it would be. You might get a policy win – and that win could turn out to be ineffective in terms of achieving the goal that motivated the policy. The lesson is: you have to keep working at the unglamorous implementation, and follow through on the initiatives that a policy win sets in motion.
Rodríguez-López: What if your policy work is focused simply on having a voice? There’s not always a measurable outcome if your goal is to ensure that you have a presence in the policy mix, and a say in matters that affect you. It’s easy to know if you succeeded if your goal is to kill a bad bill. But the Federation looks at the “small” impact continuously. Metrics can be different at different times. It’s about asking: are we funding core elements and building things which are sustainable?
10:40-11:30 am: What’s Next? Planning for Future Impact
Moderator: Maria Mottola, Executive Director of the New York Foundation
Panelists:
- Jack Krauskopf, Director of the Center for Nonprofit Strategy and Management, Baruch College School of Public Affairs and Distinguished Lecturer
- Barbara Blumenthal, Director of Applied Research and Senior Consultant, Community Resource Exchange
- Carolyn McLaughlin, Executive Director, the Citizens Advice Bureau
Fred Fields, Senior Director-Strengthening NYC Nonprofits from the United Way of New York City, observed that “sustainability” was a major theme underpinning the day’s discussions – meaning the ability to sustain impactful efforts over time. He then introduced Maria Mottola, Executive Director of the New York Foundation, as moderator of the second panel.
Maria Mottola began by remarking that she was moderating the panel of the “future”, focused on where we are going. She compared her panelists to characters from the old television show, “Star Trek”. Like those TV characters, they are “explorers” who often encounter crises where they must quickly evaluate the situation and reach answers.
- Barbara Blumenthal is like “Scotty” – she is like an engineer who diagnoses, evaluates, and fixes problems.
- Jack Krauskopf is like “Mr. Spock” – he uses scientific and academic expertise to address complex situations.
- Carolyn McLaughlin is like “Captain Kirk.” She keeps her eye on what’s happening and guides her team, so that they can fulfill their mission.
Moderator questions, panelist responses, and discussions are summarized below.
Mottola: Gara LaMarche brought up the fact that organizations often have to change their course, and you can’t always anticipate what things will affect your work. How do you build that into impact evaluation decisions? How has your organization had to change course in ways you could not anticipate?
McLaughlin: When welfare reform was passed, it was clear that the nature of her organization’s work would change. They turned towards workforce development work in the South Bronx very consciously. They also set up a family childcare network. Her nonprofit is a settlement house and provides services in many areas. This complicates evaluation, since they are not just focusing on just one area. They do a lot of work in the employment area, they work with homeless people, they have after school programs, they have early childhood programs, and they work with people who have AIDS. In terms of employment: they look at fairly well-established measures such as job placements, a profile of who gets placed in jobs (Are they the long-term unemployed? Welfare recipients? Immigrants?) and retention rates. Children’s programs can be much harder to assess. Sometimes there is no consensus in the field about what to measure. Different experts may point to a variety of factors, such as: children’s educational needs; general literacy versus test scores; music and art enrichment programs that kids can’t get in school; and so on. Funders in the area may also have slightly different areas of emphasis that don’t always fully match what the organization wants to do.
Mottola: How do you see changes coming and build in evaluation tools to help you meet those changes?
Krauskopf: To pick up the advocacy point, there is a need for nonprofits to have more of a say in what gets measured, both with private funders and with government. Some organizations are freer to speak out about such issues and to be advocates, when they do not get government support. Nonprofits that do get government funding often feel they must tread more carefully. There is a legitimacy that comes with being a service provider when you advocate. You know what happens in the community; you can see impact in a very intimate way. Different groups should try to determine how they can affect government public policy, and influence choices about what gets measured, depending on their different perspectives about changes that they see.
Blumenthal: She prefers to talk about nonprofit excellence; it feels better than “accountability.” Evaluation should help nonprofits create excellence in their work and also help make it visible to the outside. She wears a research hat sometimes, and works on the ground with many organizations, so she brings two different perspectives. CRE clients have different “areas of excellence.” Frequently, nonprofits get stuck in similar ways, or have similar holes in management capabilities; these can be barriers to excellence. She asks these questions:
- Do they know about high quality research that already exists in their program areas? Many nonprofits don’t.
- Do they have information systems and databases that can give them reports they can actually use? Most nonprofits don’t. People often have a ton of data, but not necessarily in a usable form. Often, it is sitting in a filing cabinet or in Excel files. Frequently this data is not compiled or shared with the staff or management team.
Learning as an organization how to use data is the biggest obstacle: how do front-line staff and senior managers create the “data-driven” organization? Lack of information systems has been a big stumbling block. Now, she is seeing a growing demand from organizations for affordable data management software. There are new, flexible outcome-tracking and evaluation applications that can be customized for $10,000-$20,000 and be up and running in a few months – instead of many years from now, after the world has changed yet again.
Excellence is not just about being data-driven. That’s a piece of it: good data is necessary for performance management, communicating internally, being clear about goals, and sticking to priorities – a range of things groups can do better when they have good data in their hands. This levels the playing field. Until now, the funder perception of excellence was weighted toward larger groups, because those organizations had access to good data management tools. Ideally, when small community-based organizations also have good tools, funders will be more encouraged to support them, and will be able to see excellence in small groups more transparently.
Mottola: How do you deal with conflicting requests for different kinds of data and reporting from private funders and government agencies? And how do you get staff buy-in to the data gathering and measurement process?
McLaughlin: Her organization has to maintain a lot of databases. They have 8 or 10 different ones that funders want them to use – with some exceptions, mostly government funders. Sometimes you have to hire someone to cope with these special tracking needs. Staff will do record keeping if they must do it. What often happens is that you get reports back later from the government, telling you how you are doing with comparisons, ratings, or other feedback. It’s hard to proactively use that data yourself. They have been talking about whether they should try to raise the computer skills of program and department directors to improve their management of data for internal purposes – or add a position – and how do you fund that? Very difficult problems can arise: for example, one funder had three different computer systems within a short time period, and the data needed for these systems was not the same. Staff can feel extremely burdened by such things. They do monthly reports that go up through the hierarchy of the agency. A lot of the time they are forced to measure against funder-defined contractual obligations rather than goals they have set themselves. There’s not one uniform database. They aren’t “that brave.” They bought an expensive program, customized it, and it turned out not to be as useful as they hoped. It’s challenging. They are now finishing a new strategic plan: the focus is on their own infrastructure to support growth and they want to get measurements on infrastructure, not just on programs: for example, they want to be able to assess HR issues such as the value of training, as well as technology.
Mottola: Who gets to shape the discourse around what gets evaluated? Foundations are very pluralistic, as Gara said – every foundation is very different, and government agencies ask for different things. But nonprofits are often more like one another than foundations are. How can different nonprofits work together to educate foundations and government agencies about how they want their work to be measured? What are the opportunities?
Krauskopf: This is a great question. We’re clearly at a point where performance measurement is a part of the life of any nonprofit, whether you have private funding or a government contract. Baruch did a survey of Executive Directors of human service agencies and asked what they thought about performance measurement. It is generally viewed as a good thing, but also as costly and risky compared to the perceived rewards. Much about performance measurement still needs to be perfected. Nonprofits need to play a role in determining the indicators that will measure their work and influence how they get paid against achievement targets. He is not sure what the right forum for this would be. It’s tough for nonprofits that have to respond to so many different measures and maintain multiple systems. The problem needs to be high on the agenda for funders and government agencies.
Mottola: Foundations could do a better job of making public how they evaluate their programs. These are currently very internal documents, but some information might become more transparent.
Blumenthal: There’s a particular challenge with multiple funder-required databases. Ideally you have your own database, which should be used for evaluative learning. Is there a connection between these two things? The technology is there – you can sometimes export data from one database into another, if the systems facilitate it. Then the staff only has to enter the data once.
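As a minimal sketch of that export-once idea – assuming a simple internal record set and two hypothetical funder file layouts (all field names, file names, and data below are invented for illustration, not any particular vendor’s system):

```python
import csv

# Hypothetical internal records, entered once by program staff.
clients = [
    {"client_id": "A101", "service": "job placement", "outcome": "placed"},
    {"client_id": "A102", "service": "job placement", "outcome": "in training"},
]

# Each funder's required file maps internal field names to its own headings,
# so data entered once can be re-exported in every required layout.
FUNDER_LAYOUTS = {
    "funder_a_report.csv": {"client_id": "Participant ID", "outcome": "Result"},
    "funder_b_report.csv": {"client_id": "Case Number", "service": "Program", "outcome": "Status"},
}

for filename, mapping in FUNDER_LAYOUTS.items():
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(mapping.values()))
        writer.writeheader()
        for record in clients:
            # Re-label each internal field with the heading this funder expects.
            writer.writerow({mapping[k]: record[k] for k in mapping})
```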
Mottola: We’re at a convening organized by the BBB, which runs NYPAS and has a charity rating system. How does that affect the way you look at evaluation and the way donors look at charities? (audience question)
McLaughlin: It’s not been a hard standard for us to meet. They are reasonable guidelines.
Krauskopf: He drew attention to an item in the attendee packet: a research paper from Baruch Professor Greg Chen, on whether BBB standards relate to fundraising success. Professor Chen’s finding was that nonprofits which do better on BBB standards are also generally more successful with their fundraising, which is good news. He noted that evaluations and performance measurements require administrative people and infrastructure. He cautioned “standard-setters” about having absolute standards for the amount of administrative support that nonprofits need. In his opinion, nonprofits probably have less infrastructure than they need to operate in today’s world.
Mottola: Some foundations, like Edna McConnell Clark and Robin Hood, have models of working in a partner role with organizations and giving a higher level of grant. How does that model of support which requires a lot of evaluation up front affect the way we will view success in the future? (audience question)
McLaughlin: When the funder brings in an evaluator, pays for it, and shares the results – that model works out fine. The staff provides the data but you’re not trying to manage the whole process. Or the funder may not put in an evaluation team, but they look at the amount of money that you are bringing in to your clients, the people that use your program, and use that as a measure of how successful it is. That’s fine too. It’s sometimes a challenge to get all the needed data.
Blumenthal: What’s the value of doing a particular kind of evaluation? It’s helpful to frame it through stories. If you get data on your program – what can you tell from that data about how your program is working? For example, for a school program, over time, has attendance gone down on Tuesdays? If yes, then what’s happening on Tuesdays? But there are limits to organizational self-evaluation. One group found that staff training was more effective on a rolling basis, rather than as day-long activities throughout the year. They learned this by comparing their data with other organizations. Large-sample comparative studies that are longitudinal and have control groups are very important to improvement across many groups in a field. We can all learn from each other.
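As an illustration of the kind of question a group can answer from its own records – a minimal sketch, where the attendance figures and dates are invented:

```python
from collections import defaultdict
from datetime import date

# Invented attendance log: (session date, number of participants).
attendance_log = [
    (date(2008, 1, 7), 24),   # a Monday
    (date(2008, 1, 8), 15),   # a Tuesday
    (date(2008, 1, 14), 26),  # a Monday
    (date(2008, 1, 15), 12),  # a Tuesday
]

# Group headcounts by weekday to see, for example, how Tuesdays compare.
by_weekday = defaultdict(list)
for session_date, headcount in attendance_log:
    by_weekday[session_date.strftime("%A")].append(headcount)

for weekday, counts in by_weekday.items():
    print(f"{weekday}: average attendance {sum(counts) / len(counts):.1f}")
```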
Mottola: What role does the board play in helping to shape the way a nonprofit’s programs are evaluated? (audience question)
McLaughlin: Her organization’s board is pushing them to set goals and track against them regularly for programs and in broader areas. The senior staff have really bought into that process, so the board has been helpful in encouraging this.
Mottola: With startups, it’s often important to help the board understand their role as evaluators as well as ambassadors for the work.
Blumenthal: Board members often ask good questions that the agency has not thought of, so the staff must scramble to answer them. In a perfect world, why not just do that for yourselves, so you can say “yeah, we thought about that” and be prepared?
Mottola: In a way, evaluation is a bit like breathing. You just need to do it – thinking about it too much can get in the way of breathing. Evaluating our work is a part of what we do all day. We don’t want thinking about it to paralyze us.
An audience member asked what role Maria Mottola, as a funder, might play in her Star Trek model of her panel. Who are the funders? In conclusion, Ms. Mottola responded that funders are the “aliens” that nonprofit leaders are visiting. Just like on Star Trek, it’s difficult to figure out whether the aliens will be friendly or hostile when you beam down. And the variety of different kinds of “aliens” in the foundation world is incredible, too, just as it was on the old television program.
Claire Rosenzweig thanked the participants and invited them to complete and return evaluation forms. Attendees took a short break for lunch, and then participated in several workshop sessions.
12:00-1:00 pm: Workshop Sessions
Workshop I:
Accountability News: The New 990 and Sarbanes Oxley Tips
Presenter: Julie Floch, Director of Not-for-Profit Services, Eisner LLP
Julie Floch began her presentation by highlighting concepts from the Sarbanes Oxley Act of 2002, a federal law that regulates commercial organizations and which was enacted in response to corporate scandals. She then discussed how key concepts from the Sarbanes Oxley Act have been incorporated by the IRS into the newly redesigned IRS Form 990 for nonprofits.
Sarbanes Oxley concepts
The Sarbanes Oxley Act regulates governance in commercial organizations - public company “issuers” as opposed to “non-issuers”. Nonprofits are in the “non-issuer” world. Major business scandals led to this legislation some years ago. When you look at the corporate scandal stories, you find that governance broke down somewhere in the firm. Sarbanes Oxley attempts to correct that problem, with the assumption that a well-governed business is far less likely to get into difficulty. Some highlights of the Sarbanes Oxley Act requirements include:
- An audit committee. It should be composed of people who understand financial statements and are financially literate.
- Conflict of interest policy. The policy defines a conflict of interest and sets out procedures for disclosing and managing it. If someone is in management or on a board of directors, but at the same time has a relationship with others that could put them in conflict with the entity he or she is governing, that should be disclosed. Having a conflict of interest doesn’t necessarily make people bad or unable to serve, but those persons should recuse themselves from any decisions that could be perceived as enriching themselves improperly at the expense of the organization they govern.
- Code of ethics. A formal code of ethics sets out the values that the company believes in and says what it will do to promote ethics in its business culture.
- Document destruction and retention policy. There should be formal policies in the organization that determine how records are kept, and when and how they are destroyed. Questions a business should ask and answer in a policy: What do we keep? How do we keep it? Who makes the decisions, and who carries out the decisions?
- Whistleblower policy. This type of policy establishes protections for people who uncover and report fraud. Whistleblowers are the main way that fraud is discovered.
- Independent board members. An independent board member gets no remuneration from the organization – no compensation. The theory is that uncompensated directors will act in an organization’s best interest rather than their own, if they stand to gain nothing from its operations.
- Compliance with regulators. The law requires regulatory compliance, but now there is pressure to demonstrate that compliance has happened.
- Strong financial oversight by board. In the wake of many public scandals that happened because boards failed in this responsibility, there is renewed interest in ensuring that board members carry out this responsibility faithfully and well.
- Internal controls. Internal controls have to do with how information is processed in an organization, and they affect “who does what.” Controls may include segregation of duties, so that different people perform different functions and there is oversight of how things work. If a single person assumes too much responsibility or has too much control, that is considered dangerous and can lead to concerns. Commercial organizations are now required to undergo audits of their internal controls.
Redesigned Form 990:
The new 990 form clearly shows what the IRS regards as important best practices for the nonprofit sector. It applies to calendar year 2008, for filing in 2009. It consists of an 11-page core form, with 16 pages of supplemental schedules, and represents a complete overhaul of the 990. When the IRS embarked on the Form 990 redesign process, it recognized that the existing form was not sufficient for everybody’s needs. The Form 990 was launched in 1942 at only two pages. As questions and concerns about nonprofits arose in the marketplace, the form changed in response over the years. It is now a document that goes well beyond just capturing compliance with the tax code. The new Form 990 is intended to meet these IRS goals:
- Enhance nonprofit compliance with the tax code.
- Promote accountability and transparency; other people use the Form 990 besides IRS, for many purposes. The IRS wants users to be able to understand what organizations are reporting about themselves on the form.
- Reduce the reporting burden and make the form simpler to use. (Some nonprofits would question whether the new form will reduce their reporting burden.)
There are many questions on the new Form 990 that are not driven by the tax code. They are driven by things that people in the marketplace want to know about issues like governance. The Sarbanes-Oxley inspired belief is that a better governed organization leads to a better complying organization.
Nonprofits wanted to have a place on the form to provide explanations. For this reason, the newest incarnation of the Form 990 provides a supplemental Schedule O, as a place where organizations can provide more details about their answers to questions. Julie Floch provided some highlights of the new Form 990. Many of the new features seem to be directly inspired by the Sarbanes Oxley Act.
Page 1 – this is a snapshot of the key facts that the IRS thinks an organization would want to communicate about itself. For example: it asks how many voting members are on the governing body, how many of the governing members are “independent” (meaning they accept no compensation), and how many volunteers an organization has.
Page 3 – this is a checklist of required schedules. One highlight is question 12. It asks: did the organization receive an audited financial statement for this tax year, prepared according to GAAP? This is a brand new question. There’s no federal requirement in this area; however, there may be state-level requirements. In New York, if you have revenues above $250,000 and raise money from the public, an audited financial statement is required.
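That New York rule reduces to a simple check. A minimal sketch of the rule as stated in the session (thresholds can change over time, so this is illustrative only, not legal or accounting guidance):

```python
def ny_audit_required(annual_revenue: float, raises_from_public: bool) -> bool:
    """Illustrative restatement of the New York rule described above:
    an audited financial statement is required when revenues exceed
    $250,000 and the organization raises money from the public."""
    return annual_revenue > 250_000 and raises_from_public

# Example: $400,000 in revenue plus public fundraising -> True.
print(ny_audit_required(400_000, True))
```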
Other highlights: the form asks whether the organization made a loan, provided a grant, or gave other assistance to an employee or director. The question uncovers whether people personally benefited by virtue of their position in the organization. If they did, it doesn’t necessarily make them bad people, but the IRS wants to know more about it. Nonprofits might want to provide more details about this kind of situation in Schedule O.
Compliance questions in the Form 990 ask whether your organization is complying with other (nontax related) federal requirements.
Governance: Of all the changes, this is the most controversial. There are three sections to this: governing body management, policies around governance, and disclosures. These also are not tax-code driven.
For example, the new Form 990 asks: did the organization become aware during the year of a material diversion of assets (e.g., fraud)? Nobody knows yet what “material” means. Now, if you become aware of this, you must tell the IRS in the Form 990. Since this is a public document, that means you are required to tell everybody. If you answer yes to the question, it could actually be good – it might mean that your organization has good controls, found a problem, and rectified it. And if you check no, that doesn’t mean you have good controls. Both answers might need some kind of explanation. You can provide details and elaborate about internal controls on Schedule O.
Do your board, and any committees with authority to act on its behalf, document meetings contemporaneously in writing? “Contemporaneously” means within the later of the next meeting or 60 days after the action was taken. The IRS view is that organizations which make clear decisions, and which document their decisions, will be governed better.
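A worked sketch of that timing rule, with invented dates for illustration:

```python
from datetime import date, timedelta

def documentation_deadline(action_date: date, next_meeting: date) -> date:
    """Deadline for 'contemporaneous' minutes as described above:
    the later of the next meeting or 60 days after the action."""
    return max(next_meeting, action_date + timedelta(days=60))

# Example: action taken March 1, 2009; next meeting April 15, 2009.
# Sixty days after March 1 falls on April 30, so April 30 is the deadline.
print(documentation_deadline(date(2009, 3, 1), date(2009, 4, 15)))
```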
Was a draft Form 990 provided to the nonprofit’s governing body before it was filed? This is a big issue. It could be difficult and time-consuming to go through the document elaborately with board members.
The new Form 990 asks about your organization’s policies. Again, much of this is lifted straight out of Sarbanes Oxley. Questions to consider: in addition to having a policy – do you regularly monitor and enforce it? What do you do to follow the policy? Disclosure of conflict may well be good, but then you will have to explain it. It is dangerous to have a policy that you then don’t follow.
Whistleblower policy – goes to the issue of capturing fraud.
Document retention and destruction policy - how do you keep your records?
Compensation policy - how is compensation decided? Is there a review and approval of compensation by independent persons? Do you use comparability data and contemporaneous substantiation to set compensation for the CEO, top management official, other officers, or key employees? Describe the process.
Disclosures – what information do you disclose and how is it made available to the public?
Contact information – The Form asks you to identify the person keeping the books and records. The address provided can be your business address.
Independent board members - defined as receiving zero direct personal benefit as a result of service.
Audit committee or committee responsible for financial statements – right out of Sarbanes Oxley. This question asks the nonprofit to reveal how its finances are monitored.
Schedule D – this schedule asks you to reconcile the information presented in your tax filing with information in financial statements prepared according to GAAP. The rules for preparing the two might be different, so the organization is asked to explain any differences here.
There are many other new things in the Form 990. The main gist is this: nonprofits should be thinking about the new questions and considering how they will portray their governance to the IRS, potential donors, and the public through this new form. The danger of simply answering yes or no to the new questions is that “yes” might not always be the “right answer.” Sometimes “no” is the appropriate answer. For either answer, you might need to provide more details. Schedule O gives you a way to explain your answers.
Question: What if you have a June 30 fiscal year end? When do you file the new 990?
If your organization has a June 30 fiscal year end, for example, then you would use the new Form 990 for the fiscal year ending June 30, 2009.
Question: Responding to all this could be cost-prohibitive in terms of time and labor, especially for very small organizations.
The IRS position has been that nonprofit organizations should get used to being asked these kinds of questions.
Question: Will the BBB be expanding its standards to address any of the things in the new Form 990?
If you look at BBB charity standards and other standards, a lot of these things are out there already as tenets of good governance. [BBB note: transparency, ethical practices, and adherence to applicable laws are all BBB “recommended practices” in addition to the BBB charity accountability standards.]
Question: What gives IRS the authority to ask these questions on its new Form 990?
The IRS says – if we don’t ask these questions, who will? Just remember, you can choose how you answer the questions, but your answers will be made public.
Workshop II:
How Small Nonprofits Can Demonstrate Effectiveness to Funders
Leader: Patricia Swann, Program Officer, The New York Community Trust
Workshop attendees came from organizations with a wide range of budgets, from less than $250,000 to over $2 million per year.
Pat Swann began by observing that small organizations have a number of advantages. The small nonprofit is usually less hierarchical, more personal, and it’s often easier to communicate effectively. This can help to facilitate implementation of evaluation processes and make it simpler to get feedback for evaluations. Scaling up is not always good for an organization. There are different ways to extend an organization’s impact which don’t involve scaling up: for example, you can partner with other organizations.
Smaller organizations can sometimes be more nimble. If a small organization discovers problems with an evaluation methodology it will probably be easier for that group to make a quick change than it would be for a much larger nonprofit.
The group discussed the cost of technology. Some participants said that the cost of obtaining technology is burdensome, and noted that small organizations frequently do not have on-site technology support help.
Attendees remarked that smaller nonprofits frequently have staff burnout problems. Staffs are small. Their people have to do many different jobs and handle huge workloads. Some participants noted that it is motivating for staff to talk about why they are doing what they do – it helps bring heart into the work. Nonprofit executive directors of small organizations can and should encourage staff members to demonstrate leadership.
The culture of the organization matters: several attendees pointed out that it’s important to develop staff camaraderie, to build staff capabilities. It also helps to create a culture of learning that invites staff to find consensus about why they are evaluating their work. When staff members perceive measurement tasks as being in their self-interest, you can get better staff buy-in for evaluation processes.
One attendee from a domestic violence shelter said that it can be quite hard to get certain types of outcome information from clients of this kind of service. When people leave the shelter, understandably, they want to move on. Even so, this group organized a meeting with directors and program directors at the shelter level, to try to understand better what outcomes they wanted to see and which outcomes they could truly measure. This was a positive experience.
Pat Swann asked participants to comment about whether or not it’s true that smaller nonprofits are closer to the end user than larger groups – and so better able to track impacts on their clients.
Attendees responded with a range of answers. Some reported that they do think their small organizations get close to users. Others remarked that many small nonprofits are still stuck in the same trap of activity counts without a focus on results. Someone has to ask the hard question: what are we accomplishing?
It was noted that some organizations are so small, and have such a need for dollars, that it is truly difficult for them to cope with burdensome reporting requirements. Some groups are not quite sure how to think about outcomes.
Attendees discussed the idea of “participatory evaluation,” in which stakeholders provide feedback and input about program impact and goals. Some of the organizations in the room do this form of evaluation, and a few reported that it was beneficial for them. Participatory evaluation can be a way of promoting staff learning. You can get data this way – but it’s not always easy to analyze and interpret your own data. It’s valuable to draw on data and research findings from other groups. This can help you determine realistic indicators for change that would work as assessment points for your organization.
It’s not always easy to get money from funders for evaluations. One participant commented that opportunities to discuss data and findings with foundations are not necessarily common. Often you have exchanges with funders about results measurements only when the grant is up for renewal. Participants in the workshop echoed comments made in the earlier panel discussions about how different funders may require very different kinds of reporting. There’s a need for greater standardization.
Barbara Blumenthal of CRE suggested that attendees take a look at a useful publication, “Good Stories Aren’t Enough: Becoming Outcomes-Driven in Workforce Development” by Martha Miles. This publication can be viewed at: www.ppv.org/ppv/publications/assets/203_publication.pdf
Attendees remarked that foundation program officers who have worked in small nonprofits are better at understanding the challenges that these groups face – they “get it.” Smaller nonprofits need help and time to go through the process of identifying their own realistic goals and outcome measurements. Funders can do much to make this process easier for small nonprofits.
Workshop III:
When Measurement Is a Challenge: Tips on Evaluating Hard to Quantify Activities
Leader: Yvonne Moore, Executive Director, Daphne Foundation
Yvonne Moore opened the workshop by observing that there is more than one way to evaluate. Nonprofits need to try to “unpack” their programs, and learn from other groups as well. Competition for charitable dollars is keener than ever, so there are plenty of incentives to do effectiveness assessments. You need to commit time to evaluate your organization – and you’ll need to know what elements you want to evaluate. Don’t be shy about asking grant-makers to give you funds for evaluation purposes. It’s all about committing to excellence and strengthening your programs.
Nonprofit leaders should try to align executive director, board, staff, and funder expectations with the organization’s mission and evaluation plan, and these expectations should be communicated clearly to all these stakeholders. Take a look at the capacity of your organization to do evaluation now. Do you have skilled staff or other people who can take it on? Will you be able to stick with an evaluation plan, once you’ve mapped it out, to monitor and maintain data collection procedures?
Attendees mentioned a number of evaluation challenges. Many nonprofits find it hard to demonstrate impact due to the type of services provided, or the difficulty of establishing cause-and-effect links between programs and ultimate end users. It is hard for nonprofits to secure funding from grant-makers for infrastructure and administration, and likewise difficult to make decisions about scaling the organization for the greatest impact. Groups with multiple funders struggle to reconcile the different reporting requirements those funders may have, and want to avoid duplicating measurements of the same things. There are few guideposts to help charities choose among evaluation goals, tools, and data collection processes – and when doing evaluation involves making a big investment, mistakes can be costly.
Participants discussed the evaluation needs of advocacy organizations. It was suggested that organizations working in this area look at published research and try to establish baseline data as a first step. It’s also important to talk with the funder and come to an understanding about what success looks like. It was observed that advocacy work is about building relationships to get to the next level. Groups should track both big and small accomplishments in this process; this helps to create an evaluation culture.
Attendees discussed the evaluation needs of an organization that manages community green spaces. It was suggested that the group first think clearly about what they want to accomplish for themselves, and then figure out how to show impact against those goals. Published research about the advantages of green space might help build their case. They could survey the availability of healthy food in the neighborhoods where their green spaces are used to grow vegetables. They might partner with sympathetic groups to build volunteer strength. They could also assess what might happen to the land, community, and volunteers if their group were not there providing services: this could demonstrate the depth of community benefit provided by the group to its clients.
For smaller organizations: while outside evaluation consultants can provide valuable insights, groups that can’t afford them can do many of the same evaluation tasks for themselves. Staff can develop questions for investigation and low-cost ways to pursue answers. Surveys and interviews are valuable tools. Stories and anecdotes about impact can be helpful too. Groups should keep in mind that impact should be viewed relative to context: for example, if a former juvenile delinquent starts to attend and enjoy an arts program on a regular basis, that attendance may represent a significant change for that person. Impact needs to be kept in perspective.
A small, volunteer-driven organization in the audience asked how to evaluate and motivate volunteers. It was recommended that nonprofits create formal job descriptions for their volunteers, screen recruits carefully before investing a lot of time to train them, and hold them accountable, much as they would paid employees. Volunteers need good boundaries and standards for achievement, too – the standards should simply be realistic and appropriate for them.
Another attendee’s organization offers recreational programs to seniors; it can quantify activities fairly easily, but outcomes are harder to measure. It was suggested that the group investigate the impact on end users if its program were not there. For example, in the absence of recreational and social opportunities, would the seniors experience increased depression or other health issues?
The group discussed how nonprofit executive directors should be evaluated. It was suggested that they evaluate themselves formally, and use those evaluations as a tool for discussions with board members. Areas to evaluate could include: how is the Executive Director leading the mission forward? Does the Executive Director have enough resources to achieve goals and targets? What is he or she doing to build the organization’s capacity?