Category: Impact Data

  • ChatGPT can evaluate complex public comments


    Like everyone else, I have been captivated by the capabilities of ChatGPT, and I have been trying to think through its potential uses in my work.  For controversial development projects or public policies, we often hold public hearings at which hundreds of people provide comments. Even with a transcript of the comments, it is often frustratingly difficult to DO ANYTHING with them because there is simply too much text.

    Sometimes some poor intern is given the task of going through all the comments and creating a summary.  I wanted to see if OpenAI's GPT-3.5 model could do this job.

    To test the idea, I used public comments collected by BART during a public meeting to select a development team for a housing project at the North Berkeley BART station.  BART had two potential development teams present to the public and then asked members of the public to complete an online survey. Each person rated the teams from 1 to 5 on several factors and could also provide richer feedback in an open-ended text box. The scoring results were immediately useful, but the detailed text feedback is much harder to make use of.

    The combination of numeric scores and text created an opportunity to test the AI's ability to evaluate sentiment from open-ended text (which might not be so conveniently accompanied by scores in other contexts).

    Following a methodology and very simple code published on Twitter by Shubhro Saha, I set up a Google Sheet with BART's published data linked to the OpenAI API (text-davinci-003).  The spreadsheet has a column with the text comments from users, one per row. I created a new column that used Saha's code to feed each user's comment to GPT and return a 25-word summary.  The result was stunning.
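    Saha's published snippet is a Google Apps Script custom function; a rough Python equivalent of the same call, using OpenAI's legacy completions endpoint, might look like the sketch below (the prompt wording is my reconstruction, not Saha's exact text):

    ```python
    import json
    import urllib.request

    API_URL = "https://api.openai.com/v1/completions"  # legacy completions endpoint

    def build_prompt(comment: str) -> str:
        """Wrap one public comment in a summarization instruction."""
        return ("Summarize the following public comment in 25 words or less:\n\n"
                f"{comment}\n\nSummary:")

    def summarize(comment: str, api_key: str) -> str:
        """Send one comment to text-davinci-003 and return the model's summary."""
        body = json.dumps({
            "model": "text-davinci-003",
            "prompt": build_prompt(comment),
            "max_tokens": 60,
        }).encode()
        req = urllib.request.Request(
            API_URL,
            data=body,
            headers={"Authorization": f"Bearer {api_key}",
                     "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["text"].strip()
    ```

    In the spreadsheet version, each row's comment cell is simply passed through a function like `summarize` and the result lands in the adjacent column.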

    [Image: https://streetleveladvisors.com/wp-content/uploads/sites/4/2022/12/Screen-Shot-2022-12-17-at-4.10.09-PM.png]

    OpenAI's GPT-3.5 model illustrates the potential for AI to review and analyze open-ended public comments, including the ability to make potentially complex judgments about their content.

    [Image: https://streetleveladvisors.com/wp-content/uploads/sites/4/2022/12/Picture2.png]

    With some exceptions, the results were both more accurate and more readable than what is generally produced when humans are assigned this task. And even when humans can do it accurately, it is very hard and time-consuming work. OpenAI performed the task instantly and at a cost well under $1.  The pricing could change, but I don't see a reason to think it would.

    The glaring exception was that when a person provided no text comment (i.e., where the comment field was blank), the AI simply made up a summary of an imaginary comment. These summaries were very consistent with the kinds of things that people typically say in meetings like this, but they were NOT based on things that real people said in this particular forum.  While this seems like weird behavior, it is a well-understood aspect of large language models like ChatGPT.  The AI is not really answering questions; it is predicting the most likely text that would complete a query. When we feed it a public comment, the most likely summary is one that fairly accurately matches that comment.  But when we feed it nothing, the most likely summary seems to be based on typical comments rather than any one actual comment. In the absence of real data, the model bullshits! And it does so very convincingly.  Someone reading the summaries and ignoring the full comments would never guess that these blank-line summaries were false.

    It is easy enough to fix this specific problem by not asking for summaries when the comment field is blank, but it points to a much more pervasive reliability challenge. This is why OpenAI says the AI should not yet be used for tasks that really matter: there is simply no way to know when it is bullshitting us.
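    The guard itself is trivial; a minimal sketch of the skip-blanks rule (the placeholder text is my own choice):

    ```python
    def summarize_all(comments, summarize):
        """Summarize each comment, but never ask the model about a blank
        field; `summarize` is whatever function actually calls the model."""
        results = []
        for comment in comments:
            if comment is None or not comment.strip():
                results.append("(no comment provided)")   # skip the model entirely
            else:
                results.append(summarize(comment))
        return results
    ```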

    Nonetheless, I think the summaries produced from the actual comments are valuable enough to be genuinely helpful right now. A summary with some degree of unsupported embellishment is much better than no summary of this important feedback at all.

    But the tool appears capable of much more than simple summarization.  In addition to summarizing, I asked OpenAI to consider whether each comment was positive, negative, or neutral, and again it did an impressive job.  The model accurately picks up on subtle clues about whether a comment is supportive or critical. Similarly, it was able to provide separate summaries of positive vs. negative comments. If a commenter mentioned both pros and cons of a development team's proposal, the model was able to pull them apart.  This enables us to craft a master summary that highlights the range of positive comments and the range of concerns the community raised, without manually reading each one.  It also seems like we could easily create an index of which comments referenced which aspects of the team or their proposal without relying on keywords.  For example, we could ask whether a comment referenced building height or density and expect it to flag comments that mention how many stories a building is, even if they don't use the specific words "height" or "density."
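    The sentiment question can ride on the same kind of completion call. A sketch of the prompt plus a defensive parse of the model's free-text reply (the wording is my own, not the exact prompt from this experiment):

    ```python
    def sentiment_prompt(comment: str) -> str:
        """Ask the model to judge one comment's tone."""
        return (f"Public comment:\n{comment}\n\n"
                "Is this comment positive, negative, or neutral? "
                "Answer with one word.")

    def normalize_label(raw: str) -> str:
        """Map the model's free-text reply onto one of three buckets,
        flagging anything else for human review."""
        text = raw.strip().lower()
        for label in ("positive", "negative", "neutral"):
            if label in text:
                return label
        return "unclear"
    ```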

    [Image: https://streetleveladvisors.com/wp-content/uploads/sites/4/2022/12/Picture3.png]

    In some ways the most impressive thing about this experiment was that I was able to query the AI in plain English. The script I downloaded simply takes any input text, feeds it to OpenAI, and returns the text that the GPT-3.5 model provides.  I didn't have to learn any coding or scripting language or read any API manual. I didn't have to carefully construct queries in some obscure format. I just fed it the user's comment, added "is this a positive comment?", and it returned what was nearly always the right answer.

    To push the system, I went a couple of steps further until I finally felt I was exceeding its capacity.  For example, ChatGPT knows what a NIMBY is.  It was able to identify comments that stressed concern for parking or neighborhood character as "NIMBY" comments, whether the commenter was supporting or criticizing the proposed development team. I provided no definition of "NIMBY," but when I asked why it thought comments were NIMBY, its answers were often convincing.  Similarly, it knows what a YIMBY is. It accurately categorized comments that were enthusiastic about more building, more density, and less parking as YIMBY comments without any training from me.

    As a side note, I recognize that the term "NIMBY" is used pejoratively to delegitimize concerns about neighborhood character.  I would prefer a different term for this analysis, but in this context there really is a strong split in the community between neighbors who are concerned about negative impacts on their own quality of life and others who support more intensive development as a way to address the broader housing shortage, in spite of possible impacts on immediate neighbors.  My personal opinion is that certain aspects of neighborhood character are worth preserving, and standing up for your neighborhood can be important, even though I recognize that 'neighborhood character' is so often used as code for a desire to exclude on the basis of race and class. The AI model lumps racist NIMBYism together with more enlightened neighborhood concerns, because this is how the term is most widely used online. It seems like it would be easy to give the model more nuanced, less loaded terms, but I didn't take the time to do that for this experiment.

    But whatever terminology we use, this is the defining conflict around the North Berkeley BART station and so many other development projects.  If the AI can sort the comments into two buckets, one for those who express concerns about impacts on current neighbors and one for those who express support for more building generally, it would be informative to see how each development team scored within each grouping. BART published the overall average scores for both teams, and the team that was ultimately selected was the top scorer (though obviously many factors beyond these public surveys led to that choice).  But did the YIMBYs and NIMBYs score the two teams differently?


    Unfortunately, this is where I think we exceed the capacity of the current tool.  For comments with a clear YIMBY or NIMBY character, the AI was able to reliably identify that leaning and provide an accurate account of why each comment was one or the other.  The problem came with comments that had no clear leaning (the majority). Here, as with the blank comments, in the absence of clear evidence the AI model simply makes up answers.  It characterizes a positive comment about the team's inclusion of a homeless shelter operator as a NIMBY comment because the commenter desires to improve their neighborhood by housing the homeless!  Submit the same comment again and the result flips: now it is YIMBY because the commenter supports the team proposing a development project and therefore must be pro-building. Because these large language models are based on probabilities, when the answer is not clear the model rolls the dice to pick one.  Ask again and it rolls again.

    The result is that, while I think the YIMBY/NIMBY analysis is tantalizing and shows the potential of the technology to open up very meaningful analysis of open-ended public comments, I don't think it can be relied upon for this today. Perhaps more training (of either the AI or of me, the user) would enable a more reliable result.  There may be ways to ask the model to evaluate these comments that would help it ignore the ones that are unclear and focus only on those with clear agendas.
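    One way to operationalize that last idea: give the model an explicit escape hatch and parse its reply defensively. The prompt wording below is my own suggestion, not something tested in this experiment; setting `temperature` to 0 on the API call can also reduce, though not eliminate, run-to-run flipping.

    ```python
    LABELS = ("NIMBY", "YIMBY", "UNCLEAR")

    def leaning_prompt(comment: str) -> str:
        """Prompt with an explicit escape hatch so the model is not
        forced to roll the dice on ambiguous comments."""
        return (f"Public comment:\n{comment}\n\n"
                "Classify this comment as NIMBY, YIMBY, or UNCLEAR. Answer "
                "UNCLEAR unless the comment clearly expresses concern about "
                "impacts on current neighbors (NIMBY) or clear support for "
                "more housing generally (YIMBY). Answer with one word.")

    def parse_leaning(raw: str) -> str:
        """Map the model's reply onto one of the three labels,
        defaulting to UNCLEAR rather than guessing."""
        reply = raw.strip().upper()
        for label in LABELS:
            if label in reply:
                return label
        return "UNCLEAR"
    ```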

    In spite of the limitations, I think the technology shows incredible promise for unlocking the very real value that is often lost in detailed public comments. Everyone seems to agree that public engagement and input are important but, in part because these comments are so hard to digest, public agencies often find themselves undertaking complex, expensive, and time-consuming engagement efforts that result in enormous files of essentially unreadable data.  AI that can understand what people are trying to say can consolidate that information and transform it into a format that can more effectively influence public decision making.

    Even with the current technology, it seems entirely practical to build a surveybot that would ask people for open-ended comments on a development project, summarize those comments, and show the summaries back to each user, giving them a chance to correct any misunderstanding before submitting.  People have grown accustomed to the idea that their comments on surveys are mostly ignored, but a survey that could highlight the most common themes among people's open comments would be very valuable to policymakers.


  • Comparing Shared Equity Resale Formulas


    This general-purpose educational tool was designed to help community leaders understand the relative performance of different shared equity resale formulas. Much of what sets one model apart from another depends on the assumptions you make about interest rates, home price inflation, and income growth. This tool allows a side-by-side comparison of several models, and lets you change these input assumptions and immediately see changes in the relative performance of each model in terms of both ongoing affordability and equity building for homeowners.

    The tool is intended to help policy makers and community members to evaluate questions like:
    · When housing costs are rising rapidly, which approach preserves affordability best?
    · Which approach provides the greatest asset building opportunity in the face of rising interest rates?
    · If incomes grow more slowly than we expect, which approaches will be most impacted?

    You can make the analysis more relevant to your local conditions by customizing a number of background assumptions, such as the cost of building a new affordable unit, the level of subsidy available, and the monthly housing costs that homeowners will face.

    The latest version of the tool is an interactive Excel file.  It includes 8 commonly used shared equity resale formulas and 5 custom models that can be modified to match existing or proposed local program designs. The Excel version also allows the user to save up to 5 alternative economic scenarios to understand how the formulas perform under different potential futures (e.g., rising interest rates, falling home prices).  The tool is locked so that it is safe for inexperienced users to play with alternatives, but it is not password protected, so power users can make small or large modifications. The Excel file is released under an open source license which allows free sharing and modification.
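    As a rough illustration of the kind of side-by-side comparison the tool automates, here are two simplified resale formulas (both the formulas and the numbers are illustrative sketches, not the tool's actual implementation):

    ```python
    def fixed_rate_resale(initial_price: float, years: int,
                          annual_rate: float = 0.015) -> float:
        """Fixed-rate formula: the resale price may rise a set
        percentage each year, regardless of the market."""
        return initial_price * (1 + annual_rate) ** years

    def appraisal_share_resale(initial_price: float, initial_value: float,
                               market_value: float, share: float = 0.25) -> float:
        """Appraisal-based formula: the seller keeps a fixed share of
        the appreciation in appraised market value."""
        return initial_price + share * (market_value - initial_value)

    # $200,000 affordable price on a home appraised at $300,000; ten
    # years later the home appraises at $450,000.
    fixed = fixed_rate_resale(200_000, 10)                       # ~$232,000
    shared = appraisal_share_resale(200_000, 300_000, 450_000)   # $237,500
    ```

    Change the market-appreciation assumption and the ranking of the two formulas can flip, which is exactly the sensitivity the tool is built to expose.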

    Download the Excel file here.


    The Resale Comparison Calculator allows side-by-side comparison of the most common types of shared equity resale formulas, showing how well they preserve affordability for future buyers as well as how they perform in building wealth for homeowners.


  • The Most Interesting Things from Thursday’s Housing Forum at City Hall


    From The Stranger

    Posted on Mon, Feb 17, 2014 at 4:23 PM


    Last week, Dominic urged you to attend a forum organized by the city council around affordable housing in Seattle. Why did he want you to go hang out at City Hall and watch PowerPoints? Because the affordability of housing, and how to better achieve it, is one of the most hotly debated topics in the city. And you know why: Because if you're a renter, or a prospective home-buyer, and you make less than the median income (around $60,000 a year for a single-person household), you may have noticed recently that shelter is expensive as all hell, and only getting expensiver.

    But the things that really stood out most in my mind from the housing forum were not part of any PowerPoint. They were a couple of offhand comments by a consultant, Rick Jacobus:

    • First, he mentioned that data shows that mixed-income neighborhoods are good for everyone—both the higher- and lower-income people who live in them. Which is an important reminder for people who keep arguing that the only solution is to just have developers keep building whatever and wherever they want, without much restriction, and let the market take care of it—meaning let the centrally located, amenity-filled neighborhoods with expensive land prices house the rich, while the poor and middle-class are pushed out into outlying, less-accessible, transit-starved neighborhoods where land prices are cheap.

    I have a message for y’all market-solutions-only-forever people: Your city sounds terrible.

    • Second, someone asked Jacobus about the inherent conflict between affordable housing requirements and density. If you’re not a housing/land-use nerd, this is basically a fight between well-intentioned density activists, who say that adding more housing will drive prices down (they sometimes sound just like the market-will-solve-everything people I mentioned above), and well-intentioned affordable-housing activists, who say you should straight-up require developers to build some moderately-priced housing while they’re also building fancy-schmancy units for the rich. He answered carefully, saying that while studying Seattle’s housing issues, he heard that argument a lot. But, he continued, you don’t hear that argument anywhere else. In other cities, he said, people who fight for affordable housing requirements and people who fight for density are on the same side, and the developers use the fact that they’ll be paying for affordable housing as a way to sell density to wary residents.

    Seattle, it would seem that we keep having entirely the wrong conversation here.

    Way wonkier stuff coming soon, but for now, I leave you with one more important thing I learned: Eating a banh mi in the back of a conference room and wearing fleece don’t mix. (Crumbly sandwich + fleece = CRUMB MONSTER.) Hot tip, y’all! Don’t forget.

  • Salesforce Foundation: 3 ways to make the case for Tech Funding


    From the Salesforce.com Foundation Blog

    For-profit businesses are routinely able to raise significant capital in the expectation that a new technology will create higher profits over the long term. Nonprofits, by definition, can't make this same promise and therefore find it much harder to raise the kind of money necessary to invest in transformative technology.

    But the technology itself holds the same promise to totally transform everything that nonprofits do – it is just taking us much longer to realize that promise. We know how to sell donors on delivering services and even on changing policy, but we have always had a harder time convincing people to fund institutional capacity, and technology is essentially a new kind of organizational capacity that now competes with everything else for scarce resources.

    When we are raising money for tech, we need to make the case that the investment will pay for itself in one of three ways: by lowering costs, by raising revenue, or by increasing our social impact. Sometimes our projects will offer all three benefits.

    1. Lower Costs

    In many ways, nonprofits are no different from other businesses: many technology investments will simply allow us to do what we do for less money over time. While this increased efficiency can make organizations more sustainable, this category may be the hardest to get donors excited about because it may not directly translate to observable differences in our services.

    Making the case for this kind of investment involves calculating a payback period – the period of time over which an investment in technology will pay for itself. Be careful not to assume that these savings last forever, though. Every technology has a useful life and more innovative technologies often become outdated quickly.
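    The payback arithmetic can be sketched in a few lines (the numbers and the useful-life cutoff rule are illustrative):

    ```python
    from typing import Optional

    def payback_period_years(upfront_cost: float, annual_savings: float,
                             useful_life_years: float) -> Optional[float]:
        """Years until cumulative savings cover the upfront cost, or
        None if the technology reaches the end of its useful life first."""
        if annual_savings <= 0:
            return None
        years = upfront_cost / annual_savings
        return years if years <= useful_life_years else None

    # A $30,000 system saving $10,000/year with a 5-year useful life
    # pays for itself in 3 years; with only a 2-year life it never does.
    ```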

    2. Increase Revenue

    Technology that helps organizations build stronger connections or more effectively communicate with their donors can drive real increases in fundraising. Similarly, technology that helps organizations do a better job of capturing the social impact that they are having (whether through formal measurement and statistical metrics or simply human stories) can increase revenues enough to easily justify their costs.

    Making the case for this kind of investment is also a matter of calculating the payback period, but here that is much harder to do because it is harder to predict the impact on revenue. So instead, turn the math around and calculate the annual increase in fundraising that would be necessary to 'break even' on the investment over the expected life of the technology. Then help funders see how easy it would be to exceed that level.
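    Turning the math around, the break-even calculation is just as simple (discounting is ignored for clarity; the figures are illustrative):

    ```python
    def breakeven_annual_increase(total_cost: float,
                                  expected_life_years: float) -> float:
        """Annual fundraising lift needed to cover a technology
        investment over its expected life (no discounting, for simplicity)."""
        return total_cost / expected_life_years

    # A $50,000 donor-database project with a 5-year expected life
    # breaks even if it lifts fundraising by $10,000 per year.
    ```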

    3. Multiply Impact

    While there are plenty of examples where technology investment leads to long-term cost savings or revenue improvement for nonprofits, we can't always expect that. In many other situations we see the potential of technology to make a difference in our work, but we know that the technology will increase our ongoing costs, not lower them. Too often we back away from these opportunities – we try to do more with less when we should be doing more with more!

    A 2010 survey found that, while 95% of nonprofit leaders consider IT to be critical to their finance and accounting activities, less than half said IT was critical to their service delivery and programs and only 26% said it was critical to their public education and advocacy.

    Making the case for investments that increase impact is much harder. Just as start-up entrepreneurs have to convince investors that a given technology is likely to create radical new business opportunities, social entrepreneurs have to convince donors that new technologies have the potential to radically transform our social change work. But because we are not likely to find one 'Angel Investor' who will make a very large bet on the technology, we also have to show how relatively modest incremental investment can gradually unlock the potential of the technology and create change that is more than simply incremental.

    One of the reservations that funders have about funding capacity building of any kind is that these investments can be a black box – when money is being spent on something other than service delivery, it is harder to know whether it is being spent on the right things. If we want to avoid the nonprofit starvation cycle, we have to shine light into that black box and help funders see the inner workings, so they can understand why the specific technology investments we are pursuing can help us do more of the good they are looking to us to do in the world.

  • Pay for Success: Overcoming Information Asymmetry


    From the blog of the National Council on Crime and Delinquency
    June 2, 2014 | by Rick Jacobus, Director of Strategy and F.B. Heron Foundation Joint Practice Fellow at CoopMetrics
    If you read much of the recent flurry of writing about Pay for Success, you will notice a regular pattern where authors acknowledge that widespread implementation will require “better data” and then quickly change the subject. Surely better data is on the way. We live in an age where it is easy to take this kind of inexorable progress for granted, but given the level of enthusiasm for Pay for Success, it is worth considering what it will realistically cost to get good enough data.

    Certainly the whole potential of Pay for Success rests on data. In order to offer strong financial incentives for success, a government agency must be able to know that their private partner has succeeded. And measuring the “success” of a social program is notoriously hard. We all know it when we see it, but it is not simple to write out a clear and unchanging definition for any given program. A youth employment program cannot simply be judged by the number of youth who get jobs—we need to say something about the quality of those jobs, the level of challenge facing the youth who enter the program, the local economy’s strength, etc.

    This is an example of what economists call information asymmetry. George Akerlof, who won the Nobel Prize for his work on information asymmetry, wrote a paper in 1970 about the market for used cars. Some used cars are in great shape and others are what Dr. Akerlof called “lemons”: they look fine but have been poorly maintained or have other hidden problems. Sellers know which kind of car they have, but buyers cannot immediately tell which is which. Sellers of above-average cars generally have to settle for an average price, and buyers have to risk paying an average price for a below-average car. A key point is that buyers can partially overcome this asymmetry by investing in information about a potential car; they can hire a mechanic to examine it. But there is also a limit—a simple inspection might weed out the worst cars, but the difference in value between an average and an above-average car may not be enough to justify a more complete inspection.
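    The arithmetic behind the lemons result is simple enough to sketch. The numbers below are hypothetical, chosen only to illustrate the mechanism Akerlof described; none of them come from his paper.

    ```python
    # Hypothetical numbers illustrating Akerlof's "lemons" result.
    # Half the used cars are lemons worth $4,000 and half are good cars
    # worth $10,000, but buyers cannot tell which is which up front.

    lemon_value = 4_000
    good_value = 10_000
    share_lemons = 0.5

    # A risk-neutral buyer will only pay the expected value of a random car.
    expected_price = share_lemons * lemon_value + (1 - share_lemons) * good_value
    print(f"Buyers offer at most ${expected_price:,.0f}")  # $7,000

    # If owners of good cars value them above that price, they withdraw
    # from the market, leaving mostly lemons behind.
    good_owner_reserve = 8_000
    good_cars_stay = expected_price >= good_owner_reserve
    print("Good cars stay on the market:", good_cars_stay)  # False
    ```

    In this toy market, the average price is below what good cars are worth to their owners, so the above-average sellers exit—exactly the dynamic a mechanic’s inspection (partially) counteracts.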

    Information asymmetry has historically been one reason that we have created nonprofit organizations. Take childcare: a childcare provider knows whether it is providing quality care or not, but it is difficult for parents to tell the difference. It would be very easy for an unscrupulous operator to boost profits by cutting important corners. It is not that they have more incentive to cut corners than someone who makes toothpaste, but because the parents who pay are not the day-to-day users of the service, it is easier to hide the cost-cutting. Organizing a childcare center as a not-for-profit organization does not overcome the information asymmetry, but it does accommodate it by reassuring parents that at least the center does not have any incentive to provide low-quality care.

    Like parents, philanthropic donors are not present daily to see whether an organization is doing everything it can to make the most difference. Instead, they have to settle for knowing that the groups to which they give are trying to make a difference and do not have a profit motive to cut corners. The downside of this approach is that donors, like used car buyers, may sometimes have to accept average performance.

    Rather than accommodating information asymmetry, Pay for Success tries to overcome it. This is like pushing water uphill—it can be done, but you have to invest energy to do it. The very idea of Pay for Success requires a significant investment in information. In place of a government agency directly funding a social service agency and accepting average performance, a social impact bond (SIB) requires several layers of intermediaries and generally two levels of professional evaluation: an evaluator who works directly with the program to measure impact and an independent assessor who reviews the data on behalf of the government agency.

    McKinsey & Company developed a pro forma to analyze the financial benefits of a hypothetical SIB focused on juvenile justice [1]. They found that even if an SIB-backed intervention produced significant savings for government agencies, the SIB structure was far more costly than directly funding the same services. In their model, a $14.4 million direct investment in preventive services would save the government $14.4 million in corrections costs over a period of about eight years. A successful SIB that funded the same $14.4 million program would incur an additional $5.7 million in research and administrative costs, success fees, and investor profits, and McKinsey & Company estimated that it would therefore take 12 rather than eight years before the public savings justified the increased cost.
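    A back-of-the-envelope check shows how the extra $5.7 million stretches the payback period. Only the dollar figures and the eight-year direct payback come from the McKinsey model cited above; the assumption that savings accrue linearly over time is a simplification of mine.

    ```python
    # Back-of-the-envelope payback comparison: direct funding vs. an SIB,
    # using the figures cited from the McKinsey pro forma. Assumes the
    # corrections savings accrue linearly over time (a simplification).

    import math

    program_cost = 14.4e6        # cost of the preventive services
    public_savings = 14.4e6      # corrections savings over the payback period
    direct_payback_years = 8     # McKinsey's estimate for direct funding

    annual_savings = public_savings / direct_payback_years  # $1.8M per year

    sib_overhead = 5.7e6         # evaluation, administration, fees, investor profits
    sib_total_cost = program_cost + sib_overhead            # $20.1M

    sib_payback_years = math.ceil(sib_total_cost / annual_savings)
    print(f"Annual savings: ${annual_savings / 1e6:.1f}M")
    print(f"SIB payback: ~{sib_payback_years} years vs. "
          f"{direct_payback_years} for direct funding")  # ~12 vs. 8
    ```

    Under these assumptions the same $1.8 million per year of savings has to cover $20.1 million instead of $14.4 million, which pushes the payback from eight years to about twelve—consistent with the McKinsey estimate.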

    This extra cost sets the bar pretty high for the performance gains that the SIB must deliver. Improvements in information technology will continue to make investments in overcoming information asymmetry practical in more and more situations, but when the cost of collecting data is taken into account, the social problems that lend themselves to an SIB will be harder to find than they would be if perfect information were free. And even once we have found them all, there will still be many important social problems that are worthy of public investment.

    If we want to confront some of our most complex social challenges, we have to come to terms with the reality that a significant level of information asymmetry is a fact of life and we cannot wish it away by calling for better data. For some social problems, sizable investment in information may make it practical to offer financial incentives to the best-performing programs. For the rest, we do not have to give up on using data to drive improved performance, but sometimes it might be more cost-effective to focus on raising the performance of the average program instead of providing financial incentives for above-average performance.

    [1] McKinsey & Company. (2012). From potential to action: Bringing social impact bonds to the US. Retrieved from http://www.rockefellerfoundation.org/news/publications/from-potential-action-bringing-social


© Copyright 2012 – 2025 Street Level Advisors LLC  |  All Rights Reserved