Why doesn’t Charity Navigator care about impacts?

A couple of months ago, we talked about inputs, activities, outputs, and outcomes: the closer you can get to measuring whether you actually help people, the better.

Charity Navigator measures the amount spent on activities.  So according to them, you would be better off spending $20 to help 10 people than $10 to help 20 people.  As we talked about yesterday, this biases against the use of volunteers: because they are not paid, any services they deliver don’t count.

To be fair, there’s usually a correlation between amount spent and outcomes (that’s why it’s so unfortunate that Charity Navigator biases against larger organizations, as we discussed on Monday).  However, when that goes off the rails, Charity Navigator is the last to know.

Take Cancer Fund of America.  Here’s Charity Navigator’s three-star rating of them as of 2013, buoyed by four stars in accountability and transparency and very strong growth in program expenses.  At the time, it was rated higher than the two stars given to a clear slouch in the fight against cancer: the American Cancer Society.

You may recognize Cancer Fund of America from their profile in America’s Worst Charities, a project of the Tampa Bay Times and the Center for Investigative Reporting, or from the 2013 CIR report.  Or from the lawsuit in which the FTC, all 50 states, and DC called them a sham.  Or from them shutting their doors earlier this year.

Or you might recognize them from Charity Navigator’s blog, where they talk about Cancer Fund of America being a sham.  When giving donors advice on how to avoid a scam, they say:

“Take the time to research the charity’s finances, governance practices and results. You’ll find much of this analysis, for free, at Charity Navigator.”

As a watchdog, it’s one thing to let the sheep get eaten.  It’s another to recommend the wolf to the sheep.  And when you both recommend the wolf to the sheep and use that recommendation as an example of sage sheep-protecting advice, congratulations — you’ve reached Charity Navigator level.

To be fair, a lot of people missed this for a lot of years.  But Charity Navigator continued to rate Cancer Fund of America highly well after it was seen to be a scam, because it looked only at program expenses and didn’t care at all what those program expenses were or whether they were having an impact.

And, with the release of CN 2.1, it still doesn’t.

You might say that everyone makes mistakes.  That certainly is true.  But most responsible people admit their mistakes, apologize for them, and change to avoid them in the future.  Charity Navigator held itself out as a paragon on the very charity it missed and made no changes.

But, you probably are saying, it’s really hard to measure impacts.  Like really, really hard.  Especially for an outside organization.  And I would agree with that.  Charity Navigator is trying to police the entire nonprofit world with six analysts — it’s not easy at all.

But that misses two important points.  First, it is one thing to give advice that is unhelpful; it’s another to give advice that is actively destructive.  Charity Navigator:

  • Suppresses organization size (and impact) by ignoring the costs of scale
  • Ignores the ways a nonprofit can have an impact beyond its initial online constituency by eschewing joint cost allocation
  • Rewards hoarding money
  • Advises against giving to nonprofits who need it most
  • Advises against giving to nonprofits who are more efficient with their program expenses

I can forgive challenges in trying to tackle the challenging nonprofit world.  But your motto can’t be “when in doubt, do harm” in a reverse Hippocratic oath.  At the point that Charity Navigator says “don’t give your money to the losers at the American Cancer Society — go with this scam instead,” then crows about it rather than apologizing to the people it deceived, it is an actively negative force.  Maybe that will change with new leadership, but as I mentioned on Monday, I’ve taken too many runs at Lucy’s football to easily trust in another.

And second, others are doing this far better.  Last month, GuideStar released its platinum ratings, which are the result of a concerted effort to look at the actual impacts of nonprofits.  Causes are able to talk about their impact in ways that are relevant to them and their donors, and to compare their impact with that of other organizations.

It’s still a work in progress, having launched only last month, but it’s already far better than anything from CN, because it tries to quantify the change nonprofits make in the world.  Additionally, a GuideStar profile has everything that’s in a Charity Navigator profile, financials included, but without the misleading guidance.

I urge all nonprofit leaders to take a look at what is required to get platinum here.  

There’s also Great Nonprofits, which allows constituents to review the impact of a nonprofit.  This might have sounded odd 25 years ago, but by now, all of us have bought something on Amazon or tried a restaurant based on its Yelp reviews.

These efforts are important because donors deserve to know what impact they are having and to be protected from scams.  They are crawling through the desert looking for that knowledge.  And when they find that the oasis is actually a mirage, they end up trying to drink the sand that is Charity Navigator.

So let’s give out water, as clear and pure as we can make it.


Why does Charity Navigator advocate hoarding?

Another part of Charity Navigator’s financial rating system is to have reserves for a rainy day.  In order to get a top rating from Charity Navigator, an organization must have 12 months of operating expenses in the bank (depending on the sector: lower for food banks and relief organizations; higher for foundations, libraries, parks, etc.).
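As a rough sketch of the metric in play (my formalization for illustration, not necessarily CN’s exact formula):

```python
# Reserves expressed as months of operating expenses on hand.
# My formalization for illustration -- not necessarily CN's exact formula.

def months_of_reserves(working_capital: float, annual_operating_expenses: float) -> float:
    """How many months of operating expenses current reserves would cover."""
    return 12 * working_capital / annual_operating_expenses

print(months_of_reserves(1_200_000, 1_200_000))    # 12.0 months: top marks
print(months_of_reserves(12_000_000, 1_200_000))   # 120.0 months: ten years -- still top marks
```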

I fully agree that nonprofits should build reserves.  There are natural ebbs and flows of both revenues and expenses in the nonprofit world; usually one is ebbing while the other is flowing.  Additionally, funders sometimes have demands that are off-mission, counterproductive, or political.  You need a bank of money so that you can say no to quick bucks at the long-term expense of your mission.  And you want to be able to take advantage of unforeseen opportunities as well as preparing for unforeseen difficulties.

Because it has a basis in need, this may be CN’s most helpful financial metric, which is much like the Taller Than Mickey Rooney Award: you can still be very short and win.

Remember, Charity Navigator purports to be a guide to where you should give your money as a donor.  And there are two problems with it as guidance to donors.

First, it rewards the hoarding of money.  There literally is no upper limit on how much you can have in reserves and still max out your reserves rating.  In fact, they brag about this on their site:

“Givers should know that other independent evaluators of charities tend not to measure a charity’s capacity. Indeed, charities that maintain large reserves of assets or working capital are occasionally penalized by other evaluators. In our view, a charity’s financial capacity is just as important as its financial efficiency. By showing growth and stability, charities demonstrate greater fiscal responsibility, not less, for those are the charities that will be more capable of pursuing short- and long-term results for every dollar they receive from givers.” (here)

The simple question is: would you rather fund an organization that has ten months of expenses in the bank, or ten years?

My inclination, as I would think most carbon-based lifeforms’ inclinations would be, is the ten-month organization.  If you have ten years of reserves in the bank, you are almost certainly not funding worthy projects and not investing in your infrastructure or growth.

BBB asks that you cap your reserves at three years to make sure you aren’t doing this — a better practice, and one that CN is actively avoiding.  Or, as ACEVO, the Charity Finance Group, and the Institute of Fundraising put it well:

“Charities need to justify their reserves. Holding a high level of cover for risks and unforeseen events appears sensible, but is this right if worthwhile projects are going unfunded? Charity funds are meant to be spent; therefore charities should be able to provide solid, considered justification for keeping funds back as reserves and not spending them.”

Nonprofits do not exist for the purpose of existing.  They exist to solve a societal problem.  If you have too much in reserves, you are privileging your existence over your mission.  And to that I say:

[Image: the “shame” bell scene from Game of Thrones]

Second, it advises against giving to those who may need it most.  Let’s say there are three nonprofits, each with substantial (say, 15 months’ worth of) reserves.  Disaster strikes: a grant that makes up more than half of each of their budgets dries up.  Thankfully, they have prepared for this rainy day and, after consulting with their boards, they each take action:

  • Nonprofit 1 has a policy that they will not go below 12 months of reserves, per Charity Navigator’s advice.  They spend some from their reserves, but have to make small cuts in program and large ones in infrastructure to keep their bank balance solid.
  • Nonprofit 2 also believes strongly in a solid reserve, but not at the expense of their programmatic activities.  They dip into their reserves, taking them below the 12-month mark (knowing they will be penalized even more if they cut program activity), and prop up their program activities to a stable level.  However, they make no move to upgrade their fundraising abilities to replace their revenues.
  • Nonprofit 3’s board says that moments like this are the reason you have reserves.  They do what it takes to replace their program expenses and invest in ways to increase their unrestricted giving to replace the missing funds.

Charity Navigator ranks these organizations 1, then 2, then 3.  As a donor, Nonprofit 3 is the only one I’d want to give to — the only one that cares enough about its programs to sustain them in the short term and to work to salvage them for the long term.

At the point where you are advising people to give to nonprofits in exactly the wrong order, you probably need to rethink your financial metrics and do more than rearrange the deck chairs.

But that’s not the worst part.  Not only does Charity Navigator advise against giving to those who need it most; it advises against giving to those who have the most impact (or at the very least, is impact agnostic).  We’ll talk about that more tomorrow.


Why Charity Navigator ignores standard accounting practices

Or, perhaps this would be better titled “why does Charity Navigator ignore standard accounting practices?”.

For those blissfully unfamiliar with nonprofit accounting regulations, allow me to burden you for a moment.  When a nonprofit combines fundraising and program activities in one communication, it is required to allocate part of the cost to fundraising and part to program expense.  A nonprofit can do this only if it meets three criteria:

  • Purpose: does the program part of the expense benefit the mission and societal good?
  • Audience: is the communication going to an audience that needs that communication for the societal benefit?
  • Content: is the content genuinely of value to the mission?

So it is not something done lightly.  But when Charity Navigator does its rankings, it takes out joint cost allocation.

Geoff Peters, in his excellent post “Can’t we replace Charity Navigator,” puts it thusly:

“Charities are required to follow GAAP Accounting rules and FASB standards in order to receive a clean audit opinion.  Yet when they follow those rules, Charity Navigator reverses the joint cost allocation which is required under audit standards and restates (misstates) the finances of the charity.  Then the media picks up on that misstatement and publishes it as if it were true.  This results in damage to the charity’s reputation and is based on false information but since the media accurately quoted Charity Navigator’s misstatement, they cannot be sued.”

The idea that Charity Navigator thinks it knows better than the IRS, BBB, FASB, GAAP, and many other acronyms is laughable hubris, the type that Greek myths punish with your liver being continually eaten and regrown.

But sometimes the vocal minority has a point.  So let’s look at whether joint cost allocation makes sense.

Charity Navigator says:

“We believe that donors are not generally aware of this accounting technique and that they would not embrace it if they knew a charity was employing it, nor does Charity Navigator. Therefore, as an advisor and advocate for donors, when we see charities using this technique we factor out the joint costs allocated to program expenses and add them to fundraising.  The exceptions to this policy are determined based on a review of the 990 and the charity’s website (in some cases we review data provided to us from the charity directly).  We analyze these items to see if the organization’s mission includes a significant education/advocacy program or other type of program that would directly be associated with joint costs.  If that is the case, we inspect in further detail the charity’s expenses in regards to those specific programs.”

There are several reasons this logic is spurious:

  • First, is it just me or does this say, in essence, “as an advocate for donors, we think they are stupid.  We think they can’t figure out that if they get a mailing, someone has to pay for it.”?
  • The fact that some people aren’t aware of something is a reason to educate the recipients of data, not to judge the providers of it.  For example, I believe not many people know how poor Charity Navigator’s ratings are, so I’m dedicating a week of posts to it.
  • As we stated earlier, in order to count something as a program expense, a nonprofit has to establish that it is an essential part of their mission.  What Charity Navigator is saying by backing it out is that they think that almost all nonprofits are lying about this.
  • In order to ferret out those few CN doesn’t believe are lying, they will analyze the mission and the communications to figure out whether there is a legitimate purpose.  Call me crazy, but I think the nonprofit is probably a better judge of this.

This last point is especially true when you consider that CN has six program analysts and rates over 7,500 charities.  With a little math, you can figure out that they spend about 90 minutes analyzing any one nonprofit, assuming no bathroom breaks or meetings.  Considering that a nonprofit’s staff, board, and audit committee spend a bit more time on this than CN does, and that it’s then certified by the federal government (not a strong argument, but one nonetheless), it appears that CN should take the proverbial long walk off a short pier rather than refiguring every nonprofit’s mission for it.
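For those who want to check the little math (the 2,000-hour work year is my assumption):

```python
# Back-of-the-envelope check on CN's analyst capacity.
# The 2,000-hour work year (50 weeks x 40 hours) is an assumption.
analysts = 6
charities = 7500
hours_per_analyst_year = 2000

total_hours = analysts * hours_per_analyst_year        # 12,000 analyst-hours
minutes_per_charity = total_hours * 60 / charities     # 96 minutes
print(f"About {minutes_per_charity:.0f} minutes per charity per year")
```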

It also goes to the question of what a world looks like in which you don’t do this type of allocation.  Let’s say you want to educate people about an important piece of legislation or ways to screen for prostate cancer.  How should you reach these people?

Yes, email is great; that will help you preach to the converted.  How do you reach people who don’t believe in what you do?  Earned media is great.  But if you are failing to recall all of those stories on your local news about how to screen for prostate cancer, it’s probably because they didn’t exist.  Not everyone has a sexy issue.

Truth be told, more people hear about more missions through their mailboxes and phones than through any other means.

And those are expensive, so they have asks attached to them.  It is natural that all of the fundraising parts of these asks should be considered fundraising.  But we are dependent on people knowing about and acting on our issues in a variety of ways.  And that which is program expense should be program expense.

CN was sincere when they said they didn’t think that mail had a place with nonprofits.  They could not be more wrong.  Choking off joint allocated media doesn’t just strangle fundraising; it also throttles mission.

And thus, their guidance is better off ignored, or better yet repudiated.


Meet the new Charity Navigator. Same as the old Charity Navigator…

Last week, Charity Navigator released its new 2.1 rating system after reassessing its financial ratings.  This was an approximation of my reaction:

[Reaction image]

I’d heard about Charity Navigator 3.0 and thought that maybe they would finally start focusing on nonprofit results.  But then they tried to use their lack of expertise to judge the logic models of experts in the field.  I argued then as now that I would trust the American Heart Association on how to prevent heart disease more than Charity Navigator, just as I would trust the doctor or nurse in the ER more than an intern at the hospital’s accounting firm.

I was hopeful when this effort was euthanized.  And I’d hoped that new leadership and time would change things.  But then their new leadership said (in essence) that mail may not make sense for nonprofits (story here).  As if we need less money to go to noble causes, not more.

Then they surveyed nonprofit leaders about the effectiveness of their metrics.  I gladly participated, telling them their most and least helpful metrics (answer: they were all tied for least helpful, in that they are not just unhelpful but counterproductive).  Again I hoped for change.

What we saw on the first was a couple of tweaks: a manicure for a patient with a sucking chest wound.

This is frustrating, but I believe in the idea of giving insight into nonprofits for those who want it.  Like companies, people, or governments, there are good nonprofits and great nonprofits and scam nonprofits and blah nonprofits.

And I like that nonprofits are stepping up to do this.  When governments have to get involved, we too often see a cleaver used instead of a scalpel.

Charity Navigator’s own accountability and transparency metrics are very strong and helpful (other than the misguided view of what a privacy policy is).  When you don’t have things like an independent board, systems to review CEO compensation, regular audits, and so on, there’s a good chance you might be a scam (or very young as an organization).

But Charity Navigator continues to prop up the overhead myth, as described here.  While this is the most grievous sin, it is by no means the only one.  Thus, this week, we’ll take a look at why you should not only ignore the Charity Navigator financial metrics, but actively do the opposite.

With the new Charity Navigator ratings, program expenses are scored on a rating scale instead of using the raw value.  This focuses even more on the fallacious overhead rate, giving greater emphasis to differences among nonprofits.  In my post on the overhead myth, I talked about how a focus on overhead generally will prevent a nonprofit from making the investments needed to grow.  Now, let’s look at a specific case of how focusing on fundraising expenses hurts growth: that of diminishing marginal returns.

Perhaps, like mine, your econ class was at 8 AM, so let me explain with a thought experiment.  Let’s say you were going to do a mailing to only one person.  You’d clearly pick out the best possible donor to send to — the person who gives you a significant donation every single time.

Now, let’s say you found an extra couple of quarters in your couch and wanted to mail a second person.  You’d find another person who is almost as good as the first — maybe they give a significant donation 99% of the time.

Let’s repeat this 10,000 times.  Now you are getting into people who are either less likely to donate or likely to give smaller gifts.  Your revenue per piece mailed will still be very good — the mailing will more than pay for itself by a wide margin — but not as good as with that first person.

Now repeat 100,000 times.  The potential donors are getting even more marginal here.  But your cost per piece has barely gone down.

This is diminishing marginal returns in action.  As you try to reach more people and grow, your outreach becomes more probabilistic and less profitable.  

But it’s still profitable. (In the real world, you would hopefully be looking at this along the donor axis rather than the piece axis, asking if each piece added to lifetime value, but let’s not gum up the thought experiment).
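Here’s a toy version of that thought experiment in code; the decay curve and the $0.60 cost per piece are invented numbers, just to show the shape of the economics:

```python
# Toy model of diminishing marginal returns in acquisition mail.
# All numbers are invented for illustration.
COST_PER_PIECE = 0.60

def expected_value(rank: int) -> float:
    """Expected lifetime value from mailing the rank-th best prospect
    (a made-up decay curve: each prospect is slightly worse than the last)."""
    return 25.0 * (0.99998 ** rank)

def cumulative_net(pieces: int) -> float:
    return sum(expected_value(r) - COST_PER_PIECE for r in range(pieces))

for pieces in (1, 10_000, 100_000):
    marginal = expected_value(pieces - 1) - COST_PER_PIECE
    print(f"{pieces:>7,} pieces: last piece nets ${marginal:5.2f}, "
          f"cumulative net ${cumulative_net(pieces):>12,.2f}")
# The last piece is far less profitable at 100,000 than at 1 -- but still profitable.
```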

If you had a magic box into which you could put $1 and get $1.10 in lifetime value out, should you do it?  Many of us fundraisers would be putting money into that box like a rat wired to get a, um, pleasurable experience when it pushed a lever.  And we’d be right to.  More money = more mission.

But by focusing on the cost of fundraising, CN would have you cut off at the 10,000 mark (or not to mail at all).  Less money, fewer donors, less mission.

That’s the starvation cycle in action on the fundraising side of things.  And it’s made even worse by Charity Navigator’s stubborn refusal to allow for joint cost allocation of joint fundraising/programmatic activities, which we’ll cover tomorrow.


Online advertising metrics basics

You may be saying “Mr. Direct to Donor, why would I read this?  My online advertising budget is limited to whatever I can find between the couch cushions.”

First, please call me Nick.  Mister Direct to Donor is my dad (actually, Dr. Direct to Donor, DDS, but that’s another thing).

Second, knowledge of the basic online advertising metrics, along with a deep knowledge of what you are willing to pay for each type of constituent or click, can help you bootstrap an online marketing budget by making investments that pay off in shorter timeframes than you can usually get offline.

So, first things first.  Online advertising is dominated by CP_s.  The CP stands for “cost per” and the _ can be filled in by C for click, A for acquisition, or M for thousand.

(Yes, I know.  It should be “T is for a thousand.”  However, you can do that most American of things — blame the French — for this one.  M technically stands for mille, which is French for one thousand.  You may have encountered this in the dessert mille-feuille, which is French for a cake of a thousand sheets, or in the card game Mille Bourne, which is based on being chased by a thousand angry Matt Damons.)

The big question for advertising is “one thousand what?”.  In the case of CPM, it’s impressions.  You are paying whatever amount (the average is $3-4 right now) for one thousand people to see your ads.  It’s basically like every other advertisement you’ve ever seen (pre-Internet), where you buy a magazine ad from a rate card or TV ads based on how many people are watching.

With this new thing called the Internet, however, you don’t need to pay this way in almost any case.  You can measure at a greater level of interaction, so most advertisers will allow you to pay per click, especially in the areas of greatest interest to us nonprofit marketers, like search engine listings, remarketing, and co-targeting.

But even that is not enough control for some, who wish to pay to acquire a donor (or constituent) and that’s where cost-per-acquisition comes in.  This is not as popular as CPC, as the publisher of the ad is dependent on you to convert the donation or registration, but has maximum advantage for you as an advertiser.

What you are buying in each successive step closer to the act you want to achieve (usually donation) is certainty.  With CPA (also CTA, or cost to acquire), you know exactly how much you are going to pay for a constituent; with CPC, you know how much you are going to pay, assuming this batch of people converts like the last batch; with CPM, you are spraying and praying, aside from your front-end targeting model.
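To see how the three relate, you can back out an effective cost per donor from any of them; the click-through and conversion rates below are illustrative assumptions, not benchmarks:

```python
# Effective cost per click and per donor implied by a CPM buy.
# The CTR and conversion rate are illustrative assumptions.
cpm = 3.50         # $ per 1,000 impressions (the post cites a $3-4 average)
ctr = 0.002        # assumed clicks per impression
conversion = 0.05  # assumed donations per click

effective_cpc = cpm / 1000 / ctr            # $1.75 per click
effective_cpa = effective_cpc / conversion  # $35.00 per donor

print(f"CPM buy implies ${effective_cpc:.2f}/click and ${effective_cpa:.2f}/donor")
# A CPC buy locks in the first number for you; a CPA buy locks in the second.
```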

The beauty of this level of control is that it can be used to justify your budget.  There are vendors who will run CPA campaigns where they get all of the initial donations from a donor.  Assuming they are reputable, these can be of great value for you, because you then get to keep the second and subsequent donations (that you will get because of your excellent onboarding and stewardship process).  Others will charge a flat CPA; if your average gift is usually over that CPA, you can pull in even these first donations at a profit.  Some are even doing this for monthly donors, where you can calculate a payout and logical lifetime value.

Once you have those running, you now have the budget (because of your additional net revenue) to look at CPC ads.  If you have your donation forms running and effectively tested, you should be able to net money on these as well, by targeting well and testing various ad copy/designs and offers.

So use your knowledge of ads to help bring in some extra money that can be used for… more ads (if profitable)!


Judging your online file

We’ve gone over email, Web, and constituent metrics so far — now we need to look at how your online file stacks up.

The easy, and lazy, metric is file size.  There is a certain type of person — and I’m not going to name any names or get political here — that always thinks that bigger is better.  And that yuge…  I mean huge… is better than bigger.

I would be lying if I said I had not seen this used as a case for investment at some points.  There is no small amount of power in standing up in front of a board (which is, sadly, more older white men than we should probably have as a society at this point) and saying “your X is too small.”  That X stands for “email file size” is not entirely relevant at that point.

These people, whoever they may be, because I have too much class to single out any one particular person, are wrong.  File size is a vanity metric.  It makes you feel good (or bad) but doesn’t impact performance.

Deliverable file size is a skosh better.  Here, you subtract out people who have unsubscribed, hard bounces, people who haven’t opened an email in a significant amount of time, and other malcontents.  At least this can’t be gamed (in the long term) by going on fiverr.com and paying five bucks for thousands of email subscribers, Facebook likes, Twitter followers, etc.

But ideally, you want to take your file size and overlay your value per constituent.  If your advocacy constituents are worth $1 and your information-requesters are worth ten cents, a file of 90,000 advocacy folks and 10,000 requesters will be worth a lot more than vice versa.  So deliverable file size by actionable segment, weighted by value, is probably the thing to shoot for.
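A minimal sketch of that weighting, using the post’s hypothetical values per constituent:

```python
# File value beats raw file size: two 100,000-address files, very different worth.
def file_value(segments: dict) -> float:
    """Sum of (deliverable count x value per constituent) across segments."""
    return sum(count * value for count, value in segments.values())

file_a = {"advocacy": (90_000, 1.00), "info_requesters": (10_000, 0.10)}
file_b = {"advocacy": (10_000, 1.00), "info_requesters": (90_000, 0.10)}

print(file_value(file_a))  # 91000.0 -- $91,000
print(file_value(file_b))  # 19000.0 -- $19,000
```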

But more than that, you need to look at those segments by how people arrive and how they leave.  This means looking at the positive side (growth) and the negative side (churn).

I’ve professed my love of the M+R metrics report here, but there’s one thing I don’t 100% agree with.  They say:

Our job is not to block the exits; our job is to throw the doors open and welcome people in.

They put this in the proper context in the next line: “You should be paying more attention to growth than churn.”  But this doesn’t mean you should be paying no attention to churn.  You want to make sure that people aren’t leaving in droves, especially if they implicate one of your acquisition strategies.  For example, if 90 percent of the people who sign a petition aren’t on your file in six months, you are either doing a bad job of retaining them or you likely didn’t want them anyway.

But, as M+R says, don’t lose a lot of sleep over churn.  The two recommendations I have are:

1) Customize your exit plans.  Many of the people who unsubscribe from you don’t want to unsubscribe as much as they want a different email relationship with you.  That may be something you are able to provide with segmented emails, fewer emails, etc.

2) Do electronic change-of-address maintenance of your file so you can recapture all of the people you want to get back.

I also like to look at the online list by origin.  Sometimes, increasing online subscribers from one means of acquisition (e.g., e-append) can mask weaknesses in others (e.g., organic conversion).  There is no ideal here, but it’s good to see some diversity in origin.

Finally, make sure you are measuring the share of online revenue you get from email. You want to stay in a Goldilocks zone here.  Too little from email and your emails aren’t effective in driving people to take the most important action on your site.  Too much from email and you aren’t attracting new people to the organization.


What is the value of an email address?

There are any number of ways to acquire an email address.  Change.org or Care2 will run cost-per-acquisition campaigns with you.  You can do online advertising (paid or Google Grant-ed) that drives people to your site.  You can e-append your offline constituents in the hopes of further cultivating your relationship with them.  And there’s organic — getting people to come to your site, then getting them to sign on the line that is dotted.

These all have one thing in common: they cost.  They cost in time, treasure, or both.  So you need to know whether the effort is worth it.  And for that, you need to be able to put a price tag on a constituent.

This is anathema to some.  Witness our (fake) debate on whether we want more donors or better donors: there are some intangibles that are, well, intangible.

But we are judged against numbers.  Our goal is to raise money and make friends, in that order.  So let’s quantify what we can.

While we are attaching caveats to this, let’s also stipulate that you should do this exercise both for your average email address (since you won’t always know from whence your constituent came) and for as many subsegments as you can reasonably do.  The value of a Care2 advocacy person will be different from an organic advocacy person, which will be different from someone who is looking for information on your site, which will be very, very different from an offline donor or a walk donor that you are working to make a multichannel or organizational donor.  Each will have its own value and price.

So I’m going to describe the exercise for how you would do a generic email address; the principles are the same for any subsegment.

The first step is to determine the lifetime value of the person’s online donations.  Again, I’m going to eschew attribution modeling as very complex — do it if you can, but if you can’t, you are in the right place.

[Image: Denslow’s illustration of the Three Bears]

You might think, as I once did, that the way to determine this is to take the online donations you have for a year and divide by the number of email addresses.  However, this ignores that many of your donations are made by people who are not online constituents (and may never be).  So this estimate will be far too high.

You might think, as others I’ve seen do, that you can derive this by totaling the amount given directly to ask emails throughout the year.  However, this ignores that your email may stimulate a desire to give that is fulfilled on another device, another day, and even by another method (more on that later).  Counting just donations given directly to emails will give you an estimate that is too low.

So those are the Papa Bear and the Mama Bear solutions; what does Baby Bear say is just right?  I would argue that you should count the donations given online by those who were signed up for and receiving emails at the time of their online gift.  This too will be an overestimate — you might have received some of those gifts if you didn’t have those folks as constituents.  However, it’s much closer than the Papa Bear model and, as you will see from having run your numbers on revenue per page from yesterday, a constituent gift is far more likely than a person-off-the-street gift.
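Here’s a minimal sketch of the Baby Bear calculation with pandas; the file names, column names, and file size are assumptions about your data, not a standard schema:

```python
import pandas as pd

# "Baby Bear" estimate: online revenue per deliverable address, counting only
# gifts made while the donor was a subscribed email constituent.
# File and column names are assumptions about your data.
gifts = pd.read_csv("online_gifts.csv", parse_dates=["gift_date"])
subs = pd.read_csv("email_subscriptions.csv",
                   parse_dates=["subscribed_on", "unsubscribed_on"])

merged = gifts.merge(subs, on="constituent_id", how="inner")
while_subscribed = merged[
    (merged.gift_date >= merged.subscribed_on)
    & (merged.gift_date < merged.unsubscribed_on.fillna(pd.Timestamp.max))
]

avg_deliverable_addresses = 50_000  # assumed average file size for the year
value_per_address = while_subscribed.amount.sum() / avg_deliverable_addresses
print(f"${value_per_address:.2f} of online revenue per deliverable address per year")
```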

You also need to factor in the lift that online gives to other channels.  I recently saw an analysis of an e-append that still showed double-digit increases in both response rate and average gift among the mail donors four years later.  And this included people who had since unsubscribed.  So properly written and targeted emails can be a strong retention tool.

You can look at your file and see what the offline donation and retention rates are for people for whom you have email addresses and those who don’t.  The challenge is that these are likely to be different types of people.  You ideally want to compare someone to themselves before you had their email address as well as a control audience.

That’s why I like to look at e-appends of the past for this.  You can determine:

  • Value of average donor before e-append who got appended
  • Value of average donor before e-append who didn’t get appended
  • Value of average donor after e-append who got appended
  • Value of average donor after e-append who didn’t get appended

From that, you should be able to derive the lift that email itself gave.  (If you need the formula, email me at nick@directtodonor.com; it’s a bit boring to go through in detail here.)
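That said, a standard difference-in-differences construction gives the flavor (this may or may not match the exact formula; the figures are hypothetical):

```python
# Difference-in-differences sketch of the lift email itself provides.
# A common construction that captures the idea; your exact model may differ.
def email_lift(before_appended: float, after_appended: float,
               before_not: float, after_not: float) -> float:
    """Appended donors' change in value minus non-appended donors' change."""
    return (after_appended - before_appended) - (after_not - before_not)

# Hypothetical figures: appended donors rose $58 -> $71; non-appended $57 -> $62.
print(email_lift(58, 71, 57, 62))  # 8 -- $8 of annual value per donor from email
```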

Similarly, for events with online registration, the good news is that a lot of walkers fake their email addresses or don’t give you one.  How is that good news?  It gives you a nice experiment.  Take the people who gave you their emails versus those who didn’t, and compare their return rates and gifts given/raised amounts.  My guess is that being on your email list should increase both retention and value.  These too can go into the lifetime value hopper.

Now you have a formula to go back to your analysis of pages.  Maybe those advocacy participants of today are likely to be your donors of tomorrow.  Or maybe your Change.org advocates didn’t convert the way you would like in the long term.  These numbers will help you make choices around investments, pages, and people.  Hope it helps!


Web metric basics

We talked yesterday about email metrics; now it’s Web site metrics’ turn.

We start here with the most generic of all online metrics: traffic.  No less an authority than FiveThirtyEight says that we still don’t know how to measure Web traffic.  The difficulty lies in how unique visitors are measured versus total visits.  If you are an advertiser, you want to make sure the 1,000,000 visits a person is claiming to her/his site aren’t just a guy hitting reload over and over again.  This can be done by cookie or by IP address.

My advice on this is sacrilegious for a metrics guy: don’t worry too much about it as long as you are using a consistent system for measurement.  I’ve used mainly Google Analytics for this, because it’s free, but any system will have its own way of determining this.

From this number, you can derive revenue per visitor by simply dividing your annual online revenues by your number of visitors.  This is a nice benchmark because you can see what all of your optimization efforts add up to; everything you do to try to get someone to a donation page, what you do to convert them, your average gift tweaking, the value you derive from your email list — all of it adds up to revenue per visitor.

But more than that, revenue per visitor also allows you to see what you are willing to invest to get someone to your site.  Let’s say your revenue per visitor is right at the M+R Benchmarks Report 2016 average of $.65 per visitor.  If the average blog post you do results in an extra 1000 visitors to your site, you should in theory be willing to pay up to $650 to write, deliver, and market that blog post (because revenue per visitor is an annual figure, so acquiring someone at cost that you can then engage in the future is a beautiful thing).

I say in theory because revenue per visitor varies based on the type of content or interaction.  I’ll talk about this at the end because we need to go through the other metrics to break this down more efficiently.

A close cousin to revenue per visitor is site donation conversion rate, or how many of the people who come to your site donate.  Instead of dividing your annual online revenues by visitors, you’ll divide the number of donations by visitors.  This is one of two key inputs to revenue per visitor (the other being average gift) and is a good way of testing whether wholesale changes to your site are helping encourage people to give.  

I recently worked with someone who put a thin banner at the top of their site encouraging donation.  He was disheartened because less than half a percent of the people who came to the site clicked on the banner.  I asked him if the banner clicks were additive to donation clicks (that is, they represented people who wouldn’t have clicked to donate otherwise) or substitutive (that is, total donation clicks didn’t go up; they just moved from another donate button to this bar).  We were able to tell not only because donation clicks went up over baseline, but because the site donation conversion rate went up.  Now we are working on a strategy to test this bar throughout the site and with different context-specific asks.

Drilling down from the site donation conversion rate is the page donation conversion rate.  This is people who donate to a donation page divided by visitors to your donation page.  It’s a standard measure of the quality of your donation page.  This and average donation on a donation page combine to create the revenue per page.  

Revenue per page is not only a good way of measuring which donation form is better — it’s a good way of getting a feel for the valuable content on your site.  See how many of the people who come to a page end up donating directly from the page (you can do sophisticated attribution models to determine this — going directly to a donation is a quick and dirty way of doing it) and what their average gift is.  Divide that by the number of visitors you have to that page and you can see what the revenue per page is on a non-donation page as well.
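Pulled together, the page-level math looks something like this (the traffic and gift numbers are made up):

```python
# Page-level metrics from this post, using the quick-and-dirty attribution
# of counting only gifts made directly from the page.
def page_metrics(visitors: int, donations: int, revenue: float) -> dict:
    return {
        "donation_conversion_rate": donations / visitors,
        "average_gift": revenue / donations if donations else 0.0,
        "revenue_per_page_visitor": revenue / visitors,
    }

# Made-up example: 2,000 visitors, 100 gifts, $5,000 raised.
print(page_metrics(2_000, 100, 5_000.0))
# {'donation_conversion_rate': 0.05, 'average_gift': 50.0, 'revenue_per_page_visitor': 2.5}
```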

This is great information to have.  Let’s say the value of a visitor to your home page is 10 cents, to a program page is 20 cents, and to an advocacy page is 40 cents.  This helps you make decisions about your content.  Do you need better calls to action on your program page?  What should be your next home page feature? (Answer: probably something about advocacy)  Where should you direct the bulk of your Google Grant traffic?  Etc.

However, there is one thing missing from all of this.  You will note that I said site donation conversion rate and page donation conversion rate.  Usually metrics folks won’t put donation in there — it’s implied.

But there’s another conversion rate that’s vitally important, and that’s conversion to a constituent.  Remember that the conversion-to-donation process often is a series of smaller steps.  You need constituents who subscribe to your email newsletter, volunteer for your activities, and read your social media posts (OK, maybe not that last one).  A person who has given you permission to talk to them is a valuable thing and should not be forgotten.

So there’s also a site constituent conversion rate and a page constituent conversion rate — how good your pages are at capturing people.  Only when you add this to your revenue per page do you have a true measure of page quality.

But wait!  How do you add people converted to revenue?

That’s the topic for tomorrow as we discuss how to value a constituent.


Email metric basics

Every field does its best to be impenetrable to outsiders and the world of online testing is no different.  We measure KPIs instead of “important things.” The differences among CPA, CPC, CPM, CPR, CTA, CTR, and CTOR are all important (for example, one of these can save your life, but isn’t an online metric) and there are TLAs* that I haven’t even talked about.

So this week I want to look at measuring online successes (and not-yet-successes), but first, we need to get our terms straight so we know what we are trying to impact, starting with email metrics.

For me, this is easiest to picture as the steps that take an email from send to action.  An email is:

  • Sent
  • Delivered
  • Opened
  • Clicked upon (or unsubscribed from)
  • Responsible for a completed action

Almost all of the other important metrics are ratios of the number of people who did this (or the number of unique people who did this — unique meaning the number of people who did something, not the number of total times something was done.  For example, 1000 people clicking on a link once and one person clicking on a link 1000 times will have the same click-through rate, but very different unique click-through rates).

The most important of these ratios are:

Delivery rate: emails delivered divided by emails sent.  This is inexact, as different email providers provide different levels of data back to you as to whether an email was a hard bounce (email permanently not delivered) or a soft bounce (temporary delivery issues like a full email box or an email message that is too large).  But as long as you are using the same email program to send your emails, you will have consistent baselines and be able to assess whether it’s getting better or worse.

Open rate: emails opened divided by emails sent.  There are a couple minor problems with this.  First, opens can’t be calculated on text emails.  That is, only HTML emails have the tracking to determine whether they were opened or not.  Second, some email clients allow people to skim the contents of an email in a preview pane and count it as an open.  Third, some email clients don’t count an open as an open (even if the person interacts with the message) if it is only in a preview pane.  So it’s an inexact science.

However, open rates are still a good, but not perfect, measure for testing the big three things that a person sees before they open a message: the sender, the subject line, and the pre-header.

Why isn’t it a perfect measure?  Because it’s hackable.  Let’s say your control subject line is “Here’s how your gift saved a life.”  If you test the subject line “Your gift just won you a Porsche,” it might win on open rate, but you’ve lied to your donor (unless you have an astounding back-end premium program).  That will spike your unsubscribe rate and lower your click-throughs**.

So you probably want to look at this in combination with click-through rates (CTR).  This is another one of those metric pairs that prevent you from cheating the system that I love so much.  Click-through rate is the number of people who clicked (assuming you are using unique click-through rate) divided by emails sent.  It’s a good way of measuring how well your content gets people to (start to) take action.

Another good way to look at how your email content performs is click-to-open rate (CTOR).  This is the number of people who clicked (assuming you are using unique CTOR) divided by opens.  As you might guess, it fits very nicely with the previous two metrics.  Let’s say two emails both had 1% click-through rates.  One of them might have had a 20% open rate and a 5% click-to-open rate; the other might have had a 5% open rate and a 20% click-to-open rate.  In this case, you’d want to see if you could take the subject, sender, and pre-header of email #1 and combine them with the body copy of email #2.

You also need to look at unsubscribe rate (number of unsubscribes divided by number of emails sent), but not as much as many would think. If it starts going too high, you may want to worry about how you are acquiring your subscribers; likewise, it’s good to compare unsubscribe rates across types of constituents (that is, do your advocacy supporters unsubscribe faster than your white paper downloaders?  Perhaps it’s time for more segmentation and better advocacy content).  But don’t let it drive the boat.

Finally, you want to look at conversion rate: those who took the action divided by those who clicked through.  While not strictly an email metric, I include it here because a person could try the same underhanded tactic with the Porsche to boost click-through rates (to bait and switch the clicker) and because it’s so vital to be measuring and optimizing against.
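Here’s a minimal sketch that computes all of these ratios from raw counts, using the denominators defined above (the campaign numbers are made up):

```python
# The email funnel ratios from this post, computed from raw counts.
# Denominators follow the definitions above (e.g., CTR uses emails sent).
def email_metrics(sent, delivered, opened, unique_clicks, unsubscribes, actions):
    return {
        "delivery_rate": delivered / sent,
        "open_rate": opened / sent,
        "click_through_rate": unique_clicks / sent,
        "click_to_open_rate": unique_clicks / opened,
        "unsubscribe_rate": unsubscribes / sent,
        "conversion_rate": actions / unique_clicks,
    }

# Made-up campaign: 100,000 sent; 97,000 delivered; 20,000 opens;
# 1,000 unique clicks; 150 unsubscribes; 80 completed actions.
for name, value in email_metrics(100_000, 97_000, 20_000, 1_000, 150, 80).items():
    print(f"{name}: {value:.2%}")
```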

But that’s another post.

If you want to benchmark your metrics, I strongly recommend M+R’s benchmarks study here.  And please sign up for my free newsletter here.  We have strong metrics (40% open rates and 8% click-throughs) so others are (hopefully) finding it to be useful nonprofit direct marketing content.
* Three-letter acronyms

** Also, it’s wrong.  Don’t do it.


Implications of more donors versus better donors

Let’s say you’ve organizationally had the debate that we’ve been following the past three days and you have come down on the side of better donors: you’ve taken into account all of the long-term and non-financial benefits of lower-dollar donors and still can’t make the average $10-or-less donor work for your organization.

Here are the steps you can take in your program to skew your results toward getting fewer, better donors.  Note that if you decide the other way — neither of these approaches is right or wrong — just do the opposite of everything listed below.

Up your ask strings.  As we’ve seen in two different studies of ask strings (here and here), increasing the bottom number on your ask string increases average gift.  If you are in a Pareto efficient model like we talked about on Monday, there will likely be a resultant decrease in response rate.  

As this study indicates, I would do this with single donors and not try to get my multi-donors to elevate when they aren’t ready to.  There, I think you would be wise to keep the highest previous contribution as the base donation, but increase your multipliers.
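A quick sketch of what that looks like (my illustration; the multipliers are yours to tune):

```python
# Ask strings anchored on highest previous contribution (HPC).
# Raising the multipliers, not the base, nudges average gift up.
def ask_string(hpc: float, multipliers=(1.0, 1.5, 2.0)) -> list:
    return [round(hpc * m) for m in multipliers]

print(ask_string(25))                               # [25, 38, 50]
print(ask_string(25, multipliers=(1.0, 2.0, 3.0)))  # [25, 50, 75] -- more aggressive
```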

Change your defaults.  This can be the default online (where you have the radio button start on $50 instead of $25) or the amount you circle on a direct mail piece with the social proof “Most people give X.”  Moving the default up should get you fewer, higher-value donors.

Move up your list selects.  When you rent or exchange with outside lists, even if a list works well for you with no qualifier on it, you can request only $5+ or $10+ donors to that organization.  It will cost a little bit more to get that list, but you will be able to cut some of the potential tippers out of your program.

Incidentally, there is a trick you can do here with a list that performs well and offers a higher-value list select (say, $50+): rent the list twice.  Rent it once with a $10+ select and again with a $50+ select.  Then you can separate out your ask strings to those two lists and mail the $50+ list twice (like multis) with an appropriate ask string.

Work with your modeling agencies and co-ops.  They will be more than happy to build you a model that maximizes gift instead of response rate.

Invest in telemarketing upgrades.  Upgrading seems to work better when people talk with other people.  I would counsel doing this with a monthly giving ask with the appropriate audience — it’s literally the gift that keeps on giving.

Shift your lapsed reacquisition selects.  Because you “own” those names, you have the most freedom to play around with whom you are trying to reacquire.  You may be able to change the complexion of your file by communicating less deeply (say, moving from 12 months to six months) among under-$10 donors and more deeply (say, moving from 36 months to 48 months) among your $50+ donors.

Use ZIP modeling.  This can work with both acquisition and donor communications.  In both cases, you can get more aggressive about your ask strings with wealthy ZIP codes.  In acquisition, you may even choose to omit the bottom half (or whatever) percent of ZIP codes from some lists.  As with tighter donation selects, you will pay a bit more for those names, but you will get higher average gifts.

Invest in your second gift apparatus.  This is probably a good idea regardless, but if you are going to bleed donors intentionally, you are going to need a way to make sure you are converting those you do bring on.  This may be an investment you only make for $20+ donors or the like, but a welcome series for this audience will help you keep the donors you want to keep.

Thanks for reading.  Be sure to sign up for my newsletter to keep up with the latest debate.

Also, I’d appreciate it if you’d let me know at nick@directtodonor.com if you like the debate format.  If so, we can try this with some other hot topics in nonprofit direct marketing.  If not, then we need never speak of this again.
