Online advertising metrics basics

You may be saying “Mr. Direct to Donor, why would I read this?  My online advertising budget is limited to whatever I can find between the couch cushions.”

First, please call me Nick.  Mister Direct to Donor is my dad (actually, Dr. Direct to Donor, DDS, but that’s another thing).

Second, knowledge of the basic online advertising metrics, along with a deep knowledge of what you are willing to pay for each type of constituent or click, can help you bootstrap an online marketing budget by making investments that will pay off in shorter timeframes than you can get offline (usually).

So, first things first.  Online advertising is dominated by CP_s.  The CP stands for “cost per” and the _ can be filled in by C for click, A for acquisition, or M for thousand.

(Yes, I know.  It should be “T is for a thousand.”  However, you can do that most American of things — blame the French — for this one.  M technically stands for mille, which is French for one thousand.  You may have encountered this in the dessert mille-feuille, which is French for a cake of a thousand sheets, or in the card game Mille Bornes, which is based on being chased by a thousand angry Matt Damons.)

The big question for advertising is “one thousand what?”.  In the case of CPM, it’s impressions.  You are paying whatever amount (the average is $3-4 right now) for one thousand people to see your ads.  It’s basically like every other advertisement you’ve ever seen (pre-Internet), where you buy a magazine ad from a rate card or TV ads based on how many people are watching.

With this new thing called the Internet, however, you rarely need to pay this way.  You can measure at a greater level of interaction, so most advertisers will allow you to pay per click, especially in the areas of greatest interest to us nonprofit marketers, like search engine listings, remarketing, and co-targeting.

But even that is not enough control for some, who wish to pay to acquire a donor (or constituent), and that’s where cost-per-acquisition comes in.  This is not as popular as CPC, as the publisher of the ad is dependent on you to convert the donation or registration, but it has the maximum advantage for you as an advertiser.

What you are buying in each successive step closer to the act that you want to achieve (usually donation) is certainty.  With CPA (also CTA or cost to acquire), you know exactly how much you are going to pay for a constituent; with CPC, you know how much you are going to pay, assuming this batch of people converts like the last batch; with CPM, you are spraying and praying other than your front-end targeting model.

The beauty of this level of control is that it can be used to justify your budget.  There are vendors who will run CPA campaigns where they get all of the initial donations from a donor.  Assuming they are reputable, these can be of great value for you, because you then get to keep the second and subsequent donations (that you will get because of your excellent onboarding and stewardship process).  Others will charge a flat CPA; if your average gift is usually over that CPA, you can pull in even these first donations at a profit.  Some are even doing this for monthly donors, where you can calculate a payout and logical lifetime value.
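
To make the budget argument concrete, here’s a minimal sketch of the back-of-the-envelope math, assuming a hypothetical flat CPA and average gift (the figures are placeholders, not benchmarks):

```python
# All figures below are hypothetical placeholders -- swap in your own numbers.
flat_cpa = 18.00               # what a vendor charges per acquired donor
average_first_gift = 25.00     # your average gift for this kind of campaign
projected_later_value = 30.00  # expected net from second and subsequent gifts

net_on_first_gift = average_first_gift - flat_cpa
print(f"Net on the first gift: ${net_on_first_gift:.2f}")  # $7.00 -- profitable at acquisition
print(f"Total projected net per donor: ${net_on_first_gift + projected_later_value:.2f}")
```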

Once you have those running, you now have the budget (because of your additional net revenue) to look at CPC ads.  If you have your donation forms running and effectively tested, you should be able to net money on these as well, by targeting well and testing various ad copy/designs and offers.

So use your knowledge of ads to help bring in some extra money that can be used for… more ads (if profitable)!


Judging your online file

We’ve gone over email, Web, and constituent metrics so far — now we need to look at how your online file stacks up.

The easy, and lazy, metric is file size.  There is a certain type of person — and I’m not going to name any names or get political here — that always thinks that bigger is better.  And that yuge…  I mean huge… is better than bigger.

I would be lying if I said I had not seen this used as a case for investment at some point.  There is no small amount of power in standing up in front of a board (which is, sadly, more older white men than we should probably have as a society at this point) and saying “your X is too small.”  That the X stands for “email file size” is not entirely relevant at that point.

These people, whoever they may be, because I have too much class to single out any one particular person, are wrong.  File size is a vanity metric.  It makes you feel good (or bad) but doesn’t impact performance.

Deliverable file size is a skosh better.  Here, you subtract out people who have unsubscribed, hard bounces, people who haven’t opened an email in a significant amount of time, and other malcontents.  At least this can’t be gamed (in the long term) by going on fiverr.com and paying five bucks for thousands of email subscribers, Facebook likes, Twitter followers, etc.

But ideally, you want to take your file size and overlay your value per constituent.  If your advocacy constituents are worth $1 and your information-requesters are worth ten cents, a file that is 90,000 advocacy folks and 10,000 requesters will be worth a lot more than vice versa.  So, deliverable file size by actionable segment is probably the thing to shoot for.
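
Here’s a minimal sketch of that segment-weighted calculation, using the hypothetical per-constituent values from the example above:

```python
# Hypothetical value per constituent by segment (from the example above)
segment_values = {"advocacy": 1.00, "info_requester": 0.10}

file_a = {"advocacy": 90_000, "info_requester": 10_000}
file_b = {"advocacy": 10_000, "info_requester": 90_000}

def file_value(segments):
    """Deliverable file value = sum over segments of (count x value per constituent)."""
    return sum(count * segment_values[seg] for seg, count in segments.items())

print(file_value(file_a))  # 91000.0 -- same total size as file_b...
print(file_value(file_b))  # 19000.0 -- ...but very different value
```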

But more than that, you need to look at those segments by how you get there and where you are going.  This means looking at the positive side (growth) and negative side (churn).

I’ve professed my love of the M+R metrics report here, but there’s one thing I don’t 100% agree with.  They say:

Our job is not to block the exits; our job is to throw the doors open and welcome people in.

They put this in the proper context in the next line: “You should be paying more attention to growth than churn.”  But this doesn’t mean you should be paying no attention to churn.  You want to make sure that people aren’t leaving in droves, especially if they implicate one of your acquisition strategies.  For example, if 90 percent of the people who sign a petition aren’t on your file in six months, you are either doing a bad job of retaining them or you likely didn’t want them anyway.

But, as M+R says, don’t lose a lot of sleep over churn.  The two recommendations I have are:

1) Customize your exit plans.  Many of the people who unsubscribe from you don’t want to unsubscribe as much as they want a different email relationship with you.  That may be something you are able to provide with segmented emails, fewer emails, etc.

2) Do electronic change-of-address maintenance of your file so you can recapture all of the people you want to get back.

I also like to look at the online list by origin.  Sometimes, increasing online subscribers from one means of acquisition (e.g., e-append) can mask weaknesses in others (e.g., organic conversion).  There is no ideal here, but it’s good to see some diversity in origin.

Finally, make sure you are measuring the share of online revenue you get from email. You want to stay in a Goldilocks zone here.  Too little from email and your emails aren’t effective in driving people to take the most important action on your site.  Too much from email and you aren’t attracting new people to the organization.


What is the value of an email address?

There are any number of ways to acquire an email address.  Change.org or Care2 will run cost-per-acquisition campaigns with you.  You can do online advertising (paid or Google Grant-ed) that drives people to your site.  You can e-append your offline constituents in the hopes of further cultivating your relationship with them.  And there’s organic — getting people to come to your site, then getting them to sign on the line that is dotted.

These all have one thing in common: they cost.  They cost in time, treasure or both.  So you need to know whether the effort is worth it.  And for that, you need to be able to put a price tag on a constituent.

This is anathema to some.  Witness our (fake) debate on whether we want more donors or better donors: there are some intangibles that are, well, intangible.

But we are judged against numbers.  Our goal is to raise money and make friends, in that order.  So let’s quantify what we can.

While we are attaching caveats to this, let’s also stipulate that you should do this exercise both for your average email address (since you won’t always know from whence your constituent came) and for as many subsegments as you can reasonably do.  The value of a Care2 advocacy person will be different from an organic advocacy person, which will be different from someone who is looking for information on your site, which will be very very different from an offline donor or a walk donor that you are working to make a multichannel or organization donor.  Each will have its own value and price.

So I’m going to describe the exercise for how you would do a generic email address; the principles are the same for any subsegment.

The first step is to determine the lifetime value of the person’s online donations.  Again, I’m going to eschew attribution modeling as very complex — do it if you can, but if you can’t, you are in the right place.


You might think, as I once did, that the way to determine this is to take the online donations you have for a year and divide by the number of email addresses.  However, this ignores that many of your donations are made by people who are not online constituents (and may never be).  So this estimate will be far too high.

You might think, as others I’ve seen do, that you can derive this by totalling the amounts given directly to ask emails throughout the year.  However, this ignores that your email may stimulate a desire to give that is fulfilled on another device, another day, and even by another method (more on that later).  Counting just donations given directly to emails will give you an estimate that is too low.

So those are the Papa Bear and the Mama Bear solutions; what does Baby Bear say is just right?  I would argue that you should count the donations given online by those who were signed up for and receiving emails at the time of their online gift.  This too will be an overestimate — you might have received some of those gifts if you didn’t have those folks as constituents.  However, it’s much closer than the Papa Bear model and, as you will see from having run your numbers on revenue per page from yesterday, a constituent gift is far more likely than a person-off-the-street gift.
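
Here’s a minimal sketch of that “Baby Bear” count — crediting only online gifts from people who were on your email file on the gift date.  The column names and the toy data are assumptions; adapt them to however your CRM or email platform exports data.

```python
import pandas as pd

# Hypothetical exports -- one row per online gift, one row per email constituent
gifts = pd.DataFrame({
    "email": ["a@example.org", "b@example.org", "c@example.org"],
    "gift_date": pd.to_datetime(["2016-03-01", "2016-05-10", "2016-07-04"]),
    "amount": [25.0, 100.0, 10.0],
})
subscribers = pd.DataFrame({
    "email": ["a@example.org", "c@example.org"],
    "subscribed_on": pd.to_datetime(["2015-11-01", "2016-08-01"]),
    "unsubscribed_on": pd.to_datetime([None, None]),  # NaT = still subscribed
})

merged = gifts.merge(subscribers, on="email", how="left")
on_file_at_gift = (
    merged["subscribed_on"].notna()
    & (merged["subscribed_on"] <= merged["gift_date"])
    & (merged["unsubscribed_on"].isna() | (merged["unsubscribed_on"] > merged["gift_date"]))
)
print(merged.loc[on_file_at_gift, "amount"].sum())  # 25.0 -- only a@example.org qualifies
```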

You also need to factor in the lift that online gives to other channels.  I recently saw an analysis of an e-append that still showed double-digit increases in both response rate and average gift among the mail donors four years later.  And this included people who had since unsubscribed.  So properly written and targeted emails can be a strong retention tool.

You can look at your file and see what the offline donation and retention rates are for people for whom you have email addresses and those who don’t.  The challenge is that these are likely to be different types of people.  You ideally want to compare someone to themselves before you had their email address as well as a control audience.

That’s why I like to look at e-appends of the past for this.  You can determine:

  • Value of average donor before e-append who got appended
  • Value of average donor before e-append who didn’t get appended
  • Value of average donor after e-append who got appended
  • Value of average donor after e-append who didn’t get appended

From that, you should be able to derive the lift that email itself gave.  (If you need the formula, email me at nick@directtodonor.com; it’s a bit boring to go through in detail here.)
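
Since the exact formula stays offline, here is just one reasonable way to sketch it — a simple difference-in-differences across the four averages above, with placeholder numbers:

```python
# Average annual value per offline donor, before and after the e-append (placeholders)
appended_before, appended_after = 42.00, 55.00
control_before, control_after = 40.00, 44.00   # donors who didn't get appended

# Lift = change among appended donors minus change among the control group
lift_per_donor = (appended_after - appended_before) - (control_after - control_before)
print(f"Estimated annual lift attributable to email: ${lift_per_donor:.2f}")  # $9.00
```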

Similarly, for events with online registration, the good news is that a lot of walkers fake their email addresses or don’t give you one.  How is that good news?  It gives you a nice experiment.  Compare the people who gave you their emails with those who didn’t, looking at their return rates and gifts given/amounts raised.  My guess is that being on your email list should increase both retention and value.  These too can go into the lifetime value hopper.

Now you have a formula to go back to your analysis of pages.  Maybe those advocacy participants of today are likely to be your donors of tomorrow.  Or maybe your Change.org advocates didn’t convert the way you would like in the long term.  These will help you make choices around investments, pages, and people.  Hope it helps!


Web metric basics

We talked yesterday about email metrics; now it’s Web site metrics’ turn.

We start here with the most generic of all online metrics: traffic.  No less an authority than FiveThirtyEight says that we still don’t know how to measure Web traffic.  The difference is in how unique visitors are measured versus total visits.  If you are an advertiser, you want to make sure the 1,000,000 visits a person is claiming to her/his site aren’t just a guy hitting reload over and over again.  This can be done by cookie or by IP address.

My advice on this is sacrilegious for a metrics guy: don’t worry too much about it as long as you are using a consistent system for measurement.  I’ve used mainly Google Analytics for this, because it’s free, but any system will have its own way of determining this.

From this number, you can derive revenue per visitor by simply dividing your annual online revenue by your number of visitors.  This is a nice benchmark because you can see what all of your optimization efforts add up to; everything you do to try to get someone to a donation page, what you do to convert them, your average gift tweaking, the value you derive from your email list — all of it adds up to revenue per visitor.

But more than that, revenue per visitor also allows you to see what you are willing to invest to get someone to your site.  Let’s say your revenue per visitor is right at the M+R Benchmarks Report 2016 average of $.65 per visitor.  If the average blog post you do results in an extra 1000 visitors to your site, you should in theory be willing to pay up to $650 to write, deliver, and market that blog post (because revenue per visitor is an annual figure, so acquiring someone at cost that you can then engage in the future is a beautiful thing).

I say in theory because revenue per visitor varies based on the type of content or interaction.  I’ll talk about this at the end because we need to go through the other metrics to break this down more efficiently.
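
Here’s the arithmetic from that blog-post example as a tiny sketch; the revenue and visitor counts are hypothetical, chosen to land on the $.65 benchmark cited above:

```python
annual_online_revenue = 130_000  # hypothetical
annual_visitors = 200_000        # hypothetical

revenue_per_visitor = annual_online_revenue / annual_visitors
print(revenue_per_visitor)       # 0.65 -- the benchmark average cited above

extra_visitors_from_post = 1_000
print(revenue_per_visitor * extra_visitors_from_post)  # 650.0 -- theoretical ceiling for that post
```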

A close cousin to revenue per visitor is site donation conversion rate, or how many of the people who come to your site donate.  Instead of dividing your annual online revenues by visitors, you’ll divide the number of donations by visitors.  This is one of two key inputs to revenue per visitor (the other being average gift) and is a good way of testing whether wholesale changes to your site are helping encourage people to give.  

I recently worked with someone who put a thin banner at the top of their site encouraging donation.  He was disheartened because less than half a percent of the people who came to the site clicked on the banner.  I asked him if the banner clicks were additive to donation clicks (that is, they represented people who wouldn’t have clicked to donate otherwise) or substitutive (that is, total donation clicks didn’t go up; they just moved from another donate button to this bar).  We were able to tell not only because total donation clicks went up over baseline, but because the site donation conversion rate went up.  Now we are working on a strategy to test this bar throughout the site and with different context-specific asks.

Drilling down from the site donation conversion rate is the page donation conversion rate.  This is people who donate to a donation page divided by visitors to your donation page.  It’s a standard measure of the quality of your donation page.  This and average donation on a donation page combine to create the revenue per page.  

Revenue per page is not only a good way of measuring which donation form is better — it’s a good way of getting a feel for the valuable content on your site.  See how many of the people who come to a page end up donating directly from the page (you can do sophisticated attribution modeling to determine this; counting only those who go directly to a donation is the quick and dirty way) and what their average gift is.  Divide that revenue by the number of visitors you have to that page and you can see what the revenue per page is on a non-donation page as well.
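
A minimal sketch of revenue per page, using the quick-and-dirty “went directly to a donation” attribution described above (all figures are hypothetical):

```python
def revenue_per_page(page_visitors, donations_from_page, average_gift):
    """Page donation conversion rate x average gift = revenue per visitor to the page."""
    return (donations_from_page / page_visitors) * average_gift

# A donation page: 2,000 visitors, 300 gifts averaging $60
print(revenue_per_page(2_000, 300, 60))    # 9.0 dollars per visitor

# A program page: 10,000 visitors, 40 of whom go straight on to give, averaging $50
print(revenue_per_page(10_000, 40, 50))    # 0.2 dollars per visitor
```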

This is great information to have.  Let’s say the value of a visitor to your home page is 10 cents, to a program page is 20 cents, and to an advocacy page is 40 cents.  This helps you make decisions about your content.  Do you need better calls to action on your program page?  What should be your next home page feature? (Answer: probably something about advocacy)  Where should you direct the bulk of your Google Grant traffic?  Etc.

However, there is one thing missing from all of this.  You will note that I said site donation conversion rate and page donation conversion rate.  Usually metrics folks won’t put donation in there — it’s implied.

But there’s another conversion rate that’s vitally important, and that’s conversion to a constituent.  Remember that the conversion-to-donation process often is a series of smaller steps.  You need constituents who subscribe to your email newsletter, volunteer for your activities, and read your social media posts (OK, maybe not that last one).  A person who has given you permission to talk to them is valuable and should not be forgotten.

So there’s also a site constituent conversion rate and a page constituent conversion rate — how good your pages are at capturing people.  Only when you add this to your revenue per page do you have a true measure of page quality.

But wait!  How do you add people converted to revenue?

That’s the topic for tomorrow as we discuss how to value a constituent.


Email metric basics

Every field does its best to be impenetrable to outsiders and the world of online testing is no different.  We measure KPIs instead of “important things.” The differences among CPA, CPC, CPM, CPR, CTA, CTR, and CTOR are all important (for example, one of these can save your life, but isn’t an online metric) and there are TLAs* that I haven’t even talked about.

So this week I want to look at measuring online successes (and not-yet-successes), but first, we need to get our terms straight so we know what we are trying to impact, starting with email metrics.

For me, this is easiest picturing the steps that would go from email to action.  An email is:

  • Sent
  • Delivered
  • Opened
  • Clicked upon (or unsubscribed from)
  • Responsible for a completed action

Almost all of the other important metrics are ratios of the number of people who did this (or the number of unique people who did this — unique meaning the number of people who did something, not the number of total times something was done.  For example, 1000 people clicking on a link once and one person clicking on a link 1000 times will have the same click-through rate, but very different unique click-through rates).

The most important of these ratios are:

Delivery rate: emails delivered divided by emails sent.  This is inexact, as different email providers provide different levels of data back to you as to whether an email was a hard bounce (email permanently not delivered) or soft bounce (temporary delivery issue like a full mailbox or an email message that is too large).  But as long as you are using the same email program to send your emails, you will have consistent baselines and be able to assess whether it’s getting better or worse.

Open rate: emails opened divided by emails sent.  There are a couple minor problems with this.  First, opens can’t be calculated on text emails.  That is, only HTML emails have the tracking to determine whether they were opened or not.  Second, some email clients allow people to skim the contents of an email in a preview pane and count it as an open.  Third, some email clients don’t count an open as an open (even if the person interacts with the message) if it is only in a preview pane.  So it’s an inexact science.

However, open rates are still a good, but not perfect, measure for testing the big three things that a person sees before they open a message: the sender, the subject line, and the pre-header.

Why isn’t it a perfect measure?  Because it’s hackable.  Let’s say your control subject line is “Here’s how your gift saved a life.”  If you test the subject line “Your gift just won you a Porsche,” it might win on open rate, but you’ve lied to your donor (unless you have an astounding back-end premium program).  That will spike your unsubscribe rate and lower your click-throughs**.

So you probably want to look at this in combination with click-through rates (CTR).  This is another one of those metric pairs I love so much that prevent you from cheating the system.  Click-through rate is the number of people who clicked (assuming you are using unique click-through rate) divided by emails sent.  It’s a good way of measuring how well your content gets people to (start to) take action.

Another good way to look at how your email content performs is click-to-open rate (CTOR).  This is the number of people who clicked (assuming you are using unique CTOR) divided by opens.  As you might guess, it fits very nicely with the previous two metrics.  Let’s say two emails both had 1% click-through rates.  One of them might have had a 20% open rate and a 5% click-to-open rate; the other might have had a 5% open rate and a 20% click-to-open rate.  In this case, you’d want to see if you could take the subject, sender, and pre-header of email #1 and combine them with the body copy of email #2.
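
Here’s a small sketch of those ratios applied to the two hypothetical emails in the comparison above (the send and delivery counts are made up):

```python
def email_metrics(sent, delivered, opens, unique_clicks):
    """Ratios as defined above: delivery, open, and click-through rates use sends
    as the denominator; click-to-open rate uses opens."""
    return {
        "delivery_rate": delivered / sent,
        "open_rate": opens / sent,
        "click_through_rate": unique_clicks / sent,
        "click_to_open_rate": unique_clicks / opens,
    }

# Email #1: strong sender/subject/pre-header, weaker body copy
print(email_metrics(sent=10_000, delivered=9_800, opens=2_000, unique_clicks=100))
# Email #2: weak envelope, strong body copy -- same 1% CTR, very different CTOR
print(email_metrics(sent=10_000, delivered=9_800, opens=500, unique_clicks=100))
```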

You also need to look at unsubscribe rate (number of unsubscribes divided by number of emails sent), but not as much as many would think. If it starts going too high, you may want to worry about how you are acquiring your subscribers; likewise, it’s good to compare unsubscribe rates across types of constituents (that is, do your advocacy supporters unsubscribe faster than your white paper downloaders?  Perhaps it’s time for more segmentation and better advocacy content).  But don’t let it drive the boat.

Finally, you want to look at conversion rate: those who took the action divided by those who clicked through.  While not strictly an email metric, I include it here because a person could try the same underhanded tactic with the Porsche to boost click-through rates (to bait and switch the clicker) and because it’s so vital to be measuring and optimizing against.

But that’s another post.

If you want to benchmark your metrics, I strongly recommend M+R’s benchmarks study here.  And please sign up for my free newsletter here.  We have strong metrics (40% open rates and 8% click-throughs) so others are (hopefully) finding it to be useful nonprofit direct marketing content.
* Three-letter acronyms

** Also, it’s wrong.  Don’t do it.


Implications of more donors versus better donors

Let’s say you’ve organizationally had the debate that we’ve been following the past three days and you have come down on the side of better donors: you’ve taken into account all of the long-time and non-financial benefits of lower-dollar donors and still can’t make the average $10 or less donor work for you organizationally.

Here are the steps you can take in your program to skew your results toward getting fewer, better donors.  Note that neither of these approaches is right or wrong; if you decide the other way, just do the opposite of everything listed below.

Up your ask strings.  As we’ve seen in two different studies of ask strings (here and here), increasing the bottom number on your ask string increases average gift.  If you are in a Pareto efficient model like we talked about on Monday, there will likely be a resultant decrease in response rate.  

As this study indicates, I would do this with single donors and not try to get my multi-donors to elevate when they aren’t ready to.  There, I think you would be wise to keep the highest previous contribution as the base donation, but increase your multipliers.

Change your defaults.  This can be the default online (where you have the radio button start on $50 instead of $25) or the amount you circle on a direct mail piece with the social proof “Most people give X”.  Moving the default up should get you fewer higher-value donors.

Move up your list selects.  When you rent or exchange with outside lists, even if a list works well for you with no qualifier on it, you can request only $5+ or $10+ donors to that organization.  It will cost a little bit more to get that list, but you will be able to cut some of the potential tippers out of your program.

Incidentally, there is a trick you can do here with a list that performs well and offers a higher-value list select (say, $50+): rent the list twice.  Rent it once with a $10+ select and once with a $50+ select.  Then, you can separate out your ask strings to those two lists and mail the $50+ list twice (like multis) with an appropriate ask string.

Work with your modeling agencies and co-ops.  They will be more than happy to build you a model that maximizes gift instead of response rate.

Invest in telemarketing upgrades.  Upgrading seems to work better when people talk with other people.  I would counsel doing this with a monthly giving ask with the appropriate audience — it’s literally the gift that keeps on giving.

Shift your lapsed reacquisition selects.  Because you “own” those names, you have the most freedom to play around with who you are trying to reacquire.  You may be able to change the complexion of your file by communicating less deeply (say, moving from 12 months to six months) among your under-$10 donors and more deeply (say, moving from 36 months to 48 months) among your $50+ donors.

Use ZIP modeling.  This can work with both acquisition and donor communications.  In both cases, you can get more aggressive about your ask strings with wealthy ZIP codes.  In acquisition, you may even choose to omit the bottom half (or whatever) percent of ZIP codes from some lists.  As with tighter donation selects, you will pay a bit more for those names, but you will get higher average gifts.

Invest in your second gift apparatus.  This is probably a good idea regardless, but if you are going to be bleeding donors intentionally, you are going to need a way to make sure you are converting those you do bring on.  This may be an investment you only make for $20+ donors or the like, but a welcome series for this audience will help you keep the donors you want to keep.

Thanks for reading.  Be sure to sign up for my newsletter to keep up with the latest debate.

Also, I’d appreciate it if you’d let me know at nick@directtodonor.com if you like the debate format.  If so, we can try this with some other hot topics in nonprofit direct marketing.  If not, then we need never speak of this again.


Round 3 of the more donors versus better donors debate: intangibles

For our viewers joining the program already in progress, for the past two days, Betty (arguing in favor of better donors over more donors) and Mo (arguing in favor of more donors over better donors) have been debating.  Today, the final round of the debate: intangibles.

Mo: The implications of focusing on fewer donors scares me.  My thinking is that you will draw the line at five dollar donors and cut quantity and donor volume accordingly.  Then, when you have fewer people on file and higher per piece costs, you’ll have to move that line up to ten dollars.  And so on down a death spiral.

Betty: As we’ve established, the amount that I’ll save by not having to have expensive means of communication to donors that aren’t going to pay back helps our bottom line.  If anything, focusing on higher-value donors is a way of getting out of a death spiral by cutting out the people who helped us get there.

And we know that number of donors on file is a false metric.  It ignores that some people are worth inherently more to the organization than others.

My concern is that there’s only so much time and attention you can give to a direct marketing program.  Too much of it goes to the Sisyphean task of trying to get $5 donors to become profitable.  Why not focus on what matters?

Mo: Because bulk matters too.  When we go to lobby for legislation, officials ask how many members we have.  They notice if we are a force.

And upgrading got us where we are.  Look at your current high-value donors.  They were $5 donors 20 years ago.  It was cultivation and upgrade strategies that made us what we are.

Betty: That was fine and dandy back when acquisition could turn a profit.  But every year, acquisition becomes a little harder and a little more expensive.  This isn’t kindergarten where everyone has to have a turn.  We are accountable to all of our donors to use their donations wisely.  If we aren’t getting net money from a person, we owe it to all of our donors to let them go.

Mo: Why not just customize your donor stream for them where you can make a profit?

Betty: You should if you can, but you can’t always.  And keeping them in the mail stream does something else: it starts making your pieces that win in tests the ones that are tailored toward a lower common denominator.  That’s the death spiral you should worry about: the temptation to cut costs by doing things like not personalizing pieces that don’t matter as much to the most marginal segments of your file.

Verdict: I’d like to know what you think at nick@directtodonor.com.  Personally, I buy some of Betty’s arguments here.  There always is a threshold at which you need to cut some donors off.  Rationally, then, it seems like there should be a threshold at which you should try not to acquire them.  What that threshold is will vary from organization to organization.

So tomorrow, we’ll talk about the implications of if and where you choose to draw the line.


More donors versus better donors: long-term and external benefits

To review, yesterday, Betty (arguing in favor of better donors over more donors) won a slight victory over Mo (arguing in favor of more donors over better donors) in talking about costs of fundraising.  Today, they will debate again: this time on the topic of external benefits of donors.

Mo: The case here is manifest.  To put a value on a constituent that comes only from what they give through direct marketing is myopic.  Having more donors means having more people that support you and having more people that support you means:

  • More awareness of your mission in the community
  • More volunteers
  • More advocates

Betty: It’s nice to believe that there are some things you can’t put a price on, but you can.  You can get awareness with PSAs and earned media.  You can advertise for volunteers (and incidentally, thinking someone who gives $5 at a time is dedicated enough to your mission to be your top volunteer is wishful at best).  And you can get online advocates for $1.50 a pop from Care2 or Change.org.  If you want real change, the high-dollar donors in a congressperson’s district will hold more sway; they are who you get through consciously soliciting for value.

Mo: That works for some districts, but if you are doing the things that you need to do to get only high-value donors, like ZIP selects, you are going to be ignoring a lot of districts that are just plain poor.  And you are going to be ignoring them with your message, mission, awareness, and advocacy.

But if you want to boil it down to dollars and cents, let’s go there.  Some smaller donors make for extremely effective peer-to-peer fundraisers.  You rarely know who is a deacon at the church and can pass the hat at the plant.  And casting your net broadly gives you a greater opportunity to get those types of donors.

Betty: You may have a point on peer-to-peer fundraising, but low-dollar peer-to-peer fundraisers are likely to bring in more low-dollar donors.  Now you have twice the problem.

Someone who gives more money at the outset is also likely to give more outside of a traditional single-channel direct marketing program.  They are the ones who will become the multichannel givers, major donors, and monthly givers.

Mo: Yes, if you go exclusively for the people who eat with multiple forks and pinkies out, you will get more of those high-value upgrades.

But you will rarely get bequests.  There is a great case study from the ASPCA.   Because they had focused on higher-value donors, they were not getting as many bequests.  In fact, they were excluding the 70+-year-old, $10 and under givers that were their best planned giving prospects.  So they made a conscious choice to go back and reacquire these donors, sending them (only) the best house mailings and working to upgrade them to bequest giving.

The verdict: Have to give this one to Mo on points.  A traditional lifetime value calculation ignores the value of donors as volunteers and advocates, which do have their own quasi-monetary value.  And bequest giving often comes from “tippers” on your direct marketing file of a certain age who give to help you in their lifetime, but are saving a nest egg for donation at the end of their lives.

This is certainly not to say that higher-average-gift donors don’t have greater major-donor potential; it’s just saying that a portfolio approach of quantity will have hidden benefits that should be uncovered.


More donors versus better donors: cost of fundraising

Previously on Direct to Donor…  the question was raised as to whether it is better to have fewer, better (that is, higher value) donors or more, lower value donors.  And now, today’s episode…

We’ll try this debate style.  Betty will be arguing for our better, fewer donor model (aka the Ravenclaw strategy) and Mo will be arguing for more donors regardless of how much they give (aka the Hufflepuff strategy).

Betty:  Simply put, many donors just don’t pay for themselves.  Let’s say you have a robust multichannel solicitation program that costs you about $5 per person to run.  If your $10 donors don’t average more than half a gift a year (which may be pushing it, assuming that a healthy portion of them are first-time donors), these donors are literally losing you money every time you communicate with them.

Mo: Then don’t mail them so much.  Solicitation costs are under your control.  Lower-dollar donors don’t have to have the same cadence as higher-dollar donors.  Nor do you have to send the same packages or use more expensive means like telemarketing to keep your lower-dollar donors.  Try to convert them to less expensive means like giving online.

In fact, because volume is a big predictor of communication costs for means like direct mail, you save money on all segments by having more people on file.

Betty: First, let’s dispense with the notion that an $8 offline donor is suddenly going to become a $50 online donor.  Honestly, at that level, you wouldn’t even pay to e-append them.

Second, a bulk of donors will save you money per piece, but only a couple of cents per piece.  That doesn’t compensate for the vast difference in net-per-piece value from a strong donor.  In fact, that’s why you can communicate much deeper into your file with higher-dollar donors; even a small chance of getting a gift from a $100+ donor is better than a good chance of getting a gift from a $5 donor.

And in very strong average gift segments, you can be making over a dollar, two dollars, five dollars, or more per communication to your strong segments, a virtual impossibility with lower dollar segments.  So your fundraising efficiency is much greater.

Mo: Fundraising efficiency should not be a metric.  You can tell it’s unimportant and misleading because Charity Navigator measures it. (rim shot)  What you want is to be able to maximize the net revenue you can deliver to the mission of the organization.  And thus you want to have these donors.  There are some segments of donors that like to give $5 at a time, but they will do it to every other or every third communication you send them.  While it’s not a home run, getting on base often means something.

And these donors are much cheaper to get.  Sometimes they are half of the cost of acquiring a larger-average gift donor.

Betty: But because they make smaller gifts and usually have smaller response rates, they are far less able to make back the investment.  A quality donor is a gift that keeps on giving and lower quality donors simply aren’t.

Mo: But you don’t know the hidden gems when you acquire them.  Having more donors is like panning for gold.  And so you want quantity.

Betty: That would be true if donors generally upgraded.  However, if someone gives you the same amount three times, chances are you are going to be getting that amount for the rest of their useful donor life.  Upgrading is good to try to do, but you can’t count on it for the bulk of your audience.  And loyalty goes up as average gift goes up, so you really can tell from average gift whether someone is more likely to become a good donor for you.

The verdict: This one is a split decision.  The case for more donors makes some good points and you should be doing whatever you can do to minimize your costs with low-dollar audiences.

But, by a nose, we have to give this to the case for better donors.  There is a point in every file where donors just stop being profitable.  For some, it’s at $5; for some, it’s at $15.  At that point, you don’t have a good way to make money for your mission from them.  And when you can’t fund your mission from them, you should aim not to acquire them.

“But wait!” Mo says.  “What about the non-monetary benefits of having more donors?”  Well, that will be tomorrow’s debate.


The choice: more donors or better donors?

There is a forthcoming study in the Journal of Marketing Research looking at the choice of defaults in donation asks (e.g., which radio button you have auto-clicked on your Website for donation level).

One of the findings was that for many scenarios, changing this default impacted average gift and response rate, but didn’t change revenue.  That is, the average gift and response rate moved in exactly offsetting ways.  So which is the winner?  

Let’s leave aside the fact that there is an obvious correct answer to this*.  It brings up an interesting conundrum: all other things being equal, would you rather have fewer, better (which we will operationalize to higher average gift) donors or more, lesser donors (lower average gift)?

So, let’s say your campaign is bringing in $100,000: would you rather have 2,000 $50 donors or 5,000 $20 donors or some other scenario?

This is a realistic question.  If you graph out your acquisition success by outside list that you are using, chances are you will get something like this:

[Chart: acquisition lists plotted by response rate and average gift, clustering along the Pareto efficient frontier]

This is actually a good sign.  It means that you are using the best lists, as you are approaching something close to the Pareto efficient frontier (a fancy way of saying you can’t grow any more; you can only make tradeoffs).  After all, if there were a list that was in the upper right here — high response rate and high average gift — you’d be doubling down on that.  But since there isn’t, do you invest more in the upper left or lower right?

This has far-reaching implications.  For example, what metric do you use to determine the success or failure of an acquisition piece?

Yes, in a perfect world, you would use lifetime value.  But we don’t live in a perfect world (if you doubt this, watch a presidential debate at random; this could make Dr. Pangloss open a vein).  Lifetime value takes time to manifest and you need to know what you are making a decision on tomorrow.

So, for your preliminary work, do you go toward net cost to acquire a donor, which will reward getting a large number of smaller donors?  Or do you go to something like net per piece, which will reward fewer larger donors?

(Or, as I’m starting to do, do you look at the donors that a campaign is bringing in and their initial give, then projecting out their average gifting as a poor man’s model for lifetime value?  This is a better solution, but again, don’t let logic get in the way of a good thought experiment.)
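
Here’s a minimal sketch of that “poor man’s” lifetime-value projection — project future giving from the initial gift, an assumed cadence, and an assumed retention rate.  Every input is a hypothetical placeholder, not a benchmark:

```python
def poor_mans_ltv(initial_gift, gifts_per_year, annual_retention, years=5):
    """Project the value of a newly acquired donor, assuming they keep giving
    at their initial level and retain at a flat annual rate."""
    value, still_active = 0.0, 1.0
    for _ in range(years):
        value += still_active * initial_gift * gifts_per_year
        still_active *= annual_retention
    return value

# Fewer, better donors vs. more, smaller donors (retention assumptions are made up,
# reflecting the earlier point that loyalty tends to rise with average gift)
print(2_000 * poor_mans_ltv(initial_gift=50, gifts_per_year=1.2, annual_retention=0.45))
print(5_000 * poor_mans_ltv(initial_gift=20, gifts_per_year=1.2, annual_retention=0.35))
```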

This week, I’d like to explore this thought experiment in some detail (in part because it’s something I’m struggling with as well), laying out the case for both approaches and seeing what the implications of this are.
* The correct answer is to set up ask strings and defaults based on previous giving history and/or modeling; customization cuts this particular Gordian knot.
