The four American stories

Your English teacher probably told you at some point about types of stories: man versus man, man versus nature, man versus society, etc.  English teachers like this may or may not be why so few movies pass the Bechdel test.

Anyway, there is a taxonomy of storytelling I prefer to these types of conflicts — it’s the four American stories discussed by Robert Reich in Tales of a New America.  Those stories are:

  1. The mob at the gates.  The enemy is out there and we are in here.  We are a beacon to others, but we are fragile unless we arm ourselves against the barbarian horde who want to destroy us and our way of life.
  2. The triumphant individual.  This is the person who made her own bootstraps and pulled herself up by them.  These stories include pluck, grit, gumption, not to mention moxie and spunk.  It's hard work, late nights, and early mornings.  It's Abe Lincoln and Ben Franklin and Horatio Alger.  It's Rocky and Rudy and the venerated entrepreneur.
  3. The benevolent community.  This is neighbors coming together to help.  It's the idea that we are generally good at heart and will come together as one people to solve the tough problems.
  4. The rot at the top.  These can be aristocrats, bureaucrats, banks, the 1%, the conspiracy of the day.  These people in high places are corrupt, decadent, and reckless, and they keep their boot on us all.

In their raw forms, these are the intersections of two dichotomies: optimistic versus pessimistic and few versus many.

Few stories are only one of these and powerful ones intertwine them.  A classic example of these is the ebb and flow of the fortunes of George Bailey in It’s a Wonderful Life.  In one scene, he defends his bank on his own (triumphant individual) against a bank run (the mob at the gates) engineered by Mr. Potter (the rot at the top) and triumphs when his neighbors agree their money is best kept in their neighbors’ houses (the benevolent community).  

So, how do you craft your nonprofit’s story and people’s places in it?  Some implications:

  • You want to yin with your yang in temperament.  An unrelentingly positive communication leaves no thought that there is still a need.  An unrelentingly negative one makes a person want to take a bath, not try to create their own hero story.  I would recommend making sure you are both heroing your donor (whether you cast them as the triumphant individual or part of the benevolent community depends on their personal bent) and talking about the threat you face, whether from without or within.
  • Like temperament, I also find that it’s good to have few versus many in opposition.  True, two evenly matched individuals or equally sized armies can make for good stories.  But when one individual stands up for what is right against the masses (there’s a narrative reason we like when the crowd isn’t chanting Rocky’s name at the beginning of the fight, especially when fighting godless Commies) or when a people throw the bums out, you have a truly gripping story.  
  • Everything that isn’t in a category here is noise.  You’ll note that there isn’t a storytelling category for talking about how great your programs are, for the same reason that the benevolent community story of a barn-raising doesn’t dwell on the awesomeness of the hammers.

Tomorrow, we’ll talk about some tricks to make your story more compelling.


How long should a story be?

Long enough, and no longer.  There!  That was a quick post.

I just realized that I've referred many a time to telling quality stories, but haven't gone into a lot of detail on how.

So that starts today with the length of your story.  I like this topic partly because I get to quote Jeff Brooks' The Fundraiser's Guide to Irresistible Communications:

“I’ve tested long against short many times.  In direct mail, the shorter message only does better about 10 percent of the time (a short message does tend to work better for emergency fundraising).

But most often, if you're looking for a way to improve an appeal, add another page.  Most likely it'll boost response.  Often it can generate a higher average gift too.

It’s true in email as well, though not as decisively so.”

In addition to emergencies, I’ve personally found shorter to be better with appeals where urgency is a main driver (e.g., reminder of matching gift deadline; advocacy appeals tied to a specific date) and institutional appeals like a membership reminder.

Other than that, length is to be sought, not avoided.

This is counterintuitive; smart people ask why our mail pieces are so long.  And it's not what people say themselves.  There is a recent donor loyalty study from Abila indicating that only 20% of people read five paragraphs in and only seven percent of people are still reading at the ten-paragraph mark.

Here’s a tip: if you are reading this, this data point is probably not correct.

The challenge with this data point is that they didn't test this; they asked donors.  Unfortunately, donor surveys are fraught with peril, not the least of which is that people stink at predicting what they would do (much better to see what they actually do).  We talked about this when talking about donor surveys that don't stink.

Other questionable results from this survey include:

  • Allegedly the least important part of an event is “Keep me involved afterward by sending me pictures, statements on the event’s impact, or other news.”  So be sure not to thank your donors or talk to them about the difference they are making in the world!
  • 28% of people would keep donating even if the content they got was vague, was boring, talked about uninteresting programs, had incorrect info about the donors, and was not personalized.  Unfortunately, I’ve sent these appeals and the response rate isn’t that high.
  • 37% of donors like posts to Twitter as a content type.  Only 16% of donors follow nonprofits on social media.  So at least 21% of people want you to talk to them on Twitter, where they aren’t listening?

So length can be a strong driver and should be something you test.  But you want the right type of length.  Avoid longer sentences and paragraphs.  Shorter is easier to understand, and therefore truer.

Instead, delve into rich detail.  Details and active verbs make your stories more memorable.  And that helps create quality length, and not just length for length's sake.

And don’t be afraid to repeat yourself in different words.  Familiarity breeds content.  It also helps skimmers get the important points in your piece (which you should be underlining, bolding, calling out, etc.).

This may not seem like the way you would want your communications.  Remember, you are not the donor.  Especially in the mail, the donors who actually give like to receive and read mail.  Let's not disappoint.


After posting this, I heard a great line in Content Inc that stories should be like a miniskirt: long enough to cover everything that needs to be covered, but short enough to hold interest.  So I had to add that as well…


Online advertising metrics basics

You may be saying “Mr. Direct to Donor, why would I read this?  My online advertising budget is limited to whatever I can find between the couch cushions.”

First, please call me Nick.  Mister Direct to Donor is my dad (actually, Dr. Direct to Donor, DDS, but that’s another thing).

Second, knowledge of the basic online advertising metrics, along with a deep knowledge of what you are willing to pay for each type of constituent or click, can help you bootstrap an online marketing budget by making investments that will pay off in shorter timeframes than you can get offline (usually).

So, first things first.  Online advertising is dominated by CP_s.  The CP stands for “cost per” and the _ can be filled in by C for click, A for acquisition, or M for thousand.

(Yes, I know.  It should be "T is for a thousand."  However, you can do that most American of things — blame the French — for this one.  M technically stands for mille, which is French for one thousand.  You may have encountered this in the dessert mille-feuille, which is French for a cake of a thousand sheets, or in the card game Mille Bornes, which is based on being chased by a thousand angry Matt Damons.)

The big question for advertising is "one thousand what?"  In the case of CPM, it's impressions.  You are paying whatever amount (the average is $3-4 right now) for one thousand people to see your ads.  It's basically like every other advertisement you've ever seen (pre-Internet), where you buy a magazine ad from a rate card or TV ads based on how many people are watching.

With this new thing called the Internet, however, you don't need to pay this way in almost any case.  You can measure at a greater level of interaction, so most advertisers will allow you to pay per click, especially in the areas of greatest interest to us nonprofit marketers like search engine listings, remarketing, and co-targeting.

But even that is not enough control for some, who wish to pay to acquire a donor (or constituent) and that’s where cost-per-acquisition comes in.  This is not as popular as CPC, as the publisher of the ad is dependent on you to convert the donation or registration, but has maximum advantage for you as an advertiser.

What you are buying in each successive step closer to the act that you want to achieve (usually donation) is certainty.  With CPA (also CTA or cost to acquire), you know exactly how much you are going to pay for a constituent; with CPC, you know how much you are going to pay, assuming this batch of people converts like the last batch; with CPM, you are spraying and praying other than your front-end targeting model.

The beauty of this level of control is that it can be used to justify your budget.  There are vendors who will run CPA campaigns where they get all of the initial donations from a donor.  Assuming they are reputable, these can be of great value for you, because you then get to keep the second and subsequent donations (that you will get because of your excellent onboarding and stewardship process).  Others will charge a flat CPA; if your average gift is usually over that CPA, you can pull in even these first donations at a profit.  Some are even doing this for monthly donors, where you can calculate a payout and logical lifetime value.
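
To make the trade-off concrete, here's a minimal sketch in Python of what a new donor effectively costs under each pricing model, and of the flat-CPA profit check above.  Every rate and price in it is a made-up assumption for illustration, not a benchmark:

```python
# Rough comparison of what a new donor effectively costs under CPM, CPC, and CPA
# pricing. Every rate and price below is an illustrative assumption, not a benchmark.

def cost_per_donor_cpm(cpm, click_rate, conversion_rate):
    """CPM: you pay per 1,000 impressions and bear both click and conversion risk."""
    return (cpm / 1000.0) / (click_rate * conversion_rate)

def cost_per_donor_cpc(cpc, conversion_rate):
    """CPC: you pay per click and bear only the conversion risk."""
    return cpc / conversion_rate

print(cost_per_donor_cpm(cpm=3.50, click_rate=0.002, conversion_rate=0.05))  # $35.00
print(cost_per_donor_cpc(cpc=1.25, conversion_rate=0.05))                    # $25.00

# Flat CPA check: if the average first gift exceeds the CPA, even the first
# donation comes in at a profit (before overhead).
average_first_gift, flat_cpa = 52.00, 40.00
print(average_first_gift - flat_cpa)  # $12.00 net on the very first gift
```

The point of the math: each step from CPM toward CPA trades a potentially lower price for a guaranteed one.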

Once you have those running, you now have the budget (because of your additional net revenue) to look at CPC ads.  If you have your donation forms running and effectively tested, you should be able to net money on these as well, by targeting well and testing various ad copy/designs and offers.

So use your knowledge of ads to help bring in some extra money that can be used for… more ads (if profitable)!


Judging your online file

We’ve gone over email, Web, and constituent metrics so far — now we need to look at how your online file stacks up.

The easy, and lazy, metric is file size.  There is a certain type of person — and I'm not going to name any names or get political here — that always thinks that bigger is better.  And that yuge…  I mean huge… is better than bigger.

I would be lying if I said I had not seen this used as a case for investment at some point.  There is no small amount of power in standing up in front of a board (which is, sadly, more older white men than we should probably have as a society at this point) and saying "your X is too small."  That X stands for "email file size" is not entirely relevant at that point.

These people, whoever they may be, because I have too much class to single out any one particular person, are wrong.  File size is a vanity metric.  It makes you feel good (or bad) but doesn’t impact performance.

Deliverable file size is a skosh better.  Here, you subtract out people who have unsubscribed, hard bounces, people who haven’t opened an email in a significant amount of time, and other malcontents.  At least this can’t be gamed (in the long term) by going on fiverr.com and paying five bucks for thousands of email subscribers, Facebook likes, Twitter followers, etc.

But ideally, you want to take your file size and overlay your value per constituent.  If your advocacy constituents are worth $1 and your information-requesters are worth ten cents, a file that is 90,000 advocacy folks and 10,000 requesters will be worth a lot more than vice versa.  So, deliverable file size by actionable segment is probably the thing to shoot for.
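
As a back-of-the-envelope illustration of that overlay (the per-constituent values below are placeholders; plug in your own), here's the arithmetic:

```python
# Value of the file = sum over segments of (deliverable addresses x value per constituent).
# The per-constituent values here are placeholders; use whatever your own analysis says.

def file_value(segments):
    return sum(count * value for count, value in segments.values())

file_a = {"advocacy": (90_000, 1.00), "info_requesters": (10_000, 0.10)}
file_b = {"advocacy": (10_000, 1.00), "info_requesters": (90_000, 0.10)}

print(file_value(file_a))  # $91,000
print(file_value(file_b))  # $19,000 -- same file size, very different value
```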

But more than that, you need to look at those segments by how you get there and where you are going.  This means looking at the positive side (growth) and negative side (churn).

I’ve professed my love of the M+R metrics report here, but there’s one thing I don’t 100% agree with.  They say:

"Our job is not to block the exits; our job is to throw the doors open and welcome people in."

They put this in the proper context in the next line: “You should be paying more attention to growth than churn.”  But this doesn’t mean you should be paying no attention to churn.  You want to make sure that people aren’t leaving in droves, especially if they implicate one of your acquisition strategies.  For example, if 90 percent of the people who sign a petition aren’t on your file in six months, you are either doing a bad job of retaining them or you likely didn’t want them anyway.

But, as M+R says, don’t lose a lot of sleep over churn.  The two recommendations I have are:

1) Customize your exit plans.  Many of the people who unsubscribe from you don’t want to unsubscribe as much as they want a different email relationship with you.  That may be something you are able to provide with segmented emails, fewer emails, etc.

2) Do electronic change-of-address maintenance of your file so you can recapture all of the people you want to get back.

I also like to look at the online list by origin.  Sometimes, increasing online subscribers from one means of acquisition (e.g., e-append) can mask weaknesses in others (e.g., organic conversion).  There is no ideal here, but it's good to see some diversity in origin.

Finally, make sure you are measuring the share of online revenue you get from email. You want to stay in a Goldilocks zone here.  Too little from email and your emails aren’t effective in driving people to take the most important action on your site.  Too much from email and you aren’t attracting new people to the organization.


What is the value of an email address?

There are any number of ways to acquire an email address.  Change.org or Care2 will run cost-per-acquisition campaigns with you.  You can do online advertising (paid or Google Grant-ed) that drives people to your site.  You can e-append your offline constituents in the hopes of further cultivating your relationship with them.  And there's organic — getting people to come to your site, then getting them to sign on the line that is dotted.

These all have one thing in common: they cost.  They cost in time, treasure or both.  So you need to know whether the effort is worth it.  And for that, you need to be able to put a price tag on a constituent.

This is anathema to some.  Witness our (fake) debate on whether we want more donors or better donors: there are some intangibles that are, well, intangible.

But we are judged against numbers.  Our goal is to raise money and make friends, in that order.  So let’s quantify what we can.

While we are attaching caveats to this, let's also stipulate that you should do this exercise both for your average email address (since you won't always know from whence your constituent came) and for as many subsegments as you can reasonably do.  The value of a Care2 advocacy person will be different from an organic advocacy person, which will be different from someone who is looking for information on your site, which will be very very different from an offline donor or a walk donor that you are working to make a multichannel or organizational donor.  Each will have its own value and price.

So I’m going to describe the exercise for how you would do a generic email address; the principles are the same for any subsegment.

The first step is to determine the lifetime value of the person’s online donations.  Again, I’m going to eschew attribution modeling as very complex — do it if you can, but if you can’t, you are in the right place.

You might think, as I once did, that the way to determine this is to take the online donations you have for a year and divide by the number of email addresses.  However, this ignores that many of your donations are made by people who are not online constituents (and may never be).  So this estimate will be far too high.

You might think, as others I've seen do, that you can derive this by totalling the amount given directly in response to ask emails throughout the year.  However, this ignores that your email may stimulate a desire to give that is fulfilled on another device, another day, and even by another method (more on that later).  Counting just donations given directly to emails will give you an estimate that is too low.

So those are the Papa Bear and the Mama Bear solutions; what does Baby Bear say is just right?  I would argue that you should count the donations given online by those who were signed up for and receiving emails at the time of their online gift.  This too will be an overestimate — you might have received some of those gifts if you didn’t have those folks as constituents.  However, it’s much closer than the Papa Bear model and, as you will see from having run your numbers on revenue per page from yesterday, a constituent gift is far more likely than a person-off-the-street gift.
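
If your gift records carry dates and your email file carries subscribe and unsubscribe dates, the Baby Bear version is a filter-and-divide exercise.  A minimal sketch, assuming hypothetical field names (amount, date, donor_id, subscribed_on, unsubscribed_on) that you'd map to your own CRM export:

```python
# Baby Bear estimate: online revenue from people who were subscribed and receiving
# email at the time of their gift, divided by the size of the email file.
# All field names are hypothetical placeholders.

def was_emailable(constituent, gift_date):
    subscribed_in_time = constituent["subscribed_on"] <= gift_date
    still_subscribed = (constituent["unsubscribed_on"] is None
                        or constituent["unsubscribed_on"] > gift_date)
    return subscribed_in_time and still_subscribed

def online_value_per_email(online_gifts, constituents_by_id, email_file_size):
    revenue = sum(
        gift["amount"]
        for gift in online_gifts
        if gift["donor_id"] in constituents_by_id
        and was_emailable(constituents_by_id[gift["donor_id"]], gift["date"])
    )
    return revenue / email_file_size
```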

You also need to factor in the lift that online gives to other channels.  I recently saw an analysis of an e-append that still showed double-digit increases in both response rate and average gift among the mail donors four years later.  And this included people who had since unsubscribed.  So properly written and targeted emails can be a strong retention tool.

You can look at your file and see what the offline donation and retention rates are for people for whom you have email addresses and those who don’t.  The challenge is that these are likely to be different types of people.  You ideally want to compare someone to themselves before you had their email address as well as a control audience.

That’s why I like to look at e-appends of the past for this.  You can determine:

  • Value of the average donor before the e-append, among those who got appended
  • Value of the average donor before the e-append, among those who didn't get appended
  • Value of the average donor after the e-append, among those who got appended
  • Value of the average donor after the e-append, among those who didn't get appended

From that, you should be able to derive the lift that email itself gave.  (If you need the formula, email me at nick@directtodonor.com; it’s a bit boring to go through in detail here.)
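
The formula itself is the author's to share by email, but one reasonable reading of those four numbers is a difference-in-differences, which nets out the general trend among non-appended donors.  A sketch with placeholder values:

```python
# Difference-in-differences reading of the e-append comparison:
# lift = (change for appended donors) - (change for non-appended donors).
# All dollar figures below are placeholders for illustration.

appended_before, appended_after = 42.00, 55.00            # avg annual value, appended donors
not_appended_before, not_appended_after = 41.00, 46.00    # avg annual value, non-appended donors

lift_from_email = (appended_after - appended_before) - (not_appended_after - not_appended_before)
print(lift_from_email)  # $8.00 of annual value attributable to email in this made-up example
```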

Similarly, for events with online registration, the good news is that a lot of walkers fake their email addresses or don't give you one.  How is that good news?  It gives you a nice experiment.  Compare the people who gave you their emails with those who didn't: their return rates and their gifts given/amounts raised.  My guess is that being on your email list should increase both retention and value.  These too can go into the lifetime value hopper.

Now you have a formula to go back to your analysis of pages.  Maybe those advocacy participants of today are likely to be your donors of tomorrow.  Or maybe your Change.org advocates didn't convert the way you would like in the long term.  These will help you make choices around investments, pages, and people.  Hope it helps!


Web metric basics

We talked yesterday about email metrics; now it’s Web site metrics’ turn.

We start here with the most generic of all online metrics: traffic.  No less an authority than FiveThirtyEight says that we still don't know how to measure Web traffic.  The main difference is in how unique visitors are measured versus total visits.  If you are an advertiser, you want to make sure the 1,000,000 visits a person is claiming to her/his site aren't just a guy hitting reload over and over again.  This can be done by cookie or by IP address.

My advice on this is sacrilegious for a metrics guy: don’t worry too much about it as long as you are using a consistent system for measurement.  I’ve used mainly Google Analytics for this, because it’s free, but any system will have its own way of determining this.

From this number, you can derive revenue per visitor by simply dividing your annual online revenue by your number of visitors.  This is a nice benchmark because you can see what all of your optimization efforts add up to; everything you do to try to get someone to a donation page, what you do to convert them, your average gift tweaking, the value you derive from your email list — all of it adds up to revenue per visitor.

But more than that, revenue per visitor also allows you to see what you are willing to invest to get someone to your site.  Let’s say your revenue per visitor is right at the M+R Benchmarks Report 2016 average of $.65 per visitor.  If the average blog post you do results in an extra 1000 visitors to your site, you should in theory be willing to pay up to $650 to write, deliver, and market that blog post (because revenue per visitor is an annual figure, so acquiring someone at cost that you can then engage in the future is a beautiful thing).
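
Here's that arithmetic as a few lines of Python; the revenue and visitor counts are placeholders chosen only so the result lands on the $.65 M+R average cited above:

```python
# Revenue per visitor = annual online revenue / annual visitors.
# That figure caps what you'd be willing to spend to attract more visitors.

annual_online_revenue = 130_000   # placeholder
annual_visitors = 200_000         # placeholder

revenue_per_visitor = annual_online_revenue / annual_visitors
print(revenue_per_visitor)        # $0.65, matching the M+R 2016 average cited above

expected_extra_visitors = 1_000   # what the example blog post brings to the site
max_spend_on_post = revenue_per_visitor * expected_extra_visitors
print(max_spend_on_post)          # up to $650 to write, deliver, and market that post
```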

I say in theory because revenue per visitor varies based on the type of content or interaction.  I’ll talk about this at the end because we need to go through the other metrics to break this down more efficiently.

A close cousin to revenue per visitor is site donation conversion rate, or how many of the people who come to your site donate.  Instead of dividing your annual online revenues by visitors, you’ll divide the number of donations by visitors.  This is one of two key inputs to revenue per visitor (the other being average gift) and is a good way of testing whether wholesale changes to your site are helping encourage people to give.  

I recently worked with someone who put a thin banner at the top of their site encouraging donation.  He was disheartened because less than half a percent of the people who came to the site clicked on the banner.  I asked him if the total clicks were additive to donation clicks (that is, they represented people who wouldn't have clicked to donate otherwise) or substitutive (that is, total donation clicks didn't go up; they just moved from another donate button to this bar).  We were able to tell not only because donation clicks went up over baseline, but because the site donation conversion rate went up.  Now we are working on a strategy to test this bar throughout the site and with different context-specific asks.

Drilling down from the site donation conversion rate is the page donation conversion rate.  This is people who donate to a donation page divided by visitors to your donation page.  It’s a standard measure of the quality of your donation page.  This and average donation on a donation page combine to create the revenue per page.  

Revenue per page is not only a good way of measuring which donation form is better — it’s a good way of getting a feel for the valuable content on your site.  See how many of the people who come to a page end up donating directly from the page (you can do sophisticated attribution models to determine this — going directly to a donation is a quick and dirty way of doing it) and what their average gift is.  Divide that by the number of visitors you have to that page and you can see what the revenue per page is on a non-donation page as well.
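
A minimal sketch of that quick-and-dirty calculation (the visit and gift counts below are placeholders), which works the same way whether or not the page is a donation form:

```python
# Quick-and-dirty revenue per page, using gifts made directly from the page.
# (A full attribution model would also credit assisted gifts.)

def revenue_per_page(visitors, donations, total_donated):
    conversion_rate = donations / visitors
    average_gift = total_donated / donations if donations else 0.0
    return conversion_rate * average_gift   # equivalent to total_donated / visitors

print(revenue_per_page(visitors=5_000, donations=50, total_donated=2_500))  # $0.50 per visitor
```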

This is great information to have.  Let’s say the value of a visitor to your home page is 10 cents, to a program page is 20 cents, and to an advocacy page is 40 cents.  This helps you make decisions about your content.  Do you need better calls to action on your program page?  What should be your next home page feature? (Answer: probably something about advocacy)  Where should you direct the bulk of your Google Grant traffic?  Etc.

However, there is one thing missing from all of this.  You will note that I said site donation conversion rate and page donation conversion rate.  Usually metrics folks won’t put donation in there — it’s implied.

But there's another conversion rate that's vitally important and that's conversion to a constituent.  Remember that the conversion to donation process often is a series of smaller steps.  You need constituents who subscribe to your email newsletter, volunteer for your activities, and read your social media posts (OK, maybe not that last one).  A person who has given you permission to talk to them is a valuable thing and should not be forgotten.

So there's also a site constituent conversion rate and page constituent conversion rate — how good are your pages at capturing people?  Only when you have this to add to your revenue per page do you have a true measure of page quality.

But wait!  How do you add people converted to revenue?

That’s the topic for tomorrow as we discuss how to value a constituent.


Email metric basics

Every field does its best to be impenetrable to outsiders and the world of online testing is no different.  We measure KPIs instead of “important things.” The differences among CPA, CPC, CPM, CPR, CTA, CTR, and CTOR are all important (for example, one of these can save your life, but isn’t an online metric) and there are TLAs* that I haven’t even talked about.

So this week I want to look at measuring online successes (and not-yet-successes), but first, we need to get our terms straight so we know what we are trying to impact, starting with email metrics.

For me, this is easiest to picture as the steps that take someone from email to action.  An email is:

  • Sent
  • Delivered
  • Opened
  • Clicked upon (or unsubscribed from)
  • Responsible for a completed action

Almost all of the other important metrics are ratios of the number of people who did this (or the number of unique people who did this — unique meaning the number of people who did something, not the number of total times something was done.  For example, 1000 people clicking on a link once and one person clicking on a link 1000 times will have the same click-through rate, but very different unique click-through rates).

The most important of these ratios are:

Delivery rate: emails delivered divided by emails sent.  This is inexact, as different email providers provide different levels of data back to you as to whether an email was a hard bounce (email permanently not delivered) or soft bounce (temporary delivery issues like a full email box or an email message that is too large).  But as long as you are using the same email program to send your emails, you will have consistent baselines and be able to assess whether it's getting better or worse.

Open rate: emails opened divided by emails sent.  There are a couple minor problems with this.  First, opens can’t be calculated on text emails.  That is, only HTML emails have the tracking to determine whether they were opened or not.  Second, some email clients allow people to skim the contents of an email in a preview pane and count it as an open.  Third, some email clients don’t count an open as an open (even if the person interacts with the message) if it is only in a preview pane.  So it’s an inexact science.

However, open rates are still a good, but not perfect, measure for testing the big three things that a person sees before they open a message:

  • The sender (who the email is from)
  • The subject line
  • The pre-header (the preview text many email clients show after the subject line)

Why isn’t it a perfect measure?  Because it’s hackable.  Let’s say your control subject line is “Here’s how your gift saved a life.”  If you test the subject line “Your gift just won you a Porsche,” it might win on open rate, but you’ve lied to your donor (unless you have an astounding back-end premium program).  That will spike your unsubscribe rate and lower your click-throughs**.

So you probably want to look at this in combination with click-through rates (CTR).  This is another one of those metric pairs that prevent you from cheating the system that I love so much.  Click-through rate is the number of people who clicked (assuming you are using unique click-through rate) divided by emails sent.  It's a good way of measuring how well your content gets people to (start to) take action.

Another good way to look at how your email content performs is click-to-open rate (CTOR).  This is the number of people who clicked (assuming you are using unique CTOR) divided by opens.  As you might guess, it fits very nicely with the previous two metrics.  Let's say two emails both had 1% click-through rates.  One of them might have had a 20% open rate and a 5% click-to-open rate; the other might have had a 5% open rate and a 20% click-to-open rate.  In this case, you'd want to see if you could take the subject, sender, and pre-header of email #1 and combine it with the body copy of email #2.

You also need to look at unsubscribe rate (number of unsubscribes divided by number of emails sent), but not as much as many would think. If it starts going too high, you may want to worry about how you are acquiring your subscribers; likewise, it’s good to compare unsubscribe rates across types of constituents (that is, do your advocacy supporters unsubscribe faster than your white paper downloaders?  Perhaps it’s time for more segmentation and better advocacy content).  But don’t let it drive the boat.

Finally, you want to look at conversion rate: those who took the action divided by those who clicked through.  While not strictly an email metric, I include it here because a person could try the same underhanded tactic with the Porsche to boost click-through rates (to bait and switch the clicker) and because it’s so vital to be measuring and optimizing against.
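
Pulling those definitions together, here's a small sketch that turns raw counts into the ratios above.  The counts are placeholders; note that, with these definitions, CTOR is simply CTR divided by open rate:

```python
# Email funnel ratios from raw counts, using the definitions in this post:
# open rate and CTR use emails sent as the denominator; "unique" means distinct
# people, not total actions.

def email_metrics(sent, delivered, opens, unique_clicks, unsubscribes, actions):
    return {
        "delivery_rate": delivered / sent,
        "open_rate": opens / sent,
        "click_through_rate": unique_clicks / sent,
        "click_to_open_rate": unique_clicks / opens,   # = CTR / open rate
        "unsubscribe_rate": unsubscribes / sent,
        "conversion_rate": actions / unique_clicks,
    }

print(email_metrics(sent=10_000, delivered=9_700, opens=2_000,
                    unique_clicks=300, unsubscribes=20, actions=45))
```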

But that’s another post.

If you want to benchmark your metrics, I strongly recommend M+R’s benchmarks study here.  And please sign up for my free newsletter here.  We have strong metrics (40% open rates and 8% click-throughs) so others are (hopefully) finding it to be useful nonprofit direct marketing content.
* Three-letter acronyms

** Also, it’s wrong.  Don’t do it.


Toward a linear RFM

In addition to the many challenges of RFM already discussed, the segmentation puts up artificial barriers between segments.  Some of these include:

  • Let's say someone is one of the people we talked about yesterday who gives every November or December.  If s/he gave in November 2014, then again in December 2015, are you really going to consider them to have "lapsed" in the middle?
  • The distinction between frequency groups is artificial. As we discussed on Tuesday, Sandy, who gave you 100 gifts, and Miriam, who gave you two, are both considered multidonors for most RFM segmentations.
  • The distinction between monetary value segments is artificial. Which donor would you prefer – a donor who donates $10 ten times per year or a donor who donates $50 once a year?  RFM prefers the latter; I’m guessing you would prefer the former.

But how do you create equivalencies among all the different segments?  Would you rather have a donor who gave $100 to an acquisition package six months ago or a loyal semi-frequent $20 donor?

The ideal would be to run a model with lifetime value as the dependent variable and your traditional RFM variables, plus as many of the ones we've talked about this week as you can, as the independent variables, to determine what your actual drivers of value are.

But lifetime value, as you can tell from the name, takes a long time.

So let's steal a rule-of-thumb model from the for-profit world.  Connie Bauer first (at least first to my knowledge) proposed this in an influential 1988 Journal of Direct Marketing article called "A Direct Mail Customer Purchase Model."  Here, I've replaced purchases with donations; I think it works in our world with this replacement.  To get the RFM score, you multiply these three things together (there's a quick code sketch just after the list):

  1. The reciprocal of recency of the last donation in months.
  2. Number of donations
  3. The square root of the total amount of donations the person has made.
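
Here is that rule of thumb as a small function, with made-up donor histories to show how the pieces trade off against each other (the recency/frequency pair mirrors the example below):

```python
from math import sqrt

def rfm_score(months_since_last_gift, number_of_gifts, total_given):
    """Bauer-style shorthand: (1 / recency in months) * frequency * sqrt(total given)."""
    return (1.0 / months_since_last_gift) * number_of_gifts * sqrt(total_given)

# Two made-up donors with the same $100 lifetime giving:
print(rfm_score(months_since_last_gift=12, number_of_gifts=4, total_given=100))  # ~3.33
print(rfm_score(months_since_last_gift=6,  number_of_gifts=2, total_given=100))  # ~3.33 -- roughly equivalent
```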

There are a few things I like about this shorthand:

  • There’s a reasonable equivalence between recency and frequency.  Would you rather have someone who has given four gifts who gave their last gift a year ago or someone who has given two gifts and their last one was six months ago?  These would be roughly equivalent in this model and that looks about right.
  • It mitigates the artificial distinction between months.  That 12-month versus 13-month difference that in a normal RFM analysis could be the difference between sending and not sending a communication?  In this model, it’s about an 8% difference in scoring.  Important, but not fatal.
  • Because I've not seen the sheer number of gifts have a huge impact on retention rate (once you get above a certain point), it seems intuitive that monetary value is a smaller factor than the other two.

There are some weaknesses.  Donation amounts aren't linear: if someone has given a $25 gift in the past, the odds are low that they will creep from there to $26 and then $27.  Some time periods, like a year, are somewhat magical, especially for one-gift-per-year, seasonality-focused donors.  And in an ideal world, you would want more recent gifts weighted a bit more than more distant gifts.  A donor's behavior tomorrow will be more like their behavior last month than their behavior in 1988.

But given that, it’s an interesting look at the topic.  I hope the week gives you the courage and the tools to take another look at your segmentation strategy and calculations.  You’ll go nuts if you try all of these simultaneously, but conscious and continuous improvement can make huge differences in the long term.


Vive le donor difference

When I was but a wee lad, I played youth baseball.  Or perhaps more accurately, other kids played baseball at me.  I excelled in three things and three things only:

  • Bunting
  • Getting hit by pitches, to the point that I once got hit by a pitch that was called a strike.  I had to wait to get hit by the next pitch to take my base.
  • Stealing signs.

This last was where my "talent" was.  I would watch the third-base coach and, when I thought a steal was coming from the signals, I would yell in to the pitcher and catcher from my position in right field.  (Of course I was in right field.  There's a chance someone might hit the ball to left field.)  I probably caused more outs by catching signs than by catching balls (though still far fewer than I caused by batting).

The trick to stealing signs is to look for what is different from the usual.  The same is true for catching donor signals – the trick is to look for what is unusual and work from there.  Some tips:

Seasonality: Most donors are season agnostic.  They donate when an appeal touches them or strikes their fancy or they hear about you on the news or they found a $20 in a purse in the back of a closet.  However, some will renew membership in January like clockwork.  Others believe in end-of-year giving (this is prevalent among online donors).

Like everything else, there is a way of doing a sophisticated model to determine this.  However, like only some things, there is also a fast, relatively easy, and free way to do it in Excel or similar spreadsheet:

  1. Pull all of the gifts at which you want to look.  I would recommend donors with at least three years of giving history and at least four gifts, so you have a sufficient history to work with.  You want the gifts labelled by a unique donor ID number.
  2. Label all of the gifts by month (1 = January, 12 = December, and everything in between)
  3. Run a pivot table that summarizes the gifts by donor with the min month and the max month.
  4. Subtract the min from the max.

(If you’d like a walkthrough of this in more detail, please email me at nick@directtodonor.com)
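
If you'd rather script it than pivot it, here's roughly the same calculation in Python with pandas.  The file name, column names, and thresholds are hypothetical placeholders, so adjust them to your own export:

```python
import pandas as pd

# gifts.csv is assumed to have one row per gift with columns: donor_id, gift_date.
gifts = pd.read_csv("gifts.csv", parse_dates=["gift_date"])
gifts["month"] = gifts["gift_date"].dt.month  # 1 = January ... 12 = December

# Keep donors with at least four gifts spanning at least three calendar years,
# then compute the max-minus-min month spread per donor.
by_donor = gifts.groupby("donor_id").agg(
    gift_count=("month", "size"),
    years=("gift_date", lambda d: d.dt.year.nunique()),
    spread=("month", lambda m: m.max() - m.min()),
)
seasonal = by_donor[(by_donor.gift_count >= 4) & (by_donor.years >= 3) & (by_donor.spread <= 3)]
print(seasonal)  # donors whose giving clusters in one quarter of the year

# To catch December/January donors, repeat with months shifted by six (mod 12).
```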

Now look at the results.  The majority of donors will likely have a wide spread of 9, 10, or 11 months.  However, you will also see some 0-3 month spreads, meaning that over (at least) a three-year period and (at least) four gifts, they have given to you only in one quarter of the year.  Thus, you can likely reduce your costs on soliciting them in the other quarters of the year (not eliminate, as you don’t want them to forget you exist).

If you want to be very thorough, add six to each month number (subtracting 12 from anything over 12) and repeat, to capture those few donors who may focus their gifting around both the end and beginning of the year, but not the middle.

Premium v non-premium: This is actually the same analysis as the months, except instead of coding your gifts by month, you need to code your communications by whether they required a giveaway to give.  Some people will present as exclusively premium or non-premium donors.

This is powerful combined with seasonality analysis; if you find someone only gives at the beginning of the year to your membership campaign and has never given to a premium piece, you don’t need to send them address labels in May or the calendar in September or telemarket to them in June.  Instead, you can use lower cost (and more cultivative) pieces like donor newsletters to maintain the relationship with them.  Yes, this may only be saving $3 per year per donor, but if there are 10,000 of those donors on your file, you are talking about real money.

Out-of-place gifts: Someone has given you $10 ten times.  They just make their 11th gift to you: a $173 check.  What should you logically ask them for next time?

HPC (highest previous contribution) says you should ask them for $173 (possibly rounding to $175).  Common sense says that the person may not have turned from a generally smaller donor into a prospective mid-level or major prospect overnight.

Research indicates a better answer is to use the donor's average donation for longer-term donors.  Thus, you see the anomaly, take it into account, but don't let it drive your decision making.
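
One way to operationalize that (my illustration of the advice above, not a formula from the research) is to build the next ask from the donor's average gift, so a single out-of-place check doesn't take over the ask string:

```python
# For a longer-term donor, base the next ask on their average gift rather than
# their highest previous contribution (HPC), so one out-of-place check is taken
# into account without driving the decision. Illustrative only.

def next_ask(gift_history, round_to=5):
    average = sum(gift_history) / len(gift_history)
    return round_to * round(average / round_to)

history = [10] * 10 + [173]   # ten $10 gifts, then a surprise $173 check
print(next_ask(history))      # $25 -- nudged up, but not an HPC-sized leap
print(max(history))           # $173, what an HPC-based ask string would lead with
```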

Another potential treatment is to use a continuous, rather than segmented, version of RFM.  We’ll discuss that tomorrow.

In the meantime, if you are interested in more research on ask strings and amounts you should ask for, I’m working on a book/white paper/whatever it ends up being on just that topic.  Newsletter subscribers will get a free PDF copy of it when it comes out, so if you would like one, please sign up for my free weekly newsletter here.


Using non-donor knowledge to enhance segmentation

Yesterday, we introduced you to two special people that a traditional RFM analysis would group as 4-6 month $25-49.99 multis.  To wit:

Since Sandy first donated to your organization in 1992, she’s given over 100 gifts.  Nothing exorbitant – she’s now giving $30 every three or four months – but she also has volunteered, come to three walks, signed up for emails, and taken almost every advocacy action you offer.

On the other hand, you acquired Miriam from an outside list in 2012.  She gave $25, but nothing since then.  You don’t have her email or phone number, but a last chance lapsed package piqued her interest four months ago and she gave another $25.

We talked about how their donation history can and should differentiate them.  There are additional indicators here, however, that can also enhance your messaging and segmentation:

Online interactions.  If someone is active online, it’s relatively simple to group their interests by their activity – what they click on, look at, and interact with.  (Actually, technically, interact with is the easiest, click on is slightly harder, and look at can be a bear with some online tools.)

Sandy, for example, is an advocate for you and doesn't seem to require premiums to donate – perhaps you can replace the labels in that upcoming package with a paper version of an action alert – cheaper, and likely more effective.

Other organizational interactions.  Sandy has been a walker – do you want to mention that your walk is coming up in 90 days in the PS or in a buckslip?  Similarly, you should probably customize the messaging to acknowledge that she has given her time as well as her donations.  Making her feel known will only help her loyalty.

Outside data.  Getting outside data on your donors can help you adapt your tactics.  If you find out that Miriam does all of her banking online, perhaps she’s a better target for an EFT-based monthly gift than you thought (with the right messaging).

List co-operative data may indicate that she gives to nine other charities far more often and more generously than to you.  Perhaps she's just not that into you, and you might want to cut your losses sooner than you might have thought.

You may find out she does a lot of business on the telephone and discover that it wasn't your organization that failed to light her up; it was the means by which you were approaching her.

All this and more can come from data appends.  And you can try to get that email address and engage her online, so hopefully you can learn more about her.

All of this – donor and non-donor interactions – are masked by an overarching RFM category.  But what if we could dispense with RFM categories altogether?  We’ll talk about that Friday; if you don’t want to miss it, or any of our Direct to Donor posts, please sign up for our free weekly newsletter.
