RFM segmentation alone must die

Recency, frequency, and monetary value (RFM) are the ruling troika of segmentation-land.  And like one of the old Soviet troikas, they brook no challenge to their rule (e.g., Trotsky was murdered on Stalin’s orders with an ice ax).

But they are simply not good enough alone anymore.  I tried to be civil about this in my post Beyond RFM.  But beyond is not good enough.  We need to let a million flowers bloom in the world of segmentation.

This means taking the “7-12, $15-$19.99 multi-donor” view of a segment out for a date with your ice ax.

OK, not really.  It’s still going to be a decent starting point.  But it has to stop being the ending point, even for those of us who have to leave complex modeling to people with more letters after their names.

So this week, I’d like to take you through various ways of answering the all-important question: “Is communicating with this donor in this way going to help achieve my goals of net revenue, quality file growth, and/or world domination?”

And the first topic that should be layered on is listening to what a donor’s behavior is telling you.

Part of this is non-donation behavior.  You likely already have this information if you have the donor’s email address.  You can potentially tell if they’ve been to your Web site, how often, how long they spent, and what they looked at.  You definitely should be able to know how they’ve reacted to emails you’ve sent them in the past.  The difference between a lapsed donor who still regularly opens your emails and clicks on the articles and one who, according to your email records, may or may not be dead is a significant one.

If you can get robust data, so much the better, because now you can not only include people in a communication they may not have received before, but also customize it based on what they are interested in.

But some of this is donor behavior you already know about, but RFM filters out.  Channel is one.  Take a donor who reliably and frequently gives online.  If you’ve mailed him or her 25 times over the years to try to get a donation, but s/he hasn’t responded, chances are that s/he doesn’t want to give through the mail.  Personally, I’ve found telemarketing to be the most persnickety channel: those who give through it really give through it; those who don’t, really don’t.

Another is cadence.  If someone has given you ten gifts in the past ten years and all of them have been in November or December, my money is on the fact that you can ease off the gas in May.  One program of my acquaintance runs a membership campaign that starts every January.  About five percent of their file will give a membership gift like clockwork every January or February and then nothing for the rest of the year.  Should you stop trying to get extra gifts?  No.  Should you cut your cadence way down and save yourself some costs?  Yes.
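If you want to operationalize this, here’s a minimal sketch of flagging both signals, assuming a hypothetical gift-history table (the column names and the pandas approach are illustrative, not a prescribed schema):

```python
import pandas as pd

# Hypothetical gift history; donor_id, channel, and gift_date are assumed columns.
gifts = pd.DataFrame({
    "donor_id": [1, 1, 1, 2, 2, 2],
    "channel": ["online", "online", "online", "mail", "online", "mail"],
    "gift_date": pd.to_datetime(["2014-11-15", "2015-12-01", "2016-11-20",
                                 "2015-03-10", "2015-09-02", "2016-04-18"]),
})

by_donor = gifts.groupby("donor_id")

# Channel signal: donors who have never once given through the mail.
never_gives_by_mail = by_donor["channel"].apply(lambda c: (c != "mail").all())

# Cadence signal: donors whose every gift lands in November or December.
year_end_only = by_donor["gift_date"].apply(
    lambda d: d.dt.month.isin([11, 12]).all())

print(never_gives_by_mail)  # donor 1: True -> candidate for fewer mail appeals
print(year_end_only)        # donor 1: True -> ease off the gas in May
```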

These are things the donor probably thinks they are telling you explicitly with their behavior.  It’s now incumbent upon you to listen.

Because tomorrow, things get a little bit harder, as we talk about lifecycle and loyalty.




Quantity versus quality of pieces in donorcentric fundraising

Food for the Poor, the DMA’s Nonprofit of the Year last year, sends 27 mail pieces in its control donor series throughout the year.  These are all very good donorcentric letters, focused on the impact that you as a donor are having in saving people in their times of desperate need.

Another nonprofit of my acquaintance, which will remain nameless, sends out one appeal per year.  When they asked me whether they should send a second piece, I told them that they should make their one piece work first, because it was not a compelling appeal.

There are wonderful donorcentric people who argue that nonprofits need to reduce the amount they communicate across the board.  I would argue that they need to reduce the amount they communicate badly.

Let’s take a look back at the reasons people give for stopping their giving to a nonprofit, from Dr. Adrian Sargeant (first covered in Wherefore Segmentation):

 

[Chart: reasons donors give for lapsing]

As you can see, 72% of the reasons were related to not getting our message across, like “other causes are more deserving,” “I don’t remember donating,” or “they don’t need money any more.”  Less than four percent cited inappropriate communications.  People are leaving because we persuade too little, not too much.

And as for the sentiment you may get about mailing too much: Van Diepen et al. looked at irritation from nonprofit mailings.  They found that mailings can cause irritation, but that the irritation had no impact on revenue per mailing.  That is, people kept donating at the same rate per piece.

As Jeff Brooks put it in his wonderful book The Fundraiser’s Guide to Irresistible Communications:

[A] typical donor gets at least 10 pieces of unsolicited mail every delivery day.  That’s 3,000 pieces a year.  If you write to a donor twelve times a year, you’re sending 0.4 percent of her yearly total.  If you stopped mailing, the daily average would drop from 10 to 9.96.  Not a meaningful difference for you and your donor.

But for you, that cutback would mean lost revenue, forever.  A loss of hundreds, maybe thousands, of dollars from each donor.

You’ll never solve the Too-Much-Mail problem if you treat it as a numbers game.  The real issue is the relevance of the mail, not the volume.

All of that said, you could be mailing too much, as measured by both your net revenues and a true donor focus.  Here are some of the symptoms:

  • Channel mismatch. It is correct and laudable to try to get an online donor to give offline and vice versa.  However, there is a point of non-response (which varies by organization) at which the online donor is very unlikely to give.  For example, if someone gave their first gift online, continues to give online, and hasn’t so much as looked at 10 mail pieces from you, you might be wasting money sending those appeals (note: I say those appeals – perhaps a mailing that encourages her to go to the Web site and make a donation is just what the doctor ordered).
  • Seasonality mismatch. If someone donates every November or December like clockwork, but never a second gift in the year over five years, you are probably safe in reducing the mailings they receive in spring and summer.  Note that I don’t say eliminate.  It could be that the updates they are receiving in the summer are the reason they donate in the winter.  But you can probably save some costs here.
  • Mismatch of interests. As we’ve advocated in the “change one thing” approach to testing, you can find out what messages people will respond to and what they won’t.  Once you learn that, for example, a person only gives to advocacy appeals, you can safely cut some of the other types of messages they get.  Or: someone who only gives to premium pieces gets premiums; someone for whom premiums are a turn-off doesn’t.
  • Systemic waste. Additional mailings should do two things: increase retention rates and increase total program net revenue.  That is to say, it’s not enough to say “this piece is a good one because it netted positive”; you need to be able to say that without the piece revenues would have been down overall.

To make the math simple, let’s say you mail three pieces, each of which nets $100K.  If you eliminated one of them and the remaining two pieces started netting $150K each, that third piece was not netting program revenue (unless it was a cultivation piece that set up future years’ revenues, had an upgrade component, or the like).
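The arithmetic of that example, spelled out (the figures are the hypothetical ones above):

```python
# Hypothetical figures from the example above.
net_with_all_three = 3 * 100_000   # three pieces at $100K net each
net_with_only_two  = 2 * 150_000   # drop one piece; the other two net $150K each

incremental_net = net_with_all_three - net_with_only_two
print(incremental_net)  # $0 -> the third piece added no program net revenue
```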

What this nets out to is that in a donorcentric future (or, at least, in my donorcentric vision of the future), people will ask how many control pieces you send and you will have to say that it depends greatly on the donors themselves (or give a range like somewhere between two and 30 pieces per person).

And, of course, that each of these pieces is customized and crafted to appeal to that particular donor or segment.  That, in my mind, is listening to the donors and not trying to let a Platonic ideal donor get in the way of each precious unique donor snowflake.




Getting donor intelligence by asking your donors

Yesterday, I said you can get a good idea of who your donor is through their actions.  The trick here is that you will never find donor motivations you aren’t already testing for.  This is for the same reason that you can’t determine where to build a bridge by sitting at the river and watching where the cars drive in to try to float across it, Oregon-Trail-style.

[Image: Oregon Trail river crossing]

Damn it, Oregon Trail.  The Native American guide told me to try to float it.
Don’t suppose that was his minor revenge for all that land taking and genocide?

To locate a bridge, you have to ask people to imagine where they would drive across a bridge, if there were a bridge.  This gives you good news and bad news: good news, you can get information you can’t get from observation; bad news, you get what people think they would do, rather than what they actually will do.

True story: I once asked people what they would do if they received this particular messaging in an unsolicited mail piece.  Forty-two percent said they would donate.  My conclusion — about 40% of the American public are liars — may have been a bit harsh.  What I didn’t know then but know now is that people are often spectacularly bad at predicting their own behavior, myself included.  (“I will only eat one piece of Halloween candy, even though I have a big bucket of it just sitting here.”)

There is, of course, a term for this (hedonic forecasting) and named biases within it (impact bias, the empathy gap, the Lombardi sweep, etc.).  But it’s important to highlight that listening only to what people think they think is perilous.  If you do, you can launch the nonprofit equivalent of the next New Coke.

“The mind knows not what the tongue wants. […] If I asked all of you, for example, in this room, what you want in a coffee, you know what you’d say? Every one of you would say ‘I want a dark, rich, hearty roast.’ It’s what people always say when you ask them what they want in a coffee. What do you like? Dark, rich, hearty roast! What percentage of you actually like a dark, rich, hearty roast? According to Howard, somewhere between 25 and 27 percent of you. Most of you like milky, weak coffee. But you will never, ever say to someone who asks you what you want – that ‘I want a milky, weak coffee.’”  — Malcolm Gladwell

With those cautions in mind, let’s look at what surveys and survey instruments are good for and not good for.

First, as mentioned, surveys are good for finding what people think they think.  They are not good for finding what people will do.  If you doubt this, check out Which Test Won, which shows two versions of a Web page and asks you to pick which performed better.  I venture to say that anyone getting over two-thirds of these right has been unplugged and can now see the code of the Matrix.  There is an easier and better way to find out what people will do, which is to test; surveys can give you the why.

Surveys are good for determining preferences.  They are not good for explaining those preferences.  There’s a classic study on this using strawberry jam.  When people were asked what their preferences were for jam, their rankings paralleled Consumer Reports’ rankings fairly closely.  When people were asked why they liked various jams and jellies, their preferences diverged from these expert opinions significantly.  The authors write:

“No evidence was found for the possibility that analyzing reasons moderated subjects’ judgments. Instead it changed people’s minds about how they felt, presumably because certain aspects of the jams that were not central to their initial evaluations were weighted more heavily (e.g., their chunkiness or tartness).”

This is not to say that you shouldn’t ask the question of why; it does mean you need to ask the question of why later and in a systematic way to avoid biasing your sample.

Surveys are good for both individual preferences and group preferences.  If you have individual survey data on preferences, you absolutely should append these data to your file and make sure you are customizing your reasons to give to the individual’s reason why s/he gives.  They also can tease out segments of donors you may not have known existed (and where you should build your next bridge).
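A minimal sketch of that append, assuming a hypothetical donor file and survey export that share a donor ID (the column names are illustrative):

```python
import pandas as pd

# Hypothetical donor file and survey export; IDs and columns are illustrative.
donors = pd.DataFrame({"donor_id": [101, 102, 103],
                       "last_gift": [50, 25, 100]})
survey = pd.DataFrame({"donor_id": [101, 103],
                       "stated_motivation": ["research", "conservation"]})

# Left join keeps donors who didn't answer (stated_motivation will be NaN),
# so you can customize for respondents without losing everyone else.
donors = donors.merge(survey, on="donor_id", how="left")
print(donors)
```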

Surveys are good for assessing experiences with your organization and bad for determining complex reasons for things.  If you have 18 minutes, I’d strongly recommend this video about how Operation Smile was able to increase retention by finding out what donors’ experiences were with them and which ones were important.  Well worth a watch.

If you do watch it, you’ll see that they look at granular experiences rather than broad questions like “why did you lapse?” or “are we mailing too much?”  Those broad questions are too cognitively challenging and encompass too many things.  For example, you rarely hear from a donor to send fewer personalized handwritten notes, because those are opened and sometimes treasured.  What a frequency question almost always elicits is an answer about the quality, rather than the quantity, of solicitation.

Surveys are good when they are well crafted and bad when they are poorly crafted.  I know this sounds obvious, but there are crimes against surveys committed every day.  I recently took a survey of employee engagement that was trying to assess whether our voice was heard in an organization.  The question was phrased something like “How likely do you think it is that your survey will lead to change?”

This is what I’d call a hidden two-tail question.  A person could answer “not likely” because they are completely checked out at work and fatalistic about management.  Or a person could answer “not likely” because they are delighted to be working there, love their job, and want nothing to change.

Survey design is a science, not an art.  If you have not been trained in it, either get someone who is trained in it to help you, or learn how to do it yourself.  If you are interested in the latter, Coursera has a free online course on questionnaire design here that helped me review my own training (it is more focused on social survey design, but the concepts work similarly).

You’ll notice I haven’t mentioned focus groups.  Focus groups are good for… well, I’m not actually sure what focus groups are good for.  They layer all of the individual biases of group members together, stir them with group dynamic biases like groupthink, unwillingness to express opinions contrary to the group, and the desire to be liked, season them with observer biases and the inherent human nature to guide discussions toward preconceived notions, then serve.

Notice there was no cooking in the instructions.  This is because I’ve yet to see a focus group that is more than half-baked. (rim shot)

My advice if you are considering a focus group: take half of the money you were going to spend on the focus group, set it on fire, inhale the smoke, and write down the “insights” you had while inhaling the money smoke.  You will have the same level of validity in your results for half the costs.

Also, and perhaps more helpfully, take the time that you would have spent talking to people in a group and talk to them individually.  You won’t get any interference from other people on their opinions; introverts will open up a bit more in a more comfortable setting; and (who knows) they may even like you better at the end of it.  Or if you hire me as a consultant, I do these great things with entrails and the bumps on donors’ heads.

So which do you want to use: surveys or behavior?  Both. Surveys can sometimes come up with ideas that work in theory, but not in practice, as people have ideas of what they might do that aren’t true.  Behavior can show you how things work in practice, but it can be difficult to divine deep insights that generalize to other packages and communications and strategies.  They are the warp and weft of donor insights.


Learn about your donors by changing one thing

Congratulations!  A constituent joined your organization!  Now what?  

Welcome series!  Then what?

Well, of course, you drop them into the communication channel of their origin, right?

As our Direct Marketing Master Yoda* would say:


No. No. No.  Quicker, easier, more seductive.

But in this case, not ideal.  It’s not ideal for the constituent and it’s not ideal for learning more about what this person actually wants — you may be freezing what this person “is” before you’ve had a chance to find out.

The person has already told you that they are responsive to three things:

  • Medium: If they respond to a mail piece, for example, they do not hate mail pieces. It may not be their only, or even their favorite, means of communication, but it is one to which they respond.
  • Message: Your mission probably entails multiple things.  Your goal may be wetlands preservation and you work to accomplish this through education, research, and direct conservation.  If someone downloaded your white paper on the current state of wetlands research and your additional research goals, you know that they are responsive to that research message.  It may not be their only or favorite message, but they respond.
  • Action: If someone donates, they are willing to donate.  If they sign a petition, they are willing to petition.  You can guess the rest of this about them perhaps being willing to do other things.

Other than welcome series, which I’ll talk about at another time, you are trying to sail between the Scylla of sending the same thing over and over again and the Charybdis of bombarding people with different, alien messages, media, and asks.

Thus, I would recommend what I’d call the bowling alley approach, in honor of Geoffrey Moore, who advocated a similar approach to entering new markets in his for-profit entrepreneurial classic Crossing the Chasm.

The idea in the for-profit world is that you enter one market with one product.  Once you have a foothold, you try to sell that same market a different product and sell a different market your original product, in the same way that hitting a front bowling pin works to knock down the two behind it.

Here, we play three-dimensional bowling**. The idea behind the nonprofit bowling alley, or change-one, approach is that you should change only one of your medium, message, and action at a time.

Let’s take our wetlands organization as an example — they work to educate, research, and conserve.  They have people who download white papers and informational packets, people who take advocacy actions, and donors.  And their means of communication are mail, phone, and online.

Let’s further take a person who downloads a white paper on research online and provides her mail and email address.  The usual temptation would be to drop her into the regular email newsletter and into the warm lead acquisition mail stream (and maybe to even do a phone append to call her).

But this would not be the best approach: you would be taking someone who, for all you know, is interested in only one medium, message, and action and asking her for something completely different.

Rather, it would be better to probe other areas of interest first.  Ideally, you would ask her:

  • Online for downloading additional information about research (same medium, message, and action)
  • Online for advocacy actions and donations related to research (same medium and message; different action)
  • Online for downloading information about education and conservation (same medium and action; different message)
  • In the mail and on the phone for getting additional information about research (same message and action; different medium)

Obviously, this last part is not practical; mail and phone are too expensive not to have a donation ask involved. However, you could make the mail and phone asks specific to “we need your help to make our research resources available not just to you, but to policymakers across the country” – tying the ask as directly as possible to her known area of interest.
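Here’s a rough sketch of the change-one idea in code, using the hypothetical wetlands organization’s media, messages, and actions (all of the names are illustrative):

```python
from itertools import product

MEDIA = ["online", "mail", "phone"]
MESSAGES = ["research", "education", "conservation"]
ACTIONS = ["information", "advocacy", "donation"]

def one_change_away(medium, message, action):
    """All combinations that differ from the known-responsive triple
    in exactly one of medium, message, and action."""
    return [(m, g, a)
            for m, g, a in product(MEDIA, MESSAGES, ACTIONS)
            if (m != medium) + (g != message) + (a != action) == 1]

# Our white-paper downloader: online / research / information.
for combo in one_change_away("online", "research", "information"):
    print(combo)  # six candidate next touches, each changing one dimension
```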

Over time, you should get a strong picture of this person.  Maybe they are willing to do anything for your organization by any means, as long as it is focused on your research initiatives.  Maybe they are willing to engage with you about anything, as long as it is only online.  And maybe they like research and conservation, but not education; online and mail, but not phone; and getting information and donating, but not engaging their representatives.

Taking it one step at a time not only helps you learn this over time, but also helps you learn it without culture shock.  If someone downloads a white paper and you ask them to take an advocacy action on that same issue online, they may not be interested, but they likely see the throughline to the action they took.  If they download a white paper and get a phone call for an unrelated action, they likely will not.

It’s the difference between a donor response of “I can see why you’d think that, but no thanks” and “what the hell?” (followed by the constituent equivalent of getting a drink thrown in your face).

It’s also why I recommend going back to the original communication mechanism for lapsed donors in the lapsed donor reactivation post.  In that case, it may be literally the one and only thing you know that works.

You may say that you don’t have the resources to do five different versions of each mail piece or telephone script.  But you can do this inexpensively if you are varying your mail messages throughout the year.  For a warm lead acquisition strategy, simply make sure the advocacy people get the advocacy mail piece and not the others for now.  If you find out some of them are responsive to a mail donation ask, you can ramp up cadence later, but for now, your slower cultivation and learning strategy can pay dividends.

This also helps prevent a common mistake: creating groups like “online advocates,” “white paper downloaders,” etc., and then mailing them without cross-suppression.  If you send each of three groups a monthly mail piece and someone is in all three groups, they may end up getting 36 mail pieces a year unless you cross-suppress (that is, prioritize people into one group’s packages instead of everyone in every group getting everything).
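A minimal sketch of cross-suppression, assuming hypothetical group rosters and a priority order (both illustrative):

```python
# Hypothetical group memberships; one person can appear in several groups.
groups = {
    "donors": {"ann", "carol", "dave"},
    "online advocates": {"ann", "bob"},
    "white paper downloaders": {"bob", "carol"},
}

# The first group in priority order "wins" each person, so nobody
# receives more than one group's mail stream.
priority = ["donors", "online advocates", "white paper downloaders"]

assigned, claimed = {}, set()
for group in priority:
    assigned[group] = groups[group] - claimed
    claimed |= groups[group]

print(assigned)
# {'donors': {'ann', 'carol', 'dave'}, 'online advocates': {'bob'},
#  'white paper downloaders': set()}
```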

Tomorrow, we’ll talk about how to get this type of intelligence from what you’ve already done.

* Don’t believe me?  Check Yoda’s outstanding donor newsletter here

** Science fiction always has people playing three-dimensional chess, but not three-dimensional bowling.  Why or why not?  Discuss.


Testing beyond individual communications

So far, the testing I’ve discussed has been about optimizing a communication or overall messaging.  The next step is trying to answer fundamental questions about the nature of your program – things like how many times to communicate and through what means.

There is a pretty good chance that you are not communicating enough to many of your constituents.

But wait, you say.  We send out a mail piece a month, have multiple telemarketing cycles per year, and have both a monthly e-newsletter and semi-frequent emails on other topics.  Our board members and staff who are on our seed lists consistently tell me, you say, that we are communicating too much.  And we get donors who complain that they are getting a mail piece before their last one was acknowledged.

However, remember from the discussion of segmentation that more donors say their nonprofits are undercommunicating than overcommunicating.  That means the average nonprofit needs to communicate more than it is.

And the concern that you are annoying people by asking for money comes from an oft-seen and concerning nonprofit inferiority complex.  To be effective, we have to believe that we are good enough to merit a gift and to make an appropriate ask.  We want to give our donors an opportunity to be a part of something powerful and transformative.  Remember that if we do our jobs well, donating to our organization is a positive experience.

So how would you test whether you are communicating often enough/too often?  The first step is to figure out where you are as a control, with a cross-medium communications calendar.  This is easier said than done, but it’s a necessary first step.  It need not be perfect; since you are going to want some communications that are timely and focused on current events, you may have to have some placeholders that simply indicate “we’re going to email something here.”

Then split your file into test panels, so that part of your file gets X communications and another part gets X plus or minus one.  (I’d suggest plus.)  Then measure the total success of the communications.
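A minimal sketch of the panel split (the file size and seed are illustrative; any random assignment you can reproduce later will do):

```python
import random

random.seed(42)  # record the seed so the panels are reproducible

donor_ids = list(range(1, 10_001))  # hypothetical 10,000-record file
random.shuffle(donor_ids)

half = len(donor_ids) // 2
control_panel = set(donor_ids[:half])  # gets the usual X communications
test_panel = set(donor_ids[half:])     # gets X + 1 communications

# At the end of the period, compare *total* net revenue by panel,
# not per-piece ROI or response rate.
```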

I once helped lead a test where we took mail pieces out of our schedule during membership recruitment.  We would send a piece or two, then wait to see if those donors would donate before sending to them again, to make sure that we were addressing them properly as either a renewed donor or as someone who had not yet renewed.  Each individual piece in the resting membership series had a significantly better ROI and better net than the more consistent appeal series.

Yet the appeal series brought in more money for the organization and the mission overall.  I would argue, as I did at the time, this is the actual important metric.  If you want to look at metrics like ROI or response rate, your best opportunity is to send one letter to your single best donor – you’ll get a 100% response rate and ROI percentages in the tens of thousands or more.

But for real life, the goal is more money for more mission.  So overall net is the metric of choice.

The easiest campaigns to add to are the ones that already have a multistage component.  Let’s say you have a matching gift campaign that goes mail piece 1, email 1, mail piece 2, email 2 (with two weeks between each).  A way of testing up would be to look at doing mail piece 1, email 1 + mail piece 1.5, mail piece 2 + email 1.5, email 2 (so there’s still two weeks between each set of communications, but they double up in the middle).  That would be adding a mail piece and an email and if you test both of these with net as your goal, you will have a better framework for the campaign in the following year as well as for additional testing throughout the year.

With email-only campaigns, there’s another way of checking whether you are over-emailing your file – looking to see if your total opens and clicks fall.  There is a point at which open rates and click rates will begin to fall; however, you shouldn’t worry too much until adding another email not only lowers your open and click rates but lowers your total number of opens and clicks (similar to a focus on total net, rather than net per piece).
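To see the difference between the rate and the total, here’s a tiny sketch with entirely made-up numbers (illustrative only, not the HubSpot data below):

```python
# Entirely hypothetical rates, purely to illustrate rate vs. total.
list_size = 10_000

for emails_per_month, open_rate in [(4, 0.20), (8, 0.17), (16, 0.15),
                                    (30, 0.10), (45, 0.06)]:
    total_opens = int(list_size * emails_per_month * open_rate)
    print(f"{emails_per_month:>2}/mo at {open_rate:.0%} -> {total_opens:,} opens")

# The open *rate* falls the whole way, but *total* opens keep climbing
# until roughly daily sends (30/mo = 30,000), then fall (45/mo = 27,000).
```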

This tipping point in email is probably well past where you think it is.  HubSpot did a study of the impact of emails per month on both open and click-through rates.  The sweet spot with the highest open and click rates was between 15 and 30 emails per month.

That’s right – opens and clicks went up until you got into the range of daily emails.  Things went downhill after 30 emails per month.  So if you are sending more than daily emails (on any day but December 31 or the last day of a matching opportunity), you might be emailing too much – so take that as a cautionary tale for the .0001% of you who are doing this.  For the other 99.9999%, hopefully this gives support for the business case for testing up on your emails.

There are three tricks to cross-platform testing:

  1. There is a whole science of attribution testing. If you have the ability to look at this literature and your data systems will support it, go for it.  However, most organizations in my experience don’t have all of their data in the same place initially, making this exceedingly hard.  Thus, this sort of testing up/down for cadence should look at sources of revenue by audience test panel rather than by the medium through which the donation is made.  You may be surprised how much adding a mail piece increases your online revenue or adding a telemarketing cycle boosts the mail piece.
  2. Unlike with strictly piece-based attributes, I’d argue you have to test every cell here because there are interactions among the means of communication. It may be that mail + mail is better than mail and mail + phone is better than mail, but that when you have mail + phone + mail, you have diminishing returns that don’t compensate for doing both mail pieces.
  3. You will have to be vigilant about the creation of your testing cells. As much as you would like to call everyone who has a phone number, email everyone who has an email address, and use those who have neither on file as a control audience, those are different types of donors.  Pew has a great summary of the non-Internet users of the US.  Even if you looked just at the age and income variables, you can see how this would make your control audience look very different from your test cells.  In reverse, 66% of 25-29 year olds live in households with no landline, compared with 14% of those 65 and older, according to the National Center for Health Statistics.

    So, if you think of the average person for whom you have a phone number but not an email address, that person looks very different from the one for whom you have an email address but not a phone number.  Thus, you have to either control for all demographic variables in your assessment (hard) or split test people by the means of communication you have available (marginally easier), as in the sketch below.
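A minimal sketch of that split, assuming hypothetical flags for which contact points are on file:

```python
import pandas as pd

# Hypothetical file: which contact points we have for each donor.
file_df = pd.DataFrame({
    "donor_id": [1, 2, 3, 4, 5, 6],
    "has_phone": [True, True, False, True, False, True],
    "has_email": [True, False, True, True, True, True],
})

# Split test only among donors reachable by BOTH channels, so the test
# and control cells aren't demographically different populations.
eligible = file_df[file_df["has_phone"] & file_df["has_email"]]
test_cell = eligible.sample(frac=0.5, random_state=42)
control_cell = eligible.drop(test_cell.index)

print(len(test_cell), len(control_cell))  # e.g., 2 and 1 with this toy file
```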

Thanks for reading and be sure to let me know at nick@directtodonor.com what future topics you’d like to see.
