Saving money with DIY analytics

I probably should not be the person talking about DIY.  I have a T-shirt with a bit of every paint color I’ve ever painted a room with, because I am physically incapable of not dripping on myself.  And that is minor compared to some of the crimes against home-anity I’ve committed.

Let me take the opportunity to apologize to everyone who has ever bought a house I’ve worked on.  I hope the electrical burns have healed by now.

But I do believe in DIY analytics and tricks to save money.

You can and should be using professionally produced models.  Many of them will help save you money and/or produce additional revenues.

But you can do a few things on your own to avoid breaking the bank, speed the rate of progress, or both.

Here are some cost-saving things you can do in your own spreadsheet:

Any others that Direct to Donor readers have used?  Please let me know at nick@directtodonor.com so I can share with the community.


Creating useful donor surveys

In my DMA Leadership Conference talk, I said that people who listened to what donors say they want in donor surveys deserve to be lied to.  That was obviously too harsh – what I should have said is that they deserve to be misled.

Because people (not just donors, but all human beings*) aren’t meaning to lie to you; they just don’t know what their true motivation is.  As we’ve seen, emotional reaction happens 6,000 times faster than rational thought.  So unless someone is doing System 2 thinking, where they rationally consider all alternatives, the role reason plays in this process is coming up with the best possible justification of a decision already made.

Consider a study that asked people to rank their top 16 motivations.  Sex was rated #14; wealth was dead last.  Then they looked at actual subconscious motivators of decisions.  Sex was rated #1 and wealth was rated #5.

This should be considered no surprise to people who have met, well, ya know, people.  But it was a surprise to the people themselves, who think themselves chaste and incorruptible, but in reality dream of having very special moments in Scrooge McDuck’s vault.


But that doesn’t mean all donor surveys are bad – far from it.  It just means that, in a statement that may get me arrested by the Tautology Police**:

Bad donor surveys are bad.  Good donor surveys are good.

Common traps in donor surveys:

  • Talking only to current donors. You want to talk to people who stopped giving as well, to the extent that they will talk to you.  After all, you are looking for the difference between these two groups.  Trying to define who your good donors are without talking to former donors is like saying the reason that Fortune 500 companies are successful is because they have employees and offices.
  • Asking donors to analyze why they did what they did. They don’t know.  So they are going to try to figure out what answer someone like them would generally say or what they think you want to hear.  Neither is helpful to you.
  • Asking donors what is most important to them. Clearly, from the above, the answer is sex.  Looking only at your limited options, however, they will probably make mistakes in determining what is important to them, similar to the poor people who thought that sex and wealth (aka Genie Wishes #1 and #2) didn’t impact them.

So how do you construct a survey that gets to these important points?  You are going to set up your survey so that you can run a regression analysis.****  If you need help with how to do this, check out our post on basic regression.

You will need a dependent variable.  Ideally, this will be donation behavior because it is a clear expression of the behavior you are trying to impact.  If not, an overall satisfaction score with the organization will be generally OK, as it should correlate strongly with donation behavior.

For your independent variables, ask about aspects of your organization.  So, for example, “have you ever called X Organization about your donations?”, “did you receive a thank you note for each donation you made?”, “have you been to X’s Web site?”, “how many days did it take to get your thank you note for your last gift?”, etc.

The powerful thing about regression analysis is that it will help you figure out both how people feel about their experience and how important that experience is to them.  For example, my guess is that for most organizations, the number of days it took to get a thank you will be a good predictor of retention.  Since the analysis tells you the strength of that association, you can invest the right amount of resources into that area versus new donor welcome packages or donor relations staff or database infrastructure and the like.
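To make this concrete, here’s a minimal sketch of that regression in Python with numpy.  The survey columns and the responses are entirely made up for illustration; in practice you would export your real survey answers and code them as numbers.

```python
import numpy as np

# Toy survey responses: each row is one donor.
# Columns: days to get thank-you note, received a note (1/0), visited site (1/0)
X = np.array([
    [3, 1, 1],
    [14, 1, 0],
    [30, 0, 0],
    [5, 1, 1],
    [21, 0, 1],
    [2, 1, 0],
], dtype=float)

# Dependent variable: overall satisfaction score (higher is better),
# a stand-in for donation behavior
y = np.array([10.75, 7.5, 2.5, 10.25, 5.25, 10.5])

# Add an intercept column and fit ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

intercept, b_days, b_note, b_site = coefs
print(f"days-to-thank-you coefficient: {b_days:.2f}")
```

The sign and size of each coefficient tell you which experiences move satisfaction and by how much; a strongly negative coefficient on thank-you lag, for instance, would argue for investing in faster acknowledgment.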


* Yes, non-donors are also considered human beings – just slightly lesser ones.

** Motto: Enforcing through enforcement since Socrates.***

*** Former motto: Our motto is our motto.

**** Or other modeling if you are feeling fancy.


A short update on promiscuously charitable donors

First, I need to acknowledge a mistake. A much beloved former board member called me on the phrase “charitably promiscuous” on Tuesday. In thinking more about this, this probably should have been “promiscuously charitable” in order to mean what I meant to mean. As it stands, I’m probably going to have some interesting search engine implications for a while.

So I’m leaving it there — as it isn’t my goal to rewrite history — but admit my mistake here.

Second, I received in my inbox Wednesday an email from Apogee talking about results from a new study they did of ten non-profits’ donor bases. They looked at these donors’ behavior in their cooperative. Their results?

“On average, within the past year these 24-month donors have given to 3 charities. Over their lifetime in Apogee, they’ve given to 10 charities.”

“Approximately 70% to 80% of these donors have made a contribution within their core category in the past 12 months, but at least 10% of each organization’s 24-month donors donated to six other categories as well within the past year.”

“On average, only 31% of the total amounts contributed by each organization’s donors were made within category. The percentages fluctuated with 26% being the lowest and 46.5% being the highest.”

The full study is gated, but you can sign up to receive it here.  Don’t worry: there is an opt-out link should you wish it.

So, however you want to say it, our donors give to a lot of different organizations, some related to our cause, some not.  Since I was using older data on Tuesday to make this point, I wanted to give out a quick update.


Learning from political fundraising: hypercustomization

On the path to his win in Iowa, Ted Cruz took an unusual position for a presidential candidate. He spoke out against fireworks regulations.

Usually, Iowa contests focus on broad national issues that a person would be expected to lead on as president (plus ethanol).  Fireworks rank as a national issue somewhere around garbage collection and why-don’t-they-do-something-about-that-tacky-display-of-Christmas-lights-on-Steve-and-Janice’s-house.

But from a data perspective, the Cruz campaign knew its supporters.  There’s a great article on this here.  Here’s a quote:

“They had divided voters by faction, self-identified ideology, religious belief, personality type—creating 150 different clusters of Iowa caucus-goers—down to sixty Iowa Republicans its statistical models showed as likely to share Cruz’s desire to end a state ban on fireworks sales.

Unlike most of his opponents, Cruz has put a voter-contact specialist in charge of his operation, and it shows in nearly every aspect of the campaign he has run thus far and intends to sustain through a long primary season. Cruz, it should be noted, had no public position on Iowa’s fireworks law until his analysts identified sixty votes that could potentially be swayed because of it.”

As we unpack this, there are several lessons we nonprofits can take from this operation:

The leadership role of direct marketing.  Cruz’s campaign is run by a direct marketing specialist.  Contrast this with Marco Rubio’s campaign, which is run by a general consultant, or Jeb Bush’s, which was run by a communications specialist.  As a result, analytics and polling in the campaign are skewed not toward what generalized messages do best with a focus group or are least offensive to the greatest number of people, but toward what moves individual voters to act.

In fact, in the campaign, the analytics team has a broader set of responsibilities than normal.  Analytics drive targeting decisions online and offline.

The imperative to know your constituents.  Much political polling is focused on knowing voters in the aggregate.  The Cruz campaign wanted to know them specifically.  So they gathered people who were supporters and asked them about their local concerns.  This came up with 77 different ideas, including red-light cameras and, as you probably guessed, fireworks bans.  We’ve talked about knowing your constituents by their deeds and by asking them; what’s important about this example is the specificity of the questions.  It’s not “what do you like or dislike”; it’s “what do you care about.”

Testing to know potential constituents.  Once the campaign had these ideas, they tested them online with Facebook ads.  The ads weren’t specific to the Cruz campaign, but rather asked people to sign up for more information about that issue.  Once they had these data, they not only had specific knowledge of what people cared about, but also the grist for the mill of data operations that could model Iowa voters and their key issues.

Focusing on actual goals.  Cruz’s end goal is to drive voters, just like ours is to drive donations.  By simplifying things down to what gets people to pull their levers/hit the button/punch the chad, they had a crystallizing focus.  One can debate whether this is a good thing, as the campaign sent out a controversial Voting Violation mailing that attempted to shame infrequent voters with Cruz leanings to the polls.  (It should be noted that these mailings are part of campaign lore — they’ve been tested and found to be very efficient, but few campaigns have ever wanted the backlash that inevitably comes from them.)  But that focus on things that matter, rather than vanity metrics like Facebook likes, helps with strategy.

Hypertargeting: All of this led to some of the most targeted direct marketing that has been seen in the political world.  When telemarketing was employed for particular voters, not only would the message reflect what they cared about (e.g., fireworks bans) but also why they cared about it (e.g., missed fun at 4th of July versus what seems to some as an arbitrary attack on liberty).  This came from both people’s own survey results and what models indicated would matter to them.

So now, let’s look at this in a nonprofit direct marketing context.  How well do you know your donors and potential donors?  Or how well do you really know them?  And how well do you play that back to them?

I’ve frequently advocated here playing back tactics to donors that we know work for them and focusing our efforts on mission areas and activities we know they will support at a segment level.

But this is a different game altogether.  The ability to project not only what someone will support, but why they will, and to design mail pieces, call scripts, and emails that touch their hearts will be a critical part of what we do.  And once you have this information, it’s cheap to do: if you are sending a mail piece or making a phone call already, it’s simplicity itself to change out key paragraphs that will make the difference in the donation decision.

This also applies in efforts to get donors to transition from one-time giving to monthly giving or mid-major gift programs.

So, how can you, today, get smarter about your donors and show them you are smarter about them?


Getting donor intelligence by asking your donors

Yesterday, I said you can get a good idea of who your donor is through their actions.  The trick here is that you will never find donor motivations for which you aren’t already testing.  This is for the same reason that you can’t determine where to build a bridge by sitting at the river and looking for where the cars drive in trying to float across it, Oregon-Trail-style.


Damn it, Oregon Trail.  The Native American guide told me to try to float it.
Don’t suppose that was his minor revenge for all that land taking and genocide?

To locate a bridge, you have to ask people to imagine where they would drive across a bridge, if there were a bridge.  This gives you good news and bad news: good news, you can get information you can’t get from observation; bad news, you get what people think they would do, rather than what they actually will do.

True story: I once asked people what they would do if they received this particular messaging in an unsolicited mail piece.  Forty-two percent said they would donate.  My conclusion — about 40% of the American public are liars — may have been a bit harsh.  What I didn’t know then but know now is that people are often spectacularly bad at predicting their own behavior, myself included.  (“I will only eat one piece of Halloween candy, even though I have a big bucket of it just sitting here.”)

There is, of course, a term for this (hedonic forecasting) and named biases in it (e.g., impact bias, empathy gap, Lombardi sweep, etc.).  But it’s important to highlight here that listening to what people think they think alone is perilous.  If you do it, you can launch the nonprofit equivalent of the next New Coke.

“The mind knows not what the tongue wants. […] If I asked all of you, for example, in this room, what you want in a coffee, you know what you’d say? Every one of you would say ‘I want a dark, rich, hearty roast.’ It’s what people always say when you ask them what they want in a coffee. What do you like? Dark, rich, hearty roast! What percentage of you actually like a dark, rich, hearty roast? According to Howard, somewhere between 25 and 27 percent of you. Most of you like milky, weak coffee. But you will never, ever say to someone who asks you what you want – that ‘I want a milky, weak coffee.’”  — Malcolm Gladwell

With those cautions in mind, let’s look at what surveys and survey instruments are good for and not good for.

First, as mentioned, surveys are good for finding what people think they think.  They are not good for finding what people will do.  If you doubt this, check out Which Test Won, which shows two versions of a Web page and asks you to pick out which version performed better.  I venture to say that anyone getting over two-thirds of these right has been unplugged and can now see the code of the Matrix.  There is an easier and better way to find out what people will do, which is to test; surveys can give you the why.

Surveys are good for determining preferences.  They are not good for explaining those preferences.  There’s a classic study on this using strawberry jam.  When people were asked what their preferences were for jam, their rankings paralleled Consumer Reports’ rankings fairly closely.  When people were asked why they liked various jams and jellies, their preferences diverged from these expert opinions significantly.  The authors write:

“No evidence was found for the possibility that analyzing reasons moderated subjects’ judgments. Instead it changed people’s minds about how they felt, presumably because certain aspects of the jams that were not central to their initial evaluations were weighted more heavily (e.g., their chunkiness or tartness).”

This is not to say that you shouldn’t ask the question of why; it does mean you need to ask the question of why later and in a systematic way to avoid biasing your sample.

Surveys are good for both individual preferences and group preferences.  If you have individual survey data on preferences, you absolutely should append these data to your file and make sure you are customizing your reasons to give to the individual’s reason why s/he gives.  They also can tease out segments of donors you may not have known existed (and where you should build your next bridge).

Surveys are good for assessing experiences with your organization and bad for determining complex reasons for things.  If you have 18 minutes, I’d strongly recommend this video about how Operation Smile was able to increase retention by finding out what donors’ experiences were with them and which ones were important.  Well worth a watch.

If you do watch it, you’ll see that they look at granular experiences rather than broad questions like “Why did you lapse?” or “Are we mailing too much?”  Those broad questions are too cognitively challenging and encompass too many things.  For example, you will rarely hear from a donor that you should send fewer personalized handwritten notes, because those are opened and sometimes treasured.  What the answer to a frequency question almost always amounts to is an answer about the quality, rather than the quantity, of solicitation.

Surveys are good when they are well crafted and bad when they are poorly crafted.  I know this sounds obvious, but there are crimes against surveys committed every day.  I recently took a survey of employee engagement that was trying to assess whether our voice was heard in an organization.  The question was phrased something like “How likely do you think it is that your survey will lead to change?”

This is what I’d call a hidden two-tail question.  A person could answer no because they are completely checked out at work and fatalistic about management.  Or a person could answer no, because they were delighted to be working there, loved their job, and wanted nothing to change.

Survey design is a science, not an art.  If you have not been trained in it, either get someone who is trained in it to help you, or learn how to do it yourself.  If you are interested in the latter, Coursera has a free online course on questionnaire design here that helped me review my own training (it is more focused on social survey design, but the concepts work similarly).

You’ll notice I haven’t mentioned focus groups.  Focus groups are good for… well, I’m not actually sure what focus groups are good for.  They layer all of the individual biases of group members together, stir them with group dynamic biases like groupthink, unwillingness to express opinions contrary to the group, and the desire to be liked, season them with observer biases and the inherent human nature to guide discussions toward preconceived notions, then serve.

Notice there was no cooking in the instructions.  This is because I’ve yet to see a focus group that is more than half-baked. (rim shot)

My advice if you are considering a focus group: take half of the money you were going to spend on the focus group, set it on fire, inhale the smoke, and write down the “insights” you had while inhaling the money smoke.  You will have the same level of validity in your results at half the cost.

Also, perhaps more helpful, take the time that you would have spent talking to people in a group and talk to them individually.  You won’t get any interference from outside people on their opinions, introverts will open up a bit more in a more comfortable setting and (who knows) they may even like you better at the end of it.  Or if you hire me as a consultant, I do these great things with entrails and the bumps on donors’ heads.

So which do you want to use: surveys or behavior?  Both. Surveys can sometimes come up with ideas that work in theory, but not in practice, as people have ideas of what they might do that aren’t true.  Behavior can show you how things work in practice, but it can be difficult to divine deep insights that generalize to other packages and communications and strategies.  They are the warp and weft of donor insights.


And you shall know your constituents by their deeds

There are two ways to know your constituents better: listening to what they do and asking them what they think. Today, I’ll talk about the former; tomorrow, the latter.

Yesterday’s piece talked about how you can roughly define an individual’s responsiveness by medium, message, and action.  The trick is that we often segment by only one, possibly two, of these.  We have medium covered: most large-scale programs of my acquaintance distinguish among people who are mail, telemarketing, online, multichannel, etc. responders.  And many small-scale programs haven’t begun to integrate medium, so in a way this is its own segmentation.

Sometimes, we will use action as a determiner.  We’ll take our online advocates segment and drop it into one of our better-performing donor mail pieces (frequently not customizing the message to advocacy, more’s the pity).

We rarely segment by message, even though picking something that people care about is the most basic precondition of the three.  After all, you may not like telefundraising, but you’d at least listen if the call was immediate and urgent and about something that you care about.  And it’s much easier to get someone to do something they haven’t done before for a cause they believe in than to get them to do something they’ve done many times if they don’t believe in the message.

The good news is that you have your constituents’ voting records, of a sort.  Consider each donation to a communication a vote for that communication and each non-donation (or, if you can get it from email, non-open or non-clickthrough) as a vote against that communication.

[tangent] This is also a helpful technique for when your executive director comes into your office and says “I’ve had five calls today from people who aren’t happy about [insert name of communication here].”  If you reframe it as five people voted against it by calling and five thousand people voted for it by donating, the noisy few are not nearly as concerning.[/tangent]

A proper modeler would use the data from these votes to run a Bayesian model to update continually the priors on whether or not someone would respond to a piece.  As you can probably tell, I’m not a proper modeler.  I prefer my models fast, free, and explainable.  So here’s how I’d use this voting data:

  • Take all of your communications over a 3-5 year period and code them by message.  So for our hypothetical wetlands organization from yesterday, this might be education, research, and conservation.  Hopefully, you don’t have too many communications that mix your messages (people donate to causes, not lists), but if you do, either take it by the primary focus or code it to both messages.
  • Determine the mix of your communications.  Let’s say that over five years this wetlands organization did 25 conservation appeals, 15 education appeals, and 10 research appeals.  This makes the mix 50% conservation, 30% education, and 20% research.
  • Take your donor file and pull out only those people who donated an average of at least once per year over that 3-5 year period.  This will ensure you are looking only at those people who have even close to sufficient data to draw conclusions.
  • Take the coding of communications you have and apply it to the pieces to which the person donated.  Generate a response rate for each type of message for each person on your file.
  • Now, study that list.
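These steps can be sketched in a few lines of Python.  The appeal codings and gift histories below are invented for illustration; in practice they would come from your appeal calendar and your gift file.

```python
from collections import defaultdict

# Hypothetical appeal calendar: appeal id -> message coding
appeals = {
    "A1": "conservation", "A2": "education", "A3": "conservation",
    "A4": "research", "A5": "education", "A6": "conservation",
}

# Hypothetical gift histories: donor -> appeals they gave to
gifts = {
    "donor_1": ["A2", "A5"],              # gave only to education appeals
    "donor_2": ["A1", "A3", "A6", "A4"],  # conservation plus one research gift
}

# Count how many appeals of each message were sent
# (assuming every donor received every appeal)
sent = defaultdict(int)
for msg in appeals.values():
    sent[msg] += 1

def response_rates(donated_ids):
    """Response rate by message for one donor: gifts to a message / appeals sent."""
    gave = defaultdict(int)
    for aid in donated_ids:
        gave[appeals[aid]] += 1
    return {msg: gave[msg] / sent[msg] for msg in sent}

for donor, ids in gifts.items():
    print(donor, response_rates(ids))
```

Donor 1’s 100% education response rate against zeros everywhere else is exactly the pattern the next section describes; the same per-message rates expose the two-message and proportional givers as well.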

In studying that list, you are probably going to find some interesting results:

  • There are going to be some people (a minority of your file but likely a healthy segment) who only gave to one type of message.  And you’ll see the pattern immediately.  Someone who gave eight times over five years to education appeals and never to conservation or research appeals is clearly an education donor.  You will look at all of the other communications you sent this person and all of the people like her in the X-issue-only segments and you will weep a little.  But weep not.  You can now save your costs and these people’s irritation in the future by sending them only the communications about their issue area (with the occasional test to see if their preferences have changed).  It’s only a mistake if you don’t learn from it; if you do learn from it, it’s called testing.
  • You can also probably lump people who gave rarely to other messages in with the X-issue only people.  So if someone gave to nine of the ten research appeals and to only one each of education and conservation, they clearly have a strong research preference.  This is why it’s helpful to look at these data by response rates — you can see where people have ebbs and flows in their support.
  • You will also see people who like two messages, but not a third (or fourth or however many you have; I will warn you to minimize the number of buckets, as otherwise you will not have a large enough sample size).  So if someone gave five times, three to education appeals and two to research appeals, education and research both appeal to this person with a 20% response rate.  However, conservation doesn’t appear to appeal to them, so you can reduce communications in this realm.
  • You’ll also see a contingent of folks who donate to communications in roughly the same proportion that you send them out.  These people can probably be classified as organizational or institutional donors.  It will take far more digging than mere file analysis to figure out what makes this donor tick.

This leads into an important point: these will not get you to why.  Even things like how often a person gives for how long or Target Analytics Group’s Loyalty Insights, which can show if the person is giving uniquely to you or to others, are transactional data.  While useful proxies, they can’t tell you the depth of feeling that someone has for an organization or let you know what ties bind them to you.  To do that, you must ask.  That’s what I’ll cover tomorrow.  But hopefully this gets a little closer to information that will help you customize your donor’s experiences.

 


Learn about your donors by changing one thing

Congratulations!  A constituent joined your organization!  Now what?  

Welcome series!  Then what?

Well, of course, you drop them into the communication channel of their origin right?

As our Direct Marketing Master Yoda* would say:


No. No. No.  Quicker, easier, more seductive.

But in this case, not ideal.  It’s not ideal for the constituent and it’s not ideal for learning more about what this person actually wants — you may be freezing what this person “is” before you’ve had a chance to find out.

The person has already told you that they are responsive to three things:

  • Medium: If they respond to a mail piece, for example, they do not hate mail pieces. It may not be their only, or even their favorite means of communication, but it is one to which they respond.
  • Message: Your mission probably entails multiple things.  Your goal may be wetlands preservation and you work to accomplish this through education, research, and direct conservation.  If someone downloaded your white paper on the current state of wetlands research and your additional research goals, you know that they are responsive to that research message.  It may not be their only or favorite message, but they respond.
  • Action: If someone donates, they are willing to donate.  If they sign a petition, they are willing to petition.  You can guess the rest of this about them perhaps being willing to do other things.

Other than welcome series, which I’ll talk about at another time, you are trying to sail between the Scylla of sending the same thing over and over again and the Charybdis of bombarding people with different, alien messages, media, and asks.

Thus, I would recommend what I’d call the bowling alley approach in honor of Geoffrey Moore, who advocated for a similar approach to entering new markets in his for-profit entrepreneurial classic Crossing the Chasm.

The idea in the for-profit world is that you enter one market with one product.  Once you have a foothold, you try to sell that same market a different product and sell a different market your original product, in the same way that hitting a front bowling pin works to knock down the two behind it.

Here, we play three-dimensional bowling**. The idea behind the non-profit bowling alley, or change one, approach is that you should change only one aspect at a time of your medium, message, and action.

Let’s take our wetlands organization as an example — they work to educate, research, and conserve.  They have people who download white papers and informational packets, people who take advocacy actions, and donors.  And their means of communication are mail, phone, and online.

Let’s further take a person who downloads a white paper on research online and provides her mail and email address.  The usual temptation would be to drop her into the regular email newsletter and into the warm lead acquisition mail stream (and maybe to even do a phone append to call her).

But this would not be the best approach: you would be taking someone who, for all you know, is interested only in one medium, message, and action and asking them for something completely different.

Rather, it would be better if at first you probe other areas of interest.  Ideally, you would ask her:

  • Online for downloading additional information about research (same medium, message, and action)
  • Online for advocacy actions and donations related to research (same medium and message; different action)
  • Online for downloading information about education and conservation (same medium and action; different message)
  • In the mail and on the phone for getting additional information about research (same message and action; different medium)

Obviously, this last part is not practical; mail and phone are too expensive to not have a donation ask involved. However, you could make the mail and phone asks specific to “we need your help to help make our research resources available not just to you, but to policymakers across the country” — tying it as directly as possible to where their known area of interest.
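If it helps to see the combinatorics, here is a small sketch of the change-one rule.  The media, messages, and actions are the hypothetical wetlands organization’s; the starting point is the white-paper downloader above.

```python
from itertools import product

media = ["online", "mail", "phone"]
messages = ["research", "education", "conservation"]
actions = ["download", "advocacy", "donate"]

# What she has already done: an online download of research material
known = ("online", "research", "download")

def changes_one(candidate, base):
    """True if the candidate touch differs from the base in exactly one dimension."""
    return sum(a != b for a, b in zip(candidate, base)) == 1

next_touches = [c for c in product(media, messages, actions)
                if changes_one(c, known)]
print(next_touches)
```

Each of the six candidate touches differs from what she has already done in exactly one of medium, message, or action, which is the whole of the rule.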

Over time, you should get a strong picture of this person.  Maybe they are willing to do anything for your organization by any means as long as it is focused on your research initiatives.  Maybe they are willing to engage with you about anything, as long as it is only online.  And maybe they like research and conservation, but not education; online and mail, but not phone; and getting information and donating, but not engaging their representatives.

Taking it one step at a time not only helps you learn this over time, but also helps you learn it without culture shock.  If someone downloads a white paper and you ask them to take an advocacy action on that same issue online, they may not be interested, but they likely see the throughline to the action they took.  If they download a white paper and get a phone call for an unrelated action, they likely will not.

It’s the difference between a donor response of “I can see why you’d think that, but no thanks” and “what the hell?” (followed by the constituent equivalent of getting a drink thrown in your face).

It’s also why I recommend going back to the original communication mechanism for lapsed donors in the lapsed donor reactivation post.  In that case, it may be literally the one and only thing you know that works.

You may say that you don’t have the resources to do five different versions of each mail piece or telephone script.  But you can do this inexpensively if you are varying your mail messages throughout the year.  For a warm lead acquisition strategy, simply make sure the advocacy people get the advocacy mail piece and not the others for now.  If you find out some of them are responsive to a mail donation ask, you can ramp up cadence later, but for now, your slower cultivation and learning strategy can pay dividends.

This also helps prevent a common mistake: creating groups like “online advocates,” “white paper downloaders,” etc. and then mailing them without cross-suppression.  If you send each of three groups a monthly mail piece and someone is in all three groups, they may end up getting 36 mail pieces if you don’t cross-suppress (that is, prioritize these groups into like packages instead of everyone in a group getting everything).
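A minimal sketch of that prioritization logic, with made-up group names and addresses:

```python
# Priority order decides which package wins when someone is in several groups
priority = ["online_advocates", "white_paper_downloaders", "newsletter"]

# Hypothetical group membership
groups = {
    "online_advocates": {"amy@example.com", "bo@example.com"},
    "white_paper_downloaders": {"amy@example.com", "cat@example.com"},
    "newsletter": {"amy@example.com", "bo@example.com", "dee@example.com"},
}

assigned = {}   # person -> the single package they will receive
seen = set()
for name in priority:
    members = groups[name] - seen   # suppress anyone already assigned upstream
    for person in members:
        assigned[person] = name
    seen |= members

print(assigned)
```

Because each person is assigned at the first (highest-priority) group that contains them, nobody receives more than one package per cycle, no matter how many groups they belong to.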

Tomorrow, we’ll talk about how to get this type of intelligence from what you’ve already done.

* Don’t believe me?  Check Yoda’s outstanding donor newsletter here

** Science fiction always has people playing three-dimensional chess, but not three-dimensional bowling.  Why or why not?  Discuss.
