Saving money with DIY analytics

I probably should not be the person talking about DIY.  I have a T-shirt with a bit of every paint color I’ve ever painted a room with, because I am physically incapable of not dripping on myself.  And that is minor compared to some of the crimes against home-anity I’ve committed.

Let me take the opportunity to apologize to everyone who has ever bought a house I’ve worked on.  I hope the electrical burns have healed by now.

But I do believe in DIY analytics and tricks to save money.

You can and should be using professionally produced models.  Many of them will help save you money and/or produce additional revenues.

But you can do a few things on your own to avoid breaking the bank, speed the rate of progress, or both.

Here are some cost-saving things you can do in your own spreadsheet:

Any others that Direct to Donor readers have used?  Please let me know at nick@directtodonor.com so I can share with the community.


Creating useful donor surveys

In my DMA Leadership Conference talk, I said that people who listened to what donors say they want in donor surveys deserve to be lied to.  That was obviously too harsh – what I should have said is that they deserved to be misled.

Because people (not just donors, but all human beings*) aren’t meaning to lie to you; they just don’t know what their true motivation is.  As we’ve seen, emotional reaction happens 6000 times faster than rational thought.  So unless someone is doing System Two thought, where they are rationally considering all alternatives, the role reason plays in this process is coming up with the best possible justification of a decision already made.

Consider a study that asked people to rank their top 16 motivations.  Sex was rated #14; wealth was dead last.  Then they looked at actual subconscious motivators of decisions.  Sex was rated #1 and wealth was rated #5.

This should come as no surprise to people who have met, well, ya know, people.  But it was a surprise to people themselves, who think themselves chaste and incorruptible, but in reality dream of having very special moments in Scrooge McDuck’s vault.


But that doesn’t mean all donor surveys are bad – far from it.  It just means that, in a statement that may get me arrested by the Tautology Police**:

Bad donor surveys are bad.  Good donor surveys are good.

Common traps in donor surveys:

  • Talking only to current donors. You want to talk to people who stopped giving as well, to the extent that they will talk to you.  After all, you are looking for the difference between these two groups.  Trying to define who your good donors are without talking to former donors is like saying the reason that Fortune 500 companies are successful is because they have employees and offices.
  • Asking donors to analyze why they did what they did. They don’t know.  So they are going to try to figure out what answer someone like them would generally say or what they think you want to hear.  Neither is helpful to you.
  • Asking donors what is most important to them. Clearly, from the above, the answer is sex.  Even looking only at your limited set of options, they will probably make mistakes in determining what is important to them, similar to the poor people who thought that sex and wealth (aka Genie Wishes #1 and #2) didn’t impact them.

So how do you construct a survey that gets to these important points?  You are going to set up your survey so that you can run a regression analysis.****  If you need help with how to do this, check out our post on basic regression.

You will need a dependent variable.  Ideally, this will be donation behavior because it is a clear expression of the behavior you are trying to impact.  If not, an overall satisfaction score with the organization will be generally OK, as it should correlate strongly with donation behavior.

For your independent variables, ask about aspects of your organization.  So, for example, “have you ever called X Organization about your donations?”, “did you receive a thank you note for each donation you made?”, “have you been to X’s Web site?”, “how many days did it take for you to get your thank you note on your last gift?”, etc.

The powerful thing about regression analysis is that it will help you figure out both how people feel about their experience and how important that experience is to them.  For example, my guess is that for most organizations, the number of days it took to get a thank you will be a good predictor of retention.  Since the analysis tells you the strength of that association, you can invest the right amount of resources into that area versus new donor welcome packages or donor relations staff or database infrastructure and the like.
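If you’d rather run this outside your survey tool or spreadsheet, here is a minimal sketch in Python of that regression, assuming a hypothetical export and made-up column names; treat it as an illustration of the approach, not a finished analysis.

```python
# A minimal sketch, not a finished analysis.  Assumes a hypothetical file
# "survey_responses.csv" with one row per respondent, survey answers already
# coded as numbers (e.g., 1/0 for yes/no), and made-up column names.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")

# Dependent variable: donation behavior (or an overall satisfaction score).
y = df["gifts_last_12_months"]

# Independent variables: experiences with the organization from the survey.
X = df[["days_to_thank_you", "ever_called_us", "visited_website",
        "received_thank_you_every_gift"]]
X = sm.add_constant(X)  # add an intercept term

# Ordinary least squares; the coefficients show the direction and strength
# of each experience's association with giving.
model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())
```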


* Yes, non-donors are also considered human beings – just slightly lesser ones.

** Motto: Enforcing through enforcement since Socrates.***

*** Former motto: Our motto is our motto.

**** Or other modeling if you are feeling fancy.


A short update on promiscuously charitable donors

First, I need to acknowledge a mistake. A much beloved former board member called me on the phrase “charitably promiscuous” on Tuesday. In thinking more about this, this probably should have been “promiscuously charitable” in order to mean what I meant to mean. As it stands, I’m probably going to have some interesting search engine implications for a while.

So I’m leaving it there — as it isn’t my goal to rewrite history — but admit my mistake here.

Second, I received in my inbox Wednesday an email from Apogee talking about results from a new study they did of ten non-profits’ donor bases. They looked at these donors’ behavior in their cooperative. Their results?

“On average, within the past year these 24-month donors have given to 3 charities. Over their lifetime in Apogee, they’ve given to 10 charities.”

“Approximately 70% to 80% of these donors have made a contribution within their core category in the past 12 months, but at least 10% of each organization’s 24-month donors donated to six other categories as well within the past year.”

“On average, only 31% of the total amounts contributed by each organization’s donors were made within category. The percentages fluctuated with 26% being the lowest and 46.5% being the highest.”

The full study is gated, but you can sign up to receive it here.  Don’t worry: there is an opt-out link should you wish it.

So, however you want to say it, our donors give to a lot of different organizations, some related to our cause, some not.  Since I was using older data on Tuesday to make this point, I wanted to give out a quick update.


Learning from political fundraising: hypercustomization

On the path to his win in Iowa, Ted Cruz took an unusual position for a presidential candidate. He spoke out against fireworks regulations.

Usually, Iowa contests focus on broad national issues that a person would be expected to lead on as president (plus ethanol).  Fireworks rank as a national issue somewhere around garbage collection and why-don’t-they-do-something-about-that-tacky-display-of-Christmas-lights-on-Steve-and-Janice’s-house.

But from a data perspective, the Cruz campaign knew its supporters.  There’s a great article on this here.  Here’s a quote:

“They had divided voters by faction, self-identified ideology, religious belief, personality type—creating 150 different clusters of Iowa caucus-goers—down to sixty Iowa Republicans its statistical models showed as likely to share Cruz’s desire to end a state ban on fireworks sales.

Unlike most of his opponents, Cruz has put a voter-contact specialist in charge of his operation, and it shows in nearly every aspect of the campaign he has run thus far and intends to sustain through a long primary season. Cruz, it should be noted, had no public position on Iowa’s fireworks law until his analysts identified sixty votes that could potentially be swayed because of it.”

As we unpack this, there are several lessons we nonprofits can take from this operation:

The leadership role of direct marketing.  Cruz’s campaign is run by a direct marketing specialist.  Contrast this with Marco Rubio’s campaign, which is run by a general consultant, or Jeb Bush’s, which was run by a communications specialist.  As a result, analytics and polling in the campaign are skewed not toward generalized messages that do best with a focus group or are least offensive to the greatest number of people, but toward what moves specific, identifiable voters to act.

In fact, in the campaign, the analytics team has a broader set of responsibilities than normal.  Analytics drive targeting decisions online and offline.

The imperative to know your constituents.  Much political polling is focused on knowing voters in the aggregate.  The Cruz campaign wanted to know them specifically.  So they gathered people who were supporters and asked them about their local concerns.  This came up with 77 different ideas, including red-light cameras and, as you probably guessed, fireworks bans.  We’ve talked about knowing your constituents by their deeds and by asking them; what’s important about this example is the specificity of the questions.  It’s not “what do you like or dislike”; it’s “what do you care about.”

Testing to know potential constituents.  Once the campaign had these ideas, they tested them online with Facebook ads.  The ads weren’t specific to the Cruz campaign, but rather asked people to sign up for more information about that issue.  Once they had these data, they not only had specific knowledge of what people cared about, but the grist for the mill of data operations that could model Iowa voters and their key issues.

Focusing on actual goals.  Cruz’s end goal is to drive voters, just like ours is to drive donations.  By simplifying things down to what gets people to pull their levers/hit the button/punch the chad, they had a crystallizing focus.  One can debate whether this is a good thing, as the campaign sent out a controversial Voting Violation mailing that attempted to shame infrequent voters with Cruz leanings to the polls.  (It should be noted that these mailings are part of campaign lore — they’ve been tested and found to be very efficient, but few campaigns have ever wanted the backlash that inevitably comes from them.)  But that focus on things that matter, rather than vanity metrics like Facebook likes, helps with strategy.

Hypertargeting: All of this led to some of the most targeted direct marketing that has been seen in the political world.  When telemarketing was employed for particular voters, not only would the message reflect what they cared about (e.g., fireworks bans) but also why they cared about it (e.g., missed fun at 4th of July versus what seems to some as an arbitrary attack on liberty).  This came from both people’s own survey results and what models indicated would matter to them.

So now, let’s look at this in a nonprofit direct marketing context.  How well do you know your donors and potential donors?  Or how well do you really know them?  And how well do you play that back to them?

I’ve frequently advocated here playing back tactics to donors that we know work for them and focusing our efforts on mission areas and activities we know they will support at a segment level.

But this is a different game altogether.  The ability to project not only what someone will support, but why they will, and designing mail pieces, call scripts, and emails that touch their hearts will be a critical part of what we do.  And once you have this information, it’s cheap to do: if you are sending a mail piece or making a phone call already, it’s simplicity itself to change out key paragraphs that will make the difference in the donation decision.

This also applies in efforts to get donors to transition from one-time giving to monthly giving or mid-major gift programs.

So, how can you, today, get smarter about your donors and show them you are smarter about them?


Getting donor intelligence by asking your donors

Yesterday, I said you can get a good idea of who your donor is through their actions.  The trick here is that you will never find donor motivations for which you aren’t already testing.  This is for the same reason that you can’t determine where to build a bridge by sitting at the river and watching where cars drive in and try to float across, Oregon-Trail-style.


Damn it, Oregon Trail.  The Native American guide told me to try to float it.
Don’t suppose that was his minor revenge for all that land taking and genocide?

To locate a bridge, you have to ask people to imagine where they would drive across a bridge, if there were a bridge.  This gives you good news and bad news: good news, you can get information you can’t get from observation; bad news, you get what people think they would do, rather than what they actually will do.

True story: I once asked people what they would do if they received this particular messaging in an unsolicited mail piece.  Forty-two percent said they would donate.  My conclusion — about 40% of the American public are liars — may have been a bit harsh.  What I didn’t know then but know now is that people are often spectacularly bad at predicting their own behavior, myself included.  (“I will only eat one piece of Halloween candy, even though I have a big bucket of it just sitting here.”)

There is, of course, a term for this (hedonic forecasting) and named biases in it (e.g., impact bias, empathy gap, Lombardi sweep, etc.).  But it’s important to highlight here that listening to what people think they think alone is perilous.  If you do it, you can launch the nonprofit equivalent of the next New Coke.

“The mind knows not what the tongue wants. […] If I asked all of you, for example, in this room, what you want in a coffee, you know what you’d say? Every one of you would say ‘I want a dark, rich, hearty roast.’ It’s what people always say when you ask them what they want in a coffee. What do you like? Dark, rich, hearty roast! What percentage of you actually like a dark, rich, hearty roast? According to Howard, somewhere between 25 and 27 percent of you. Most of you like milky, weak coffee. But you will never, ever say to someone who asks you what you want – that ‘I want a milky, weak coffee.’”  — Malcolm Gladwell

With those cautions in mind, let’s look at what surveys and survey instruments are good for and not good for.

First, as mentioned, surveys are good for finding what people think they think.  They are not good for finding what people will do.  If you doubt this, check out Which Test Won, which shows two versions of a Web page and asks you to pick which one performed better.  I venture to say that anyone getting over two-thirds of these right has been unplugged and can now see the code of the Matrix.  There is an easier and better way to find out what people will do, which is to test; surveys can give you the why.

Surveys are good for determining preferences.  They are not good for explaining those preferences.  There’s a classic study on this using strawberry jam.  When people were asked what their preferences were for jam, their rankings paralleled Consumer Reports’ rankings fairly closely.  When people were asked why they liked various jams and jellies, their preferences diverged from these expert opinions significantly.  The authors write:

“No evidence was found for the possibility that analyzing reasons moderated subjects’ judgments. Instead it changed people’s minds about how they felt, presumably because certain aspects of the jams that were not central to their initial evaluations were weighted more heavily (e.g., their chunkiness or tartness).”

This is not to say that you shouldn’t ask the question of why; it does mean you need to ask the question of why later and in a systematic way to avoid biasing your sample.

Surveys are good for both individual preferences and group preferences.  If you have individual survey data on preferences, you absolutely should append these data to your file and make sure you are customizing your reasons to give to the individual’s reason for giving.  They also can tease out segments of donors you may not have known existed (and where you should build your next bridge).

Surveys are good for assessing experiences with your organization and bad for determining complex reasons for things.  If you have 18 minutes, I’d strongly recommend this video about how Operation Smile was able to increase retention by finding out what donors’ experiences were with them and which ones were important.  Well worth a watch.

If you do watch it, you’ll see that they look at granular experiences rather than broad questions like “Why did you lapse?” or “Are we mailing too much?”  These broad questions are too cognitively challenging and encompass too many things.  For example, you rarely hear a donor ask you to send fewer personalized handwritten notes, because those are opened and sometimes treasured.  What the answer to a frequency question almost always turns out to be is an answer about the quality, rather than the quantity, of solicitation.

Surveys are good when they are well crafted and bad when they are poorly crafted.  I know this sounds obvious, but there are crimes against surveys committed every day.  I recently took a survey of employee engagement that was trying to assess whether our voice was heard in an organization.  The question was phrased something like “How likely do you think it is that your survey will lead to change?”

This is what I’d call a hidden two-tail question.  A person could answer no because they are completely checked out at work and fatalistic about management.  Or a person could answer no, because they were delighted to be working there, loved their job, and wanted nothing to change.

Survey design is a science, not an art.  If you have not been trained in it, either get someone who is trained in it to help you, or learn how to do it yourself.  If you are interested in the latter, Coursera has a free online course on questionnaire design here that helped me review my own training (it is more focused on social survey design, but the concepts work similarly).

You’ll notice I haven’t mentioned focus groups.  Focus groups are good for… well, I’m not actually sure what focus groups are good for.  They layer all of the individual biases of group members together, stir them with group dynamic biases like groupthink, unwillingness to express opinions contrary to the group, and the desire to be liked, season them with observer biases and the inherent human nature to guide discussions toward preconceived notions, then serve.

Notice there was no cooking in the instructions.  This is because I’ve yet to see a focus group that is more than half-baked. (rim shot)

My advice if you are considering a focus group: take half of the money you were going to spend on the focus group, set it on fire, inhale the smoke, and write down the “insights” you had while inhaling the money smoke.  You will have the same level of validity in your results for half the costs.

Also, perhaps more helpfully, take the time that you would have spent talking to people in a group and talk to them individually.  You won’t get any interference from other people on their opinions, introverts will open up a bit more in a comfortable setting, and (who knows) they may even like you better at the end of it.  Or if you hire me as a consultant, I do these great things with entrails and the bumps on donors’ heads.

So which do you want to use: surveys or behavior?  Both. Surveys can sometimes come up with ideas that work in theory, but not in practice, as people have ideas of what they might do that aren’t true.  Behavior can show you how things work in practice, but it can be difficult to divine deep insights that generalize to other packages and communications and strategies.  They are the warp and weft of donor insights.


And you shall know your constituents by their deeds

There are two ways to know your constituents better: listening to what they do and asking them what they think. Today, I’ll talk about the former; tomorrow, the latter.

Yesterday’s piece talked about how you can roughly define an individual’s responsiveness by medium, message, and action.  The trick is that we often segment by only one, possibly two, of these.  We have medium covered: most large-scale programs of my acquaintance distinguish among people who are mail, telemarketing, online, multichannel, etc. responders.  And many small-scale programs haven’t begun to integrate medium, so in a way this is its own segmentation.

Sometimes, we will use action as a determiner.  We’ll take our online advocates segment and drop it into one of our better-performing donor mail pieces (frequently not customizing the message to advocacy, more’s the pity).

We rarely segment by message, even though picking something that people care about is the most basic precondition of the three.  After all, you may not like telefundraising, but you’d at least listen if it was immediate and urgent and about something that you care about.  And it’s much easier to get someone to do something they haven’t done before for a cause they believe in than to get them to do something they’ve done many times if they don’t believe in the message.

The good news is that you have your constituents’ voting records, of a sort.  Consider each donation to a communication a vote for that communication and each non-donation (or, if you can get it from email, non-open or non-clickthrough) as a vote against that communication.

[tangent] This is also a helpful technique for when your executive director comes into your office and says “I’ve had five calls today from people who aren’t happy about [insert name of communication here].”  If you reframe it as five people voted against it by calling and five thousand people voted for it by donating, the noisy few are not nearly as concerning.[/tangent]

A proper modeler would use the data from these votes to run a Bayesian model to update continually the priors on whether or not someone would respond to a piece.  As you can probably tell, I’m not a proper modeler.  I prefer my models fast, free, and explainable.  So here’s how I’d use this voting data:

  • Take all of your communications over a 3-5 year period and code them by message.  So for our hypothetical wetlands organization from yesterday, this might be education, research, and conservation.  Hopefully, you don’t have too many communications that mix your messages (people donate to causes, not lists), but if you do, either code it by its primary focus or code it to both messages.
  • Determine the mix of your communications.  Let’s say that over five years this wetlands organization did 25 conservation appeals, 15 education appeals, and 10 research appeals.  This makes the mix 50% conservation, 30% education, and 20% research.
  • Take your donor file and pull out only those people who donated an average of at least once per year over that 3-5 year period.  This will ensure you are looking only at those people who have even close to sufficient data to draw conclusions.
  • Take the coding of communications you have and apply it to the pieces to which the person donated.  Generate a response rate for each type of message for each person on your file (one way to compute this is sketched in the code after this list).
  • Now, study that list.
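If you’d like to generate that list programmatically rather than by hand, here is a rough sketch in Python, assuming two hypothetical exports and made-up column names; a spreadsheet pivot table gets you to the same place.

```python
# A rough sketch of the "voting record" tally, assuming two hypothetical
# exports: appeals.csv (one row per appeal, with a message code) and
# gifts.csv (one row per gift, tied back to an appeal).
import pandas as pd

appeals = pd.read_csv("appeals.csv")   # columns: appeal_id, message (e.g., conservation)
gifts = pd.read_csv("gifts.csv")       # columns: donor_id, appeal_id, gift_date

# The mix of your communications by message (e.g., 50% conservation,
# 30% education, 20% research).
print(appeals["message"].value_counts(normalize=True))

# Keep donors averaging at least one gift per year over a five-year window.
years = 5
gift_counts = gifts.groupby("donor_id").size()
active = gift_counts[gift_counts >= years].index

# Count responses per message for each qualifying donor...
responded = (gifts[gifts["donor_id"].isin(active)]
             .merge(appeals, on="appeal_id")
             .groupby(["donor_id", "message"]).size()
             .unstack(fill_value=0))

# ...and divide by the number of appeals of that message sent, giving a
# per-donor response rate by message.  (This assumes everyone received every
# appeal; use a mailing history table instead if you have one.)
appeals_sent = appeals["message"].value_counts()
response_rates = responded.div(appeals_sent, axis="columns").fillna(0)
print(response_rates.head())
```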

In studying that list, you are probably going to find some interesting results:

  • There are going to be some people (a minority of your file but likely a healthy segment) that only gave to one type of message.  And you’ll see the pattern immediately.  Someone who gave eight times over five years to education appeals and never to conservation or research appeals is clearly an education donor.  You will look at all of the other communications you sent this person and all of the people like her in the X-issue-only segments and you will weep a little.  But weep not.  You can now save your costs and these people’s irritation in the future by sending them only the communications about their issue area (with the occasional test to see if their preferences have changed).  It’s only a mistake if you don’t learn from it; if you do learn from it, it’s called testing.
  • You can also probably lump people who gave rarely to other messages in with the X-issue only people.  So if someone gave to nine of the ten research appeals and to only one each of education and conservation, they clearly have a strong research preference.  This is why it’s helpful to look at these data by response rates — you can see where people have ebbs and flows in their support.
  • You will also see people who like two messages, but not a third (or fourth or however many you have; I will warn you to minimize the number of buckets, as you will not have a large enough sample size otherwise).  So if someone gave five times, three to education appeals and two to research appeals, education and research both appeal to this person with a 20% response rate.  However, conservation apparently doesn’t appeal to them, so you can reduce communications in this realm.
  • You’ll also see a contingent of folks who donate to communications in roughly the same proportion that you send them out.  These people can probably be classified as organizational or institutional donors.  It will take far more digging than mere file analysis to figure out what makes this donor tick.

This leads into an important point: these will not get you to why.  Even things like how often a person gives for how long or Target Analytics Group’s Loyalty Insights, which can show if the person is giving uniquely to you or to others, are transactional data.  While useful proxies, they can’t tell you the depth of feeling that someone has for an organization or let you know what ties bind them to you.  To do that, you must ask.  That’s what I’ll cover tomorrow.  But hopefully this gets a little closer to information that will help you customize your donor’s experiences.

 


Learn about your donors by changing one thing

Congratulations!  A constituent joined your organization!  Now what?  

Welcome series!  Then what?

Well, of course, you drop them into the communication channel of their origin, right?

As our Direct Marketing Master Yoda* would say:


No. No. No.  Quicker, easier, more seductive.

But in this case, not ideal.  It’s not ideal for the constituent and it’s not ideal for learning more about what this person actually wants — you may be freezing what this person “is” before you’ve had a chance to find out.

The person has already told you that they are responsive to three things:

  • Medium: If they respond to a mail piece, for example, they do not hate mail pieces. It may not be their only, or even their favorite means of communication, but it is one to which they respond.
  • Message: Your mission probably entails multiple things.  Your goal may be wetlands preservation and you work to accomplish this through education, research, and direct conservation.  If someone downloaded your white paper on the current state of wetlands research and your additional research goals, you know that they are responsive to that research message.  It may not be their only or favorite message, but they respond.
  • Action: If someone donates, they are willing to donate.  If they sign a petition, they are willing to petition.  You can guess the rest: it may not be the only action they are willing to take, but it is one you know they will.

Other than welcome series, which I’ll talk about at another time, you are trying to sail between the Scylla of sending the same thing over and over again and the Charybdis of bombarding people with different, alien messages, media, and asks.

Thus, I would recommend what I’d call the bowling alley approach in honor of Geoffrey Moore, who advocated for a similar approach to entering new markets in his for-profit entrepreneurial classic Crossing the Chasm.

The idea in the for-profit world is that you enter one market with one product.  Once you have a foothold, you try to sell that same market a different product and a different market your original product, in the same way that hitting the front bowling pin knocks down the two behind it.

Here, we play three-dimensional bowling**. The idea behind the non-profit bowling alley, or change one, approach is that you should change only one aspect at a time of your medium, message, and action.

Let’s take our wetlands organization as an example — they work to educate, research, and conserve.  They have people who download white papers and informational packets, people who take advocacy actions, and donors.  And their means of communication are mail, phone, and online.

Let’s further take a person who downloads a white paper on research online and provides her mail and email address.  The usual temptation would be to drop her into the regular email newsletter and into the warm lead acquisition mail stream (and maybe to even do a phone append to call her).

But this would not be the best approach: you would be taking someone who, for all you know, is interested only in one medium, message, and action and asking them for something completely different.

Rather, it would be better if at first you probe other areas of interest.  Ideally, you would ask her:

  • Online for downloading additional information about research (same medium, message, and action)
  • Online for advocacy actions and donations related to research (same medium and message; different action)
  • Online for downloading information about education and conservation (same medium and action; different message)
  • In the mail and on the phone for getting additional information about research (same message and action; different medium)

Obviously, this last part is not practical; mail and phone are too expensive to not have a donation ask involved. However, you could make the mail and phone asks specific to “we need your help to help make our research resources available not just to you, but to policymakers across the country” — tying it as directly as possible to their known area of interest.
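To make the change-one idea concrete, here is a tiny Python sketch that enumerates every next touch differing from a known response in at most one of medium, message, and action.  The media, messages, and actions listed are illustrative, not your program’s actual ones.

```python
# A tiny sketch of the "change one thing" idea: starting from what a
# constituent has already shown you, list every next touch that differs
# in at most one of medium, message, and action.  Values are illustrative.
from itertools import product

MEDIA = ["online", "mail", "phone"]
MESSAGES = ["research", "education", "conservation"]
ACTIONS = ["download", "advocacy", "donation"]

def next_touches(known):
    """known = (medium, message, action) the person has already responded to."""
    candidates = []
    for combo in product(MEDIA, MESSAGES, ACTIONS):
        changes = sum(a != b for a, b in zip(combo, known))
        if changes <= 1:  # same as before, or exactly one dimension changed
            candidates.append(combo)
    return candidates

# Our white-paper downloader: online / research / download
for touch in next_touches(("online", "research", "download")):
    print(touch)
```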

Over time, you should get a strong picture of this person.  Maybe they are willing to do anything for your organization by any means as long as it is focused on your research initiatives.  Maybe they are willing to engage with you about anything, as long as it is only online.  And maybe they like research and conservation, but not education; online and mail, but not phone; and getting information and donating, but not engaging their representatives.

Taking it one step at a time not only helps you learn this over time, but also helps you learn it without culture shock.  If someone downloads a white paper and you ask them to take an advocacy action on that same issue online, they may not be interested, but they likely see the throughline to the action they took.  If they download a white paper and get a phone call for an unrelated action, they likely will not.

It’s the difference between a donor response of “I can see why you’d think that, but no thanks” and “what the hell?” (followed by the constituent equivalent of getting a drink thrown in your face).

It’s also why I recommend going back to the original communication mechanism for lapsed donors in the lapsed donor reactivation post.  In that case, it may be literally the one and only thing you know that works.

You may say that you don’t have the resources to do five different versions of each mail piece or telephone script.  But you can do this inexpensively if you are varying your mail messages throughout the year.  For a warm lead acquisition strategy, simply make sure the advocacy people get the advocacy mail piece and not the others for now.  If you find out some of them are responsive to a mail donation ask, you can ramp up cadence later, but for now, your slower cultivation and learning strategy can pay dividends.

This also helps prevent a common mistake: creating groups like “online advocates,” “white paper downloaders,” etc., and then mailing them without cross-suppression.  If you send each of three groups a monthly mail piece and someone is in all three groups, they may end up getting 36 mail pieces a year if you don’t cross-suppress (that is, prioritize these groups into like packages instead of everyone in a group getting everything).
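Cross-suppression can be as simple as ranking the groups and keeping each person’s highest-priority membership.  Here is a minimal sketch, assuming a hypothetical membership export and made-up group priorities; your groups and rules will differ.

```python
# A minimal sketch of cross-suppression, assuming a hypothetical export of
# group memberships where one person can appear in several groups.  Each
# person keeps only the package for their highest-priority group.
import pandas as pd

# Lower number = higher priority (your priorities will differ).
PRIORITY = {"online advocates": 1, "white paper downloaders": 2, "newsletter": 3}

contacts = pd.read_csv("group_memberships.csv")  # columns: person_id, group
contacts["priority"] = contacts["group"].map(PRIORITY)

# Keep one row per person: the highest-priority group they belong to.
mail_plan = (contacts.sort_values("priority")
                     .drop_duplicates("person_id", keep="first"))
print(mail_plan[["person_id", "group"]])
```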

Tomorrow, we’ll talk about how to get this type of intelligence from what you’ve already done.

* Don’t believe me?  Check Yoda’s outstanding donor newsletter here

** Science fiction always has people playing three-dimensional chess, but not three-dimensional bowling.  Why or why not?  Discuss.


Getting to the Truth of one database


One Database to rule them all.
One Database to find them.
One Database to bring them all
And in the darkness bind them.*

A beloved former boss of mine once asked the best question I’ve ever heard and may ever hear about databases: “Which database is going to be the Truth?”

Others may call this the database of record, but the Truth is far more evocative.  It encompasses “which database will have all of our people?”, “which database will have all of our donations regardless of source?”, and “which database will be the arbiter and tie-breaker for all constituent record issues?”

This is a necessary pre-condition of donor knowledge.  You will not have true knowledge of a constituent if all of your data isn’t in one place.  And working on donor information without the backend systems to back it up could be a waste of time and effort.

If you are like most nonprofits, you are either laughing or crying at the discussion of one database.  You likely have a few different donor databases by donation type.  Then you have records of people you serve, your email list, your event attendees, and so on.

And, sadly, some of them are necessary.  Some databases do things that other databases will not do.  You may not be able to run your direct mail program out of your online database or vice versa.

So here are some steps you can take to get all of your information in one Truth even if there are multiple databases behind it:

Purge unnecessary databases.  And I mean purge them. Ideally it should be as if your unnecessary database displeased Stalin: it just disappears from history, incorporated into other people’s stories.  To do that:

  • Ask whether another database can do what this database does.  If so, bring the data over and train the relevant parties.  The good news is that often the rogue database in question is just an Excel spreadsheet that can be directly imported into your database of choice.
  • Ask whether another database can do what this database does with modifications.  Rarely is something perfect initially.  You will likely have to create reports for people that they are used to running, but if you are bringing them into a good database, that’s a matter of business rules and set-up, rather than technical fixes.
  • If not, ask if the person can do without what the database can’t do.  You’d be surprised how many things are done because they have been done rather than for any rational reason.

Assuming that you have some databases that can’t be replicated in one big happy database, decide what database is going to be the Truth.  This should have the capacity to store all of your fields, run reports, and do basic data entry.  If you are keeping a separate direct marketing database, the Truth doesn’t need to be able to run a direct marketing program.  But it does need to have the capacity to do the basic functions.

You may say that you don’t have a database that can fulfill this function.  In that case, I would recommend what I call a Traffic Cop database.  This is a database that you can inexpensively put in the center of your multiple databases to get data to and from each of them.  Its job is to make sure every database knows what every other database is doing, to pull out duplicates, and to host change management.

Now, sync the databases to the Truth database.  Sometimes you may be fortunate and be using a database that has existing linkages.  For example, if you have decided that SalesForce is going to be your Truth, there are some pre-existing syncs you can get from their apps.  If not:

  • Start by syncing manually.  That is, export a report from one database and import it into the other.  Then, reverse the process (if you are keeping a database, syncing has to go both ways).  This will allow you to figure out what fields go where and, more importantly, how to translate from one database to the other (e.g., some databases want the date formatted 01/18/2016 and woe be unto you if you forget the zero before the one; others may not have a leading zero, or may have month and day as separate fields, or the like).  A small code sketch of this step follows the list.
  • After you have your process down, you can automate.  This can happen one of two ways: through the database’s APIs or through an automated report from one database that uploads to a location followed by an automated import from the other database.  Both are viable solutions — you would generally prefer the API solution, but you do what you have to do.
  • Make sure you have an effective deduplication process.  It almost goes without saying (and if it doesn’t, check out our PSA for data hygiene here), but data can get messy quickly if you don’t have these in place.
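For the manual sync step above, here is a rough sketch in Python of the field mapping, date reformatting, and crude de-duplication, using hypothetical file and field names; your databases’ import templates will dictate the real details.

```python
# A rough sketch of the manual sync step, assuming hypothetical exports and
# field names: map columns from database A's export to database B's import
# template, fix the date format, and drop obvious duplicates before loading.
import pandas as pd

export = pd.read_csv("database_a_export.csv")

# Field mapping: database A's names -> database B's names (yours will differ).
renamed = export.rename(columns={
    "ConstituentID": "external_id",
    "EmailAddress": "email",
    "GiftDate": "gift_date",
    "GiftAmount": "amount",
})

# Database B wants MM/DD/YYYY with leading zeros; database A exported ISO dates.
renamed["gift_date"] = pd.to_datetime(renamed["gift_date"]).dt.strftime("%m/%d/%Y")

# Crude de-duplication on email + gift date + amount; a real process would
# also match on name and address and log what it dropped.
deduped = renamed.drop_duplicates(subset=["email", "gift_date", "amount"])

deduped.to_csv("database_b_import.csv", index=False)
```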

Here are some common objections and the easiest replies:

  • Cost: “how can we afford to take on a database project?”  Answer: how can we afford not to?  Think of the lost donations when people call asking for a refund and you have to look through five different databases to see where they donated.  The extra time trying to reconcile your donor database and financial systems.  The data you won’t be able to get or use for your direct marketing and the lost revenue from that.
  • No direct marketing constituents: “I don’t want X (usually the people we serve) to get hit up for donations.”  Answer: We won’t be able to guarantee they won’t get a solicitation unless we know who they are.  We rent acquisition lists all the time and these people could be on there.
  • We’ve already invested in this other database: Answer: point them to this Wikipedia page.  It’s easier than trying to explain sunk costs on your own.
  • Provincialism: “We have database X and it works fine for us.” Answer: actually, there are three answers for this one.  First, start elsewhere.  Usually, someone will have a database that isn’t working for them, and it’s better to start with them (they will then sing the praises of both you and the Truth) than with the people who like where they currently are.  Second, there is usually an “I wish we could do X” list somewhere that will make it worth this person’s time to switch.  Third, go to the higher-ups with your business case.  By this time, you hopefully have some happy converts and some results from your direct marketing program (e.g., “we can put the year someone started with us on their member card now!”) to share.

Hopefully, this helps you get to your own version of the Truth.  Now that you have it, let’s talk about what to put in there.  That’s our charter for the rest of the week.

* Since we started with Game of Thrones yesterday, we have to do Lord of the Rings today…


Why know about your donors?

Winter is coming to nonprofits. Unnamed, faceless, cold, sparse, biting, relentless, gnawing winter. And not all of us will survive.


There are more nonprofits than ever before and that number is increasing.

The pie of charitable giving is expanding, but not as a percentage of GDP and not as much as the number of nonprofits is expanding. Thus, the average nonprofit’s funding will be going down.

Retention rates (when controlling for lifecycle as advocated here) are at best flat and often down. Online donor retention rates are particularly alarming.

And it is becoming more expensive to retain donors. In order to hit net revenue budgets, nonprofits increase the number of communications sent. Communications increase in quantity, and the results of each piece decline in quality.

As retention drops, the need for additional acquisition increases, further increasing donor-by-donor pressure to give broadly and shallowly.

Nonprofits flee to what they believe is quality, recapitulating what has worked for others. Donors see the playbook, whether it is address labels or a compelling story.

Everyone has a story and most can be told compellingly. So we do. But it’s enough less and less of the time.

Most nonprofits do most of their acquisition from lists of people who give to other nonprofits. Few bring new people to the idea of philanthropy, reasoning that it is easier to get the already philanthropic to give more.

The tragedy of the commons plays out in a million different households. Maybe ten million. To give to one is to be solicited by that one and by the many.

The donor pool is now an apt analogy, as we are polluting and overfishing these same waters without restocking.

Winter is coming. So what needs to be done first?

One might say let’s prevent winter. One would be correct. It is necessary for our long-term survival. We will talk about converting people to the idea of giving at another time — it would be called stimulating primary demand in the for-profit world.

But one must survive the short term to get to the long term. And thus, there is something we need to do first.

One might say to be donor-centric and to love our donors. One would be correct. The ones who will make it through this winter will be the ones that have stood out from the crowd. Their envelopes will be opened, possibly partly for the free gift, but mostly for the joy they create and reinforce. Their emails will be read possibly partly for a nifty subject line, but mostly for a human connection that they forge. Their calls will be answered because they thanked and thanked well.

But there is a precondition for donor-centric treatment. And thus, there is something we need to do first.

The first thing is to know. We must know who donates. Yes, we need to know their demographics, but also far beyond that. We need to know the world they dream of creating. And we need to tell them about how they are helping to create that world.

These wonderful people are planting seeds. They are planting them so kids have a place to swing, so there is shade, so that people can breathe easier, so we can have apples. We owe it to the apple people to know they are in it for the apples. We owe it to them to tell them about neither the tire swing nor the shade if they don’t care. Our story to them will be the deep, moist flesh that children will pick from their tree and the juices that will stay on their cheeks until banished by a shirt sleeve. We will speak of shade to shade people and breathing to those who value breathing most.

To do this, we need to know.

This week will be focused on how to know. I’ll go into the sausage-making that is gaining donor intelligence. But it’s important we start with the why.

It’s because winter is coming. Only those provisioned with true friends will make it through.

The good news is that we are nonprofits. We face down demons worse than winter.
