Saving money with DIY analytics

I probably should not be the person talking about DIY.  I have a T-shirt with a bit of every paint color I’ve ever painted a room with, because I am physically incapable of not dripping on myself.  And that is minor compared to some of the crimes against home-anity I’ve committed.

Let me take the opportunity to apologize to everyone who has ever bought a house I’ve worked on.  I hope the electrical burns have healed by now.

But I do believe in DIY analytics and tricks to save money.

You can and should be using professionally produced models.  Many of them will help save you money and/or produce additional revenues.

But you can do a few things on your own to avoid breaking the bank, speed the rate of progress, or both.

Here are some cost-saving things you can do in your own spreadsheet:

Any others that Direct to Donor readers have used?  Please let me know at nick@directtodonor.com so I can share with the community.


Easter eggs in your donor database (guest post)

I have the privilege of sharing a guest post from Angela Struebing, president of CDR Fundraising Group.  For more insights from Angela and the CDR team, you can check out their blog here.  Thanks, Angela!


Every year I organize our neighborhood Easter Egg hunt. I stuff and hide over 600 eggs and love watching kids run through the field searching for them. The excitement they feel when finding an egg is the same rush I get when I discover something actionable in a client data file. It got me thinking about some data eggs that are often hidden. For some you have to look a little harder, but the answers are always in the data.

  • When evaluating list performance look past initial response metrics and assess long-term value (LTV) at an individual list level. We often find that lists that look bad upfront may show life when looking at 12-month or 18-month payback periods or retention rates. The same goes for looking at LTV by package. A test that might have had a lower response initially may bring on more loyal donors over the long haul. Make sure you look well beyond just campaign reports for this information.
  • Along the same lines, matchbacks, where you look at returns that come in through one channel but are driven by another, are another hidden gem in your file. This is especially true for brick-and-mortar institutions, where a recipient gets a mail piece and can respond through the mail, via phone, online, or in the lobby. In order to gauge true list value, you’ll want to look at all response channels and see where the response was driven from. This will also encourage you to make it as easy as possible for donors to give through any channel.
  • This leads us to multi-channel migration and attribution analysis. You’ll want to understand if donors are migrating from online to offline or offline to online. While counterintuitive, we see more people giving an initial gift online and then moving to offline giving than vice versa. Knowing this may change your marketing focus. Attribution is critical to making investment decisions and understanding how the various channels are working together.
  • I find lapsed donors particularly interesting and profitable. They have already exhibited an interest in your mission. They can usually be reactivated for less than it costs to find a new donor and are more valuable to an organization (based on number of gifts and average gift). Take the time to test what really works with lapsed segments. Do they perform better in acquisition, in housefile packages, or perhaps with a tailored lapsed package? All lapsed cohorts aren’t the same, with deep-lapsed and recently lapsed names performing very differently. Should you use a reduced ask, Most Recent Contribution vs. Highest Previous Contribution, or a generic acquisition string? Do you reference their previous relationship or, if they’ve been absent long enough, treat them as a prospect? How far back can you mail? All of the answers to these questions can be found within your database (and carefully crafted tests).
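
To make the first point concrete, here is a minimal sketch of a 12-month value rollup by acquisition list.  The table, column names, and numbers are all hypothetical, not from any real client file:

```python
# Sketch: 12-month donor value by acquisition list, assuming a hypothetical
# gifts table with one row per gift.
import pandas as pd

gifts = pd.DataFrame({
    "donor_id":    [1, 1, 2, 3, 3, 3],
    "source_list": ["ListA", "ListA", "ListA", "ListB", "ListB", "ListB"],
    "acquired":    pd.to_datetime(["2023-01-05"] * 2 + ["2023-02-10"]
                                  + ["2023-01-20"] * 3),
    "gift_date":   pd.to_datetime(["2023-01-05", "2023-09-01", "2023-02-10",
                                   "2023-01-20", "2023-06-15", "2024-06-01"]),
    "amount":      [25.0, 50.0, 10.0, 20.0, 20.0, 35.0],
})

# Keep only gifts inside each donor's 12-month payback window.
in_window = gifts[gifts["gift_date"] <= gifts["acquired"] + pd.DateOffset(months=12)]

# Average 12-month value per donor, rolled up by source list.
ltv = (in_window.groupby(["source_list", "donor_id"])["amount"].sum()
               .groupby("source_list").mean())
print(ltv)
```

A list that looked weak on initial response can pull ahead on this view, which is the whole point of looking beyond the campaign report.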

These are just a few of the things I go looking for when reviewing results and file trends. What hidden gems have you found? Happy Hunting!


Angela Struebing is president of CDR Fundraising Group, a multichannel agency focused on helping nonprofits maximize their online, direct mail, telemarketing and DRTV fundraising results. As president, Angela is responsible for overall agency management and strategic planning for national nonprofit clients including Shriners’ Hospitals for Children, MoMA and the Marine Toys for Tots Foundation.

 


Getting to the Truth of one database


One Database to rule them all.
One Database to find them.
One Database to bring them all
And in the darkness bind them.*

A beloved former boss of mine once asked the best question I’ve ever heard and may ever hear about databases: “Which database is going to be the Truth?”

Others may call this the database of record, but the Truth is far more evocative.  It encompasses “which database will have all of our people?”, “which database will have all of our donations regardless of source?”, and “which database will be the arbiter and tie-breaker for all constituent record issues?”

This is a necessary pre-condition of donor knowledge.  You will not have true knowledge of a constituent if all of your data isn’t in one place.  And working on donor information without the backend systems to back it up could be a waste of time and effort.

If you are like most nonprofits, you are either laughing or crying at the discussion of one database.  You likely have a few different donor databases by donation type.  Then you have records of people you serve, your email list, your event attendees, and so on.

And, sadly, some of them are necessary.  Some databases do things that other databases will not do.  You may not be able to run your direct mail program out of your online database or vice versa.

So here are some steps you can take to get all of your information in one Truth even if there are multiple databases behind it:

Purge unnecessary databases.  And I mean purge them. Ideally it should be as if your unnecessary database displeased Stalin: it just disappears from history, incorporated into other people’s stories.  To do that:

  • Ask whether another database can do what this database does.  If so, bring the data over and train the relevant parties.  The good news is that often the rogue database in question is just an Excel spreadsheet that can be directly imported into your database of choice.
  • Ask whether another database can do what this database does with modifications.  Rarely is something perfect initially.  You will likely have to create reports for people that they are used to running, but if you are bringing them into a good database, that’s a matter of business rules and set-up, rather than technical fixes.
  • If not, ask if the person can do without what the database can’t do.  You’d be surprised how many things are done because they have been done rather than for any rational reason.

Assuming that you have some databases that can’t be replicated in one big happy database, decide what database is going to be the Truth.  This should have the capacity to store all of your fields, run reports, and do basic data entry.  If you are keeping your direct marketing database, it doesn’t need to be able to run a direct marketing program.  But it does need to have the capacity to do the basic functions.

You may say that you don’t have a database that can fulfill this function.  In that case, I would recommend what I call a Traffic Cop database.  This is a database that you can inexpensively put in the center of multiple databases to move data to and from the others.  Its job is to make sure every database knows what every other database is doing, to pull out duplicates, and to host change management.

Now, sync the databases to the Truth database.  Sometimes you may be fortunate and be using a database that has existing linkages.  For example, if you have decided that SalesForce is going to be your Truth, there are some pre-existing syncs you can get from their apps.  If not:

  • Start by syncing manually.  That is, export a report from one database and import it into the other.  Then, reverse (if you’re keeping a database, syncing has to go both ways).  This will allow you to figure out what fields go where and, more importantly, how to translate from one database to the other (e.g., some databases want the date to be formatted 01/18/2016 and woe be unto you if you forget the zero before the one; others may not have a leading zero, or may have month and date as separate fields, or the like).
  • After you have your process down, you can automate.  This can happen one of two ways: through the database’s APIs or through an automated report from one database that uploads to a location followed by an automated import from the other database.  Both are viable solutions — you would generally prefer the API solution, but you do what you have to do.
  • Make sure you have an effective deduplication process.  It almost goes without saying (and if it doesn’t, check out our PSA for data hygiene here), but data can get messy quickly if you don’t have these in place.
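
The translation-and-dedup steps above can be sketched like this, with hypothetical export columns.  Say one target system wants zero-padded MM/DD/YYYY dates and the other wants month and day as separate fields:

```python
# Sketch of a manual sync step: normalize dates once, emit each target's
# preferred format, then dedup on a normalized key. Columns are hypothetical.
import pandas as pd

export = pd.DataFrame({
    "email":     ["pat@example.org", "PAT@example.org", "lee@example.org"],
    "gift_date": ["1/8/2016", "01/08/2016", "3/14/2016"],
    "amount":    [25, 25, 100],
})

# Parse once into real dates, then format for each destination.
parsed = pd.to_datetime(export["gift_date"], format="%m/%d/%Y")
export["gift_date_padded"] = parsed.dt.strftime("%m/%d/%Y")  # e.g. 01/08/2016
export["gift_month"] = parsed.dt.month                       # separate fields
export["gift_day"] = parsed.dt.day

# Simple dedup on a normalized key -- real matching would also use
# name and postal address.
export["email_key"] = export["email"].str.lower()
deduped = export.drop_duplicates(subset=["email_key", "gift_date_padded", "amount"])
print(deduped[["email", "gift_date_padded", "amount"]])
```

Normalizing into one canonical form and formatting outward is usually less error-prone than translating directly between two databases’ quirks.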

You will hear objections to consolidating databases.  Here are some of the common ones and the easiest replies:

  • Cost: “how can we afford to take on a database project?”  Answer: how can we afford not to?  Think of the lost donations when people call asking for a refund and you have to look through five different databases to see where they donated.  The extra time spent reconciling your donor database and financial systems.  The data you won’t be able to get or use for your direct marketing, and the lost revenues from that.
  • No direct marketing constituents: “I don’t want X (usually the people we serve) to get hit up for donations.”  Answer: We won’t be able to guarantee they won’t get a solicitation unless we know who they are.  We rent acquisition lists all the time and these people could be on there.
  • We’ve already invested in this other database: Answer: point them to this Wikipedia page.  It’s easier than trying to explain sunk costs on your own.
  • Provincialism: “We have database X and it works fine for us.” Answer: actually, there are three answers for this one.  First, start elsewhere.  Usually, someone will have a database that isn’t working for them; better to start with them, who will then sing the praises of both you and the Truth, than with the people who like where they are currently.  Second, there is usually an “I wish we could do X” list somewhere that will make it worth this person’s time to switch.  Third, go to the higher-ups with your business case.  By this time, you hopefully have some happy converts and some results from your direct marketing program (e.g., “we can put the year someone started with us on their member card now!”) to share.

Hopefully, this helps you get to your own version of the Truth.  Now that you have it, let’s talk about what to put in there.  That’s our charter for the rest of the week.

* Since we started with Game of Thrones yesterday, we have to do Lord of the Rings today…


Regression analysis in direct marketing

If you don’t know what a linear regression analysis is or how it is measured, I recommend you start with my post on running regressions in Excel here.

OK, now that you’re back, you’ll notice I did an OK job of saying what a linear regression analysis is and what it means, but I didn’t mention why these would be valuable.  Today, we rectify this error.

In yesterday’s post on correlations, I mentioned that they only work for two variables at a time. This is extremely limiting, in that most of your systems are more complex than this.  Additionally, because of interactions among multiple variables, it’s difficult to determine what is causing what.  I’ve discussed before how the failure of the US housing market was related to people assuming variables were independent when they were actually correlated with each other.

Linear regression analysis allows you to look at the intercorrelations between and among various variables.  As a result, regression analysis is the primary basic modeling algorithm.  In fact, it’s often used as a baseline for other approaches — if you can’t beat the regression analysis, it’s back to the drawing board.

Two side notes here:

First, if you are interested in learning to do this yourself, I strongly recommend Kaggle competitions.  Kaggle is where people compete for money to produce the best models for various things — right now, for example, they are running a $200,000 competition on diagnosing heart disease, a $50,000 competition for stock market modeling, and a $10,000 competition to identify endangered whales from photography.

It’s some pretty cool data stuff and the best part is that they have tutorial competitions for people like me (and perhaps you; I would hate to assume).  One sample is to model which passengers would survive the sinking of the Titanic from variables like age, sex, ticket class, fare, etc.  They walk you through correlation, regression, and some more advanced modeling techniques we’ll discuss later in the week.  Here, as ever, they look for improvement on regression as the goal of more advanced models.

Second, it’s tempting to view regression as the Mendoza line* of modeling: a low bar not worth bothering with.  But regression can give you fairly powerful results and, unlike many of the other more advanced modeling techniques we’re going to discuss, you can do it and interpret it yourself.

That said, like correlation, it doesn’t know what to do with non-linear variables.  For example, you have probably noticed that your response rate falls off significantly after a donor hasn’t donated in 12 months (plus or minus).  A regression model that looks at number of months since last gift will ignore this and assume that the difference between 10 and 11 months is the same as the difference between 12 and 13 months.  And it isn’t.  It also will choke on our ask string test in the same way as correlations will.
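
Here is a small sketch of that 12-month cliff, with made-up response rates.  A straight-line fit spreads the decline evenly across every month, while the actual step from month 12 to 13 is far larger:

```python
# Sketch: a linear fit assumes every extra month of lapse costs the same
# response, even when the data has a cliff after month 12. Numbers invented.
import numpy as np

months = np.arange(1, 25)
# Hypothetical response rates: gentle decline, then a drop after 12 months.
response = np.where(months <= 12, 8.0 - 0.2 * months, 3.0 - 0.05 * (months - 12))

slope, intercept = np.polyfit(months, response, 1)
# The model charges the same penalty per month everywhere:
print(f"fitted drop per month: {abs(slope):.2f} points")

# ...but the actual step from month 12 to 13 is much larger:
print(f"actual drop, month 12 -> 13: {response[11] - response[12]:.2f} points")
```

Recoding the variable (e.g., a separate 0/1 flag for “lapsed more than 12 months”) is the usual workaround when you know where the cliff is.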

So here are some things worth testing with regression analyses:

Demographic variables: you may know the composition of your donor file (and if you are like most non-profits, it’s probably female skewed).  But have you looked at which sex ends up becoming the better donor over time?  It may be with a regression analysis that the men on your file donate more or more often (or not), which could change your list selects (I know I have been known to put a gender select on an outside file rental to improve its performance).

Lapsed modeling your file: Using RFM analysis, you know which segments perform best for you and which go into your lapsed program (if not, use RFM analysis to figure out which segments perform best for you).  However, there may be hidden gems in your lapsed file: donors who missed a gift (according to you) and would react well if approached again.  Taking your appended data like wealth, demographics, and other variables alongside your standard RFM analysis can help find some of these folks to reach out to.

Content analysis: In the early regression article, I show a (bad) example of using regression analysis to find out what blog posts work best.  This can be applied to Facebook or other content as well.

What I didn’t mention is that once you have this data, it probably applies across media.  What works on Facebook and in your blog are probably good topics for your enewsletters, email appeals, and possibly paper newsletters as well.  Through this type of topic analysis, you will figure out what your constituents react to, then give them more of it.

This, however, looks at your audience monolithically.  In future posts, I’ll talk about both some ways to cluster/segment your file like k-means clustering and some ways on improving on regression analysis with techniques like Bayesian analysis.  For now, though, it’s time to look at some formulae that rule our worlds even beyond direct marketing: what do Google and Facebook use?

 

* A baseball term coming from Mario Mendoza, a weak-hitting shortstop who usually averaged around .200 batting average.  Anyone below Mendoza in the batting average category was considered to be hitting below the Mendoza line or very poorly.  (He made up for this for several years with strong fielding).  And now you know the rest of the story.



Correlations in direct marketing II: The Wrath of Khan Academy

Yesterday, we saw how to run and interpret correlations.  Today, we’re going to look at the implications of the way correlations are set up for direct marketers.

First and foremost, I must stipulate that correlation does not equal causation.  I did a good job of discussing this in a previous post talking about how attractive Matt Damon is in his movies.  Rather than go into a lot of detail on this, I’ll link over to that post here.  Looking back at that post, I forgot to put in a picture of Matt Damon, which I will rectify here:

 


Intelligence and attractiveness correlate;
I wish I could have explained this to people in high school.

This is fairly intuitive, given our discussion of height and weight earlier.  With exceptions for malnutrition and the like, it really doesn’t make sense to say someone’s weight causes them to be taller or height causes them to be heavier.

There’s a great Khan Academy video that covers a lot of this here. The Khan Academy video also gives me an excuse for the name of the blog post that I really couldn’t pass up.

Back to correlations: they only capture linear (straight-line) relationships.  Given the renewal rates by decile we saw yesterday, a correlation is not the ideal tool for describing that relationship, as it will give you a rubric that says “for every decrease in decile number, you will have an X% increase in renewal rate.”  We can see looking at the data that this isn’t the case; moving from 2 to 1 has a huge impact, whereas moving from 9 to 6 has an impact that is muddled at best.

Another example is the study on ask strings we covered here.  When looking at one-time donors, asking for less performed better than asking for the same as their previous gift.  Asking for more also performed better than asking for the same as the person’s previous gift.  However, if you were to run a correlation, it would say there is no relationship because the data isn’t in a line (graphically, you are looking at a U shape).  We know there is a relationship, but not one that can be described with a correlation.
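
A quick sketch of that U-shape problem, with made-up ask-string numbers: the relationship is real and symmetric, and the correlation coefficient comes out to zero anyway.

```python
# Sketch: a perfectly U-shaped relationship yields a correlation of zero.
# Hypothetical data: asking less OR more beats asking the same amount.
import numpy as np

ask_change = np.array([-2, -1, 0, 1, 2])          # relative to last gift
response   = np.array([5.0, 3.0, 2.0, 3.0, 5.0])  # hypothetical response rates

r = np.corrcoef(ask_change, response)[0, 1]
print(round(r, 6))  # 0.0 -- the U shape hides a real relationship
```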

You’ll also note that correlations only work between two variables.  Most systems of your acquaintance will be more complex than that, and we’ll have to use other tools for them.  That said…

Correlations are a good way of creating a simple heuristic.  SERPIQ just did an analysis of content length and search engine listings that I learned about here. They found a nice positive correlation:


Hat tip to CoSchedule for the graph and SERPIQ for the data.

As you read further in the blog post, you’ll see that there is messiness there.  It’s highly dependent on the nature of the search terms, the data are not necessarily linear, and non-written media like video complicate things.  However, the data lend themselves to a simple rule of “longer-form content generally works a little bit better for search engine listings” or, in a lot of cases, “your ban on longer-form content may not be a good idea.”  While these come with some hemming and hawing, being able to have simplicity in your rules is a good thing, making them easier to follow.

But refer back to the original point: correlation isn’t causation.  Even in the example above of word count being related to search engine listings, more work is required to find out what type of causal relationship, if any, there is between word count and search engine listings.

Hope this helps.  Tomorrow, we’ll talk about regression analysis, which will take you all the way back to your childhood to look for memories that will…

Um, actually, it will be about statistical regression analysis.  Never mind.


Correlations in direct marketing: an intro

This week, I’d like to take a look at some of the formulae and algorithms that run our lives in direct marketing.


The algorithm was invented by Al Gore and named after his dance moves
(hence, Al Gore rhythm).
Here is a video of him dancing.

Before you run in fear, my goal is not to make you capable of running these algorithms — some of the ones we’ll talk about this week I haven’t yet run myself.  Rather, my goal is to create some understanding of what these do so you can interpret results and see implications.

And the first big one is correlation.

But, Nick, you say, you covered correlation in your Bloggie-Award-winning post Semi-Advanced Direct Marketing Excel Statistics; will this really be new?

My answers:

  1. Thank you for reading the back catalog so intensely.
  2. The Bloggies, like 99.999998% of the Internet, do not actually know this blog exists.
  3. In that post, I talked about how to run them, but not what they mean.  I’m looking to rectify this.

So, correlation simply means how much two variables move together (a close cousin of covariance).  This is measured by a correlation coefficient that statisticians call r, which ranges from 1 (perfect positive relationship) through 0 (no relationship whatsoever) to -1 (perfect negative relationship).  The farther from zero the number is, the stronger the relationship.

A classic example of this is height and weight.  Let’s say that everyone on earth weighed 3 pounds for every inch of height.  So if you were 70 inches tall (5’10”), you would weigh 210 pounds; at 60 inches (5’0”), you would weigh 180 pounds.  This is a perfect correlation with no variation, for an R of 1.

Clearly, this isn’t the case.  If you are like me, after the holidays, your weight has increased but you haven’t grown in height.  Babies aren’t born 9 inches long and 27 pounds (thank goodness).  And the Miss America pageant/scholarship competition isn’t nearly this body positive.  So we know this isn’t a correlation of one.

That said, we also know that the relationship isn’t zero.  If you hear that someone is a jockey at 5’2”, you naturally assume they do not moonlight as a 300-pound sumo wrestler on the weekend.  Likewise, you can assume that most NBA players have a weight that would be unhealthy on me or (I’m making assumptions based on the base rate heights of the world with this statement) you.

So the correlation between height and weight is probably closer to .6.

There’s a neat trick with r: you can square it and get something called the coefficient of determination.  This number tells you the proportion of variation in one variable that is predicted by the other.  So, in our height-weight example, 36 percent (.6 squared) of height is explained by its relationship with weight and vice versa.  It also means that there’s 64 percent of other in there (which we’ll get to tomorrow when we talk about regression analysis).

You can get some of this intuitively without the math.  Here’s a direct marketing example I was working on a couple of weeks ago.  An external modeling vendor had separated our file into deciles based on what they felt was each donor’s likelihood of renewing their support.  Here’s what the data looked like by decile:

Decile   Retention rate over six months
1        50%
2        40%
3        35%
4        32%
5        30%
6        25%
7        27%
8        24%
9        28%
10       21%

No, this isn’t the real data; it’s been anonymized to protect the innocent and guilty.

You need only look at this data to see that there is a negative correlation between decile and retention rate: the higher (worse) the decile, the lower the retention rate.  It also illustrates that it’s not a perfect linear relationship; clearly, this model does a better job of skimming the cream off the top of the file than predicting among the bottom half of the file.
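
Running the anonymized decile table above through a correlation shows both how strong and how imperfect the relationship is:

```python
# The (anonymized) decile table from the post, run through a correlation.
import numpy as np

decile    = np.arange(1, 11)
retention = np.array([50, 40, 35, 32, 30, 25, 27, 24, 28, 21]) / 100

r = np.corrcoef(decile, retention)[0, 1]
print(f"r = {r:.2f}")      # strongly negative
print(f"r^2 = {r*r:.2f}")  # share of variation explained
```

A strongly negative r confirms the eyeball test, while an r² well short of 1 reflects the muddle in the bottom half of the file.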

Tomorrow, we’ll talk about the implications of these correlations for direct marketing.


6 intermediate cost-per-click techniques

The original cost-per-click (CPC) search engines did their listings strictly by what you were willing to pay per click.  (I actually used Goto.com for CPC listings, before it became Overture Services, before it became Yahoo! Search Marketing.  Nothing like Internet time to make one feel old.)


Yes. This was once a thing.  A big thing.

Google’s algorithm, however, takes the quality of the ad and the site into account.  This is partly because searchers will come back if they have positive experiences and partly because it maximizes Google’s profits.  For the same reason that you would look at gross revenue per mail piece/phone contact/email/carrier pigeon instead of just response rate in isolation, Google looks at gross revenue per ad shown as the backbone of its infrastructure.

Thus, it is in your interest to maximize your click-through rate (except in one very special case I’ll discuss on Friday); you can pass your better-bidding brethren by beating them on quality.  Hence the focus on things like negative keywords and phrase matching yesterday: you want your ads shown to as few people who won’t click as possible.  An average quality score from Google is a 5.  If you are at a 10, your cost per click goes down by 50%; if you are at a 1, it goes up by 400%.
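
Those quoted numbers happen to fit a simple rule of thumb: CPC scales with 5 divided by your quality score.  This is a hypothetical interpolation for intuition, not Google’s published auction formula:

```python
# Back-of-envelope using the discounts quoted above (QS 5 = baseline,
# QS 10 ~ 50% cheaper, QS 1 ~ 400% more expensive).
def est_cpc(baseline_cpc: float, quality_score: int) -> float:
    """Estimated cost per click relative to an average (QS 5) advertiser."""
    return baseline_cpc * 5 / quality_score

print(est_cpc(1.00, 10))  # 0.5 -> 50% cheaper
print(est_cpc(1.00, 1))   # 5.0 -> 400% more expensive
print(est_cpc(1.00, 5))   # 1.0 -> average
```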

Targeting smarter also helps you get clicks from the people from whom you want to get clicks, instead of those who didn’t understand what they were getting into from your ad.

So here are a few techniques to help get to the next level of pay-per-click success:

Check in on your keywords regularly. This should be at least weekly; daily would be better.  It doesn’t have to be for long, but Google will keep giving you helpful tips on additional strategies and keywords to try.  You can also see what is performing and what isn’t, retooling ad copy for underperforming ads and learning which landing pages aren’t converting as well.

Set up conversion tracking.  In the beginning, Internet advertising was sold in CPM (cost per thousand impressions), and the earth was without form, and void.  Then came CPC (cost per click), where you pay for an action rather than a view.  The ultimate will be cost per conversion, where you pay only when you get a donor (or whatever action you desire), and you can set your goals accordingly.  Companies won’t want to do this because they have to rely on you to convert, rather than on themselves, but it is semi-inevitable.

You can have this advantage right now if you set up conversion tracking.  You will be able to see how many people convert and, if they give donations, how much you get from the campaign.  Seeing how much you get from a campaign ahead of time, then bidding, is like playing poker with all of the cards face up – it’s remarkable how much better it makes you.
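
A back-of-envelope sketch of that advantage, with hypothetical conversion numbers: once you know the conversion rate and average gift, the most you should pay per click falls out directly.

```python
# Sketch: with conversion tracking you can back into a maximum bid.
# All numbers are hypothetical, not from the post.
def max_cpc(conversion_rate: float, avg_gift: float, target_roi: float = 1.0) -> float:
    """Highest CPC at which a click still breaks even (or hits a target ROI)."""
    return conversion_rate * avg_gift / target_roi

# If 2% of clicks donate and the average gift is $40, a click is worth
# up to $0.80 at break-even.
print(max_cpc(0.02, 40.0))  # 0.8
```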

Unbounce your page.  Not every page converts well.  With conversion tracking set up, you can tell if your page is repulsing potential constituents.  Testing with Google solutions or a solution like Optimizely can help you convert more people and lower your CPC costs as your quality score goes up.

Set up dynamic keyword insertion.  A person is more likely to click an ad that contains the exact words they put into the search engine.  The trick is that people put all sorts of things into search engines.  With dynamic keyword insertion, it doesn’t matter whether they put “rainforest deforestation,” “rain forest deforestation,” “tropical forest deforestation,” “destruction of the rainforest,” “tropic rainforest deforestation,” etc., into the search bar; your ad can automatically include the keyword that matched their search.
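
In Google Ads, this works via a placeholder in your ad text that gets swapped for the matched keyword, with default text as a fallback when the keyword won’t fit the character limit.  A sketch of the placeholder syntax as I understand it (verify against Google’s current documentation before relying on it):

```text
Headline template:  Help Stop {KeyWord:Rainforest Deforestation}

Matched keyword "rain forest destruction"  ->  Help Stop Rain Forest Destruction
Keyword too long for the headline          ->  Help Stop Rainforest Deforestation
```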

Geotarget your ads.  This is especially true if you are a nonprofit with a limited geographic reach.  If you are an early childhood intervention provider in Dallas, you likely don’t want Seattle searchers.  However, this applies even to national and international nonprofits.  If you have chapters, or state-specific content, you can direct those specific searchers to the area more relevant for them.  This works especially well for things like walks and other events, where people will likely only come from a certain distance around to the event.

Go for broke.  If you do get a Google Grant, try to use every cent.  Not only will it get you more traffic, more constituents, and more donors, but it will also allow you to apply for more money.  Your first steps to worldwide nonprofit domination await.

I hope these are helpful.  Please leave any tips you’ve found useful in the comments section below.
