Millennial Myth Busting: Attributes of Gen Y

I had the privilege of having a Corner Office article published in December’s NonProfit Pro. (I’d link to it, but it doesn’t yet appear to be online.)  One sentence in that piece triggered more reaction than all three months of my blogging combined:

We should regard a nonprofit that courts a Millennial audience at the expense of their core like the person who dyes their hair and takes off their ring to hit on people at a college bar: unfaithful to those who love them, uncomfortable with who they are, and ill-equipped to succeed even if success were desirable.

What I had not realized is that it appears that there are cultural warriors on both sides of a debate that summarizes to “Millennials are awesome and the future” versus “Millennials are horrible and the Earth is doomed.” And I had come down in the “get off my lawn” Gran Torino camp.

[Image: Gran Torino movie poster]

His next movie was focused on how angry he was with that empty chair.

So this week, I wanted to add a bit of nuance to this statement and to the strategy discussion of millennials and non-profits.  I emphasize strategy here.  My central point in the Corner Office article was that sometimes trends are used instead of strategies.

Nowhere does this seem clearer to me than in the discussion about generational dynamics, especially as it concerns the unique snowflakes called millennials.  The discussions remind me of the introduction to the Duck tours of Wisconsin Dells we went on growing up, where the tour guide would tell you that what you were about to hear was about one-third the truth, one-third Native American legends, and one-third out-and-out lies.

Since the Confirmed/Plausible/Busted trichotomy is likely copyrighted (copywrote?) by people who bust myths far better than I, I’ll use this truth/legend/lie way of breaking things down.  

Let’s review some of the attributes that millennials purportedly possess:

“[They] have radically different life experiences than those in generations before them.”

[T]hey distrust hierarchy. They prefer more informal arrangements. They prefer to judge on merit rather than on status. They are far less loyal to their companies. … They know computers inside and out. They like money, but they also say they want balance in their lives.

They are “more collaborative”, “less hierarchical,” “more altruistic”, “more tech-savvy,” “balanced”, “candid in their communications,” and “rule-shy.”  

Most children seem to be taking so long to grow up, at least by conventional measures. The rituals that once marked adulthood – graduation, the first job, marriage, children – have been delayed, eliminated or extended.

They are a “highly educated, pampered group, their numbers are small but their impact is great.”

By now, you have probably guessed the conceit here – all of these things weren’t said about millennials.  They are contemporaneous accounts about Generation X and Yuppie Baby Boomers.

The truth is that many of the things said about millennials are the things said about kids since time immemorial.  So much of what you hear about kids today with their smartphones, their social networks, and their texting are echoes through the ages of hearing about kids today with their fire, their pointed sticks, and their paintings inside the caves.

I’d highly recommend a Mental Floss list of some of these complaints throughout the ages, if only for the humor.  As you see surveys about how gender and sexuality are more fluid among the young (also here), hopefully this 1771 broadside against the feminization of the then-current set of men sounds familiar:

Whither are the manly vigor and athletic appearance of our forefathers flown? Can these be their legitimate heirs? Surely, no; a race of effeminate, self-admiring, emaciated fribbles can never have descended in a direct line from the heroes of Poitiers and Agincourt.

Can’t you just hear the “harrumph” that must have followed this statement, possibly followed by a feverish polishing of a monocle or some such?  Plato himself is credited with complaining that kids are rude and don’t respect authority.  I would have put that in the list above, but the fact that it’s in ancient Greek might have been a giveaway that it wasn’t talking about millennials.

Yet these same “insights” are being repackaged for this current generation and will likely be repackaged for the next generation.  So tomorrow, we’ll put these to the test.

Agree?  Disagree?  Let me know in the comments.


Understanding and using Facebook’s algorithm

Facebook is the nexus of a lot of debate as to how best to incorporate social media into other marketing efforts.  My argument is that there is a twofold Facebook strategy: 1) using organic content to engage your superfans and 2) using addressable media to reach everyone else.

[Image: The Social Network movie poster]

500 million users.  How quaint.

Like Google, the base of the Facebook algorithm (EdgeRank) is fairly easy:

  • Affinity: How close the person creating the content is to the person receiving it.
  • Weight: How much the post has been interacted with, with deeper interactions counting more.
  • Time decay: How long it has been since it has been posted.

Roughly speaking, these factors are multiplied together for each interaction and the products are summed.
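As a rough sketch (with invented numbers; the real algorithm is proprietary and far more elaborate), the base EdgeRank idea might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    affinity: float   # how close the viewer is to the poster (0 to 1)
    weight: float     # deeper interactions (comments, shares) count more
    hours_old: float  # time since the underlying post went up

def edgerank_score(interactions):
    """Multiply affinity, weight, and a simple time decay for each
    interaction, then sum the products."""
    return sum(
        i.affinity * i.weight * (1.0 / (1.0 + i.hours_old))
        for i in interactions
    )

fresh_comment = Interaction(affinity=0.9, weight=5.0, hours_old=1.0)
stale_like = Interaction(affinity=0.9, weight=1.0, hours_old=48.0)
score = edgerank_score([fresh_comment, stale_like])
```

The shape of the formula, not the specific numbers, is the point: a recent comment from a close friend swamps an old like from the same person.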

Like Google, however, it has been altered over time significantly.  There are now significant machine learning components baked in that help with spam detection and bias toward quality content.  Additionally, now users can prioritize their News Feeds themselves.  Finally, because of the sheer amount of content available, the organic reach of an average post is single digit percentages or below, meaning that if you have 100,000 likes, maybe 2,000 people will see your average post.

The implications of this base algorithm are stark:


  • Organic reach on Facebook is for the people who really love you.  Many people think of Facebook as a new constituent acquisition system.  However, people who come in dry will almost never see your posts.
  • Consequently, only things that connect with your core will have any broader distribution.  Think of who is in the top two percent of your constituents: employees, top volunteers, board members, and that may be about it.  If those people don’t give the post weight, no one outside of this group will see it.
  • What you have done for them lately has outsized weight.  Research into Facebook interactions shows that Facebook gives outsized weight to what a person has interacted with in their last 50 interactions.
  • Facebook is not for logorrhea like Twitter.  Think of your posts as a currency you spend each time.  If your post gets above-average interactions, you will move your average up and interact with more people; if not, your reach will shrink.  Posting too many times (the threshold varies from organization to organization) will diminish your audience as average reach declines.  Additionally, all of the things you have to post for organizational reasons (e.g., sponsor thank yous) are spending your audience, and you have to assess how much you are willing to spend to fulfill those objectives.
  • This all adds up to the uber-rule: Facebook is for things your core supporters will interact with quickly.  If they don’t, it won’t reach your more distant supporters and it will lessen the likelihood that your next post will reach them as well.
  • It also relates to the second uber-rule: because Facebook can change its algorithm as it wishes, you should not build your house on rented land.  The best thing you can do with your interactions is to direct them to your site, to engage with your content and sign up for your list.

This all sounds a bit dire, so I should also highlight how to reach the other 98%(ish) of your Facebook audience as well as some of your non-Facebook audience on Facebook: addressable media.

Facebook allows you to upload a list of your supporters and target advertising to them specifically, whether or not they currently like your page.  You can learn more about this on my CPC ads post here.  That post also goes into lookalike audiences, a way of reaching people who aren’t currently part of your audience but look a lot like those who are, a nifty acquisition trick.  Since organic reach won’t get you to these loosely and non-affiliated people, this is the only way to achieve that reach.  And, since it is cost-per-click, you can control your investment and your results.

But as discussed above, these campaigns should build your relationship with people beyond Facebook itself.  For the same reason companies advertising on CBS work to build viewers’ relationships with their own brands rather than with CBS, your advertising on Facebook shouldn’t be aimed at getting Mark Zuckerberg et al. more friends; they have over a billion of them already.


How to use Google’s algorithm in your direct marketing

You may say a search engine optimization strategy is not direct marketing. I humbly disagree.  In fact, working with Google and other search engines (but mostly Google) can help you with your warm lead generation, helping you get your direct marketing program started for free as I’ve advocated in the past.  In addition, by knowing what a warm lead came to you for and about, you can customize your approach to that person in interesting ways.

So, how does the Google algorithm work?  It’s been through approximately a googol different versions throughout the years (there’s a good basic list here), but some of the underlying thinking behind it has been largely unchanged.

It’s instructive to think about search engines pre-Google.  There were two models: directories maintained by hand, either by a company (e.g., Yahoo) or by a community (dmoz), and search engines that used textual analysis to determine how applicable a page was to your search (e.g., Altavista, Lycos).  The first model has obvious problems with the scale of the Web.  The second has problems with determining quality: people who spammed every possible keyword at the bottom of a page, or created 100,000 pages each optimized for its own set of terms, performed well in these engines but probably should not have.

The fundamental question was how do you have a computer determine reputation?

The basic insight that the Google founders had was from the world of academia, where a research paper’s quality can be estimated by how many papers cite it.  They realized that when someone links to a page, they are voting for that page’s quality.  Looking at the initial linking pattern, you can get a basic view of what the important sites are.  Then, you can factor in the quality of the linking sites to alter the quality rankings.  After all, getting one link from, for example, the White House is more important than 100 different links from Jim Bob’s Big House of Internet.


This is the core of the original Google algorithm called PageRank (named after Larry Page, not Web pages, oddly enough).

The changes over the years since have kept link-based reputation important, but it is no longer the sole criterion it used to be.  Other factors now include:

  • Machine learning based on what people actually click on (a different type of “voting”)
  • Weighting toward mobile-friendly sites
  • Personalization of search engine listings
  • De-spamming algorithms
  • Devaluation of ads above the fold
  • Incorporation of social signals
  • Situational reputation (e.g., if my blog linked to you, it would help you more for direct marketing terms than with your hummingbird mating pattern blog)

And it’s constantly evolving.  So there are a few implications to this:

The easiest way to get good search engine listings isn’t to optimize for Google; it’s to create quality content.  I know.  This is a bummer.  Or not, if you have quality content.  The goal of Google and other search engines is to evolve to make searching a true meritocracy.  In the beginning, you had a chance of gaming the system.  You don’t have that chance now.

That does include things like not having ad-based content, making it mobile friendly, and prompting social media interactions.

There’s an important corollary to this, which is that anyone who tells you that they have a special sauce either is lying or won’t have their tactics last out the year.  That said, there are a few things you can do that will help both your content quality and your search engine listings.

Make sure you have the terms you want to be found for in your articles.  Not even Google will find the best possible page for “is James Bond a Time Lord?” (hint: it’s this one) if it doesn’t have the words James Bond and Time Lord on it.  Ideally, these will be prominently placed (e.g., in the title or header tags) and frequent (but not spammy frequent).

Check your bounce rates. With machine learning incorporated into the algorithm, you want to make sure people are getting what they came for when they come to your page.  This means continual testing and improvement of your content will pay dividends.

Create content for the searches you want to dominate. Let’s say you are (or want to be) the premier early childhood education nonprofit in Missoula, Montana.  You find through your keyword research that people don’t necessarily look for “early childhood education”; they look for conditions (e.g., “autism services”, “Down’s Syndrome”) or symptoms (e.g., “child not speaking”, “when starting crawling”, “development milestones”).  Look at the search volume of these terms, which you can do with Google’s free keyword suggest tool once you have your AdWords account and Google Grant.

You do have a Google Grant, don’t you?  If not, get one ASAP here.  

So, let’s say you want to focus on autism to start — you should be creating content that helps parents in your area learn about autism, what it is, and how you can serve them.  Lather, rinse, and repeat with your other areas of content.  Not only will this help with the Google algorithm (in terms of keyword density and in terms of more people linking to quality content), but it will also help with conversions (as people get content that fills an established need) and in knowledge past conversion (if someone comes in on an autism search term to autism content, you can market to them differently than someone looking for Down’s Syndrome content).

Finally, ask your partners to link to your specific content.  This isn’t link spamming, but rather you linking to people who have good content for your constituents and vice versa.  This will help lift both of your boats.

I’m sorry that there are no magic beans to sell you here from the algorithm. But hopefully this will help you avoid buying someone else’s.

With Facebook, however, there are a few more lessons for organic content that we will cover tomorrow.


Regression analysis in direct marketing

If you don’t know what a linear regression analysis is or how it is measured, I recommend you start with my post on running regressions in Excel here.

OK, now that you’re back, you’ll notice I did an OK job of saying what a linear regression analysis is and what it means, but I didn’t mention why these would be valuable.  Today, we rectify this error.

In yesterday’s post on correlations, I mentioned that they only work for two variables at a time. This is extremely limiting, in that most of your systems are more complex than this.  Additionally, because of interactions among multiple variables, it’s difficult to determine what is causing what.  I’ve discussed before how the failure of the US housing market was related to people assuming that variables which were actually correlated with each other were independent.

Linear regression analysis allows you to look at the interrelationships among multiple variables at once.  As a result, regression analysis is the primary basic modeling algorithm.  In fact, it’s often used as a baseline for other approaches: if you can’t beat the regression analysis, it’s back to the drawing board.
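As a sketch of that baseline idea, here is an ordinary-least-squares regression on a tiny invented donor dataset (the predictors and all numbers are hypothetical, chosen only to show the mechanics):

```python
import numpy as np

# Hypothetical predictors: months since last gift, gift count, average gift
X = np.array([
    [2.0, 10, 50.0],
    [6.0,  4, 25.0],
    [12.0,  2, 20.0],
    [1.0, 15, 80.0],
    [24.0,  1, 10.0],
])
y = np.array([0.9, 0.6, 0.3, 1.0, 0.1])  # invented renewal likelihoods

# Fit y ~ intercept + X @ beta via ordinary least squares
X1 = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

predictions = X1 @ beta
r_squared = 1 - ((y - predictions) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

If a fancier model can’t beat this handful of lines out-of-sample, the fancier model isn’t earning its keep.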

Two side notes here:

First, if you are interested in learning to do this yourself, I strongly recommend Kaggle competitions.  Kaggle is where people compete for money to produce the best models for various things — right now, for example, they are running a $200,000 competition on diagnosing heart disease, a $50,000 competition for stock market modeling, and a $10,000 competition to identify endangered whales from photography.

It’s some pretty cool data stuff and the best part is that they have tutorial competitions for people like me (and perhaps you; I would hate to assume).  One sample is to model which passengers would survive the sinking of the Titanic from variables like age, sex, ticket class, fare, etc.  They walk you through correlation, regression, and some more advanced modeling techniques we’ll discuss later in the week.  Here, as ever, they look for improvement on regression as the goal of more advanced models.

Second, it’s tempting to view regression as a Mendoza line* of modeling: a lowered hurdle that shouldn’t be bothered with.  But regression can give you fairly powerful results and, unlike many of the other more advanced modeling we’re going to discuss, you can do it and interpret it yourself.

That said, like correlation, it doesn’t know what to do with non-linear variables.  For example, you have probably noticed that your response rate falls off significantly after a donor hasn’t donated in 12 months (plus or minus).  A regression model that looks at number of months since last gift will ignore this and assume that the difference between 10 and 11 months is the same as the difference between 12 and 13 months.  And it isn’t.  It also will choke on our ask string test in the same way as correlations will.

So here are some things worth testing with regression analyses:

Demographic variables: you may know the composition of your donor file (and if you are like most non-profits, it’s probably female skewed).  But have you looked at which sex ends up becoming the better donor over time?  It may be with a regression analysis that the men on your file donate more or more often (or not), which could change your list selects (I know I have been known to put a gender select on an outside file rental to improve its performance).

Lapsed modeling your file: Using RFM analysis, you know what segments perform best for you and which go into your lapsed program (if not, use RFM analysis to figure out which segments perform best for you).  However, there may be hidden gems in your lapsed file: donors who missed a gift (according to you) and would react well if approached again.  Taking your appended data like wealth, demographics, and other variables alongside your standard RFM analysis can help find some of these folks to reach out to.

Content analysis: In the earlier regression article, I show a (bad) example of using regression analysis to find out which blog posts work best.  This can be applied to Facebook or other content as well.

What I didn’t mention is that once you have this data, it probably applies across media.  What works on Facebook and in your blog are probably good topics for your enewsletters, email appeals, and possibly paper newsletters as well.  Through this type of topic analysis, you will figure out what your constituents react to, then give them more of it.

This, however, looks at your audience monolithically.  In future posts, I’ll talk about both some ways to cluster/segment your file like k-means clustering and some ways on improving on regression analysis with techniques like Bayesian analysis.  For now, though, it’s time to look at some formulae that rule our worlds even beyond direct marketing: what do Google and Facebook use?


* A baseball term coming from Mario Mendoza, a weak-hitting shortstop who usually averaged around .200 batting average.  Anyone below Mendoza in the batting average category was considered to be hitting below the Mendoza line or very poorly.  (He made up for this for several years with strong fielding).  And now you know the rest of the story.

[Image: Mario Mendoza autograph card]


Correlations in direct marketing II: The Wrath of Khan Academy

Yesterday, we saw how to run and interpret correlations.  Today, we’re going to look at the implications of the way correlations are set up for direct marketers.

First and foremost, I must stipulate that correlation does not equal causation.  I did a good job of discussing this in a previous post talking about how attractive Matt Damon is in his movies.  Rather than go into a lot of detail on this, I’ll link over to that post here.  Looking back at that post, I forgot to put in a picture of Matt Damon, which I will rectify here:


[Image: Matt Damon]

Intelligence and attractiveness correlate;
I wish I could have explained this to people in high school.

This is fairly intuitive, given our discussion of height and weight earlier.  With exceptions for malnutrition and the like, it really doesn’t make sense to say someone’s weight causes them to be taller or height causes them to be heavier.

There’s a great Khan Academy video that covers a lot of this here. The Khan Academy video also gives me an excuse for the name of the blog post that I really couldn’t pass up.

Back to correlations: they only predict linear (straight-line) relationships.  Given the renewal rates we saw yesterday, a correlation is not the ideal tool for describing that relationship, as it will give you a rubric that says “for every decrease in decile number, you will have an X% increase in renewal rate.”  We can see looking at the data that this isn’t the case: moving from 2 to 1 has a huge impact, whereas moving from 9 to 6 has an impact that is muddled at best.

Another example is the study on ask strings we covered here.  When looking at one-time donors, asking for less performed better than asking for the same as their previous gift.  Asking for more also performed better than asking for the same as the person’s previous gift.  However, if you were to run a correlation, it would say there is no relationship because the data isn’t in a line (graphically, you are looking at a U shape).  We know there is a relationship, but not one that can be described with a correlation.
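You can see this numerically: a symmetric U-shape produces a correlation of essentially zero even though the relationship is real (these numbers are invented to mirror the ask-string result, not taken from the study):

```python
import numpy as np

ask_delta = [-2, -1, 0, 1, 2]  # asking for less ... the same ... more
response_rate = [0.12, 0.10, 0.08, 0.10, 0.12]  # best at both extremes

# Pearson correlation between ask amount and response rate
r = float(np.corrcoef(ask_delta, response_rate)[0, 1])
# r comes out at essentially zero despite the obvious U-shaped pattern
```

The lesson: a near-zero r means “no straight-line relationship,” not “no relationship.”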

You’ll also note that correlations only work between two variables.  Most systems of your acquaintance will be more complex than this, and we’ll have to use other tools for them.  That said…

Correlations are a good way of creating a simple heuristic.  SERPIQ just did an analysis of content length and search engine listings that I learned about here. They found a nice positive correlation:

wordcountcontent

Hat tip to CoSchedule for the graph and SERPIQ for the data.

As you read further in the blog post, you’ll see that there is messiness there.  It’s highly dependent on the nature of the search terms, the data are not necessarily linear, and non-written media like video are complicating factors.  However, the data lend themselves to a simple rule of “longer-form content generally works a little bit better for search engine listings” or, in a lot of cases, “your ban on longer-form content may not be a good idea.”  While these come with some hemming and hawing, being able to have simplicity in your rules is a good thing, making them easier to follow.

But refer back to the original point: correlation isn’t causation.  Even in the example above of word count being related to search engine listings, more work is required to find out what type of causal relationship, if any, there is between word count and search engine listings.

Hope this helps.  Tomorrow, we’ll talk about regression analysis, which will take you all the way back to your childhood to look for memories that will…

Um, actually, it will be about statistical regression analysis.  Never mind.


Correlations in direct marketing: an intro

This week, I’d like to take a look at some of the formulae and algorithms that run our lives in direct marketing.

[Image: Al Gore giving his global warming talk in Mountain View, CA]

The algorithm was invented by Al Gore and named after his dance moves
(hence, Al Gore rhythm).
Here is a video of him dancing.

Before you run in fear, my goal is not to make you capable of running these algorithms — some of the ones we’ll talk about this week I haven’t yet run myself.  Rather, my goal is to create some understanding of what these do so you can interpret results and see implications.

And the first big one is correlation.

But, Nick, you say, you covered correlation in your Bloggie-Award-winning post Semi-Advanced Direct Marketing Excel Statistics; will this really be new?

My answers:

  1. Thank you for reading the back catalog so intensely.
  2. The Bloggies, like 99.999998% of the Internet, do not actually know this blog exists.
  3. In that post, I talked about how to run them, but not what they mean.  I’m looking to rectify this.

So, correlation simply means how much two variables move together (closely related to covariance; correlation is covariance scaled to a standard range).  This is measured by a correlation coefficient that statisticians call r.  R ranges from 1 (perfect positive relationship) to 0 (no relationship whatsoever) to -1 (perfect negative relationship).  The farther from zero the number is, the stronger the relationship.

A classic example of this is height and weight.  Let’s say that everyone on earth weighed 3 pounds for every inch of height.  So if you were 70 inches tall (5’10”), you would weigh 210 pounds; at 60 inches (5’0”), you would weigh 180 pounds.  This is a perfect correlation with no variation, for an R of 1.

Clearly, this isn’t the case.  If you are like me, after the holidays, your weight has increased but you haven’t grown in height.  Babies aren’t born 9 inches long and 27 pounds (thank goodness).  And the Miss America pageant/scholarship competition isn’t nearly this body positive.  So we know this isn’t a correlation of one.

That said, we also know that the relationship isn’t zero.  If you hear that someone is a jockey at 5’2”, you naturally assume they do not moonlight as a 300-pound sumo wrestler on the weekend.  Likewise, you can assume that most NBA players have a weight that would be unhealthy on me or (I’m making assumptions based on the base rate heights of the world with this statement) you.

So the correlation between height and weight is probably closer to .6.

There’s a neat trick with r: you can square it and get something called the coefficient of determination.  This number tells you how much of the variation in one variable is predicted by the other.  So, in our height-weight example, 36 percent (.6 squared) of the variation in height is explained by its relationship with weight and vice versa.  It also means that there’s 64 percent of other in there (which we’ll get to tomorrow when we talk about regression analysis).
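Here’s that calculation by hand in Python, on a small invented height/weight sample (the .6 in the text is an illustrative figure, not derived from this data):

```python
import math

heights = [60, 63, 66, 68, 70, 72, 75]          # inches
weights = [115, 140, 150, 145, 180, 190, 205]   # pounds (invented)

def pearson_r(xs, ys):
    """Pearson correlation: covariance over the product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(heights, weights)
r_squared = r ** 2  # share of variance in one explained by the other
```

In Excel this is the CORREL function, but seeing the arithmetic once makes the number less mysterious.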

You can get some of this intuitively without the math.  Here’s a direct marketing example I was working on a couple of weeks ago.  An external modeling vendor had separated our file into deciles in terms of what they felt was their likelihood of renewing their support.  Here’s what the data looked like by decile:

Decile | Retention rate over six months
1      | 50%
2      | 40%
3      | 35%
4      | 32%
5      | 30%
6      | 25%
7      | 27%
8      | 24%
9      | 28%
10     | 21%

No, this isn’t the real data; it’s been anonymized to protect the innocent and guilty.

You need only look at this data to see that there is a negative correlation between decile and retention rate — the higher (worse) the decile, the lower the retention rate.  It also illustrates that it’s not a perfect linear relationship — clearly, this model does a better job of skimming the cream off the top of the file than predicting among the bottom half of the file.
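Running the numbers confirms the eyeball test (same anonymized figures as in the table above):

```python
import numpy as np

deciles = list(range(1, 11))
retention = [0.50, 0.40, 0.35, 0.32, 0.30, 0.25, 0.27, 0.24, 0.28, 0.21]

# Pearson correlation between decile and retention rate
r = float(np.corrcoef(deciles, retention)[0, 1])
# strongly negative, but not -1: the relationship isn't perfectly linear
```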

Tomorrow, we’ll talk about the implications of these correlations for direct marketing.


Increasing your non-electronic mail open rates

These direct marketing kids today, with their emails and analytics and the Facebook — they don’t know how hard it used to be.  Back in my day, we sent people letters.  You couldn’t measure open rates!  You’d just see if they sent back their check and hoped they opened it!  And the mail carrier walked uphill both ways.

The problem is that my day was yesterday.  We still can’t tell if people are opening our envelopes.  Given the amount of testing of colors and windows and teaser copy that goes into this area, which can only be measured indirectly, this is a pity.

Today’s study doesn’t entirely solve this but takes a nice step forward in understanding what gets people to open and react to envelopes.

[TANGENT]

I know I shouldn’t be talking about this right now — I should be writing about direct marketing New Year’s Resolutions, just like I should have done the year in review last week, Star Wars content the week before that, and preparing for year-end giving content in November.

And maybe I’ll do that some day, but for right now, I’m going to keep counterprogramming.  Think of me as the nonprofit direct marketing Puppy Bowl — if you tire of zigging, come over here and I’ll probably be zagging.

[Image: Puppy Bowl]

As Chekov said, if you mention the Puppy Bowl in Act 1,
you must show an image of it in Act 3.

[/TANGENT]

To test envelopes, GfK has a panel of German households who give GfK the direct mail pieces they do not want at the end of each month, either opened or unopened.  The study authors (Feld et al) then looked at the impact of envelopes on the open rate and keeping rate of the mail pieces.  They looked at 68 attributes of 36 design characteristics across almost 400 nonprofit campaigns.  You can get the whole study here if you want the full list, but suffice it to say that when you are looking at what percentage of the response device in an envelope is colored and have five different segments for this, you are doing a pretty comprehensive look at the piece.

The first big result to note is that the open rate did not correlate to the keeping rate. I’ve seen this personally — when an envelope promises something the contents do not deliver, the piece is shredded with extreme prejudice.  Now on the nitty-gritty:

  • Colored envelopes decreased open rates.  I know, it’s difficult to cut through the clutter, but that apparently isn’t the way to do it.
  • Larger envelopes, questioning teasers, and a promotional design on the envelope back all increase open rates.  I would go one step further and advocate for questions that can’t be answered with a yes/no and that elicit curiosity.  While you could put “What is the capital of North Dakota?”* on your envelope, I wouldn’t recommend it.
  • Pre-stamped return envelopes increase keeping rate; postage paid on the outside envelope decreases open rates.  These may seem obvious, but you will have to assess whether the cost involved is worth the increases, as both will increase your cost per piece.
  • A testimonial from a helper increases keeping rates.  It seems like I’ve been talking about variants of these for the past couple of weeks — how social proof and authority can help your appeals, as well as how information can enhance persuasiveness among high-dollar donors.
  • Premiums can work, but expensive ones decrease keeping rates.  People like to receive things (reciprocity at work), but the idea that the nonprofit is spending more on the premium than on the mission is a significant turnoff.
  • Efforts to recruit new members decrease keeping rates. My guess here is that it’s too much too soon.  I’ve seen membership efforts do very well to existing donors (who likely want a sense of belonging), but for new supporters, it might be like proposing marriage on the first date.
  • In the letter, logos and fax numbers increase keeping rates.  Yes, fax numbers.  It also appears that having the phone number decreases keeping rates.  I have no idea why this would be.  If you do, please leave it in the comments to help illuminate us.
  • People kept letters at higher rates closer to the end of the month.  Perhaps a “more disposable income” effect at the end of the month?  I’m not sure here either.

Finally, longer letters and personalization increase keeping rates.  I’ve talked about personalization helping your efforts.  Longer, in this case, means more than one page of letter, but my guess is that there may be a sweet spot after that in the 2-4 page range.

We hear about information overload, but I would argue that there is mostly an overload of bad content generated by the same people who created Mad Libs (e.g., [number] ways to [verb] your [noun]; [number] videos that will keep you [verb]ing: number [number] will blow your mind).

A well-written letter, by contrast, can be a beautiful and effective thing.

So, the ideal mail piece in this study (were cost no object) would be a larger-than-average white envelope.  It would not use the impersonal “postage paid” indicia, would ask an enticing question to get the potential reader interested, and the reverse would feature a strong offer.  A letter with your logo and fax number (for now, don’t question it — just go with it) that is more than one page would be on the inside, featuring a testimonial from a helper.  And your return envelope would be pre-stamped.

Nothing completely earth-shattering here, I would say, but these are some very solid tips for making your pieces more effective.


* It’s a trick question — both the N and the D are capitals.

Increasing your non-electronic mail open rates

Education versus emotion in direct marketing appeals

You cannot educate your donors into giving.  It’s close to a cardinal rule in direct response fundraising.

At the same time, it’s a constant temptation.  You have great programs that save and change lives.  You’ve worked hard to validate that you are making a significant impact.  And you’d love to tell someone about it who cares.

Karlan and Wood tested education versus emotion in mail appeals.  And while the results are a bit more obvious than the last two days’ studies, they are still instructive for direct marketers.

The researchers sent mailers to recent donors (which they defined as within the past three years; an interesting difference between researchers and us direct marketing practitioners, who would likely consider someone whose single gift came almost three years ago lapsed rather than recent).  In the first test, the control group (⅔) received an emotional and personal story about a participant in the nonprofit’s program.  The test group (⅓) received an additional paragraph in the insert that talked about the “rigorous scientific methodologies” demonstrating the impact of the nonprofit’s program.

For the follow-up, one-third received an emotional appeal, one-third received the control letter plus paragraphs about program effectiveness, and one-third received the control letter plus paragraphs about program effectiveness that explicitly cited Yale researchers as the source of program effectiveness.  This is likely an attempt to use authority influence similar to the Gates Foundation study discussed last week.

The researchers found that the information on program effectiveness had no impact on either likelihood of giving or amount given.

That is a nail in the coffin for those who think we should be talking about program effectiveness and double-blind studies and outputs versus outcomes versus impacts in our fundraising copy.

And we could bury that coffin now except for an interesting split that the researchers found in the data: effectiveness data turned off smaller donors and turned on larger donors.

That is to say, people who had recently given larger amounts (about $100 or more) were about one percentage point more likely to donate when given effectiveness information and donated $4.45 more.  Smaller donors were .6 percentage points less likely to donate when given effectiveness information.  With controls in place for things like household income, previous gifts, etc., the researchers were able to reject the idea that larger and smaller donors behave the same.

This goes to the idea that there are two different mechanisms for giving going on: heart gifts and head gifts.  (Or, if you prefer the Kahneman nomenclature, gifts that come from System 1 and System 2.)

Your smaller donors are potentially giving gifts because of how it makes them feel and how you make them feel as a result.  A $10 gift is something many can do without deep contemplation.  However, if you are dedicating a more substantial part of your income to a gift, you may want to know that Yale researchers (or, better yet, Vanderbilt researchers) have backed up the program’s effectiveness.

The lesson that comes from this, in my mind, is that we should not use the same verbiage in our letters for a high-dollar and a low-dollar audience.  In fact, this study indicates that you can get more and larger gifts from your high-dollar donors with a simple paragraph addition to your existing emotional impact appeal.

In the unlikely event that there are social scientist researchers reading this, this study raises three questions in my mind:

  1. Does the amount at which the heart/head switch occurs depend on your income?  That is, for some, $100 is a life-changing amount of money; for others, it’s a tip at a restaurant.  My thought would be that everyone has a different threshold for what type of gift is which.
  2. Is this why we see an end to people upgrading their gifts at a certain point?  That is, once a charity has recruited your heart, is there a point beyond which you won’t give to them because they are entering the head realm?
  3. Finally, is part of the reason sustaining gifts work well that they break down a gift that, annualized, would require sign-off from the brain into gifts that can be given on an emotional basis?

Please leave your thoughts in the comments.


The science of ask strings

Today’s direct marketing paper says, in essence, the less you ask for, the more people respond and the less they give.  Duh.

But there are some great surprises in the paper that make it well worth exploration.

De Bruyn and Prokopec took a look at anchoring effects in ask strings.  Specifically, they worked with a large and anonymous European non-profit to mail to their donor list.  They did so with a 3 x 3 matrix of ask strings set by two criteria: 1) is the initial ask below, at, or above the donor’s previous contribution? and 2) is the ask string steep (20% increases between levels), steeper (50% increases), or steepest (80% increases)?  The ask strings were four items long.

This is a bit confusing, but here are initial and final asks for each condition, assuming a $100 donor.  You’ll note they are appropriately rounded:

           Lower        Equal         Higher
Steep      $85 … $140   $100 … $170   $120 … $200
Steeper    $70 … $230   $100 … $350   $150 … $500
Steepest   $55 … $320   $100 … $580   $180 … $1000

Some of these may look to you as they looked to me — fairly aggressive.  In the higher steepest condition, you are asking your $100 donor to donate $180, $320, $580, or $1000 — not a common ask string by any means.  That’s why I’m glad there are studies like these that test this with other people’s money.
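The construction of these strings is simple to sketch in code. This is my own reconstruction, not the researchers’ procedure, and the nearest-$5 rounding is a stand-in for the paper’s “appropriate” rounding (which differs slightly in a few cells):

```python
def round_to_5(amount):
    """Round to the nearest $5 -- a stand-in for the paper's rounding."""
    return int(amount / 5 + 0.5) * 5

def ask_string(previous_gift, start_factor, step, length=4):
    """Build an ask string of `length` amounts.

    start_factor: where the string starts relative to the previous
                  contribution (e.g., 0.85 below, 1.0 equal, 1.2 above).
    step:         0.2, 0.5, or 0.8 -- the 20/50/80% jump between levels.
    """
    asks = []
    amount = previous_gift * start_factor
    for _ in range(length):
        asks.append(round_to_5(amount))
        amount *= (1 + step)
    return asks

# A $100 donor in the "equal, steeper (50%)" condition:
print(ask_string(100, 1.0, 0.5))  # [100, 150, 225, 340]
```

The compounding happens on the unrounded amounts here; compounding on the rounded values (or rounding to coarser increments at higher amounts) would reproduce the paper’s grid more exactly.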

As I mentioned, they found asking for more got more in average donation but suppressed response rate.  However, there were several other elaborations on this:

  • Ask string steepness didn’t affect response rate. Only the lowest, left-most ask seemed to affect response rate significantly.  The lesson here is that you can ask for more and get more without hurting response.  This is potentially free money.
  • Steepness did increase average gift.  So 80% increases won in this case.
  • Multi-donors were more set in their ways. Indexing higher than their previous contribution was related to a big drop — from an average of 10.5% among those whose ask string started at equal to 9.1% among those who were asked for higher.  It is, not shockingly, as if the multi-donors were saying that they had already told the nonprofit what they give, and don’t you forget it.
  • The worst thing you could do was ask single donors for what they gave before.  This surprised me.  Response rates for the single donors were 5.3% in the lower group, 4.1% in the equal group, and 4.3% in the higher group.  Indexed average gifts were .937 (lower), .909 (equal), and 1.162 (higher).  So there was a trough in both response rate and average gift for asking a single donor for the same thing they gave before.

The study didn’t include net revenue per piece charts, so I built them myself; I found them invaluable in understanding the implications.  These are indexed to a $100 donor to make the math easy:

Single donors   Lower    Equal    Higher
Steep           $4.74    $3.54    $4.23
Steeper         $4.76    $3.96    $5.62
Steepest        $5.49    $3.68    $5.26

Multi-donors    Lower    Equal    Higher
Steep           $10.42   $10.16   $9.96
Steeper         $9.30    $10.44   $9.67
Steepest        $10.46   $10.53   $10.68
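The arithmetic behind charts like these is just response rate times average gift; a minimal sketch (the function name and the sample numbers are mine, purely illustrative):

```python
def revenue_per_piece(response_rate, avg_gift, cost_per_piece=0.0):
    """Expected revenue per piece mailed.

    response_rate:  fraction of recipients who give (e.g., 0.05 for 5%).
    avg_gift:       average gift among those who respond.
    cost_per_piece: subtract production/postage cost to get true net.
    """
    return response_rate * avg_gift - cost_per_piece

# Illustrative only: a 5% response at an $80 average gift
print(revenue_per_piece(0.05, 80.0))       # ~4.0 gross per piece
print(revenue_per_piece(0.05, 80.0, 0.6))  # ~3.4 net of a $0.60 piece cost
```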

All this indicates something to me that I hadn’t thought of before (and maybe you have and have tested it — if so, please put it in the comments or email me at nick@directtodonor.com so we can have a report from the trenches): different ask strings for single versus multi-donors.

The hypothesis that I would form based on these results is that people who have given before are set in their ways of what they want to give, and thus we should index from the previous contribution or the HPC (highest previous contribution).  Single donors are more pliable, so we can work to get more value out of them early in the relationship, elevating their support before they get set in their ways.


Hope this has been as valuable for you as it has been for me.


How to structure your matching gift campaign

Matching gift campaigns work. But are they necessary?

Whether it’s a grantor’s challenge fund, a campaign match, or a fund set up by a generous donor or donors, matching gifts are a frequently used and frequently successful tactic.  Most of the time, it’s set up as a “double your impact” campaign.

Three researchers — Huck, Rasul, and Shepard — looked at whether a lead donor increased the success of a campaign and how the structure of the match impacted that success.

They did this for the Bavarian State Opera House.  (BTW, if you are a researcher and want to run a test with donors on your dime, email me at nick@directtodonor.com; I’m usually game.)


The Bavarian State Opera House.
Fundraising motto: hey, these inlays don’t gild themselves.

Here were the six test treatments:

  1. Control: No lead donor, no match commitment
  2. Lead donor: A generous donor has already funded part of the program for 60,000 Euros (remember, Bavarian State Opera House).  We need your help with the other part.
  3. 50% match: A generous donor will match your Euro with .5 Euro.
  4. 100% match: Euro for Euro match
  5. Non-linear matching: A generous donor will match any gift made over and above 50 Euros.  (Which is to say if you donate 120 Euros, the donor gives 70.  If you donate 70, the donor gives 20.  If you donate 40, the donor gives nothing)
  6. Fixed gift matching: A generous donor will match any positive gift made with 20 Euros of his* own.
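To make the differences concrete, here is a sketch of what the charity receives (donor gift plus match) under each treatment; the function and scheme names are my own shorthand, not the paper’s:

```python
def total_to_charity(gift, scheme):
    """Donor gift plus the lead donor's match under each treatment."""
    if gift <= 0:
        return 0
    if scheme in ("control", "lead_donor"):  # no match on the marginal gift
        match = 0
    elif scheme == "match_50":               # 0.5 Euro per Euro given
        match = 0.5 * gift
    elif scheme == "match_100":              # Euro for Euro
        match = gift
    elif scheme == "nonlinear":              # only the amount above 50 Euros
        match = max(gift - 50, 0)
    elif scheme == "fixed_20":               # flat 20 Euros for any positive gift
        match = 20
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return gift + match

# The paper's own example: a 120 Euro gift under the non-linear match
print(total_to_charity(120, "nonlinear"))  # 190 (120 from the donor + 70 matched)
```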

Got your guesses of what will do what?  Good — here we go with the results**:

                   Response rate   Average gift (Euros)   Revenue per piece (Euros)
Control            3.7%            74.3                   2.79
Lead donor         3.5%            132                    4.62
50% match          4.2%            101                    4.19
100% match         4.2%            92.3                   3.84
Non-linear match   4.3%            97.9                   4.18
Fixed gift match   4.7%            69.2                   3.27

Yeah, not what I thought either.  I figured, from all of the virtual ink I spilled on social proof and authority last week, that the presence of a lead donor would help. Presumably, there was another mechanism in place — that of anchoring.  I’ll dedicate a full post or five to anchoring effects at some point; for now, suffice it to say that by throwing out the number of 60,000 Euros, you can trigger the idea that a person’s gift should be closer to that number.  For some, that may turn them off (although the decline in response rate wasn’t statistically significant).

What surprised me was that the matches didn’t help revenue per piece relative to the lead donor treatment (unless, of course, the match itself is generating marginal revenue).  The matches increased response rates, but the average gift was significantly lower in all of the match conditions.  The authors’ hypothesis is that the match has a bit of a crowding-out effect — that is, the donor feels like their 50 Euros is actually 100 Euros, so they need not make the donation of 100 Euros to have the impact they wanted to have.  This is certainly plausible and consistent with previous research.

What to make of this? Like many of you, I’m guessing, I’d only tested matching gift language against control language. However, there is some evidence here that simply stating that a lead gift has been made can harness the anchoring effect and support the idea that a program is worth funding, without the potential negative byproduct of crowding out donations.

That’s for the general case.  You might also take a look at a fixed gift match depending on your goal.  Generally, I prefer quality of donors to quantity.  However, if you were running a campaign like lapsed reactivation, you might legitimately want to maximize your response rate at the expense of short-term net revenue.

Based on this, I’m going to be looking at testing this against our typical matching gift campaign.  If you do likewise, please let me know at nick@directtodonor.com or in the comments below.  It would be great to see additional evidence on this.

*  The gendering is from the original — not my own.

** The rounding is in the original paper and throws off the revenue per piece variable a bit, but I chose to stick with what they had in the original paper.
