Boiling a frog online

They say that the way to boil a frog is not to put it in boiling water.  It’s to put it in cold water and slowly turn up the heat.  Because the change is gradual, the frog will not notice until it is boiled.

Who “they” are and why they want to boil a frog is still unknown.  But the somewhat unfortunate metaphor has a point — it’s often easier to get someone to make a big change in small steps.

Thus, instead of thinking in conversions (for example, getting a visitor to your site to donate), it’s often easier to think in terms of microconversions — the little steps that lead to your (and hopefully your prospect’s) goal.  As the great conversion expert Avinash Kaushik says, “Focus on measuring your macro (overall) conversions, but for optimal awesomeness identify and measure your micro conversions as well.”

There are a few ways you can make this work for you:

Track microconversions and how they lead to your ultimate goal.  Some of these microconversions can include:

  • Connecting with you on social
  • Commenting on a blog post
  • Taking an advocacy action
  • Signing a petition
  • Downloading a white paper
  • Looking at a donation page
  • Subscribing to your e-newsletter
  • Contacting your organization
  • Creating an account
  • Looking for directions to your office

From here you are looking at a classic consultant’s 2×2 matrix:

  • High usage of the microconversion; high conversion to your end goal.  These are the things that make you happy.  For example, if action alerts are the most-used feature on your site and advocates are among the people most likely to donate, you are doing your job well.
  • Low usage of the microconversion; low conversion to your end goal.  You can ignore these things for now; they’ll require a lot of work to get into shape.  You have lower hanging fruit.
  • High usage of the microconversion; low conversion to your end goal.  This is one form of an opportunity — you want to work to optimize the path from the microconversion to your end goal.  Let’s say many people are downloading your white paper, but few of them are donating.  You might find that your communications are largely around different topics from the white paper and your asks aren’t related — these are all fixable things.
  • Low usage of the microconversion; high conversion to your end goal.  If almost no one is commenting on your blog, but almost everyone who comments donates, you should be working to get as many people as possible to comment on your blog.

Also, if you are getting fancy, you can compute the value of each microconversion by looking at the donation history of people who take the action.  I’d advise you to get fancy, but the matrix is a good start.
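To make both the matrix and the "getting fancy" value computation concrete, here is a minimal sketch. Everything in it — the visitor IDs, event names, gift amounts, and the 50% quadrant thresholds — is invented for illustration; your analytics platform and cutoffs will differ.

```python
# Illustrative sketch: bucket each microconversion into the 2x2 matrix
# and estimate an average dollar value per person who took the action.
# All visitor IDs, events, and gift amounts are invented.

# visitor_id -> set of microconversions taken
actions = {
    "v1": {"petition", "newsletter"},
    "v2": {"petition"},
    "v3": {"newsletter", "whitepaper"},
    "v4": {"whitepaper"},
}

# visitor_id -> total donated ($0 if they never gave)
donations = {"v1": 50.0, "v2": 25.0, "v3": 0.0, "v4": 0.0}

total_visitors = len(actions)
results = {}  # microconversion -> (usage rate, conversion rate, avg value)

for micro in ("petition", "newsletter", "whitepaper"):
    takers = [v for v, taken in actions.items() if micro in taken]
    usage = len(takers) / total_visitors
    conversion = sum(1 for v in takers if donations[v] > 0) / len(takers)
    avg_value = sum(donations[v] for v in takers) / len(takers)
    results[micro] = (usage, conversion, avg_value)
    quadrant = ("high" if usage >= 0.5 else "low") + " usage / " + \
               ("high" if conversion >= 0.5 else "low") + " conversion"
    print(f"{micro}: {quadrant}, avg value ${avg_value:.2f}")
```

In this toy data the petition lands in the high-usage/high-conversion quadrant and the white paper in high-usage/low-conversion, the "fix the path" opportunity described above.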

Test a multi-stage donation form.  Tradition says that you click a big button that says “Donate” and you are taken to a long form that you fill out in its entirety.  Tradition will get you all of the gifts that you traditionally get.  The boiling-a-frog analogy works here; people want to finish things they start, so turning a long form into a series of easier microsteps can increase your overall conversion rate.

The Heritage Foundation tried this technique and found that a two-step form increased registrations by 99% over a one-step form.

A few ways to do this include:

  • Ask for a donation amount up front.  If you mouse over a donate button, an ask array or a free response question can capture an amount immediately and pass it through to the next step.  It’s a simple step and once someone has taken that action, they are more likely to fulfill their donation.  (And if they don’t, you have a solid ask amount for their next visit or remarketing.)
  • Separate the credit card information from basic address information, with the address first.  Credit card information is the most personal information, so you want to get someone to volunteer their more basic information first.
  • Remember to use the period after donation confirmation to make a monthly giving ask as described here.


Introduce your surveys with easy questions first.  There is a reason that professional pollsters save questions like race and household income for the end — they come at a time when the subject is already psychologically committed to completing the survey.  As we’ve discussed, commitment is a very powerful thing.  (Also, if the person stops at that point, you still have the main data you want.)

If you are doing an online survey, start with a simple question up front, then build on future screens.  An online progress gauge is also helpful.  When a person knows there’s only 20% left in the survey, they are more likely to complete it (just like they are more likely to donate when there’s only 20% left in a campaign).

The big commonality among all of these techniques is to start small and build to a larger commitment.  It won’t help convert those who come to your web site looking to make a donation (OK, maybe it might), but it will help you build commitment among constituents who are less certain about taking a big step forward in their relationship with you.


How long should a story be?

Long enough, and no longer.  There!  That was a quick post.

I just realized that I’ve referred many a time to telling quality stories, but haven’t gone into a lot of detail on how.

So that starts today with the length of your story.  I like this topic partly because I get to quote Jeff Brooks’s The Fundraiser’s Guide to Irresistible Communications:

“I’ve tested long against short many times.  In direct mail, the shorter message only does better about 10 percent of the time (a short message does tend to work better for emergency fundraising).

But most often, if you’re looking for a way to improve an appeal, add another page.  Most likely it’ll boost response.  Often it can generate a higher average gift too.

It’s true in email as well, though not as decisively so.”

In addition to emergencies, I’ve personally found shorter to be better with appeals where urgency is a main driver (e.g., reminder of matching gift deadline; advocacy appeals tied to a specific date) and institutional appeals like a membership reminder.

Other than that, length is to be sought, not avoided.

This is counterintuitive; smart people ask why our mail pieces are so long.  And it’s not what people say themselves.  A recent donor loyalty study from Abila indicates that only 20 percent of people read five paragraphs in and only seven percent are still reading at the ten-paragraph mark.

Here’s a tip: if you are reading this, this data point is probably not correct.

The challenge with this data point is that they didn’t test it; they asked donors.  Unfortunately, donor surveys are fraught with peril, not the least of which is that people stink at knowing what they would do (it’s much better to see what they actually do).  We talked about this when discussing donor surveys that don’t stink.

Other questionable results from this survey include:

  • Allegedly the least important part of an event is “Keep me involved afterward by sending me pictures, statements on the event’s impact, or other news.”  So be sure not to thank your donors or talk to them about the difference they are making in the world!
  • 28% of people would keep donating even if the content they got was vague, was boring, talked about uninteresting programs, had incorrect info about the donors, and was not personalized.  Unfortunately, I’ve sent these appeals and the response rate isn’t that high.
  • 37% of donors like posts to Twitter as a content type.  Only 16% of donors follow nonprofits on social media.  So at least 21% of people want you to talk to them on Twitter, where they aren’t listening?

So length can be a strong driver and should be something you test.  But you want the right type of length.  Avoid longer sentences and paragraphs.  Shorter is easier to understand, and therefore feels truer.

Instead, delve into rich detail.  Details and active verbs make your stories more memorable.  And that helps create quality length, not just length for length’s sake.

And don’t be afraid to repeat yourself in different words.  Familiarity breeds content.  It also helps skimmers get the important points in your piece (which you should be underlining, bolding, calling out, etc.).

This may not seem like the way you would want your communications to read.  Remember, you are not the donor.  Especially in the mail, donors who give like to receive and read mail.  Let’s not disappoint.


After posting this, I heard a great line in Content Inc that a story should be like a miniskirt: long enough to cover everything that needs to be covered, but short enough to hold interest.  So I had to add that as well…


Creating useful donor surveys

In my DMA Leadership Conference talk, I said that people who listened to what donors say they want in donor surveys deserve to be lied to.  That was obviously too harsh – what I should have said is that they deserved to be misled.

Because people (not just donors, but all human beings*) don’t mean to lie to you; they just don’t know what their true motivation is.  As we’ve seen, emotional reaction happens 6,000 times faster than rational thought.  So unless someone is doing System Two thought, where they rationally consider all alternatives, the role reason plays in the process is coming up with the best possible justification for a decision already made.

Consider a study that asked people to rank their top 16 motivations.  Sex was rated #14; wealth was dead last.  Then they looked at actual subconscious motivators of decisions.  Sex was rated #1 and wealth was rated #5.

This should come as no surprise to people who have met, well, ya know, people.  But it was a surprise to the people themselves, who think themselves chaste and incorruptible, but in reality dream of having very special moments in Scrooge McDuck’s vault.


But that doesn’t mean all donor surveys are bad – far from it.  It just means that, in a statement that may get me arrested by the Tautology Police**:

Bad donor surveys are bad.  Good donor surveys are good.

Common traps in donor surveys:

  • Talking only to current donors. You want to talk to people who stopped giving as well, to the extent that they will talk to you.  After all, you are looking for the difference between these two groups.  Trying to define who your good donors are without talking to former donors is like saying the reason that Fortune 500 companies are successful is because they have employees and offices.
  • Asking donors to analyze why they did what they did. They don’t know.  So they are going to try to figure out what answer someone like them would generally say or what they think you want to hear.  Neither is helpful to you.
  • Asking donors what is most important to them. Clearly, from the above, the answer is sex.  Looking only at your limited options, however, they will probably make mistakes in determining what is important to them, similar to the poor people who thought that sex and wealth (aka Genie Wishes #1 and #2) didn’t impact them.

So how do you construct a survey that gets to these important points?  You are going to set up your survey so that you can run a regression analysis.****  If you need help with how to do this, check out our post on basic regression.

You will need a dependent variable.  Ideally, this will be donation behavior because it is a clear expression of the behavior you are trying to impact.  If not, an overall satisfaction score with the organization will be generally OK, as it should correlate strongly with donation behavior.

For your independent variables, ask about aspects of your organization.  So, for example, “have you ever called X Organization about your donations?”, “did you receive a thank-you note for each donation you made?”, “have you been to X’s Web site?”, “how many days did it take for you to get your thank-you note on your last gift?”, etc.

The powerful thing about regression analysis is that it will help you figure out both how people feel about their experience and how important that experience is to them.  For example, my guess is that for most organizations, the number of days it took to get a thank-you will be a good predictor of retention.  Since the analysis tells you the strength of that association, you can invest the right amount of resources into that area versus new-donor welcome packages or donor relations staff or database infrastructure and the like.


* Yes, non-donors are also considered human beings – just slightly lesser ones.

** Motto: Enforcing through enforcement since Socrates.***

*** Former motto: Our motto is our motto.

**** Or other modeling if you are feeling fancy.


Learning from political fundraising: hypercustomization

On the path to his win in Iowa, Ted Cruz took an unusual position for a presidential candidate. He spoke out against fireworks regulations.

Usually, Iowa contests focus on broad national issues that a person would be expected to lead on as president (plus ethanol).  Fireworks rank as a national issue somewhere around garbage collection and why-don’t-they-do-something-about-that-tacky-display-of-Christmas-lights-on-Steve-and-Janice’s-house.

But from a data perspective, the Cruz campaign knew its supporters.  There’s a great article on this here.  Here’s a quote:

“They had divided voters by faction, self-identified ideology, religious belief, personality type—creating 150 different clusters of Iowa caucus-goers—down to sixty Iowa Republicans its statistical models showed as likely to share Cruz’s desire to end a state ban on fireworks sales.

Unlike most of his opponents, Cruz has put a voter-contact specialist in charge of his operation, and it shows in nearly every aspect of the campaign he has run thus far and intends to sustain through a long primary season. Cruz, it should be noted, had no public position on Iowa’s fireworks law until his analysts identified sixty votes that could potentially be swayed because of it.”

As we unpack this, there are several lessons we nonprofits can take from this operation:

The leadership role of direct marketing.  Cruz’s campaign is run by a direct marketing specialist.  Contrast this with Marco Rubio’s campaign, which is run by a general consultant, or Jeb Bush’s, which was run by a communications specialist.  As a result, the Cruz campaign’s analytics and polling are skewed not toward which generalized messages do best with a focus group or are least offensive to the greatest number of people, but toward what moves specific voters.

In fact, in the campaign, the analytics team has a broader set of responsibilities than normal.  Analytics drive targeting decisions online and offline.

The imperative to know your constituents.  Much political polling is focused on knowing voters in the aggregate.  The Cruz campaign wanted to know them specifically.  So the campaign gathered supporters and asked them about their local concerns.  This turned up 77 different ideas, including red-light cameras and, as you probably guessed, fireworks bans.  We’ve talked about knowing your constituents by their deeds and by asking them; what’s important about this example is the specificity of the questions.  It’s not “what do you like or dislike”; it’s “what do you care about.”

Testing to know potential constituents.  Once the campaign had these ideas, it tested them online with Facebook ads.  The ads weren’t specific to the Cruz campaign, but rather asked people to sign up for more information about that issue.  Once they had these data, they not only had specific knowledge of what people cared about, but also the grist for the mill of a data operation that could model Iowa voters and their key issues.

Focusing on actual goals.  Cruz’s end goal is to drive voters, just like ours is to drive donations.  By simplifying things down to what gets people to pull their levers/hit the button/punch the chad, they had a crystallizing focus.  One can debate whether this is a good thing, as the campaign sent out a controversial Voting Violation mailing that attempted to shame infrequent voters with Cruz leanings to the polls.  (It should be noted that these mailings are part of campaign lore — they’ve been tested and found to be very efficient, but few campaigns have wanted the backlash that inevitably comes from them.)  But that focus on things that matter, rather than vanity metrics like Facebook likes, helps with strategy.

Hypertargeting: All of this led to some of the most targeted direct marketing that has been seen in the political world.  When telemarketing was employed for particular voters, not only would the message reflect what they cared about (e.g., fireworks bans) but also why they cared about it (e.g., missed fun at 4th of July versus what seems to some as an arbitrary attack on liberty).  This came from both people’s own survey results and what models indicated would matter to them.

So now, let’s look at this in a nonprofit direct marketing context.  How well do you know your donors and potential donors?  Or how well do you really know them?  And how well do you play that back to them?

I’ve frequently advocated here playing back tactics to donors that we know work for them and focusing our efforts on mission areas and activities we know they will support at a segment level.

But this is a different game altogether.  The ability to project not only what someone will support, but why they will, and to design mail pieces, call scripts, and emails that touch their hearts will be a critical part of what we do.  And once you have this information, it’s cheap to act on: if you are sending a mail piece or making a phone call already, it’s simplicity itself to swap out the key paragraphs that will make the difference in the donation decision.

This also applies in efforts to get donors to transition from one-time giving to monthly giving or mid-major gift programs.

So, how can you, today, get smarter about your donors and show them you are smarter about them?


Availability heuristics and direct marketing: what we remember easily is all there is

Today, we’ll look at the availability heuristic. Availability means that if you can easily recall an example of something happening, you will judge it to be more common or important than something you can’t easily recall.

A classic example of this from the literature is that people overestimate the number of words that begin with the letter R. They also underestimate the number of words where R is the third letter.

Or, similarly, some people will say there are more six-letter words that end in “ing” than end in “g” (which is impossible). We can easily recall things that begin with R or end with “ing.” That’s how they are filed in our brains; thus, we think it happens more often.

This can affect how our causes are seen by the public. Quick: how many people are killed by drunk driving versus cell phone use while driving?

Got your answer?

In 2013, the last year for which we have data for both causes, drunk driving killed 10,110 people in the United States.

Cell phone use and driving killed 445 people

Chances are, if you are like most Americans, you thought these were about equivalent. You almost certainly did not think these two numbers were more than an order of magnitude different.

Why is that? Because you can look at the car next to you at a stoplight and see the driver is texting. It is far more difficult for you to look at the car next to you and see that the driver is drunk. And so our availability heuristic can easily recall cell phone use and driving and that gets moved up in our mental queue.

Incidentally, both are dangerous. If you are reading this on your phone while driving, please stop now.

So how can you use (or mitigate) this effect in your nonprofit direct marketing? The biggest example is taking advantage of news. Disaster fundraising is successful partly because it speaks to a desperate, urgent need, and partly because it reminds people that those needs are with us. Similarly, if your issue is in the news, most people think to reach out via fast means like email and text messaging. However, we don’t often think to swap out our telemarketing scripts or send out a direct mail piece for an urgent issue. One solution is to pre-print appeals. You can have stationery with a reply device on hand. If something urgent comes up, customize the copy, laser in the text, and go straight to postage.

It’s also important to build plausible scenarios. Were I to do marketing for an organization fighting drunk driving (you know, purely hypothetically), I shouldn’t say “When was the last time you were driving next to a drunk driver?” It’s very difficult to recall this.

However, what if I say:

“When was the last time you were out on the roads and the driver in front of you just didn’t seem right? You know, they were weaving in their lane, waited too long to brake, or didn’t seem to be paying attention…”

My guess is that you have seen numerous people who fit that description recently. In truth, not all of these people were drunk (they could be stoned, distracted, sleepy, morons, etc.), but it puts the frame around something that is instantly recognizable.

A less obvious solution is to ask people for a lot of negative feedback. One study looked at course evaluations for college students and found that if they were asked to provide 10 examples of how the course could be done better, they rated the course almost 10% higher than students who were asked to provide two examples.

The idea is that two examples are easy to come up with:

  1. The professor should consider using an antiperspirant
  2. Ethan Frome sucks; we shouldn’t read it

Boom. Done. Having to come up with 10 examples taxes the brain. Thus, we think the class was better because it’s hard to come up with things that are bad to say about it.

This was a shock to me, because one of my favorite open-ended survey questions is “What is the one thing you would change about X?”.  My thinking was that this is a way of cutting through all of the minutiae to find out what is important to people.  What I’ve been unconsciously doing is priming people to focus on that one bad thing and making them think it’s incredibly easy to come up with bad things to say.

This is probably also another reason to do search engine optimization and use those Google Grants. If people see your organization’s name associated with an issue in the sponsored listings, news section, images section, videos section, and organic search engine listings, you will be top of mind for them. When people are thinking about your cause, they will more likely think of your organization.

If you liked this post, please consider signing up for our weekly newsletter that bundles these along with other hopefully valuable stuff every Saturday.

And if you didn’t, please send me 17 reasons why not to nick@directtodonor.com.


Getting donor intelligence by asking your donors

Yesterday, I said you can get a good idea of who your donor is through their actions.  The trick here is that you will never find donor motivations you aren’t already testing for.  This is for the same reason that you can’t determine where to build a bridge by sitting at the river and watching where cars drive in and try to float across it, Oregon-Trail-style.


Damn it, Oregon Trail.  The Native American guide told me to try to float it.
Don’t suppose that was his minor revenge for all that land taking and genocide?

To locate a bridge, you have to ask people to imagine where they would drive across a bridge, if there were a bridge.  This gives you good news and bad news: good news, you can get information you can’t get from observation; bad news, you get what people think they would do, rather than what they actually will do.

True story: I once asked people what they would do if they received this particular messaging in an unsolicited mail piece.  Forty-two percent said they would donate.  My conclusion — about 40% of the American public are liars — may have been a bit harsh.  What I didn’t know then but know now is that people are often spectacularly bad at predicting their own behavior, myself included.  (“I will only eat one piece of Halloween candy, even though I have a big bucket of it just sitting here.”)

There is, of course, a term for this (hedonic forecasting) and named biases within it (e.g., impact bias, empathy gap, Lombardi sweep, etc.).  But it’s important to highlight here that listening only to what people think they think is perilous.  If you do, you can launch the nonprofit equivalent of the next New Coke.

“The mind knows not what the tongue wants. […] If I asked all of you, for example, in this room, what you want in a coffee, you know what you’d say? Every one of you would say ‘I want a dark, rich, hearty roast.’ It’s what people always say when you ask them what they want in a coffee. What do you like? Dark, rich, hearty roast! What percentage of you actually like a dark, rich, hearty roast? According to Howard, somewhere between 25 and 27 percent of you. Most of you like milky, weak coffee. But you will never, ever say to someone who asks you what you want – that ‘I want a milky, weak coffee.’”  — Malcolm Gladwell

With those cautions in mind, let’s look at what surveys and survey instruments are good for and not good for.

First, as mentioned, surveys are good for finding what people think they think.  They are not good for finding what people will do.  If you doubt this, check out Which Test Won, which shows two versions of a Web page.  Try to pick out which version of a Web page performed better.  I venture to say that anyone getting over 2/3rds of these right has been unplugged and now can see the code of the Matrix.  There is an easier and better way to find out what people will do, which is to test; surveys can give you the why.

Surveys are good for determining preferences.  They are not good for explaining those preferences.  There’s a classic study on this using strawberry jam.  When people were asked what their preferences were for jam, their rankings paralleled Consumer Reports’ rankings fairly closely.  When people were asked why they liked various jams and jellies, their preferences diverged from these expert opinions significantly.  The authors write:

“No evidence was found for the possibility that analyzing reasons moderated subjects’ judgments. Instead it changed people’s minds about how they felt, presumably because certain aspects of the jams that were not central to their initial evaluations were weighted more heavily (e.g., their chunkiness or tartness).”

This is not to say that you shouldn’t ask the question of why; it does mean you need to ask the question of why later and in a systematic way to avoid biasing your sample.

Surveys are good for both individual preferences and group preferences.  If you have individual survey data on preferences, you absolutely should append these data to your file and make sure you are customizing your reasons to give to each individual’s reason why s/he gives.  Surveys also can tease out segments of donors you may not have known existed (and where you should build your next bridge).

Surveys are good for assessing experiences with your organization and bad for determining complex reasons for things.  If you have 18 minutes, I’d strongly recommend this video about how Operation Smile was able to increase retention by finding out what donors’ experiences were with them and which ones were important.  Well worth a watch.

If you do watch it, you’ll see that they look at granular experiences rather than broad questions like “Why did you lapse?” or “Are we mailing too much?”  These broad questions are too cognitively challenging and encompass too many things.  For example, you rarely hear a donor ask for fewer personalized handwritten notes, because those are opened and sometimes treasured.  The answer to a frequency question is almost always really an answer about the quality, rather than the quantity, of solicitation.

Surveys are good when they are well crafted and bad when they are poorly crafted.  I know this sounds obvious, but there are crimes against surveys committed every day.  I recently took a survey of employee engagement that was trying to assess whether our voice was heard in an organization.  The question was phrased something like “How likely do you think it is that your survey will lead to change?”

This is what I’d call a hidden two-tail question.  A person could answer no because they are completely checked out at work and fatalistic about management.  Or a person could answer no, because they were delighted to be working there, loved their job, and wanted nothing to change.

Survey design is a science, not an art.  If you have not been trained in it, either get someone who is trained in it to help you, or learn how to do it yourself.  If you are interested in the latter, Coursera has a free online course on questionnaire design here that helped me review my own training (it is more focused on social survey design, but the concepts work similarly).

You’ll notice I haven’t mentioned focus groups.  Focus groups are good for… well, I’m not actually sure what focus groups are good for.  They layer all of the individual biases of group members together, stir them with group dynamic biases like groupthink, unwillingness to express opinions contrary to the group, and the desire to be liked, season them with observer biases and the inherent human nature to guide discussions toward preconceived notions, then serve.

Notice there was no cooking in the instructions.  This is because I’ve yet to see a focus group that is more than half-baked. (rim shot)

My advice if you are considering a focus group: take half of the money you were going to spend on the focus group, set it on fire, inhale the smoke, and write down the “insights” you had while inhaling the money smoke.  You will have the same level of validity in your results for half the costs.

Also, perhaps more helpful, take the time that you would have spent talking to people in a group and talk to them individually.  You won’t get any interference from outside people on their opinions, introverts will open up a bit more in a more comfortable setting and (who knows) they may even like you better at the end of it.  Or if you hire me as a consultant, I do these great things with entrails and the bumps on donors’ heads.

So which do you want to use: surveys or behavior?  Both.  Surveys can sometimes come up with ideas that work in theory but not in practice, as people have ideas of what they might do that aren’t true.  Behavior can show you how things work in practice, but from it alone it can be difficult to divine deep insights that generalize to other packages, communications, and strategies.  They are the warp and weft of donor insights.
