It’s time to stop… vanity metrics

I’m writing this during the South Carolina Republican primary.  The votes haven’t started being counted yet, but I know who is going to win.  Because I know that Ben Carson has 35% of the Facebook likes among GOP contenders in the state; Trump is second at 25%.  Thus, Carson will get approximately 35% of the vote.

What?  Doesn’t it work that way?  Facebook likes aren’t a reliable indicator of support, donations, interest, or almost anything else?

The bitter truth: Facebook likes are a vanity metric.  They have little to do with your ultimate goals of constituent acquisition, donor conversion, and world domination, yet people will still ask what that number is.  And when they hear it, they will nod, say that that’s a good number, and ask what you can do to increase it.

That’s when a tiny little part of you dies.

So, in our Things To Stop Doing, we have vanity metrics.  These metrics may make you feel good.  They may be easy to measure.  And some of them may feel like a victory.  But they bring you little closer to your goals.  We are creatures of finite capacity and time, so the act of measuring them, talking about them, or (worst of all) striving for them drains resources from the things that actually matter.

Facebook likes and Twitter followers are probably some of the better-known vanity metrics.  But they are far from the only ones.  And while some of these are partly useful (e.g., a Facebook like count indicates the size of a warm lead repository for marketing on the platform), there’s almost always a better measure.

Because it always comes back to what your goals are.  Usually, that goal is to get people to take an action. Your metrics should be close to that action or the action itself.

Without further ado, some metrics to stop measuring.

Web site visits.  Yes, really.  This is for a couple of reasons:

  1. Not all visitors are quality visitors.  If you’ve been using Web site visits as a useful metric, and wish to depress yourself, go to Google Analytics (or your comparable platform) and see how long visitors spend on your site.  Generally, you’ll find that half or more of your visitors are on your site for less than 30 seconds.  Is 30 seconds long enough for people to take the action you want them to take on your site?  Not usually (except for email subscribes).

  2. Not all visitors are created equal.  Let’s say you find that people coming to your site looking for a particular advocacy action sign up for emails 10% of the time; those who come looking for information about a disease sign up 5% of the time; those who look for top-line statistics sign up 1% of the time.  Which of these is the most valuable visitor?

    This isn’t a trick question.  You would rather have one person looking for advocacy actions than nine people looking for stats.  Except that the metric of Web site visits lumps them all into one big, not-very-useful bucket.

These are both symptoms of the larger problem: if you had to choose between two million visitors, of whom 1% convert, and one million visitors, of whom 3% convert, you’d choose the latter.  Potential replacements for this metric are visits to particular pages of the Web site where you have a good idea of the conversion rates, weighted Web traffic, and (most simply) conversions.
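To make “weighted Web traffic” concrete, here’s a minimal sketch in Python – every number in it is invented – that weights each visit by the historical conversion rate of its landing page:

```python
# Hypothetical per-page conversion (email signup) rates from historical data
conversion_rates = {"/advocacy-action": 0.10, "/disease-info": 0.05, "/statistics": 0.01}

# This month's visits by landing page (invented numbers)
visits = {"/advocacy-action": 2_000, "/disease-info": 5_000, "/statistics": 9_000}

# Weighted Web traffic: each visit counts for its page's conversion rate,
# so the result is expected conversions rather than a raw headcount
expected = sum(visits[page] * conversion_rates.get(page, 0.0) for page in visits)

print(f"Raw visits: {sum(visits.values()):,}")   # 16,000
print(f"Expected conversions: {expected:.0f}")   # 540
```

Two sites with identical raw traffic can have wildly different expected conversions, which is the whole point.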

Mail acquisition volume.  You get the question a lot – how many pieces are we sending in acquisition?  Is it more or less than last year?  And it’s not a bad proxy for a few different things about a mail program: are they committed to investing in mail donors?  Is the program growing or shrinking?  What are their acquisition costs?

But from a practical perspective, all of these things could be better answered by the number of donors acquired (and even better by a weighted average of newly acquired donors’ projected lifetime values, estimated from initiation amount and historical second gift and longer-term amounts, but that’s tougher).  A good rule of thumb is:

Never measure a metric that someone could easily game with a counterproductive action.

And you can do that with mail acquisition volume by going on a spending spree.  Of course, you can also do that with donors acquired, but it will spike your cost per donor acquired, which you are hopefully pairing with the number of donors acquired like we recommend in our pairing metrics post.
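Back to that donor-value weighting from a moment ago: here’s a minimal sketch of the idea, with hypothetical value bands and counts standing in for your historical second-gift data:

```python
# Hypothetical projected lifetime values by initiation gift band,
# estimated from historical second-gift and longer-term behavior
projected_ltv = {"under $20": 45.0, "$20-$49": 110.0, "$50 and up": 260.0}

# Newly acquired donors from this campaign, by gift band (invented counts)
new_donors = {"under $20": 400, "$20-$49": 250, "$50 and up": 50}

total_donors = sum(new_donors.values())
total_value = sum(new_donors[band] * projected_ltv[band] for band in new_donors)

print(f"Donors acquired: {total_donors}")
print(f"Projected value of the class: ${total_value:,.0f}")
print(f"Weighted average projected LTV: ${total_value / total_donors:.2f}")
```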

Time on site.  You notice that people are only spending an average of 1:30 on your Web site, so you do a redesign to make your site and content stickier.  Congratulations – you got your time on site up to 2:00!

Someone else notices that people are spending 2:00 on your Web site.  They work to streamline content, make it faster loading, and give people bite-sized information rather than downloading PDFs and such.  Congratulations – you got your time on site down to 1:30!

Therein lies the problem with time on site – whatever movement it makes is framed as positive when it could be random noise.  Or worse.  Your sticky site may just be slower loading and your bite-sized content may just be decreasing conversion.

So another rule of good metrics:

Only measure metrics where movement in a given direction can be clearly read as good or bad, not either/both.

Here again, conversions are the thing to measure.  You want people to spend the right amount of time on your site, able to get what they want and get on with their lives.  That Goldilocks zone is probably different for different people.

Email list size.  While you totally want to promote this as social proof (like we talked about with McDonald’s trying to get cows to surrender), you actually want to be measuring a better metric: active email subscribers, along the lines of people who have opened an email from you in the past six months.  These are the people you are really trying to reach with your messaging.
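If you can export send and open logs from your email platform, counting active subscribers is straightforward; here’s a sketch with pandas, where the file and column names are stand-ins for whatever your platform calls them:

```python
import pandas as pd

# Assumed export from your email platform: one row per subscriber,
# with a last_open column (names here are stand-ins)
subscribers = pd.read_csv("subscribers.csv", parse_dates=["last_open"])

cutoff = pd.Timestamp.today() - pd.DateOffset(months=6)
active = subscribers[subscribers["last_open"] >= cutoff]

print(f"List size (vanity):        {len(subscribers):,}")
print(f"Active subscribers (real): {len(active):,}")
```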

When you remove metrics like these from your reporting or, at least, downplay them, you will have fewer conversations with bosses asking you to focus on things that don’t matter.  That’s a win for them and a win for you.

I should mention that I am trying to build my active weekly newsletter subscribers.  Right now, we have an open rate of 70% and click-through rates of 20%+, so it seems (so far) to be content that people are enjoying (or morbidly curious about).  So I’m hoping you will join here and let me know what you think.

 


The dirty dirty data tricks that dirty dirty people will use to try to get their way

Matthew Berry, New York Times bestselling author and mediocre fantasy football advice-giver (this is a compliment; you have to listen to the podcast), does a column each year called “100 Facts.”  In his intro, each time, he warns about the exercise he is going to undertake.  Statistics can be shaded in whatever way you wish (I’m paraphrasing him), so he acknowledges that he is presenting the best facts to support his perceptions of players.  But he goes further to say that’s all other fantasy football analysts are doing as well – he’s just the one being honest about it.  It’s the analyst’s equivalent of Penn and Teller’s cups and balls trick with clear cups – just because you know how the trick is done doesn’t make it less entertaining.

With the knowledge of statistics comes the responsibility of presenting them effectively.  My first and much beloved nonprofit boss used to say that if you interrogate the data, it will confess.  I would humbly submit a corollary: if you torture the data, it will start confessing to stuff just to make you stop.

A well-wrapped statistic is better than Hitler’s “big lie”; it misleads, yet it cannot be pinned on you.
— How to Lie with Statistics by Darrell Huff

So here are some common tricks people will use to make their points.  Arm yourself against these, lest you be the victim of data presented with either malice or ignorance.

The wonky y-axis

The person presenting to you was supposed to increase revenue by a lot.  In fact, s/he increased it by only a little.  The weasel solution?  Make a mountain out of that molehill:

[Graph: revenue over the year, with the y-axis starting just below the data]

Note that the difference between the top and bottom of the y-axis is only $10,000.  Here’s what that same graph looks like with the y-axis starting at 0, as we are trained to expect unless there’s a very good reason:

[Graph: the same revenue data with a zero-based y-axis]

Both are true, but the latter is a more accurate representation of what went on over the year.
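If you want to demonstrate the effect to someone, it takes only a couple of lines of matplotlib – the one meaningful difference between the weasel chart and the honest chart is the y-axis limit:

```python
import matplotlib.pyplot as plt

months = ["Q1", "Q2", "Q3", "Q4"]
revenue = [502_000, 504_500, 506_000, 509_000]  # invented figures

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(months, revenue, marker="o")
ax1.set_title("Weasel version: axis hugs the data")  # auto-scaled y-axis

ax2.plot(months, revenue, marker="o")
ax2.set_ylim(0, max(revenue) * 1.1)                  # zero-based, as readers expect
ax2.set_title("Honest version: zero-based axis")

plt.tight_layout()
plt.show()
```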

Ignoring reference points

Let’s take a look at that last graph with the budgeted goal added in.

[Graph: the zero-based revenue graph with the budgeted goal line added]

This tells a very different story, no? Always be on the lookout for context like this.

The double wonky y-axis

I’ve been saving a Congressional slide for this blog post.  I make no claims about which side of this issue is true or right or moral or whatever.  That said, it is a good example of the difference between having quality debates with good data and intentionally putting spin on the ball.

This graph was presented by Congressman Jason Chaffetz in the debate over Planned Parenthood.


Hat tip to PolitiFact.

The graph seems to say that Planned Parenthood health screenings have decreased, abortions have increased, and now Planned Parenthood performs more abortions than health screenings.

But this is a case where the graph has two different y-axes.  Looking at the data, you can see that prevention services performed still outnumbered abortions by well more than two to one.  Looking at the graph, it appears the opposite is true.

Again, you may choose to do with this information what you will; there are many who would say one abortion is too many.  However, to paraphrase Daniel Patrick Moynihan, you can have your own opinions, but not your own facts.

The outliers

This is one of those things that is less frequently used by people to fool you and more often overlooked by people who subsequently fool themselves.

Here’s a sample testing report.

[Table: test letter versus control letter results, including the outlier gift]

This one seems like a pretty clean win for Team Test Letter.  Generally, you are going to take the 0.2 percentage point decrease in response rate in order to increase average gift by $7 and net an additional 14.6 cents per piece mailed.  Game, set, match.

But one must always ask the uber-question: why.  So you look at the donations.  It turns out a board member mailed her annual $10,000 gift to the test package.  No such oddball gifts went to the control package.  Since this is not likely a replicable event, let’s take this one chance donation out and look at the data again.
[Table: the same test results with the $10,000 outlier gift removed]

Suddenly, it’s a clean win for Team Control.  The test appears to have suppressed both response rate and average gift.
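Here’s a toy illustration of the arithmetic – all figures invented – showing how one $10,000 gift can flip the apparent winner:

```python
# Invented gift lists for a toy test
control_gifts = [25.0] * 200            # 200 gifts averaging $25
test_gifts = [22.0] * 180 + [10_000.0]  # 181 gifts, one from the board member

def average(gifts):
    return sum(gifts) / len(gifts)

print(f"Control average gift:            ${average(control_gifts):.2f}")    # $25.00
print(f"Test average gift, with outlier: ${average(test_gifts):.2f}")       # ~$77.13
print(f"Test average gift, without:      ${average(test_gifts[:-1]):.2f}")  # $22.00
```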

Percentages versus absolutes

Check out the attached graph of email open rates, where a new online team came in and the director bragged about the increase in open rates.  I actually saw a variant of this one happen live.

[Graph: average open rate, before versus after the new team]

Wow.  Clearly, much better subject lines under the new regime, no?  More people are getting our messages.

Well, for clarity, let’s look at this on a month by month basis.

[Graph: open rate by month, spiking in July]

So, something happened in July that spiked open rates. Maybe it’s the new team, but we must ask why. One of the common culprits, when you are looking at percentages, is a change in N, the denominator.  Let’s look at the same graph, but instead of percentages, we are going to look at the number of people who opened the email.

[Graph: number of opens by month, showing no spike]

Huh.  Our big spike disappeared.

In looking into this, July is when we started suppressing people who had not opened an email in the past six months.  This is actually a very strong practice: it takes people who don’t want to get email from you, have moved on to another address, or were junk data to begin with off of your files.  As a result, your likelihood of being flagged as spam goes down significantly.

So it wasn’t that twice as many people were opening emails; it was that half as many people (the good half) were getting the emails.
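The arithmetic, with hypothetical round numbers, makes the trick obvious: the number of opens barely moves, while the rate doubles because the denominator was cut in half:

```python
# Hypothetical email program, before and after suppressing inactives
june_sent, june_opens = 100_000, 12_000  # full list, dead weight included
july_sent, july_opens = 50_000, 12_000   # inactives suppressed; same engaged core

print(f"June open rate: {june_opens / june_sent:.1%}")            # 12.0%
print(f"July open rate: {july_opens / july_sent:.1%}")            # 24.0%
print(f"Opens: {june_opens:,} in June, {july_opens:,} in July")   # unchanged
```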

Correlation does not equal causation

The wonderful site FiveThirtyEight recently did a piece on how Matt Damon is more attractive in movies where he is perceived as being smarter.  For example, see how dreamy Damon is perceived to be as super-genius Will Hunting.  As Irene Adler says to Sherlock in the eponymous BBC series, brainy is the new sexy.

And you can look at this and think a logical conclusion: the smarter a Matt Damon character is in a movie, the more attractive that character is perceived to be.  This is plausible even though dreaminess was judged from a still frame – if Matt Damon is wearing an attractive sweater, it’s one of the Bourne movies; if it’s WWII garb, probably Saving Private Ryan.

This conclusion would reason that when Damon plays Neil deGrasse Tyson in the upcoming biopic, his resultant sexiness will distract from the physical miscasting.

There’s also the hypothesis posited by the author: “The more attractive Damon is perceived to be in a movie, the smarter he is perceived to be.”  This says the reverse of the above: if Damon is attractive in a movie, he will be perceived to be smart.  This too is plausible – we tend to overestimate the competence of people we find to be attractive (hence why there is no picture of me on the site – you would immediately start discounting my advice).

Or it could be an exogenous third factor that causes both.  What if make-up artists want to symbolize dumbness by making actors unattractive (actually, since it’s Matt Damon, let’s say less attractive not unattractive)?  Film is after all a visual medium and since they know people underestimate less attractive people, they aim to make less competent characters less attractive.

Those are the ways correlation can go: A can cause B, B can cause A, or C can cause A and B.

This is why we must guard against drawing final conclusions, instead building continually refined theories.  Let’s say you are seeing a general trend that your advocacy mail packages are doing better than your average mail package.  It’s tempting to conclude that more advocacy mail packages would be better.  But what if it isn’t the advocacy messaging, but that advocacy messages have a compelling reply device?  Or that when you mailed your advocacy pieces, you were also in the news?

One of the key parts of determining the results of a test is learning what the test actually means.  It’s important to strip away other possibilities until you have determined what the real mechanism is for success or failure.  This is why, for the blog analysis last week, I did a regression analysis rather than a series of correlations – to control for autocorrelations.
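If you’re curious what “a regression rather than a series of correlations” buys you, here’s a minimal sketch of the general idea with numpy and invented data: the raw comparison credits advocacy with a lift, while the regression, controlling for news coverage, shows the lift belongs to being in the news:

```python
import numpy as np

# Invented campaign data: a response index, with two binary factors
response = np.array([1.0, 1.8, 1.8, 1.0, 1.8, 1.0, 1.8, 1.0])
advocacy = np.array([0, 1, 0, 1, 1, 0, 1, 0])  # advocacy package?
in_news  = np.array([0, 1, 1, 0, 1, 0, 1, 0])  # were we in the news?

# Naive comparison: advocacy looks like it lifts response by 0.4...
print(response[advocacy == 1].mean() - response[advocacy == 0].mean())

# ...but a regression controlling for news coverage says otherwise
X = np.column_stack([np.ones_like(response), advocacy, in_news])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
print(coef.round(2))  # [1.0, 0.0, 0.8]: the lift belongs to the news
```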

You don’t have to be versed in all manner of stats; the most important thing is to keep asking why.  From that, you can find the closest version to the truth.


7 direct marketing charts your boss must see today

Yay!  It’s my first clickbait-y headline!

I preach, or at least will be preaching, the gospel of testing everything.  There have been times that it has been a rough year for the mail schedule, but then we get to a part of the year we tested into last year, so I know that the projections are going to be pretty good and our tweaks are going to work.  It is in those times that there is but one set of footprints on the beach, for it is the testing that is carrying me.  So I eventually had to test out one of these headlines – my apologies in advance if it works.

The truth is that there are no such charts that run across all organizations.  There are general topics that you need to cover with your boss – file health in gross numbers, file health by lifecycle segment, in-year performance, long-term projections, how your investments are performing.

But what you need to do is tell your story.  You need to analyze all of the data, make your call, and present all of the evidence that makes your case and all of the evidence that opposes it.

This sounds simple, but how often do you see presentations that feature slides that educate to no end – slides that repeat and repeat but come to no point?  Also, they are repetitive and recapitulate what has already been said.

On Monday, I brought up the war between art and science marketers.  The secret to how the artists win is:

Stories with pictures

Yes, really. The human brain craves narrative and will put a story to about anything that comes in front of it.  It also retains images better than anything else.  There’s a semi-famous experiment where they gave noted oenologists (French for “wine snobs”)* white wine with red food coloring. The experts used all of the words that one uses to describe red wine, without ever noting that it was actually a white wine. When confronted with this, the so-called wine experts all resigned their posts and took up the study of nonprofit direct marketing to do something useful with their lives.


OK, I’m lying about that last part.

My point is that we privilege our sight over all other senses – in essence, we are all visual learners.  When we see words on a slide, our brain, which is still trying to figure out why it isn’t hunting mastodons, sees the letters and has to pause to think “what’s with all of those defective pictures?”

So, as I’ve been writing a lot of defective pictures and I promised the seven direct marketing charts your boss must see today, let’s discuss a story that you would want to tell and how you would present it.

1.

[Graph: acquisition mail volume by year, showing the 2012 cut]

The idiot that I replaced cut acquisition mailings in 2012.

2.

[Graph: net revenue by year, spiking after the acquisition cut]

It spiked net revenue for a time, enough for him to find another job.

3.

[Graph: multiyear donors by year, dropping in 2014]

But that has really screwed us out of multiyear donors coming into 2015.  You can see the big drop in multiyear donors in 2014 because they weren’t acquired two years earlier.

4.

[Graph: yearly value and retention by donor segment]

And multiyear donors are our best donors.  You’ll also note that our lapsed reacquired donors have greater yearly value than newly acquired donors, with about the same retention rate.  Thus, my first strategic priority is to focus more on reacquiring lapsed donors.  Not as good as the multiyear donors that idiot made sure we didn’t have coming into the file this year, but pretty darn good.

5.

[Graph: lapsed donors as a share of the average acquisition mailing, declining]

Lapsed donors have actually decreased as a portion of our average acquisition mailing…

6.

[Graph: cost to acquire lapsed versus newly acquired donors]

…yet they have been cheaper to acquire.  In summary, they are better donors than newly acquired donors and they are cheaper to acquire, yet we’ve been reaching out to them less.  Thus, we have an opportunity here.

7.

[Graph: my salary versus the national average for a direct marketing manager]

Because of this insight and because my salary significantly lags the national average for a direct marketing manager of $67,675, I believe I deserve a raise.  I’m now open for questions.

I swear that in many presentations, this would be over 30 slides and over an hour long.  I’ve actually given some of those presentations and if someone was in one of those and is still reading this, I apologize.

Some key notes from this:

  • Note the use of color to draw attention to the areas that are important to you. Other data are there to provide background, but if you are giving the presentation, it is incumbent upon you to guide the mind of your audience.  In fact, if you are giving the presentation, you may wish to present the chart/graph/data normally, then have the important colors jump out (or the less important ones fade away), arrows fly in, and text appear.
  • As mentioned, this is a different structure of presentation than would normally occur. Normally, there would be a section on file health, then one on revenues, one on strategic priorities, and so on.  However, when you structure it like that, the slide that makes the point of why you are pursuing the strategic priorities you are pursuing may come 50 slides earlier.  You can say, “remember the slide that said X?” but regardless of what anyone answers, the real answer is no.  You are smarter than that.  You are going to use data to support narrative, not mangle your story to fit an artificial order of data.
  • There is one point per image (with the exception of #4, which had a nice segue opportunity) and no bullet points. Bullet points help in Web reading (hence my using them here), but they actually hurt memory and retention in presentations.

With this persuasive power, though, comes persuasive responsibility.  Not in the sense that your PowerPoint will soon win you enough dedicated followers to form your own doomsday cult, although if that opportunity arises, please take the high road.

What I mean is that as you get better and better at distilling your point, there will be a temptation to take shortcuts and to tilt the presentation so it favors your viewpoint beyond what is warranted.  Part of this is ethical, to be sure – don’t be that type of person – but a larger part is that no one person is smarter than everyone else summed together.  Even readers of this blog.  If you omit or gloss over important data points, you aren’t allowing the honest disagreement and insights from your audience that can lead to greater understanding.  By creating an army of ill-informed meat puppets, you are going it alone, trusting your knowledge and skill alone to get you through.  There will be a day – and that day may be soon – when the insight you need will be in someone else’s head.

You do have to prioritize for your audience.  You may have noticed some other points you would have covered in these graphs – retention in this program is falling and cost to acquire donors is increasing.  This person chose to focus on lapsed but didn’t hide the other metrics, which is sound policy.

So we will cap off the week tomorrow with tricks that other people use to shade their data.  I debated doing this section because it could be equally used as a guide to shade your data.  But you are trusting me and I’m trusting you.  Knowledge is not good or bad in and of itself, but let’s all try to use it for good.

* Oenology is actually from the Greek words for “wine” and “study of,” but that isn’t funny…


Metric pairing for fun and nonprofit

There is no one metric you should measure anywhere in direct marketing.  Like Newton would have said if he were a direct marketer, each metric must have an equal and opposite metric.

The problem with any single metric is, as either or both of Karl Pearson and Peter Drucker said, that which is measured improves.  My corollary: that which isn’t measured is sacrificed to improve that which is measured.

So what metric dyads should you be measuring?

Response rate versus average gift: This one is the obvious one.  If you measured only response rate, someone could lower the heck out of the ask string to spike response rates.  If you focused solely on gift amount, you could cherry-pick leads and change the ask string to favor higher gifts.  Put together, however, they give a good picture of the response to a piece.

Net income versus file health: Anyone could hit their net income goals by not acquiring new donors.  More on this another time, but suffice it to say this is a bad idea, possibly one of the worst ideas.  Likewise, an acquisition binge can increase the size of a donor base very quickly but spend money even more quickly.

Cost per donor acquired versus number of new donors acquired: If you had to design a campaign to bring in one person, you could do it very inexpensively – probably at a profit.  Each successive donor becomes harder and harder to acquire, requiring more and more money.  That’s why if only cost is analyzed, few donors will be acquired, and vice versa.

Web traffic (sessions or unique visitors) versus bounce rate: Measuring only one could mean many very poor visitors or only a few very good visitors.  Neither extreme is desirable.

Click-through rate versus conversion rate: If only your best prospective donors click on something, most of them will convert.  More click-throughs mean a lower conversion rate, but no one should be punished for effectiveness in generating interest.

List growth versus engagement rates: Similar to Web site metrics, you want neither too many low-quality constituents nor too few high-quality ones. Picture what would happen if someone put 1,000, 10,000, or 100,000 fake email addresses on your email list.  Your list would grow, but you would have significantly lower open rates and click-throughs.  Same with mail – as your list increases, response rate will go down – you need to find if the response rate is down disproportionately.

Gross and net revenue: Probably don’t even need to mention this one, but if you measure gross revenue only, you will definitely get it.  You will not, however, like what happens to your costs.

Net revenue versus ROI: Usually, these two move in concert.  However, sometimes additional marginal costs will decrease ROI but increase net revenue per piece, as in the example yesterday.  In fact, most examples of this are more dramatic, involving high-dollar treatments where high-touch tactics increase costs significantly but increase net revenue per piece more.  A smart direct marketer will make judgment calls balancing these two metrics.

Net revenue versus testing: This is clearly a cheat, as testing is not really a metric, but one way to increase your revenue is not to take risks – mailing all control packages, using the same phone script you always have, and running the same matching gift campaign online that you did last year.  Testing carries costs, but they are costs that must be borne to preserve innovation and prevent fatigue in the long run.

These are just a few of the metrics to look out for, but the most important part of this is that any single metric can be gamed (whether intentionally or un-).  One of the easiest ways to avoid this is thinking in the extreme – how would you logically spike the metric?  From there, you can find the opposing metric to make sure you maintain a balanced program.
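One practical trick is to compute each metric alongside its opposite in the same function, so one never shows up in a report without the other.  A sketch, with made-up campaign numbers:

```python
def pair_report(mailed, donors, gross, cost):
    """Print each metric with its opposing metric, never alone."""
    print(f"Response rate: {donors / mailed:.2%} | Average gift: ${gross / donors:.2f}")
    print(f"Net revenue: ${gross - cost:,.0f} | ROI: {gross / cost:.2f}")

# Invented campaign: 50,000 pieces, 600 donors, $21,000 gross, $15,000 cost
pair_report(50_000, 600, 21_000, 15_000)
```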


The basics of direct marketing reporting – part two

Yesterday, we talked about the key metrics you want to look at in Excel – 13-14 indicators that speak to you about progress and testing results.

However, a direct marketing Muggle will look at these data and say “Huh.  Interesting.”  This is direct marketing Muggle code for “this is not interesting and it makes me think of my Algebra II class, which was taught by a nun.”

While you will want all of the data, you will want a skinnier, clearer chart for others, preferably with colors that call out what is actually important.  Let’s look at a fairly standard test – your thesis was that extra personalization in the letter would increase average gift versus your control.  Here’s what this could look like:

[Table: full test results spreadsheet, as the analyst sees it]

The first thing to notice is that your hypothesis was wrong – average gift didn’t go up.  But now you have another decision – should you pay for the additional personalization in the future?

You, as a direct marketing professional, can read this chart.  The increased personalization caused response rate to increase.  As a result, gross income per piece went up and net income per piece went up.  However, return on investment went down; the additional investment didn’t bring in as much as the investments before it.  What would you recommend to your boss?

This is a judgment call based on your goals for your program.  One good approach would be to call for a retest – possibly with even more personalization, or to see if you can get the personalization costs down, or with different ask strings to try to boost average gift.  This result is clearly not significant at the 95% confidence level one way or the other (significance being another good field to add to your spreadsheet when you get more advanced), so more testing would be good.
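If you want to run that confidence check yourself, a two-proportion z-test is enough for response rates; here’s a sketch (panel sizes and gift counts are invented):

```python
from math import sqrt

def response_rate_z(n_control, gifts_control, n_test, gifts_test):
    """Two-proportion z-test for a difference in response rates."""
    p1, p2 = gifts_control / n_control, gifts_test / n_test
    pooled = (gifts_control + gifts_test) / (n_control + n_test)
    se = sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_test))
    return (p2 - p1) / se

# Invented panels: control 25,000 pieces / 280 gifts; test 25,000 / 325 gifts
z = response_rate_z(25_000, 280, 25_000, 325)
print(f"z = {z:.2f} (needs |z| > 1.96 to clear 95% confidence)")  # z = 1.84
```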

But I know which one I would mail more quantity of when the next test is done – I would use the personalization version as the control.  For me, net per piece matters more than ROI.  Our donors’ time is a scarce and valuable commodity.  There are only so many times you have the opportunity to get in front of them, so when you have the opportunity to maximize the return on their investment of time, take it, rather than going for cost control in borderline cases like this one.

Charity Navigator would disagree with me, as they focus on cost of fundraising, so that’s another point in my argument’s favor.  Remember the Charity Navigator Costanza test – hear what they have to say, do the opposite, and it will be to your benefit.

So now you have your course of action.  Now you have to have other people see it your way.  Time to explain it:

[Table: simplified, color-coded test results for presentation]

The first thing to note is that it’s legible.  The second is that quantity and absolute gross, net, and cost numbers are gone.  These don’t have any relevance to the decision over what to roll out with.  If you leave them in, there’s a natural human temptation to think bigger = better, especially when it’s called revenue and has a dollar sign in front of it.  For a layperson, it’s good to eliminate these distractions.

Then we’ve color-coded the winning parts.  Control wins on cost and ROI; test wins on response rate, gross income, and net.  This helps draw attention to the salient bits.  It is amazing how much these little steps can help focus minds.

You will note that I left ROI in there, even though it is evidence that does not support the case you are trying to make.  I’ve talked about testing as a central commandment on the direct marketer’s tablets.  But testing is nothing if there isn’t intellectual honesty.  You have to make the case, but also give your team all of the information to challenge you and make your arguments better.

This is usually where the aesthetic marketers get us data-driven marketers.  They tell quality stories based not on what is true, but on what we wish were true.

We must become equally good storytellers, because a good story plus data beats just a good story.  On Thursday, I’ll talk about how to present data in a compelling way, but first, we have to figure out how to measure our metrics.


The basics of direct marketing reporting

So there have been some unjustified slaps at Excel over the past week, as well as against hamsters, Ron Weasley, and the masculinity/femininity of people named Kris.  (The one against Clippy was totally justified.)

[Image: Clippy]

It seems only right, then, to talk about things that Excel is actually good at – doing calculations and presenting data.

There are two general schools of marketing people: art versus science.  The art folks appreciate the aesthetics of marketing and aim toward beautiful design and copy.  They will talk about white space and the golden ratio and L-shaped copy and such.  They elevate fad into trend into fashion.  They were responsible for the Apple “1984” commercial and don’t understand why the guy with the bad toupee on late-night commercials is really successful.  They can read the nine-point font they are proposing for your Web site and don’t care if it is actually usable.

The job of the science people is to make sure that these people don’t damage your organization too much.*  Our motto is “Beauty is beautiful, but what else does it do?”, or it would be if we started having mottos.  Our tools are the well-designed study, the impertinent question (e.g., “I understand that our brand guidelines say to use Garamond, but our testing shows Baskerville converts better. Would we rather stick to the brand guidelines or raise more money?”), and the clear data presentation.

This last one can be hard for us.  Too often, when we present our data, the data goes up against a beautiful story that people wish were true – and loses.

So we need to cover not only what data you want to collect (today), but how to present it compellingly (tomorrow).

A standard Excel chart for mail pieces

The things I like to see, in approximate order, are:

  • Enough things to identify the piece/panel/list
  • Quantity mailed
  • Response rate
  • Number of donors
  • Average donation
  • Gross revenue
  • Cost
  • Net revenue
  • Gross per thousand
  • Cost per thousand
  • Net per thousand
  • Return on investment
  • Cost to raise a dollar

That’s for a donor piece; for acquisition, I’d recommend adding cost to acquire.
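For the derived columns, everything rolls up from four raw inputs – quantity, donors, gross, and cost.  Here’s a sketch of the formulas; note that the ROI and cost-to-acquire definitions vary by shop, and these are just common ones:

```python
def mail_piece_metrics(quantity, donors, gross, cost):
    """Derive the standard reporting columns from four raw inputs."""
    net = gross - cost
    return {
        "response_rate": donors / quantity,
        "average_donation": gross / donors,
        "net_revenue": net,
        "gross_per_thousand": 1000 * gross / quantity,
        "cost_per_thousand": 1000 * cost / quantity,
        "net_per_thousand": 1000 * net / quantity,
        "return_on_investment": gross / cost,        # some shops use net / cost
        "cost_to_raise_a_dollar": cost / gross,
        "cost_to_acquire": (cost - gross) / donors,  # acquisition only; negative = acquired at a profit
    }

# Made-up acquisition piece: 50,000 mailed, 600 donors, $21,000 gross, $15,000 cost
for name, value in mail_piece_metrics(50_000, 600, 21_000, 15_000).items():
    print(f"{name}: {value:,.4f}")
```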

So that’s what data to collect; tomorrow, we will look at how to present it.

* I am framing this as a battle largely for dramatic purposes. Ideally, you have a data person who respects the talents of a high-quality designer and a designer who likes to focus on what works. These together are stronger than any one alone.**

** But if you have to pick one, pick the scientist.
