Metric pairing for fun and nonprofit

There is no one metric you should measure in isolation anywhere in direct marketing.  As Newton would have said if he were a direct marketer, each metric must have an equal and opposite metric.

The problem with any single metric is, as either or both of Karl Pearson and Peter Drucker said, that what is measured improves.  My corollary is that what isn’t measured is sacrificed to improve what is measured.

So what metric dyads should you be measuring?

Response rate versus average gift: This one is the obvious one.  If you measured only response rate, someone could lower the heck out of the ask string to spike response rates.  If you focused solely on gift amount, you could cherry-pick leads and change the ask string to favor higher gifts.  Put together, however, they give a good picture of the response to a piece.
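
To put rough numbers on that (these are invented, a minimal sketch only): revenue per piece mailed is just response rate times average gift, so a panel that wins on one metric can still lose on the blend.

```python
# Hypothetical panels: a low-ask version spikes response rate, a high-ask
# version spikes average gift. Revenue per piece mailed needs both numbers.
panels = {
    "low ask":  {"response_rate": 0.08, "average_gift": 12.00},
    "high ask": {"response_rate": 0.04, "average_gift": 30.00},
}

for name, p in panels.items():
    revenue_per_piece = p["response_rate"] * p["average_gift"]
    print(f"{name}: ${revenue_per_piece:.2f} revenue per piece mailed")
# low ask: $0.96 revenue per piece mailed
# high ask: $1.20 revenue per piece mailed
```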

Net income versus file health: Anyone could hit their net income goals by not acquiring new donors.  More on this another time, but suffice it to say this is a bad idea, possibly one of the worst ideas.  Likewise, an acquisition binge can increase the size of a donor base very quickly but spend money even more quickly.

Cost per donor acquired versus number of new donors acquired: If you had to design a campaign to bring in one person, you could do it very inexpensively – probably at a profit.  Each successive donor becomes harder and harder to acquire, requiring more and more money.  That’s why if only cost is analyzed, few donors will be acquired, and vice versa.
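
A toy sketch of that curve, with invented tiers and response rates: as you mail deeper into the prospect universe, both the marginal and the cumulative cost per donor climb even as total donors keep growing.

```python
# Invented tiers of a prospect list, best names first. Each deeper tier
# responds less well, so each additional donor costs more to acquire.
cost_per_piece = 0.60
tiers = [  # (quantity mailed, response rate)
    (10_000, 0.020),
    (10_000, 0.010),
    (10_000, 0.005),
]

total_cost = total_donors = 0
for qty, rate in tiers:
    donors = qty * rate
    total_cost += qty * cost_per_piece
    total_donors += donors
    print(f"tier cost/donor: ${qty * cost_per_piece / donors:.0f}, "
          f"cumulative cost/donor: ${total_cost / total_donors:.0f}")
# tier cost/donor: $30, cumulative cost/donor: $30
# tier cost/donor: $60, cumulative cost/donor: $40
# tier cost/donor: $120, cumulative cost/donor: $51
```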

Web traffic (sessions or unique visitors) versus bounce rate: Measuring only one could mean many very poor visitors or only a few very good visitors.  Neither extreme is desirable.

Click-through rate versus conversion rate: If only your best prospective donors click on something, most of them will convert.  More click-throughs mean a lower conversion rate, but no one should be punished for effectiveness in generating interest.
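
A quick sketch with invented numbers: the appeal with the lower conversion rate can still bring in more gifts, because it generated more interest in the first place.

```python
# Hypothetical appeals: a narrowly targeted one versus a broader one.
# Judging either on conversion rate alone would punish the broader reach.
appeals = {
    "narrow": {"clicks": 1_000, "conversion_rate": 0.20},
    "broad":  {"clicks": 5_000, "conversion_rate": 0.08},
}

for name, a in appeals.items():
    gifts = a["clicks"] * a["conversion_rate"]
    print(f"{name}: {gifts:.0f} gifts at {a['conversion_rate']:.0%} conversion")
# narrow: 200 gifts at 20% conversion
# broad: 400 gifts at 8% conversion
```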

List growth versus engagement rates: Similar to Web site metrics, you want neither too many low-quality constituents nor too few high-quality ones. Picture what would happen if someone put 1,000, 10,000, or 100,000 fake email addresses on your email list.  Your list would grow, but you would have significantly lower open rates and click-throughs.  The same goes for mail: as your list grows, response rate will go down; you need to determine whether it is down disproportionately.
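
Putting invented numbers on that thought experiment: padding the file grows the list while quietly dragging the open rate down.

```python
# Hypothetical email file: real subscribers open at 20%; padding the list
# with dead addresses grows the file but dilutes the engagement rate.
real_subscribers, real_open_rate = 50_000, 0.20
opens = real_subscribers * real_open_rate

for padding in (0, 10_000, 100_000):
    list_size = real_subscribers + padding
    print(f"list {list_size:>7,}: open rate {opens / list_size:.1%}")
# list  50,000: open rate 20.0%
# list  60,000: open rate 16.7%
# list 150,000: open rate 6.7%
```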

Gross and net revenue: Probably don’t even need to mention this one, but if you measure gross revenue only, you will definitely get it.  You will not, however, like what happens to your costs.

Net revenue versus ROI: Usually, these two move in concert.  Sometimes, however, additional marginal costs will decrease ROI but increase net revenue per piece, as in the example yesterday.  In fact, most examples of this are more dramatic, involving high-dollar efforts where high-touch treatments increase costs significantly but increase net revenue per piece even more.  A smart direct marketer will make judgment calls balancing these two metrics.
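
Here is a made-up illustration of that divergence (ROI computed here as net over cost): the high-touch upgrade costs more per piece and lowers ROI, yet net revenue per piece goes up.

```python
# Invented numbers: a standard package versus a high-touch upgrade.
# ROI here is net revenue divided by cost.
packages = {
    "standard":   {"cost_per_piece": 1.00, "revenue_per_piece": 3.00},
    "high-touch": {"cost_per_piece": 3.00, "revenue_per_piece": 7.00},
}

for name, p in packages.items():
    net = p["revenue_per_piece"] - p["cost_per_piece"]
    roi = net / p["cost_per_piece"]
    print(f"{name}: net ${net:.2f} per piece, ROI {roi:.0%}")
# standard: net $2.00 per piece, ROI 200%
# high-touch: net $4.00 per piece, ROI 133%
```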

Net revenue versus testing: This is clearly a cheat, as testing is not really a metric, but one way to increase your revenue is to take no risks: mail all control packages, use the same phone script you always have, and run the same matching gift campaign online that you did last year.  Testing carries costs, but they are costs that must be borne to preserve innovation and prevent fatigue in the long run.

These are just a few of the metric pairs to look out for, but the most important point is that any single metric can be gamed (whether intentionally or not).  One of the easiest ways to guard against this is to think in the extreme: how would you logically spike the metric?  From there, you can find the opposing metric to make sure you maintain a balanced program.


The basics of direct marketing reporting

So there have been some unjustified slaps at Excel over the past week, as well as against hamsters, Ron Weasley, and the masculinity/femininity of people named Kris.  (The one against Clippy was totally justified.)

[Image: Clippy]

It seems only right, then, to talk about things that Excel is actually good at – doing calculations and presenting data.

There are two general schools of marketing people: art versus science.  The art folks appreciate the aesthetics of marketing and aim toward beautiful design and copy.  They will talk about white space and the golden ratio and L-shaped copy and such.  They elevate fad into trend into fashion. They were responsible for the Apple “1984” commercial and don’t understand why the guy with the bad toupee on late-night commercials is really successful. They can read the nine-point font they are proposing for your Website and don’t care if it is actually usable.

The job of the science people is to make sure that these people don’t damage your organization too much.*  Our motto is “Beauty is beautiful, but what else does it do?”, or it would be if we started having mottos.  Our tools are the well-designed study, the impertinent question (e.g., “I understand that our brand guidelines say to use Garamond, but our testing shows Baskerville converts better. Would we rather stick to the brand guidelines or raise more money?”), and the clear data presentation.

This last one can be hard for us. Too often, when we present our data, the data goes up against a beautiful story that people wish were true, and loses.

So we need to cover not only what data you want to collect (today), but how to present it compellingly (tomorrow).

A standard Excel chart for mail pieces

The things I like to see, in approximate order, are:

  • Enough things to identify the piece/panel/list
  • Quantity mailed
  • Response rate
  • Number of donors
  • Average donation
  • Gross revenue
  • Cost
  • Net revenue
  • Gross per thousand
  • Cost per thousand
  • Net per thousand
  • Return on investment
  • Cost to raise a dollar

That’s for a donor piece; for acquisition, I’d recommend adding cost to acquire.
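
For what it’s worth, here is a minimal sketch of how the derived columns fall out of the raw inputs (quantity, donors, gross revenue, cost).  The field names are my own shorthand, and ROI is computed as net over cost, so adjust to however your shop defines it.

```python
# Builds one report row from the raw inputs. Names are illustrative;
# the derived columns follow the list above.
def report_row(panel, quantity_mailed, donors, gross_revenue, cost):
    net = gross_revenue - cost
    return {
        "panel": panel,
        "quantity_mailed": quantity_mailed,
        "response_rate": donors / quantity_mailed,
        "donors": donors,
        "average_donation": gross_revenue / donors,
        "gross_revenue": gross_revenue,
        "cost": cost,
        "net_revenue": net,
        "gross_per_thousand": 1000 * gross_revenue / quantity_mailed,
        "cost_per_thousand": 1000 * cost / quantity_mailed,
        "net_per_thousand": 1000 * net / quantity_mailed,
        "roi": net / cost,                      # net return per dollar spent
        "cost_to_raise_a_dollar": cost / gross_revenue,
        # For acquisition, add: "cost_to_acquire": cost / donors
    }

# Invented example: 50,000 pieces, 2,500 gifts averaging $40, $30,000 cost.
print(report_row("Control A", 50_000, 2_500, 100_000, 30_000))
```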

So that’s what data to collect; tomorrow, we will look at how to present it.

* I am framing this as a battle largely for dramatic purposes. Ideally, you have a data person who respects the talents of a high-quality designer and a designer who likes to focus on what works. These together are stronger than any one alone.**

** But if you have to pick one, pick the scientist.
