Direct marketing budgeting seems easy:
- Take last year’s budget
- Take out the losing communications and reallocate their volume to the winners.
- Project that the communications will do the same thing as last year.
And I have had budgets set this way by vendors. However, this overlooks a great deal. Full confession: some of these are things I caught before we put the plan in our organizational budget – some I didn’t.
Changing file. OK, you may say – we have 1,000 more donors than we did last year. We’ll assume the communications have the same response rate and average gift as before and just add to our quantity.
Wrong. You need to look both at the number of people on your file and the lifecycle of that file. Let’s say that last year, you did a lot of new acquisition (yay!) and your retention rate stunk (boo!). As a result, your overall number of donors may not have changed much, but your composition is entirely different – you have far more first-time donors, who will have lower response and retention rates, and far fewer multi-year and core donors. See my post about the fallacy of file size and single-size retention rates here for details.
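To make this concrete, here is a minimal sketch of projecting revenue by lifecycle segment instead of by flat file size. All the segment counts, response rates, and average gifts below are hypothetical illustrations, not benchmarks:

```python
# Projecting gifts by donor lifecycle segment rather than by flat file size.
# All counts, response rates, and average gifts are made-up illustrations.

segments_last_year = {  # segment: (count, response_rate, avg_gift)
    "first_time": (2000, 0.04, 25.0),
    "multi_year": (5000, 0.10, 40.0),
    "core":       (3000, 0.20, 60.0),
}
segments_this_year = {
    "first_time": (5000, 0.04, 25.0),  # heavy acquisition
    "multi_year": (3500, 0.10, 40.0),  # weak retention
    "core":       (2500, 0.20, 60.0),
}

def projected_revenue(segments):
    return sum(n * rr * gift for n, rr, gift in segments.values())

total_last = sum(n for n, _, _ in segments_last_year.values())  # 10,000 donors
total_this = sum(n for n, _, _ in segments_this_year.values())  # 11,000 donors

# "We have 1,000 more donors, so scale last year's revenue up" projection:
flat_projection = projected_revenue(segments_last_year) * total_this / total_last

# Segment-aware projection reflecting the new file composition:
segment_projection = projected_revenue(segments_this_year)

print(flat_projection)     # 63,800 – flat scaling overshoots
print(segment_projection)  # 49,000 – the composition shift cuts revenue
```

Even with more donors on file, the shift toward first-time donors drops the realistic projection well below the naive one.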
Bottom line, if you assume your new donors will perform as they always have done, you are toast.
Spill in and spill out. Accountants have a really good reason to artificially cut things off the way they do. Or so they keep telling me.
The bottom line is with accrual accounting some of your costs will not occur in the year that you are planning to send out a communication. Likewise, some of the revenues from a campaign will spill out of a year into the next year, especially for longer-lead time media like mail and telemarketing pledges.
It’s sometimes OK to assume that spill in from one year will equal spill out into the next year. However, changes in file, size of efforts around year end, and when that darn print vendor decides to send you their invoice can all change whether you hit goal or not.
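A quick sketch of why spill in and spill out only cancel when campaign volume is steady. The 20% spill fraction and the campaign figures are hypothetical:

```python
# How spill moves revenue across fiscal years. The spill fraction and
# campaign revenues are hypothetical illustrations.

spill_fraction = 0.20  # share of a campaign's revenue arriving the next FY

prior_year_campaign_revenue = 100_000  # its spill lands IN this year's books
this_year_campaign_revenue = 150_000   # part spills OUT to next year

booked_this_year = (
    prior_year_campaign_revenue * spill_fraction           # 20,000 spill-in
    + this_year_campaign_revenue * (1 - spill_fraction)    # 120,000 retained
)

print(booked_this_year)  # 140,000 booked, though this year raised 150,000
```

When this year's campaigns are bigger than last year's, spill out exceeds spill in and your booked fiscal-year revenue lags what the campaigns actually raised.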
Communication performance. I had a vendor report that a piece was going to do 4% response rate because that’s what it had averaged over the past three years. When I dug deeper, the response rate over the previous three years was 5%, then 4%, then 3% (these numbers are fictitious; don’t believe any response rate that doesn’t have a point something).
I would argue this is perhaps a dying communication and that this is more likely to have a 2% response rate than a 4% response rate. You don’t see that if you are simply averaging previous years’ performances.
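The averaging-versus-trend point above can be shown in a few lines, using the fictitious 5%/4%/3% rates from the example:

```python
# Average vs. trend projection for a declining response rate.
# The rates are the fictitious 5% / 4% / 3% from the example above.

rates = [0.05, 0.04, 0.03]  # last three years, oldest first

# Simple averaging – what the vendor did:
avg_projection = sum(rates) / len(rates)

# Simple linear trend – carry the average year-over-year change forward:
drops = [b - a for a, b in zip(rates, rates[1:])]
avg_drop = sum(drops) / len(drops)        # -0.01 per year
trend_projection = rates[-1] + avg_drop

print(avg_projection)    # 0.04 – averaging hides the decline
print(trend_projection)  # 0.02 – the trend says this piece is dying
```

Same three data points, two very different budgets; the straight average buries the steady decline that the trend makes obvious.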
Test failures. If all of your tests are going to work, you are going to have to call them something other than tests. Most of the time, your tests will not do as well as your control will do, so you can’t account for this by assuming you will get the results of last year’s test winners.
Roll-out failures. You had your test last year and it succeeded at 95% confidence? Chances are, if you tested at 25,000 pieces, you tested part of some of your better segments, not across all of the segments. Perhaps the piece you have tested into is good for your current donor sets, but doesn’t fit with why your lapsed donors originally signed up with your organization. If that’s half of the audience you were planning to mail to, you will want to dial back your expectations.
Interactions among communications. Let’s say you had record online revenues last year, but your mail program fell off and your donor file dwindled. A good portion of your online donations are likely from people who got their mail piece and decided to donate online; thus, you have to look at how aspects of your program affect each other.
Hopefully, these help you make your budget; tomorrow, we’ll talk through scenario planning in your budgeting.