There are two ways to know your constituents better: listening to what they do and asking them what they think. Today, I’ll talk about the former; tomorrow, the latter.
Yesterday’s piece talked about how you can roughly define an individual’s responsiveness by medium, message, and action. The trick is that we often segment by only one, possibly two, of these. We have medium covered: most large-scale programs of my acquaintance distinguish among mail, telemarketing, online, and multichannel responders. And many small-scale programs haven’t begun to integrate media, so in a way each medium is already its own segmentation.
Sometimes, we will use action as a determiner. We’ll take our online advocates segment and drop it into one of our better-performing donor mail pieces (frequently not customizing the message to advocacy, more’s the pity).
We rarely segment by message, even though picking something people care about is the most basic precondition of the three. After all, you may not like telefundraising, but you’d at least listen if the call were immediate, urgent, and about something you care about. And it’s much easier to get someone to do something they haven’t done before for a cause they believe in than to get them to do something they’ve done many times if they don’t believe in the message.
The good news is that you have your constituents’ voting records, of a sort. Consider each donation to a communication as a vote for that communication and each non-donation (or, if you can get it from email, each non-open or non-clickthrough) as a vote against it.
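Here’s the voting idea in miniature, as a Python sketch. Every appeal ID and the gift log below are hypothetical; your CRM export will look different.

```python
# The "voting" idea in miniature: every appeal a donor received is a
# ballot; a gift is a vote for, silence is a vote against.
# All IDs here are hypothetical.
appeals_received = ["conservation-spring", "education-summer", "conservation-fall"]
gifts = {"conservation-spring"}  # appeals this donor actually gave to

votes = {
    appeal: "for" if appeal in gifts else "against"
    for appeal in appeals_received
}
print(votes)
# {'conservation-spring': 'for', 'education-summer': 'against', 'conservation-fall': 'against'}
```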
[tangent] This is also a helpful technique for when your executive director comes into your office and says “I’ve had five calls today from people who aren’t happy about [insert name of communication here].” If you reframe it as five people voted against it by calling and five thousand people voted for it by donating, the noisy few are not nearly as concerning.[/tangent]
A proper modeler would use the data from these votes to run a Bayesian model, continually updating the priors on whether someone will respond to a given piece. As you can probably tell, I’m not a proper modeler. I prefer my models fast, free, and explainable. So here’s how I’d use this voting data:
- Take all of your communications over a 3-5 year period and code them by message. So for our hypothetical wetlands organization from yesterday, this might be education, research, and conservation. Hopefully, you don’t have too many communications that mix your messages (people donate to causes, not lists), but if you do, either code each by its primary focus or code it to both messages.
- Determine the mix of your communications. Let’s say that over five years this wetlands organization did 25 conservation appeals, 15 education appeals, and 10 research appeals. This makes the mix 50% conservation, 30% education, and 20% research.
- Take your donor file and pull out only those people who donated at least once per year, on average, over that 3-5 year period. This ensures you are looking only at people who have anything close to sufficient data to draw conclusions from.
- Take the coding of communications you have and apply it to the pieces to which each person donated. Generate a response rate for each type of message for each person on your file. (One way to script these steps appears after this list.)
- Now, study that list.
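Here is one way those steps might look in code. This is a sketch, not gospel: the file names, column names, and the simplifying assumption that every donor on the file received every appeal are all mine, so adapt it to however your CRM exports data.

```python
# Sketch of the steps above. Assumes two hypothetical CSV exports:
#   appeals.csv: appeal_id, message   (one row per communication, coded by message)
#   gifts.csv:   donor_id, appeal_id  (one row per donation)
# It also assumes every donor received every appeal; adjust if you
# can pull actual send data.
import pandas as pd

YEARS = 5          # length of the window you coded
MIN_GIFTS = YEARS  # an average of at least one gift per year

appeals = pd.read_csv("appeals.csv")
gifts = pd.read_csv("gifts.csv")

# Step two: the mix of communications by message
appeals_per_message = appeals.groupby("message")["appeal_id"].nunique()
print((appeals_per_message / appeals_per_message.sum()).round(2))

# Step three: keep only donors with enough gifts to draw conclusions
gift_counts = gifts.groupby("donor_id").size()
frequent = gift_counts[gift_counts >= MIN_GIFTS].index
gifts = gifts[gifts["donor_id"].isin(frequent)]

# Step four: gifts per donor per message, divided by appeals sent
# per message, yields each donor's response rate by message
gifts = gifts.merge(appeals, on="appeal_id")
gifts_by_message = (
    gifts.groupby(["donor_id", "message"]).size().unstack(fill_value=0)
)
response_rates = gifts_by_message.div(appeals_per_message, axis=1).fillna(0)
print(response_rates.head())
```

The `response_rates` frame is the list the next section asks you to study: one row per donor, one column per message.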
In studying that list, you are probably going to find some interesting results:
- There are going to be some people (a minority of your file, but likely a healthy segment) who only gave to one type of message. And you’ll see the pattern immediately. Someone who gave eight times over five years to education appeals and never to conservation or research appeals is clearly an education donor. You will look at all of the other communications you sent this person and all of the people like her in the X-issue-only segments and you will weep a little. But weep not. You can now save your costs and these people’s irritation in the future by sending them only the communications about their issue area (with the occasional test to see if their preferences have changed). It’s only a mistake if you don’t learn from it; if you do learn from it, it’s called testing.
- You can also probably lump people who gave rarely to other messages in with the X-issue-only people. So if someone gave to nine of the ten research appeals and to only one each of the education and conservation appeals, they clearly have a strong research preference. This is why it’s helpful to look at these data as response rates rather than raw counts: 90% on research versus 7% on education and 4% on conservation makes the preference plain, even when someone’s support ebbs and flows.
- You will also see people who like two messages, but not a third (or fourth, or however many you have; I will warn you to minimize the number of buckets, as you will not have a large enough sample size otherwise). So if someone gave five times, three times to the 15 education appeals and twice to the 10 research appeals, education and research each appeal to this person at a 20% response rate. Conservation apparently doesn’t, so you can reduce communications in that realm.
- You’ll also see a contingent of folks who donate to communications in roughly the same proportion that you send them out. These people can probably be classified as organizational or institutional donors. It will take far more digging than mere file analysis to figure out what makes this donor tick. (A rough classification sketch for all of these buckets follows this list.)
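If you want to automate a first pass at these buckets, here’s a rough cut in Python. `classify` is a hypothetical helper, and the thresholds (80% dominance for X-issue-only, a 25% share to count a message as liked) are assumptions of mine, not rules from the method above; tune them against your own file.

```python
# A rough cut at the buckets described above. Thresholds are
# assumptions; tune them against your own file.

def classify(rates: dict[str, float], dominance: float = 0.8) -> str:
    """rates maps message -> this donor's response rate to that message."""
    total = sum(rates.values())
    if total == 0:
        return "no signal"
    share = {msg: r / total for msg, r in rates.items()}
    top_msg, top_share = max(share.items(), key=lambda kv: kv[1])
    if top_share >= dominance:
        return f"{top_msg}-only (or near-only)"
    liked = [msg for msg, s in share.items() if s >= 0.25]
    if len(liked) < len(rates):
        return "multi-message: " + " + ".join(sorted(liked))
    return "institutional"  # gives in rough proportion to what you send

# The donor from the two-message example: 3/15 education, 2/10 research, 0/25 conservation
print(classify({"education": 0.20, "research": 0.20, "conservation": 0.0}))
# multi-message: education + research
```

Whatever thresholds you pick, eyeball a sample of donors in each bucket before you change anyone’s communication stream.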
This leads into an important point: none of this will get you to why. Even things like how often and for how long a person gives, or Target Analytics Group’s Loyalty Insights, which can show whether a person gives uniquely to you or to others as well, are transactional data. They are useful proxies, but they can’t tell you the depth of feeling someone has for an organization or what ties bind them to you. To do that, you must ask. That’s what I’ll cover tomorrow. But hopefully this gets you a little closer to information that will help you customize your donors’ experiences.