This post was written by Data Scientist Rajiv Patel.
Email marketing is a proven channel for encouraging repeat purchases and driving additional revenue.
Perhaps the most difficult part of email marketing is also the most crucial: measuring the performance of a campaign.
This is known as “campaign attribution,” and it involves measuring how much revenue a campaign has generated.
If you do a casual Google search about campaign attribution, you’ll likely come across many different models and soon discover that (unfortunately) there is no single, clear-cut, foolproof way to measure campaign performance. (Indeed, Google Analytics offers seven different models for measuring campaign attribution!)
It’s good for ecommerce marketers to be aware of the most commonly used attribution models and understand just how different the results they yield can be. As always, the best decisions are made by looking at your campaign data, but with different tech providers using different attribution models, and without awareness of the inherent differences between those models, you will find yourself comparing apples and oranges.
For this blog post, we will:

- Explain why campaign attribution is so difficult to measure.
- Compare two commonly used attribution models.
- Introduce a less biased alternative: the proportionate N-day attribution model.

(If this is you right now, stick with us! Once broken down, it’s all pretty straightforward.)
Why is campaign attribution so difficult?

Put simply: there are too many unknowns! After receiving a marketing email, there are many different paths a contact can take to make a purchase.
They might click through the email and make a purchase right away, in which case it might be reasonable to attribute the transaction to the campaign.
Alternatively, they may open the email and come back another day, this time directly to the website, and make a purchase. It’s even possible that they never open the email but still go on to make a transaction.
Given that there are so many possible outcomes, it’s difficult to quantify the contribution that the campaign email has actually had towards the transaction.
We know that marketing emails have a residual effect—their influence continues beyond the point of receiving and opening the email. Having said that, it’s entirely possible that a customer would have made a purchase anyway (without having ever received a marketing campaign email). This is yet another unknown that is difficult to quantify as we try to measure campaign performance.
So, what options are out there when it comes to measuring the success of campaigns? Here are two commonly used attribution models.
In-session attribution

This model is the simplest to understand.
The revenue attributed to the campaign is simply the revenue from transactions made in a visit started by clicking through on a campaign email. Transactions made in any other visit do not count.
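A minimal sketch of this rule in Python (all field names and values are illustrative, not from any particular platform):

```python
# In-session attribution: credit a transaction to the campaign only if it
# occurred in a visit that began with a click-through from a campaign email.
# Field names here are hypothetical examples.

def in_session_attribution(transactions, campaign_id):
    """Sum revenue from transactions whose visit started with a campaign click."""
    return sum(
        t["revenue"]
        for t in transactions
        if t["visit_source"] == "email_click" and t["campaign_id"] == campaign_id
    )

transactions = [
    {"revenue": 40.0, "visit_source": "email_click", "campaign_id": "spring_sale"},
    {"revenue": 25.0, "visit_source": "direct", "campaign_id": None},
    {"revenue": 60.0, "visit_source": "email_click", "campaign_id": "spring_sale"},
]

# Only the two click-through transactions count.
print(in_session_attribution(transactions, "spring_sale"))  # 100.0
```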
N-day attribution

In this model, revenue from transactions made by contacts within ‘N’ (say, 7) days of receiving a campaign email is attributed to that campaign.
Other variations of this model consider only transactions made within N days of opening a campaign email or clicking on a campaign email. These variations can be useful because opening or clicking the email demonstrates some engagement, which supports the idea that the email had some influence.
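The windowing logic can be sketched as follows (a simplified example assuming one send date per contact; real data would have multiple sends and would need the open/click variants handled too):

```python
from datetime import date, timedelta

def n_day_attribution(sends, transactions, n_days=7):
    """Attribute a transaction to the campaign if the contact received a
    campaign email within the n_days preceding the transaction.
    sends maps contact_id -> send date; data shapes are illustrative."""
    total = 0.0
    for t in transactions:
        send = sends.get(t["contact_id"])
        if send is not None and timedelta(0) <= t["date"] - send <= timedelta(days=n_days):
            total += t["revenue"]
    return total

sends = {"alice": date(2024, 3, 1), "bob": date(2024, 3, 1)}
transactions = [
    {"contact_id": "alice", "date": date(2024, 3, 5), "revenue": 30.0},   # within 7 days
    {"contact_id": "bob", "date": date(2024, 3, 20), "revenue": 50.0},    # outside 7 days
]

print(n_day_attribution(sends, transactions, n_days=7))   # 30.0
print(n_day_attribution(sends, transactions, n_days=30))  # 80.0
```

Note how widening the window from 7 to 30 days sweeps in more transactions, which is exactly why the choice of N matters so much.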
A quick example
To show how the choice of attribution model affects the reported attribution, we have compiled representative results* from each of these attribution models, applied to one of our clients over a one-month period.
In-session attribution: £10k
7-day attribution (restricted to opened emails): £60k
30-day attribution (restricted to opened emails): £150k
(*The proportions of these attribution figures are representative of those from one of our clients. The actual values have, however, been altered for confidentiality.)
You can clearly see that the choice of model has a serious impact on results and that one should never make a direct comparison of results from different models.
Many campaign providers use the N-day attribution model because it reports higher attribution. This model is open to abuse as increasing the attribution window will exacerbate the effect of falsely crediting the campaign with transactions which would have taken place even if the campaign had not been running.
Given that the in-session attribution model under-reports revenue and the N-day attribution model typically over-reports it, let’s take a look at a model that removes some of this bias – we like to call this the “Proportionate N-day attribution model”.
Proportionate N-day attribution
This model involves having two groups of customers present in your campaigns:

- A test group, which receives the campaign emails as normal.
- A control group, which is withheld from the campaign and receives no emails.

You can then use the N-day attribution model on both groups to calculate an average attribution per customer in each group. For the control group, you take the date of receiving an email to be the date at which the customers in the group qualified for the campaign (and hence would have received the email had they not been in the control group). This gives two “N-day attribution” figures:

- The average N-day attribution per customer in the test group.
- The average N-day attribution per customer in the control group.

The gap between these two averages estimates the revenue per customer that is genuinely incremental to the campaign; scaling the test group’s N-day attribution by that incremental proportion gives the proportionate attribution figure.
There is one major drawback of this approach: you miss out on marketing to the control group and therefore miss out on revenue. Provided you have sufficient volume in your campaigns, you should be able to keep the control group small compared to the test group (e.g. 5% of customers in the control group) and still have accurate results.
However, it is likely that this is 5% more customers than you would like to ignore for marketing purposes! If that is the case, then you can approximate the control group by the customers that received marketing emails but never opened them.
In investigations across many of our clients, we have found that the proportionality factor for a 7-day attribution window is typically 30-40%. This suggests that the standard 7-day attribution model actually over-reports attribution by 2-3x.
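The arithmetic can be sketched in a few lines of Python, assuming the proportionality factor is one minus the ratio of the control group’s average attribution to the test group’s (all figures below are hypothetical):

```python
def proportionate_attribution(test_total, n_test, control_total, n_control):
    """Proportionate N-day attribution (assumed formula): scale the test
    group's N-day attribution by the fraction of per-customer revenue
    that exceeds the control group's baseline."""
    test_avg = test_total / n_test
    control_avg = control_total / n_control
    factor = max(0.0, 1.0 - control_avg / test_avg)  # incremental proportion
    return factor * test_total, factor

# Hypothetical month: 9,500 emailed customers, 500 held out as a control.
attrib, factor = proportionate_attribution(
    test_total=95_000.0, n_test=9_500,     # £10.00 avg per emailed customer
    control_total=3_250.0, n_control=500,  # £6.50 avg per held-out customer
)
print(round(factor, 2), round(attrib))  # 0.35 33250
```

Here the raw 7-day figure of £95k shrinks to roughly £33k once the control group’s baseline spending is accounted for, which is consistent with the 30-40% proportionality factors we typically observe.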
If you do not have access to this data and rely on your campaign provider for attribution figures, you should approach those figures with caution.
As we have seen in this blog post, attribution figures can be misleading (particularly with the standard N-day attribution model), so when given such figures you should always find out exactly how they are calculated.
This applies in particular if you’re making a comparison between two or more different providers, as it’s important to compare like for like when you’re talking about revenue.
If you have access to the data to make the calculation, then you could try implementing the proportionate 7-day attribution method we have outlined. It would be interesting to see how this compares to your current way of measuring attribution.