Archive for the ‘Campaign Tracking’ Category
Recently, the agency I work for built a microsite for a client’s campaign.
The client, not the friendliest even in good times, was full of questions about how ‘effective’ the microsite (and the entire campaign) was. To add to the mess, it was an ‘engagement’ campaign, designed to generate thought about the brand and an awareness of the impact the brand has had on its customers. No loud calls-to-action, no sales push.
If you ask me, it’s a great campaign. I can’t tell you more about the details because I’m bound by confidentiality. But the buzz, whatever little there is, is very positive. It’s a B2B service and not the sort of thing that website or microsite visitors will sign up for immediately. It’s not Amazon or eBay.
Naturally, being the Data Monkey here, I was dragged into heated discussions on the campaign and its impact. The client wasn’t in a position to share ANY internal data with us. Nor is there any brand health tracking.
While the battle is ongoing, and I’m continuously coming up with different metrics to ‘prove that the campaign works’ (the most annoying of tasks), I found this eMarketer post on Deep Brand Engagement very useful. It essentially gives benchmarks from a study of how people who interacted with a brand online went on to either buy it or recommend it.
Fortunately, my client bought the stats (I don’t buy them completely, not without seeing the data and the methodology). I’m posting this because someone else might find it useful, too.
Sorry for the rant-ish nature of this post; it’s been a hard week so far.
This is arguably among the most important real-world measurement issues out there today (and has been for some time now). It’s as much an issue in the offline world as it is online. Except that, online, there are a lot of efforts underway to get an accurate gauge.
I recently came across some readings on the topic, which I finally read this evening, after sufficient procrastination. First, the Forrester release on the topic*. And then two Coremetrics documents, both by Eric Peterson of Web Analytics Demystified. Peterson’s first line is a great introduction:
One of the best-kept secrets in online marketing is that most campaign attribution data is completely wrong and the models used to evaluate campaign performance are wholly inappropriate
John Lovett’s Point of View
The Forrester document discusses various issues, but the bottom line is really their model, which boils down to weighting three data points: (1) frequency, (2) recency (with a time-period cutoff), and (3) time-on-site, which represents site engagement and interest. The image below, once you parse it, explains it all.
It’s an actionable idea, once you get the measurement in place. However, I’d love to see some testing behind the weightage method. In the model, every minute spent on the website is equivalent to one exposure of the ad. How does one reach that conclusion? I agree that this is extremely hard to measure, but I’m sure there should be, or could be, research on the weightages.
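To make the shape of the model concrete, here is a minimal sketch of how a frequency/recency/time-on-site score might be combined. The weights, the 30-day cutoff, and the function names are all my own placeholder assumptions, not Forrester’s published values; the only rule taken from the model described above is that a minute on site counts as one ad exposure.

```python
# A hypothetical sketch of a Forrester-style engagement score.
# All weights and the cutoff are placeholder assumptions, not
# the (unpublished) weightages from the actual model.

from dataclasses import dataclass

@dataclass
class Touchpoint:
    ad_exposures: int       # frequency: number of ad views
    days_since_visit: int   # recency of the last site visit
    minutes_on_site: float  # time-on-site (engagement proxy)

def engagement_score(tp: Touchpoint,
                     recency_cutoff_days: int = 30,
                     w_frequency: float = 1.0,
                     w_recency: float = 1.0) -> float:
    """Combine the three data points into one score.

    Per the model described above, each minute on site counts as
    one ad exposure. Visits older than the cutoff score zero on
    recency. The weights are placeholders, not tested values.
    """
    if tp.days_since_visit > recency_cutoff_days:
        recency = 0.0
    else:
        recency = 1.0 - tp.days_since_visit / recency_cutoff_days
    # 1 minute on site == 1 ad exposure, per the model's stated rule
    exposures = tp.ad_exposures + tp.minutes_on_site
    return w_frequency * exposures + w_recency * recency

print(engagement_score(Touchpoint(ad_exposures=3,
                                  days_since_visit=10,
                                  minutes_on_site=5)))
```

This is exactly the kind of thing I’d want to test: changing `w_recency` or the cutoff can reorder which users look most ‘engaged’, so the weightages aren’t a cosmetic detail.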
A strong point that Lovett makes is that while most folks will agree it’s important, there seems to be a paucity of strong case studies out there. I also think it’ll be a strong source of competitive advantage, and I’d sympathise with anyone who has figured out the weightages and isn’t sharing them.
As Eric Peterson quotes Andy Fisher (Razorfish) in his white paper:
“Coming up with a good weighting system is hard!”
Eric Peterson & Coremetrics
Eric Peterson wrote a white paper (and a solution brief) on the topic, sponsored by Coremetrics. Halfway through the paper, I lost the thread somewhat, especially since I have never used, and don’t use, Coremetrics. The paper starts with an intelligent analysis of the current situation, decrying the absence of multi-touch attribution. Peterson calls it ‘Inappropriate Attribution’, strongly putting down all commonly used attribution systems. He then suggests the 3-touch view (something that Coremetrics does), which is where I started to lose the plot a bit. Perhaps when I get to fiddle around with the Coremetrics tool, it will get clearer.
Peterson goes on to classify each campaign as an acquisition, persuasion or conversion campaign, based on the Appropriate Attribution Ratio (AAR). In other words, the performance of the campaign defines the kind of campaign that it is. This, I have a disconnect with. I advise clients to think about campaign objectives when planning campaigns, not when reviewing the results, because campaign objectives define the campaign copy, ad placement, and overall campaign design. Of course, this system can be used to see whether the campaign met the desired objective or not. I’d also like to nitpick about the term ‘acquisition’. Peterson uses it to refer to the acquisition of a lead, while a good part of the industry uses it to refer to a sale. It’s a nitpick, but take note, or you might be confused when reading the paper.
Just before I was to post this, I figured out that this is part of Coremetrics’ overall master plan to track individual users on the net. It has sparked off a debate between individual tracking and aggregate tracking. A subsequent post will parse that.
In all, an important topic, and it’s likely that there will be more posts on this as I explore.