Displacement and Measuring the Unknown in Multi-Device, Multi-Channel Marketing Attribution


The growing panoply of connected devices — from smartphones to smart cars and homes — presents marketers with as much risk as opportunity. Tracking and engaging users across multiple channels (including social media interactions, email opens, and ad clicks) was a challenge even when a user’s entire purchasing journey took place offline or from a desktop. Now, with customers using so many devices to engage companies online, anything short of accessing both Verizon’s Super Cookies and Google Data would leave holes in your attribution data. Today, a strategy I call “displacement” is an essential part of solving the attribution dilemma.

Displacement means measuring the change that occurs when adding an unknown quantity to one that is known. For example: take a beaker, fill it with water, and record its volume; then submerge an object, such as a rock, into that water. The amount displaced, the difference between the two readings, defines the unknown quantity.

When it comes to cross-channel, multi-device attribution, the known quantity is your sales and conversion rates. Here are a few tips for applying displacement to quantify the impact of your various marketing campaigns.
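In code terms, the same idea might look like the minimal sketch below, where a campaign's contribution is simply the shift it produces in a conversion rate you already know. The function name and all figures are hypothetical, chosen only for illustration.

```python
# A minimal sketch of the displacement idea: the campaign's contribution is
# inferred from the change it produces in a known baseline. All names and
# numbers here are hypothetical.

def displacement(baseline_conversions, baseline_visits,
                 new_conversions, new_visits):
    """Return the shift in conversion rate after a campaign is introduced."""
    baseline_rate = baseline_conversions / baseline_visits
    new_rate = new_conversions / new_visits
    return new_rate - baseline_rate

# Example: 300 conversions on 10,000 visits before the campaign,
# 380 conversions on 10,500 visits after it launched.
lift = displacement(300, 10_000, 380, 10_500)
print(f"Conversion-rate displacement: {lift:+.4f}")  # roughly +0.0062
```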

Make a Change
Understanding attribution through displacement is impossible without change. Strong impression and click numbers are not a sure sign that a certain touchpoint has a positive influence on conversion rates. Similarly, a landing page with weak click rates can still prompt visitors to return and click, there or elsewhere, from a different device. So, the only way to see how a single campaign influences overall conversion rates is to make a change and measure its effect.

Test One Change at a Time
Attributing changes in conversion rates to enhancements in a campaign becomes more complex (if not impossible) when you're testing multiple changes at once. Take an automated email, for example. If you A/B test two emails that are both different from the email you've been sending previously, the results could improve, decline, or remain the same. Even if one or both outperform the original in terms of opens or clicks, it's harder to tell the extent to which those gains translate into sales. Similarly, a better-performing email in terms of sales conversions risks being counterbalanced by an underperforming one, masking the true impact. The status quo is a necessary control group in all testing: it lets you measure the individual factors influencing conversion rates without introducing irrelevant or confusing variables. And just because a change works in one context does not mean it should get a pass into other campaigns without equally rigorous testing.
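As a rough sketch of that principle, the snippet below scores two hypothetical email variants against the unchanged control rather than against each other. The campaign names and figures are made up; the point is that each variant's displacement is measured from the status quo.

```python
# Hypothetical test of one change at a time: each variant is judged against
# the unchanged control email, not against the other variant.

groups = {
    "control (current email)": {"sent": 5_000, "sales": 110},
    "variant A (new subject)": {"sent": 5_000, "sales": 135},
    "variant B (new layout)":  {"sent": 5_000, "sales": 95},
}

control = groups["control (current email)"]
control_rate = control["sales"] / control["sent"]

for name, stats in groups.items():
    rate = stats["sales"] / stats["sent"]
    print(f"{name}: {rate:.2%} conversion, "
          f"displacement vs control {rate - control_rate:+.2%}")
```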

Look for correlations between impressions and clicks in one channel or device and overall conversions. If an up- or down-tick in engagement on one landing page, whether online or via mobile, does not correspond to a change in conversions, then you know the landing page or campaign may be targeting the wrong audience or may demand major changes such as divestment or a complete overhaul. Spending time and money on campaigns that drive high impressions and clicks, without determining whether those impressions actually increase overall sales, exposes you to a great deal of waste.
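One simple way to run that check, assuming you can export weekly engagement and conversion figures from your analytics tools, is a plain correlation test like the hypothetical sketch below.

```python
# A quick illustration of checking whether engagement on one channel actually
# moves overall conversions. The weekly figures are invented for illustration.

from statistics import correlation  # available in Python 3.10+

landing_page_clicks = [420, 455, 510, 480, 600, 650, 630, 700]
overall_conversions = [31, 30, 33, 29, 32, 31, 33, 30]

r = correlation(landing_page_clicks, overall_conversions)
print(f"Pearson correlation: {r:.2f}")
# A value near zero suggests the clicks aren't translating into sales,
# which is the signal to retarget, rework, or divest the campaign.
```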

Count the Little Things
Campaigns with the fewest views are often discarded immediately as unsuccessful or ignored in favor of optimizing results for the larger audience. On the contrary, one of the first things marketers need to know is whether low-profile campaigns are wasteful or delivering quality over quantity. Looking at the campaign in isolation, marketers typically resolve the quantity-versus-quality question by comparing its impression-to-click ratio to that of other campaigns. From a multi-channel perspective, we know that repeat visits across devices are a strong indicator of a quality lead, but they can drive the click percentage down at the same time. So, marketers need to test these seemingly low-performing campaigns to see how changes influence the big picture of sales and conversion rates.
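The hypothetical comparison below shows how a small campaign can look weak on click-through rate alone yet still deliver better leads per click once repeat, cross-device visitors and eventual sales are counted. The campaign names and numbers are invented for illustration.

```python
# Hypothetical quality-vs-quantity comparison of a low-traffic campaign
# against a larger one. All figures are illustrative only.

campaigns = {
    "niche newsletter": {"impressions": 2_000, "clicks": 40,
                         "repeat_visitors": 18, "sales": 9},
    "broad display ad": {"impressions": 80_000, "clicks": 2_400,
                         "repeat_visitors": 120, "sales": 15},
}

for name, c in campaigns.items():
    ctr = c["clicks"] / c["impressions"]
    repeat_share = c["repeat_visitors"] / c["clicks"]
    sales_per_click = c["sales"] / c["clicks"]
    print(f"{name}: CTR {ctr:.2%}, repeat share {repeat_share:.0%}, "
          f"sales per click {sales_per_click:.2%}")
```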

Slow Down
Measuring marketing performance by conversion rates as opposed to mere clicks and impressions means you'll face delays in getting the answers you need. Customers engage with you in multiple ways during the purchase cycle. So, lightning may strike today, but the thunder of a sale won't be heard for days, weeks, or months. That's why it's important to tap the brakes when applying the fail-fast model to marketing. Success, like failure, takes time. Reacting too quickly to changes in conversion rates that haven't yet materialized can lead you to credit a more recent campaign for a trend when you're actually reaping the benefits of seeds sown long before.
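A minimal sketch of that patience, assuming a typical thirty-day lag between first click and sale, might hold off on scoring any cohort until its conversion window has fully elapsed. The window length, cohort names, and figures below are all assumptions.

```python
# Only score cohorts whose conversion window has fully elapsed, so a new
# campaign isn't credited (or blamed) before delayed sales have landed.
# The 30-day window and all figures are assumptions for illustration.

from datetime import date, timedelta

CONVERSION_WINDOW = timedelta(days=30)
today = date.today()

cohorts = [
    {"campaign": "spring email", "start": today - timedelta(days=60),
     "visits": 4_000, "sales": 120},
    {"campaign": "new landing page", "start": today - timedelta(days=10),
     "visits": 3_500, "sales": 25},
]

for c in cohorts:
    if today - c["start"] >= CONVERSION_WINDOW:
        print(f"{c['campaign']}: conversion rate {c['sales'] / c['visits']:.2%}")
    else:
        print(f"{c['campaign']}: still inside the conversion window; too early to judge")
```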

Watch the Ripples
While the isolated clicks and impressions of a campaign never tell the whole story, increases in homepage impressions, organic searches, and visits from other channels can signal the enduring interest of customers who may convert down the line. While it's in no way a final word on the performance of a campaign or the quality of the leads it engages, it can be a near-term indicator of improved performance if other factors are controlled for. Bear in mind that seasonal changes and other external factors, such as a radio spot, can lift performance across campaigns and channels at once.

Sales and conversions are not just the bottom line of your marketing strategy. Now more than ever, conversions are the metric for measuring the value of every pit stop along the customer journey as it traverses multiple devices and channels. And these displacement-based attribution strategies help marketers stay in step with consumers no matter where the changing technology landscape takes them.

Manpreet Singh is Founder and President of TalkLocal, a local startup with apps on iPhone and Android that help consumers find and speak to high-quality local professionals in minutes. Follow him @MSinghCFA.
