I left my last conversation with CEO Michael Caccavale wondering about my future fantasy football team and mulling over some great insight into the current state of consumer behavior. It’s clear we are convenience-driven and we want what we want when we want it. But brands are still falling short of meeting those needs, despite recognizing their importance. So how do we get there? I’ll start by feeding the boss more coffee and asking questions.
Mobile use presents a lot of opportunities for marketers. Are these missed opportunities without an attribution model?
People will be measuring them, but they’ll be getting bad data. They’ll be assigning the key events to the wrong action. For example, you can’t just add a mobile campaign, see a lift in sales and then funnel all your resources there. The lift occurred because you added a channel, not necessarily because that channel was mobile. So you can’t just abandon your other channels thinking that mobile drove all results. That misconception happens easily and often – with significant consequences – if you aren’t measuring the right data.
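To make that misconception concrete, here is a minimal sketch (in Python, with made-up numbers) of crediting a new channel only with its incremental lift over a holdout group, rather than with all of the growth:

```python
# Hypothetical sketch: a holdout group separates "lift because we
# added a channel" from "lift this channel actually caused".
# All figures below are illustrative assumptions, not real data.

def incremental_lift(treated_sales, treated_size, holdout_sales, holdout_size):
    """Conversion-rate lift attributable to the new channel."""
    treated_rate = treated_sales / treated_size
    holdout_rate = holdout_sales / holdout_size
    return treated_rate - holdout_rate

# 50,000 customers saw the new mobile campaign; 50,000 were held out.
lift = incremental_lift(treated_sales=6000, treated_size=50000,
                        holdout_sales=5000, holdout_size=50000)
print(f"Incremental conversion rate from mobile: {lift:.1%}")
```

Here the treated group converts at 12%, but only the two-point gap over the holdout can be credited to mobile; the rest would have happened anyway, which is exactly why funneling all resources into the new channel is premature.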
Why are so many organizations slow to adopt? That is, what are some of the barriers to adopting a multichannel attribution system?
- Let’s face it – it’s hard.
- You have to decide who’s going to get credit, and that creates departmental in-fighting. For example, the digital agency wants to claim credit, marketing wants to claim credit, and the ad agency… But what are they tracking? It’s hard to know where to look to find out exactly why you’re getting lift.
- Data cleanliness and quality. It’s about how you bring that data together. Understanding data at different levels, and how to combine it, becomes a challenge if your data is low-quality.

What business goal does attribution serve?
Knowing what to do next time, knowing where to spend the money. Measuring. If I just list the channels, how much was spent in each, and the total new customers and new sales we got, that doesn’t tell you the whole story. It’s not just about the number of clicks. You have to look deeper. You have to test aggressively, and test the right things. If all your results are great, you’re not testing enough. You should be failing on some of your tests.
What do you mean by aggressively?
I could test putting a single piece of direct mail out, or I could test mailing someone four times a week for eight weeks. It’s really about how much you’re dialing a campaign or a spend up or down – not just trying something once. You have to be more aggressive than a one-off.
With more marketers headed in this direction, what kind of pitfalls should we be aware of?
To test, you need critical mass. The measurement group needs to be in the hundreds of thousands and you need to be able to keep an eye on your base.
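As a rough illustration of why the measurement group has to be that large, here is a standard two-proportion sample-size calculation (a sketch, not the firm’s method; the baseline response rate and lift are assumptions):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate n per group to detect a shift from rate p1 to p2
    with a two-sided z-test at the given significance and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_b = NormalDist().inv_cdf(power)           # ~0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 10% relative lift on a 2% response rate (illustrative):
n = sample_size_per_group(0.020, 0.022)
print(f"~{n:,} per group, ~{2 * n:,} total")
```

With a 2% baseline, even a healthy 10% relative lift needs tens of thousands of customers per cell, so a treated group plus a holdout quickly reaches the hundreds of thousands mentioned here.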
And you have to balance your short-term and long-term business goals against your testing and learning objectives. There’s a careful balance to strike there. We can put an analytical person at the helm and ask every store, “What are the seven tests you’re running every month?” Now the stores are totally focused on the seven tests, and not on how they’re doing. Or they can’t measure their base mailing because they’re running so many tests and changing them all the time.
Then you have the flip side: mailing people five times a week. They should test by NOT sending to some of those people for a bit. If after three months they don’t see a dropoff, think about that. You may be polluting your message.
Do you agree that bridging offline and digital data remains a work in progress?
That’s exactly what I mean. Some of it is descriptive and location-based, some is not. They may end up using an attribution strategy that’s wrong. Are they doing it and monitoring it? That’s the real question.
We are really good at this. We just found a problem where customers were starting a purchase with the ecommerce channel but the final order completion was being fulfilled by someone in a call center. The bottom line is when you set an attribution model, you have to constantly be looking at it – does it make sense? Does our approach to attribution make sense? There’s no one formula. You have to do it and monitor it. And in 18 months, you revamp it. It’s not a science yet, it’s not straightforward.
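The call-center example shows why the choice of attribution rule matters. Here is a minimal sketch (hypothetical journey and revenue) contrasting last-touch credit, which would hand everything to the call center, with a simple linear split:

```python
# One made-up customer journey: research started online, order
# completed by a call-center rep. Channels and revenue are illustrative.
journey = ["display_ad", "mobile", "ecommerce", "call_center"]
revenue = 100.0

def last_touch(journey, revenue):
    """All credit to the final touchpoint."""
    return {journey[-1]: revenue}

def linear(journey, revenue):
    """Equal credit to every touchpoint."""
    share = revenue / len(journey)
    return {channel: share for channel in journey}

print(last_touch(journey, revenue))  # the call center gets everything
print(linear(journey, revenue))      # each channel gets an equal share
```

Neither rule is “right”; the point is that whichever model you set, you have to keep asking whether its output still makes sense.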
If your marketing strategy is being driven by finance, it’s hard. At some level, you have to say to the CFO that this isn’t going to tie to your balance sheet. That’s not the goal here. Let’s treat different customers differently until we acquire them. Different departments need to treat them differently.
Why do people say they don’t have match rate issues?
Bridging online and offline is hard. Offline data is often consumer- or household-based; it goes to the house. Online data, by contrast, is based on zip 9, DMA and looser criteria like that. So knowing who got it is the hard part.
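A toy sketch of that mismatch (all records and keys are invented): offline records resolve to a household, while online records often carry only a nine-digit ZIP, so a join on the shared key is both lossy and sometimes ambiguous:

```python
# Hypothetical illustration of online/offline match-rate problems.
offline = [  # household-level records
    {"household_id": "H1", "zip9": "02139-1234"},
    {"household_id": "H2", "zip9": "02139-1234"},  # two households, same zip 9
    {"household_id": "H3", "zip9": "60614-5678"},
]
online = [  # device records keyed only by zip 9
    {"device_id": "D1", "zip9": "02139-1234"},
    {"device_id": "D2", "zip9": "94105-0000"},  # no offline counterpart
]

# Index offline households by the only key the online side shares.
by_zip9 = {}
for rec in offline:
    by_zip9.setdefault(rec["zip9"], []).append(rec["household_id"])

exact = ambiguous = 0
for rec in online:
    households = by_zip9.get(rec["zip9"], [])
    if len(households) == 1:
        exact += 1
    elif households:
        ambiguous += 1  # zip 9 alone can't say which household
matched = exact + ambiguous
print(f"match rate: {matched}/{len(online)}, ambiguous: {ambiguous}")
```

Of the two online records here, one has no offline counterpart at all, and the other lands on a zip 9 shared by two households – it counts as “matched,” but you still can’t say who got it.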
What about opt-in data?
It’s about getting a double opt-in so you can get location data and store it against the user. That allows you to understand where they go in the physical world. It’ll be interesting to see how the geo stuff contributes to data hygiene.
We are very good at this, too. Getting to the household level is a start. But you have to go deeper than that. You have to go to the user. We’re trying to turn on and off digital and turn on and off direct mail and show the impact. The opt-in piece comes back to the fact that everyone has match rate issues, but you just keep testing and studying your results.
Last question: Amazon’s Echo Look – creepy or convenient?
Both. You’d have to be super comfortable with your internet provider, and with anyone on your internal network, before putting this in the master closet or anywhere it might have a view of the bedroom.