Tuesday, April 14, 2015

Five Ways To Avoid Unproductive Creative Tests

You Need Better Outcomes
If you are running any kind of campaign with a direct marketing focus, it can be difficult for creative pros to set up an execution strategy that leads to, well, some place actionable. So let's get into the weeds of how to do this, in a way that won't just beat your control, but give you true learning points to use in the future.

1) The data likely has noise in it. Drown that noise with data.

I'm old enough to have learned my direct marketing in direct mail, which is to say, postcards, selling letters and catalogs... and the principles have served me well in the past 15 years of online work. So here's a quick history lesson for everyone who didn't start their career with the Post Office.

In classic (aka offline) direct marketing, you'd split your lists into equal segments, making sure to control for a host of variables that could skew your results. Then you'd pour enough sends into every cell to reach statistical significance, and in general not learn anything actionable for months, or even entire business quarters, especially if you were working in niche consumer categories. Failure was slow and expensive, especially once you factored in postage and printing... but the process also instilled a certain amount of discipline. No one expected or trusted fast results unless the data truly overwhelmed any possible bias. Taking your time, and making sure every test counted, was critical.

I love online work for its speed and economy, but it also tends to inspire compromised and sloppy work, especially when it comes to list segmentation. If you are testing banner ads, for instance, you really shouldn't trust any data set that doesn't reach at least into six figures of impressions... because response rates are so low due to banner blindness and viewability issues. With rare events ruling the day, you also can't get clean splits for things like recency, daypart, or protection against click or impression fraud, especially if you are still working with remnant inventory.

Let's not get too depressed about this, because testing can and still needs to be done... but you should put on your math hat and demand significant data on what you are really measuring (conversions, hopefully). You might still need to run a test for weeks. Plan accordingly, and resist the temptation to cheat the process or declare an early "winner."
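To put rough numbers on "drowning the noise": a standard two-proportion power calculation shows why low-response media demand six-figure cells. The baseline rate and lift below are illustrative assumptions, not figures from any real campaign.

```python
import math

# Illustrative inputs (assumptions, not real campaign data):
# the control converts at 0.1%, and we want to detect a 25% relative lift.
p1 = 0.0010          # baseline conversion rate
p2 = 0.00125         # conversion rate we hope the challenger hits
z_alpha = 1.96       # two-sided significance at alpha = 0.05
z_beta = 0.8416      # 80% statistical power

# Classic sample-size formula for comparing two proportions.
p_bar = (p1 + p2) / 2
numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
             + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
n_per_cell = math.ceil(numerator / (p1 - p2) ** 2)

print(f"Impressions needed per cell: {n_per_cell:,}")
```

That works out to roughly 282,000 impressions per cell before a "winner" means anything at those rates, and the requirement roughly doubles every time the baseline rate halves. Hence the weeks of runtime.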

2) Test single variables, but make sure they are actionable.

Far too many creative pros will generate a multivariate plan that tries to build incremental gains on small-beer points -- a blonde model instead of a brunette, a round call-to-action button over a square one, light animation against static, and so on. What usually winds up happening is a small array of results that is then used to create a Frankenstein ad of "optimal" practices... which fails because, well, it's a Frankenstein, with too much clutter or disparate elements that contradict each other. There's a better way to go about this.

The key is to make sure that your variables are points that speak to greater learning. This is usually best done at the offer headline level, pitting no offer against low and middle price points (say, branding only vs. free shipping at a minimum order value vs. a percentage off at some multiple of the free shipping price point). This simple test gives you clear information on how the v2 ad set should work, based on which segment of the market is providing you the best ROI. If it's the premium price, maybe it's time to revisit your fonts, product photography, or call to action. If it's the free shipping offer, the same calculation happens in reverse. If neither offer works better than the branding-only approach, that says something powerful about your brand, which should also inspire your creative team for the next execution. And so on.

The big point is to make sure that your test (a) drives a real difference, and (b) that the difference is exploitable at a strategic marketing level. This can't be stressed enough; testing for the sake of testing, with no "next move" contingency thought process, is simply an irresponsible use of resources.
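On point (a), "drives a real difference" should mean a statistical one, not a hunch. A minimal sketch of that check, using a pooled two-proportion z-test; the cell counts below are hypothetical, not from any campaign described here:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical cells: branding-only control vs. a free-shipping offer.
z, p_value = two_proportion_z(conv_a=120, n_a=100_000,   # control: 0.120%
                              conv_b=165, n_b=100_000)   # offer:   0.165%

print(f"z = {z:.2f}, p = {p_value:.4f}")
```

Here the offer cell clears the conventional p < 0.05 bar. If it hadn't, the responsible move is to keep the cells running, not to squint at the trend line and declare victory.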

3) When in doubt, test tactically.

At past start-ups in behavioral, email, and retargeting, I concentrated on testing points that were more structural than creative, because the gains were far more likely to reappear in subsequent cells. In behavioral, this meant creating "utility" style ads with real functionality: multiple landing page entry points, "second chance" offers that upped the ante when the user looked likely to close or ignore the ad, or tight recency splits to say something more targeted to the past-7-day or past-15-day user. You get the idea. In email, tactical testing revolved around fairly pedestrian concerns like file size, daypart, mobile platform, or different ways to express graphic relevance. In retargeting, monitoring competitor ads for format options, adding functionality that generally wouldn't be used in run-of-network approaches, varying the aggressiveness of the offer for lapsed users, or providing soft social network entry points all had their place.

In all of these cases, a winning test was something that led to not just a new approach for v2, but a more optimal way to consider the entire next line of attack. Tactical moves are far more likely to be universal, and have far more staying power, than this season's hot image or copy.

4) Innovate with relevance.

When you live in a consumer category long enough, control ads can feel almost oppressive, especially in categories with high turnover. Take, for instance, life insurance. It's something people generally don't think about until a major life event, such as childbirth or a change in family status, and consequently it indexes very highly, among new-to-file consumers, in the 25-to-40-year-old New Parent market. From a creative perspective, this meant that all of the ads, for a good long while at a previous start-up that served a high number of insurance providers, looked the same. A family of four, nuclear, mildly affluent, maybe with a dog to soften the image, and the kids are young. They're walking on a beach, and if you lived in this consumer category during the time that I did, you probably saw the same royalty-free image across a half dozen providers.

There's nothing wrong with a winning control... except that all controls fade in effectiveness over time, and there's always the feeling that you can do something better, especially in direct. So my team would scour royalty-free image banks for new and better images in the same vein, maybe in different settings, or with different demographics. Performance plateaued, until I tried a new image approach: I went away from the family and focused on the couple. The new winning control was a pregnant couple. It worked, and we had an entirely new way of looking at the category, since the truth of the matter was that life insurance prospects are usually in market before the kids are grown.

That sort of thinking has borne fruit in any number of categories in my career, and will likely work for you as well. It usually shows up in brainstorm sessions when you know your market backwards and forwards, but maybe aren't seeing the forest for the trees. So innovate, but in an approach that's still relevant. (And by all means, always test that innovation against your control. There are as many cases where a bright idea didn't work as where it did.)

5) Look beyond your day to day.

Many marketing and advertising pros who live in a single consumer category become truly spectacular experts in said category, and know everything that their competitors are doing, what they've done before, and where the market is likely going to be in the next few business quarters. But what they tend to miss is that there are fallow fields of knowledge in plays that reach the same audience, but in different consumer categories.

I'll bring this back to another real-life example from a past start-up. One of our consistent consumer categories was online education, which, in the time period in question, indexed highly towards urban women 25 and under. The demographic wasn't particularly affluent, which meant that many providers just went with a representative model, placed her in a campus setting, spoke to how fast and easy it was to get a degree that led to a better-paying job, and called it a day on the creative.

What they didn't realize was that this exact same demographic was also being targeted extensively for online personals. Since my company saw the data from all tests and worked in many categories, this led to discovery and visibility moments that I was able to bring into my team's day to day.

So instead of showing overly excited models in campus settings -- neither the actual online experience nor representative of the value of the offer -- our ads worked off what was working for this demo in personals: serious relationships, meaningful connections, and longer copy that spoke to serious commitments and life-changing events. Our education ads moved away from "degree fast" and "high-paying job quick" concepts, and toward how the value of a degree would span decades of earning potential. The concept wasn't an unequivocal success; the campaigns in question tended to drive less response, but higher conversion. They weren't right for every client in the category, and we made sure to share that knowledge with top-tier clients.

The takeaway for creatives was clear, though. If you pay attention to what's working for your audience outside of your category, it could really inspire a meaningful creative test. And could make you seem like much more of a Creative Genius than you may actually be. :)

* * * * *

You've read this far, so by all means, connect with me personally on LinkedIn. You can always email me at davidlmountain at gmail.com. And, as always, I'd love to hear what you think about this in the comments.
