Marketing is a mix of both art and science. But we’re here to talk about the latter.
Science in marketing is not a new concept. In fact, it dates back to the 1920s, when Claude C. Hopkins published the book Scientific Advertising, which introduced the process of split testing, coupon-based customer tracking, and loyalty schemes. The book details an advertising approach based on testing and measuring where, as Hopkins put it, the advertiser is “playing on the safe side of a hundred to one shot.”
Both the process and the results are uncomplicated: advertisers can keep losses from unsuccessful ads to a safe level while amplifying the gains from profitable ones. Psychological concepts have also been used extensively in marketing and advertising. These approaches are predicated on established principles of human behavior, which are often applied when creating “buyer personas,” or fictional representations of a brand’s ideal customers. These personas, along with other supplementary data and a little bit of intuition, help copywriters and creatives craft the most effective messaging.
A century has passed since Hopkins published his book, and his lessons still apply today. Even more so in the digital age, where A/B split testing has become a hallmark of serious marketing. The irony, though, is that despite having endless amounts of data at their disposal, many marketers still find insight to be a stumbling block. In an increasingly complex, fast-changing digital economy, marketers are stuck with shallow metrics like “clicks” and outdated models like last-touch attribution.
But let’s face it: today’s marketing isn’t exactly a smooth-sailing job. Consumer behavior, industry tech, and tactics tend to evolve at lightning speed. For teams to truly make an impact on brands and make more informed decisions, they need to strive for better measurement. This move can help them close the loop between advertising and sales, craft more impactful omnichannel campaigns, and gain deeper insights that power innovation. This way, they can demonstrate the value of their efforts, make the case for better funding, and retain their competitive edge.
Continuous testing, as most successful agencies have found, is the key. And there is a handful of tests available, ranging from simple A/B tests to complex customer experience tests. The question is: how do you make sure you’re hitting the nail on the head?
The Power of A/B Split Testing
A/B split testing may seem intimidating at first glance, but the premise of it is quite simple. Picture yourself in this scenario: You’re all set to launch a new landing page you’ve created for an app. It’s looking fresh and crisp, complete with arresting visuals and snappy content. Your client has given you the green light. Everyone in the team is happy with it.
A/B split testing asks you the question: “But will it work?” It also provides you with an answer.
A data-driven agency would first perform a series of tests to make sure the landing page is fully optimized for its audience and the metrics the client has laid down. These tests involve creating different variants of the landing page to challenge the champion, or original, landing page. Throughout a set period, a challenger will run alongside the champion page, and their performance will be compared. The traffic will be split between the two page variants, meaning the pages will be shown to visitors according to a predetermined weighting. For example, you might split the traffic 60/40 or 50/50.
This weighting should depend on how you decide to run the test: whether you’re testing multiple page variants simultaneously or testing new ideas against the champion page. The former is usually used if the champion page was created from scratch and you want to put several ideas on the table. In this case, you would want to assign equal weight so that each page is treated equally and you can pick a winner. The tricky part is that you’d need to drive as much traffic as you can for the results to be statistically significant.
On the other hand, if you’re testing against a pre-existing page you want to refine or try ideas out on, it would be ideal to give your new page variants a smaller percentage of traffic to mitigate risk. In this scenario, the risk would be losing the traffic that the pre-existing page was already driving to a variant that may not work. The key here is to be meticulous and to create variants that are worthy of being out there.
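The weighted split described above can be sketched in a few lines of Python. The variant names and the 80/20 share below are illustrative, and in practice a dedicated testing tool handles this assignment for you:

```python
import random

def assign_variant(weights):
    """Assign a visitor to a page variant by weighted random choice.

    `weights` maps variant names to traffic shares (summing to 1.0).
    Names and shares here are purely illustrative.
    """
    variants = list(weights)
    shares = [weights[v] for v in variants]
    return random.choices(variants, weights=shares, k=1)[0]

# 80/20 split: protect the traffic an established champion page
# already converts while a riskier challenger is evaluated.
split = {"champion": 0.8, "challenger": 0.2}

counts = {"champion": 0, "challenger": 0}
for _ in range(10_000):
    counts[assign_variant(split)] += 1

print(counts)  # roughly 8,000 champion vs. 2,000 challenger
```

For an equal-weight test of several fresh ideas, you would simply give each variant the same share (e.g., 0.5/0.5 or 0.25 across four pages).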
Of course, A/B split testing isn’t just about finding the winner. You need to learn how to do it properly first, and these tips will set you on the right path.
- Start small – If you’re just getting the hang of A/B testing, proceed with caution. Start with one or two tests first to get a feel for the process and how it will work for you. If you manage clients from various industries, keep in mind that how you perform A/B tests can vary depending on unique factors like size, industry, customer personas, traffic, etc.
- Control your test factors – Avoid comparing apples with oranges. That is, don’t compare things that are fundamentally different, say a headline with a hero image against a headline with a contact form. With that approach you will still get a winner, but you won’t be able to isolate which factors are responsible for the difference in performance.
- Don’t mix tests – If you wish to see valid results, avoid running multiple tests in the same environment. Doing so would mean creating one big A/B test with multiple factors, and adding more factors to analyze results in more diluted data.
- Allow sufficient runway – Timing is everything in A/B testing. You need to give your tests ample time to produce useful statistical data. This could run from a few weeks to months, depending on the size of your business and how much traffic your website gets.
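To judge whether a test has gathered enough data, a common approach is a two-proportion z-test on conversion counts. Here is a minimal sketch using only the standard library; the conversion numbers are made up, and most A/B testing platforms compute this for you:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate
    differ significantly from variant A's?

    Returns the z-score and the two-tailed p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical run: champion converts 200 of 5,000 visitors,
# challenger converts 260 of 5,000.
z, p = ab_significance(200, 5_000, 260, 5_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # conventionally significant if p < 0.05
```

If the p-value is still above your threshold, the honest answer is usually "keep the test running," not "call a winner early."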
Measuring Incremental Revenue
Incrementality is one of the most important things marketers should measure and pay close attention to. Before you can master the tactical nitty-gritty of omnichannel marketing, you must first get an accurate view of how your efforts fuel conversions and which techniques are most effective at delivering positive ROI. And these insights come from measuring incrementality, or the uplift produced by marketing and advertising above native demand.
Across all channels, there’s no doubt that some results simply wouldn’t have happened without some kind of promotion. But for clients to see that their investment is worth it, you must accurately measure the conversions that can be linked to your efforts. More importantly, you should know how to measure incremental lift, or incremental revenue: the difference between ad-driven leads, sales, and other KPIs and native demand norms.
If you’re able to draw a clear line from your activities to sales, you’ll have a better chance of getting senior-level buy-in and securing budget approvals. To get the maximum incremental revenue, you must also strive to keep the incremental cost, or the total advertising and marketing expenditure, to a minimum.
By measuring incremental revenue, you can provide your clients with direct proof of return on ad spend (ROAS), a key metric for digital marketers and businesses. In channels where this metric matters the most, like email, display, and direct advertising, this measurement helps you track the performance of a campaign as a whole. In other words, incremental value proves to be useful for omnichannel marketing, where it’s important to see the whole picture rather than drill down on separate touchpoints.
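One common way to estimate incrementality is a holdout test: withhold advertising from a control group and compare revenue per customer against the exposed group, treating the holdout as native demand. The sketch below uses purely illustrative figures:

```python
def incremental_lift(exposed_revenue, exposed_n, holdout_revenue, holdout_n):
    """Incremental revenue per customer from a holdout test.

    Compares revenue per customer in the exposed group (saw ads)
    against a holdout group (no ads) representing native demand.
    """
    per_exposed = exposed_revenue / exposed_n
    per_holdout = holdout_revenue / holdout_n
    return per_exposed - per_holdout

def incremental_roas(lift_per_customer, exposed_n, ad_spend):
    """Incremental ROAS: ad-driven revenue divided by ad spend."""
    return (lift_per_customer * exposed_n) / ad_spend

# Illustrative numbers: $120k from 10k exposed customers ($12 each)
# vs. $50k from a 5k-customer holdout ($10 each).
lift = incremental_lift(120_000, 10_000, 50_000, 5_000)
roas = incremental_roas(lift, 10_000, 8_000)  # $8k campaign spend
print(lift, roas)  # 2.0 per customer, 2.5x incremental ROAS
```

Here the campaign added roughly $2 of revenue per exposed customer beyond native demand, so $20,000 of the exposed group's revenue is attributable to the $8,000 spend.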
Adopt a Data-Driven Approach to Marketing
Today’s CMOs and omnichannel marketing teams don’t have it easy. They’ll be spending the next few years managing endless change, anticipating trends, staying competitive, and pushing for innovation. And a test-driven, data-centric culture is what they’ll need to achieve all of this.
To learn more about testing in digital marketing, speak to a seasoned CMO at Growth Rocket today.