Listener Q&A: How To Learn From My A/B Tests
Tons of questions coming in about testing and how to learn from them. This is one of my favorite topics straight out of my MIT Grad School work. In this episode, I give you a 3-step guide on how to test and apply your learnings - and how to use your data to inform your next testing strategy.
Want to join our Email Membership?
Monthly Goal Setting & Planning Workshop
Monthly MasterClass with milestones, hands-on activities, and takeaways
Access to the MasterClass library to rewatch or jump into a topic of interest at any time
Monthly Virtual Meet-Up
Monthly Live Coaching Call
Monthly Instagram Live
Ongoing Support from our eSociety Strategy Squad via Slack & Email
Networking community via Facebook Group
To get more info, join the waitlist here.
Send them my way at firstname.lastname@example.org.
This episode is brought to you by Klaviyo, an eCommerce marketing automation platform with automated email flows that make you money while providing a great user experience. With Klaviyo, you don’t need to sacrifice advanced features and powerful functionality for speed and ease of use. To see how this tool simplifies your eCommerce email and SMS marketing, go to www.Klaviyo.com.
Welcome to episode 28. I hope your Labor Day sales, whatever version you decided to try, were a huge success. A couple of things to know before we get started with our episode: first, if you're looking for a monthly masterclass surrounded by an email marketing community, our membership may be the thing for you. Join the waitlist for the founder's fee. That's so exciting.
We get a bunch of questions sent to Conversations@EmailGrowthSociety.com but the one that has been coming in a lot is around testing and more or less how to learn and how to apply the learnings. I get it. It is complicated in so many ways but, hopefully, I can bring some color around that. Let's begin. First, do you know what you are testing for? It is important to understand what you are trying to improve.
If it's open rates, you can test the time of day, day of the week, sender name, subject line and preheader (preview) text. That's it. If you're trying to optimize your click-through rate, send time wouldn't be the right lever. If you are looking to improve your click-through rate, some things you can try are email template changes, CTA placement, text and colors, and user eye path tests.
What the user eye path test means is this: naturally, our eyes read left to right. With that same sentiment, if you test putting text next to an image, following an image with text, versus putting text above images, you can start to see the response people had based on their click-through rate. One of my favorite metrics, which I have talked to you about before, is the click-to-open rate, which isolates the openers who click. You can consider the copy and the images for those folks. If you're looking at that metric, movement in it is usually largely due to the content, the copy and the images in the email.
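To make the math behind those metrics concrete, here's a minimal Python sketch; the counts are hypothetical, and the formulas are the standard definitions discussed above:

```python
def email_rates(delivered, opens, clicks):
    """Compute the core email metrics from raw counts."""
    open_rate = opens / delivered            # openers among everyone delivered
    click_through_rate = clicks / delivered  # clickers among everyone delivered
    click_to_open_rate = clicks / opens      # clickers isolated to openers only
    return open_rate, click_through_rate, click_to_open_rate

# Hypothetical send: 10,000 delivered, 2,500 opens, 300 clicks
o, ctr, ctor = email_rates(10_000, 2_500, 300)
print(f"Open rate {o:.1%}, CTR {ctr:.1%}, CTOR {ctor:.1%}")
# → Open rate 25.0%, CTR 3.0%, CTOR 12.0%
```

Because CTOR divides by opens rather than by everyone delivered, it tells you how compelling the content was to the people who actually saw it, which is why it points you at copy and images rather than subject lines.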
Once you know what you're testing, you have to conduct your tests. This can be trickier than meets the eye. First, you have to ensure statistical significance. As a reminder, statistical significance is your percentage of confidence that if you make this change in an ongoing way, you're going to see continued results. List size is a big contributor to being able to achieve statistical significance.
One great example of this is in one of my clients' Klaviyo flows. We noticed that there hadn't been any revenue generated from the new lead welcome flow. My first instinct was, “Let me go ahead and start doing some subject line testing to improve the open rates.” When you look more closely, though, if only six people a month are coming through the new lead welcome, it is going to take a very long time for the data to tell us what to do.
Adding variations there would be virtually impossible. I would rather use my bigger group of leads and test something with them where I could achieve statistical significance. If you have a larger list, let's say 10,000 per test, then you're good to go. If you do not, you will need to conduct your tests a little differently. You can set up your test in your ESP and you will likely get a winner, but it is not going to be statistically significant.
From there, you're going to need to run the test again to validate. It's so important that you do this. One useless thing I see happen a lot: if you don't have a list with at least 10,000 people on it, those A/B tests where you send each variation to 20% of the list and then send the winner to the remaining 60% are never the best path forward, because the data is virtually useless. It's like flipping a coin, and whatever subject line won that test doesn't matter because you cannot achieve statistical significance.
You want to formulate your tests so that you have some control. When I say control, I mean doing a 50/50 split. If you have a smaller list, make sure you validate the test by running it again. If you get two tests with the same clear winner, then you can likely conclude that whatever you did for that subject line formula could be applied more broadly.
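As a rough sketch of what "statistically significant" means for a 50/50 split, here's a two-proportion z-test in Python. This is one common approach, not necessarily what your ESP uses, and all the numbers are hypothetical:

```python
import math

def ab_significance(opens_a, sends_a, opens_b, sends_b):
    """Two-sided two-proportion z-test on open rates.

    Returns a p-value: roughly, the chance you'd see a gap this large
    if the two subject lines actually performed the same.
    """
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Large list: 10,000 per variant, 20% vs. 22% open rate
print(ab_significance(2_000, 10_000, 2_200, 10_000))  # well under 0.05: significant
# Tiny flow: six leads a month, even with a doubled open rate
print(ab_significance(1, 6, 2, 6))                    # far above 0.05: a coin flip
```

With six leads a month, even a doubled open rate produces a p-value around 0.5, coin-flip territory, which is exactly why the bigger segment is the right place to run the test.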
In the case I mentioned before, where you have low lead volume coming into your new lead welcome, you can then apply that formula to the lead welcome because you know it at least worked in a bigger environment. Lastly, you apply your learnings. Some tests are a lot easier to implement than others. For example, once you know your optimal send times for each day of the week, you can send within those time ranges, or at that exact time, every single time you send on a given day. That's an easy one.
We always say that send times are your biggest win because they're consistent. If you test them twice a year, you might find a small variation, but they're going to be your most consistent and biggest winners. If you have identified a best day of the week, reserve it for your most important emails so you don't overuse that day.
Remember what we said about recipient conditioning: if we always send emails on Tuesdays, we can settle into such a rhythmic pattern that people start to ignore us. Let's say you have two promos geared to drive sales and a few educational sends for the month. The promos should be sent on your best day of the week, while you still leverage your optimal send times on the other days for your other emails. Once you find a golden day, don't overuse it.
If you have learned which subject line formulas work best, prioritize them on a sheet. For example, if you've tested asking a question, being direct and front-loading your discount, or attention grabbers like emojis, you can rank these from your best performer to your worst performer. Knowing your worst performer lets you steer clear of it unless the context of a particular email calls for it; use each formula where it makes sense.
Your most important sale days would get your best-performing subject lines. You can still use other higher-performing formulas, even if they're not your absolute best. The worst thing you can do is use the same best-performing subject line formula over and over again. If you're a brand for whom asking a question is the thing to do, could you imagine asking a question in every single email you sent? You can't, because that's silly and impractical.
That's the best approach for subject lines. Let's say we look at templates. If you see that having a navbar in your email template works well, then plan to incorporate it in 95% of your emails. There are times when you wouldn't want to include this element because it would cause unnecessary distraction or wouldn't be aligned with the email's purpose: notification emails, transactional emails letting you know about shipping, emails about events, or even something as simple as an abandoned cart email. There, your sole purpose is to focus the reader on their cart, so the navbar could cause a little friction.
Think about the strategy of the email and ask yourself: if the customer does not read your message and goes straight to the website, would that be beneficial? In some cases, it wouldn't. A lot of the time it will be okay, but not always. Lastly, when it comes to testing content like free shipping banners and lifestyle images, remember to test for your leads and customers separately.
The same elements won't necessarily work for both groups. It is important to understand which content elements work well for each group separately and incorporate them where it makes sense. One easy test: create two templates, one with a free shipping banner and one without. Send both to your leads and to your customers and see, first, does the free shipping banner move the needle? Yes or no? Maybe it hands down does not for your customers. You can then conclude that adding free shipping messaging does not work for customers, while maybe it does for your leads. Be very strategic.
In testing, you have to have hypotheses going in. Let me also give you this tip. A cool way to learn from your email data and generate these potential hypotheses for tests is to put all of your data into a Google Sheet or Excel sheet and run conditional formatting on it, in essence creating a heat map. A lot of times, we sit in our ESP, look line by line at emails and say, “That did well. This did well.”
When you start to take all the data out for the month and put it in a sheet, you can see the whole picture conditionally formatted. You have this giant heat map in front of you. This allows you to see some real winners. You can start to collect your winning emails and trends are going to begin to appear when you do this.
I consider a winning email one that has the most consistent heat map color, in those warmer colors, from opens all the way to revenue. We don't want a green forest in the heat map; we're looking for the redder colors. From here, I capture the email creative and place it into a Google Slide with the subject line, when I sent it, and to what audience. Over time, you will be able to see some clear trends.
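If you'd rather script it than spreadsheet it, here's a rough pandas sketch of the same idea, with invented numbers. Scaling each metric to 0-1 mimics what conditional formatting does, and scoring each email by its weakest metric means a "winner" has to run hot all the way from opens to revenue:

```python
import pandas as pd

# Hypothetical month of campaigns exported from your ESP
df = pd.DataFrame({
    "email": ["Promo A", "Edu 1", "Promo B", "Edu 2"],
    "open_rate": [0.31, 0.24, 0.28, 0.22],
    "click_rate": [0.042, 0.020, 0.038, 0.018],
    "revenue": [1800, 250, 1500, 120],
})

metrics = ["open_rate", "click_rate", "revenue"]
# Scale each metric 0-1, the way conditional formatting colors a column
heat = df[metrics].apply(lambda c: (c - c.min()) / (c.max() - c.min()))
# A winner is hot across every column, so score each row by its coolest cell
df["heat_score"] = heat.min(axis=1)
print(df.sort_values("heat_score", ascending=False))
```

The top of that sorted list is the creative worth capturing into your Google Slide; an email that spikes on opens but scores near zero on revenue will rank low, which is exactly the behavior the consistent-color rule is after.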
One thing we see often with our clients is that a simple template drives clicks to sales better than a super image-heavy one. Many times, clients think there should be a ton of images, but simple tends to work well. Once we see this trend in the data, we can set out to test our hypothesis: “Does a simple template work better to drive clicks to revenue than an overly image-heavy email?” Once the test is over, we will likely have an answer and can apply our learnings to future emails where it makes sense. It's brilliant and easy, but you need to make sure you're collecting the data. That is the first battle.
Remember that the worst thing you can be doing is sending out emails from your ESP, checking in on open rates to revenue and moving on. You need to be collecting the data in one place, looking closely at it and testing every quarter at least. Your ESP is great and provides some cool insights but data scientists exist for a reason. You need to be able to pull out the data so that you can see your email evolving over time.
If you do the heat map approach over time, you will see all of the green, which is lower-performing content, start to shift a little hotter toward the better-performing colors as you slowly improve. If you pull your email data month over month and never see the green going away, you should be testing to improve those individual micro-conversions.
If you're going to take the time to pull this data, incorporate it into a consistent testing culture. Email is a living, breathing channel and it needs to be supervised. It is not a superhero that will come in and drive sales for you blindly. You need to understand what levers are possible and when best to pull them. If you have more questions around this, feel free to send them my way at Conversations@EmailGrowthSociety.com. Until next time. Happy emailing.