When you’re working with clients who expect a return on investment from their SEO budget (as most clients do and should), it often presents several challenges for the SEO practitioner:
- How do you cost-justify your SEO recommendation without being able to prove its value (beyond, “we’ve seen this work before”)?
- Are there enough client resources (budget or internal capacity) to work on an unproven SEO recommendation?
- Will you find your consultative recommendations deprioritized and pushed further and further down implementation queues (development, content, or otherwise)?
- What other issues do you run into? Share with the rest of us in the comments for some interactive problem-solving together!
Overcoming client barriers with SEO testing
As someone who works in the space, I have seen many custom solutions developed from agency to agency (like those by the technical SEOs at LOCOMOTIVE Agency) over the years to tackle problems like the above. And, in talking to agency owners and SEOs, I can tell you it used to be much harder to do any kind of split-testing before the advent of SEO SaaS split-testing tools, like SplitSignal.
Read more about SEO split-testing: SEO A/B Split Testing 101
Why SEO split-testing?
Running measurable tests on statistically significant volumes of traffic to your website can break down the barriers to getting your recommendations implemented quickly, and can rapidly demonstrate your value as an individual or your agency's value as a whole.
Whether you have a positive or negative test, you can show value. A positive test can be prioritized for implementation based on the overall impact it’s going to have on the website. A 50% increase in traffic, for example, should be a pretty high priority for your client (and you). A negative test result isn’t a bad thing; it keeps you from wasting precious time and budget on implementing something that isn’t going to work. As you know, what works in one vertical doesn’t always work in another.
6 best practices for split-testing your SEO recommendations
1. Set a hypothesis before you get started
Once your testing tool is ready to go, you need a test you can configure and run. Developing a hypothesis starts with asking a question. For example: will adding “Holidays 2021” (or the current year, if you’re reading this in the future!) to your meta title improve the click-through rate to your products during the holiday season?
By choosing high-impact elements within your website (e.g., title tags, heading elements, meta descriptions, etc.), you can draft simple tests with significant rewards. However, if your test is too narrow, you may not be able to observe enough of an impact to draw any helpful conclusions.
Once you have your hypothesis at hand, you’ll be ready to start implementing your test because what you want to test should be clearly defined.
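To make the holiday-title question above concrete, you can frame it statistically: does the click-through rate of the test group differ from that of the control group? Below is a minimal sketch using a standard two-proportion z-test in plain Python; the click and impression numbers are purely illustrative, and `ctr_z_test` is a name made up for this example, not part of any tool mentioned here.

```python
import math

def ctr_z_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test comparing two click-through rates.

    Returns the z statistic and a two-sided p-value
    (normal approximation, pooled under the null hypothesis).
    """
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled CTR assuming the title change made no difference
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Control titles vs. "Holidays 2021" titles (made-up numbers)
z, p = ctr_z_test(1_200, 50_000, 1_350, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the CTR difference is unlikely to be random noise; a large one means you can't yet distinguish the change from normal fluctuation.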
2. Make sure your sample size is large enough
If you aren’t familiar with statistics, one thing you need to know is that any statistical test, split-testing included, needs a sample size large enough to support valid conclusions (much like you’d trust a 5-star rating from thousands of restaurant patrons over a single 5-star rating from one visitor).
In most instances for SEO, your sample is going to be traffic or clicks to your pages from search engine users. A number of external factors can also affect your sample by causing major fluctuations in traffic. In general, you can guard against these by making sure your control group of pages and your test group are comparably diversified. With a diversified test group, you don’t even have to worry about an algorithm change landing in the middle of a test, because it should affect both groups evenly.
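As a rough sketch of how sample size relates to what you can detect, the standard two-proportion power calculation estimates the impressions needed per group before a given CTR lift becomes detectable. The baseline CTR, lift, and function name below are illustrative assumptions, not figures from any specific tool:

```python
import math

def sample_size_per_group(base_ctr, relative_lift):
    """Approximate impressions needed per group to detect a CTR lift.

    Uses the standard two-proportion normal approximation with
    z values for a two-sided 5% significance level (1.96)
    and 80% power (0.84).
    """
    z_alpha, z_beta = 1.96, 0.84
    p1 = base_ctr
    p2 = base_ctr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 2.5% baseline CTR (illustrative numbers)
print(sample_size_per_group(0.025, 0.10))
```

The intuition to take away: small lifts on low baseline CTRs require tens of thousands of impressions per group, which is why narrow tests on low-traffic pages rarely produce usable conclusions.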
3. Communicate your intentions with other stakeholders
If you are implementing an SEO test that may affect other internal or external stakeholders, let them know ahead of time to avoid duplication of effort.
For instance, if you are going to run an SEO-related test on a group of e-commerce product pages, perhaps altering a heading element, you should consider notifying your CRO team. They may be planning a similar test at the same time that might conflict with your experiment.
Asking marketing and other product owners ahead of time avoids unnecessarily invalidating tests and builds cooperation internally or amongst multiple agencies simultaneously.
4. Understand the time it takes to reach statistical significance
In general, it will take time and volume to reach a result significant enough to rely on.
We’ve discussed volume in best practice 2 above, so let’s talk about time. Time is directly related to volume: the higher your volume, the less time required to reach a reliable result.
With SplitSignal, we’ve found it generally takes 2-3 weeks to reach a conclusive test (whether positive or negative) on sites with a minimum of 200,000 clicks per 100 days.
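The time-volume relationship can be sketched as simple arithmetic: pick a click threshold and divide by the average daily clicks your test pages receive. The daily figure below is an illustrative assumption, and `days_to_reach` is a hypothetical helper, not part of any product:

```python
import math

def days_to_reach(target_clicks, avg_daily_clicks):
    """Rough estimate of the test duration needed to hit a click target."""
    return math.ceil(target_clicks / avg_daily_clicks)

# If your test pages collectively earn ~12,000 clicks/day (illustrative),
# accumulating a 200,000-click sample takes about:
print(days_to_reach(200_000, 12_000))  # 17 days
```

The same threshold on a site earning 2,000 clicks a day would take 100 days, which is why lower-traffic sites need to plan for much longer test windows.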
5. Focus on your high-impact pages
You want the results of your tests to provide a high impact for your customer, or yourself, so aligning your tests with the goals for the types of content you have on your website is critical.
Website traffic can vary widely depending on the topic and intent of your content types, be they e-commerce-focused, lead generation, or higher funnel content types like blog posts.
Generally speaking, however, the types of content that will receive the highest prioritization from an implementation standpoint are going to be your money pages, so pages directly related to sales on an e-commerce site, such as product and taxonomy pages, or lead-generation-related pages.
Picking templatized pages (product pages, for example) for your test is a great approach, as the templates cover the same types of content and make for clean comparisons (you wouldn’t want a test group of blog pages and a control group of e-commerce pages, for example).
By focusing on these types of pages, you can easily show the return on investment and get your recommended changes prioritized higher by showing the actual value of implementing the change.
6. Focus on the right metrics for success
And the final recommendation is to make sure you’re focusing on the right metrics for measuring the overall performance of your SEO test.
In general, SEO-related tests are going to focus on traffic metrics like clicks, click-through rates, and impressions. These are the areas you should be paying attention to.
Other metrics that are more engagement- and conversion-focused should be avoided (a controversial statement, but let me explain further). These are important metrics for conversion testing, but that’s not what we’re doing right now. We’re testing how search engines are reacting to your changes, NOT your on-site users.
Now, I hear some of you saying, “Qualified traffic is what we want, so conversion metrics are important.” And you’re right; I agree 100%.
If your pages are set up correctly with the right content, Google generally knows which types of traffic to send to your pages organically for a conversion, so you should be okay. If your pages aren’t written well for conversion, fix that first and hold off on split-testing for SEO for the moment.
But for SEO testing, remember, as I said above, we’re testing how search engines’ algorithms react to our changes.
Now you know, and knowing is half the battle
Understanding the ins and outs of an SEO split-test campaign is important for your success once you’re running your own tests.
While the above doesn’t represent all of the best practices, it covers six of the most important.
Do you have best practices to share with your colleagues? Let’s hear about them in the comments below.