A/B testing, also known as split testing, is most often used to increase website conversion rates, and is also applied in advertising channels to test ad copy and visuals. SEO A/B testing is not as widely discussed as these other forms, but it is an interesting market development: it lets you estimate the ROI of a variant and fine-tune your website’s SEO implementation.
By putting together a group of similarly formatted webpages and splitting them into two groups – A and B – you can compare their performance: one group serves as the control, while the other models the adjustment you expect users to prefer. By forecasting the progress of A and B, split-testing software can often estimate the potential of each adjustment.
More SEO traffic with a data-driven approach
SEO-specific A/B testing is a relatively new phenomenon. Like website A/B testing, it should be started early and continued throughout the design phase. Because it is virtually impossible to pinpoint the result of a variant in advance, results are often surprising. Existing business cases often turn out very differently than expected, and what works well in department X does not necessarily work as well, if at all, in department Y. The same applies to SEO.
Usually, one takes a group of website pages and puts an array of variants through the A/B testing process. If a variant results in more organic traffic, it is then implemented throughout the website. Once the adjustment has done its job and increased organic traffic, page formats can remain the same indefinitely, preventing unnecessary expenditure of time and money.
A positive A/B or split test result therefore leads to the implementation of a variant. A result that shows no significant change lets you cut the development time that would otherwise have been spent on a worthless variant. A negative result provides the information needed to prevent the implementation of a damaging variant. All in all, a great way of saving energy, manpower, time and money, but only if used throughout the entire design process and in the correct manner. If you don’t test, you won’t see the results of your SEO choices until much further down the line.
Quick turnover benefits of SEO A/B testing
Google encourages split testing and certainly does not penalise it, unless a test continues for an exceptionally long period of time, at which point it can be seen as an attempt to throw off search engine algorithms. Long-term testing may have benefits in terms of data and learning, but implementing positive variants as soon as they are discovered makes much more sense than waiting for the test to run its entire course. Not only can this save costs; the quicker a true winning variant is identified and implemented, the quicker one can move on to the next test. There is no sense in continuing a split test once the winning variant has been unearthed, and the more one tests, the more results one receives and the higher the positive impact. In short, discarding a test once a significantly positive variant has been detected and implemented opens up more opportunities to test further variants.
Efficient sequential testing procedures such as AGILE can detect winning and losing variants much more quickly than fixed-sample tests, and the software implementing them removes the need for a deep understanding of the statistics on which all A/B testing is based. A distinction must be made between statistical A/B testing of user preferences and usability testing, which is geared towards observing user behaviour. These testing types differ fundamentally in method and should not be confused.
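The early-stopping idea behind sequential procedures can be illustrated with a toy sketch. This is not the actual AGILE algorithm: the daily session counts and the stopping boundary below are invented for illustration, and real sequential methods calibrate their boundaries to control error rates properly.

```python
import math

# Illustrative early-stopping loop (NOT the real AGILE procedure):
# check the test daily and stop as soon as the z-statistic for the
# difference in daily sessions crosses a conservative boundary.

def z_stat(a, b):
    """z-statistic for the difference in means of two lists of daily counts."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (mb - ma) / math.sqrt(va / na + vb / nb)

# Invented daily session counts: the variant trends roughly 10% higher.
control = [100, 102, 98, 101, 99, 103, 100, 97, 101, 100]
variant = [104, 109, 112, 108, 113, 110, 115, 109, 112, 114]

BOUNDARY = 3.0  # conservative stopping threshold, chosen for this sketch
for day in range(3, len(control) + 1):  # require a few days of data first
    z = z_stat(control[:day], variant[:day])
    if abs(z) > BOUNDARY:
        print(f"stop on day {day}, z = {z:.1f}")
        break
```

With a clear winner, the loop stops well before the full ten days of data are used, which is the point of sequential testing: the variant can be rolled out and the next test started sooner.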
False positives can also occur, for example when content is frequently updated during the testing period. High traffic and multiple channels introduce variables that are difficult to identify, control and eliminate, and these can distort results. In an ideal world, one would freeze the site during testing and hold off from adding content, but this is rarely possible. For SEO, using a dedicated SEO A/B platform such as the one developed by Distilled may reduce false positives, although even Distilled admits the method is not immune to them.
SEO A/B testing has uncovered some unexpected results. Click-through rates are often lower when more high-volume keywords are used. This may be due to competition, or because less popular phrases – which score poorly on their own but perform well in combination – have a more positive effect than high-volume choices.
Classic website audits have also been shown to work only sixty to seventy percent of the time; the remainder of their recommendations have very little effect at all. This not only leads to long, drawn-out processes when finding the right content, it also creates a sense of futility among staff. High-quality SEO A/B testing software can test multiple variants at once and pursue only those that make the most impact. Even then, website audits appear to have less effect on rankings than previously thought.
One must also beware of overgeneralising A/B test results, which can turn a positive effect into a negative one. While removing content from one page may improve its test scores and produce a much better forecast, this does not mean the approach should be applied to every page. Again, using technology that allows more specific testing of smaller page groupings is recommended, but testing too few pages can have the opposite effect, because it is impossible to isolate all page changes completely. Factors such as new or lost backlinks, or growth in search interest for a specific product or product category, cannot be entirely excluded. Their influence on the calculations can be limited by testing a larger number of pages – preferably more than ten per group. This complicates matters enough that it is probably worth consulting professionals in the field of A/B testing: they will know which tests work best on larger page groups, and which give less tainted results in a smaller grouping.
How does SEO A/B testing work?
SEO split testing takes a group of pages and divides them into two groups with approximately the same share of traffic. One group is the control group (original), on which no adjustments are made; the adjustment is made on the pages of the other group (the variant group). Both groups must consist of the same page type, for example all category, site or product pages.
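The split itself can be sketched in a few lines. The minimal Python example below (the page URLs and traffic figures are invented) sorts pages by traffic and assigns them alternately, so that both groups end up with roughly the same share of traffic:

```python
# Split a set of same-template pages into control (A) and variant (B)
# groups with roughly equal organic-traffic share.
# URLs and visit counts are invented example data.
pages = {
    "/product/red-shoes": 1200,
    "/product/blue-shoes": 1100,
    "/product/green-shoes": 950,
    "/product/boots": 800,
    "/product/sandals": 700,
    "/product/slippers": 650,
}

# Sort by traffic (descending) and alternate assignment: the heaviest page
# goes to A, the next to B, and so on, which keeps the totals balanced.
control, variant = [], []
for i, (url, visits) in enumerate(sorted(pages.items(), key=lambda p: -p[1])):
    (control if i % 2 == 0 else variant).append((url, visits))

print("A:", sum(v for _, v in control), "B:", sum(v for _, v in variant))
```

Real platforms use more sophisticated matching, but the principle is the same: the closer the two groups' traffic profiles, the more reliable the comparison.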
The difference between SEO A/B testing and the usual form of A/B testing (for conversion rate) is that in the latter, two variants exist for each page and each visitor is shown one of them; moreover, Googlebot is not aware of the test. A CRO A/B test will therefore not tell you whether there is an effect on organic traffic. In an SEO test, that effect only becomes visible over time, by comparing the results of the control and variant groups. Once data has been gathered, it must be tested for statistical significance.
In the specific case of SEO, both technical and content elements can be tested. These should at least include:
1. Adjusting page titles and meta tags
2. Adjusting, adding or removing on-page content
a. Textual content (headings or body copy)
b. Visual content (images, videos, maps, etc.)
c. Internal links
3. Adding or adjusting technical elements
a. Structured data markup
b. Canonical or hreflang attributes
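As a concrete illustration of the first item, a variant might, for instance, append a benefit statement to the page title for variant-group URLs only, leaving the control group untouched. A minimal sketch (the URLs and the title pattern are invented examples, not a real platform’s API):

```python
# Serve a page-title variant to the variant group only; control pages
# keep their original titles. URLs and title pattern are invented.
variant_urls = {"/product/blue-shoes", "/product/boots"}

def page_title(url, original_title):
    """Return the title to serve for this URL while the test is running."""
    if url in variant_urls:
        # Variant: append a benefit statement to the title.
        return f"{original_title} | Free Next-Day Delivery"
    return original_title  # control: unchanged

print(page_title("/product/boots", "Leather Boots"))
print(page_title("/product/sandals", "Summer Sandals"))
```

In practice this kind of rewrite is applied server-side or at the CDN edge, so that Googlebot sees the changed titles on the variant group.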
When is a test result significant?
Determining significance is a must: you want to rule out the possibility that the result occurred by chance. The difficulty with SEO A/B testing is that the two groups of pages are not completely identical. They overlap in design but differ in subject and content. For this reason, results cannot simply be compared page by page.
To compare them, you need time series analysis. Based on historical visitor data from both the control group and the variant group, a forecast is made of what visitor numbers would have looked like had the test not taken place. This makes clear what effect the variant has had on visitor volume. A test is considered successful if the actual numbers of the variant group significantly exceed the prediction, provided the same is not true of the control group.
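The counterfactual idea can be sketched in a few lines: fit a simple linear relationship between control-group and variant-group traffic over the pre-test period, use it to forecast what the variant group would have done without the change, and compare the forecast with the actual figures. This is a bare-bones stand-in for the structural time-series models real platforms use; all the figures below are invented.

```python
# Minimal counterfactual sketch: predict variant-group traffic from
# control-group traffic via a least-squares line fitted on the pre-test
# period, then measure uplift during the test. All figures are invented.
pre_control  = [500, 520, 480, 510, 505, 495, 515]   # daily sessions
pre_variant  = [250, 262, 239, 256, 252, 248, 258]
test_control = [505, 498, 512, 520]
test_variant = [290, 288, 301, 307]                  # after the change

n = len(pre_control)
mx = sum(pre_control) / n
my = sum(pre_variant) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(pre_control, pre_variant))
         / sum((x - mx) ** 2 for x in pre_control))
intercept = my - slope * mx

# Forecast what the variant group would have done without the change.
forecast = [slope * x + intercept for x in test_control]
uplift = sum(a - f for a, f in zip(test_variant, forecast)) / len(forecast)
print(f"average daily uplift vs forecast: {uplift:.1f} sessions")
```

Because the forecast is driven by the control group's actual traffic during the test, site-wide effects (algorithm updates, seasonality) are largely factored out of the uplift estimate.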
The calculation model is based on Google’s own research. The use of this method is interesting because other site-wide factors can be excluded. This includes Google algorithm updates, seasonal influences and industry-specific trends. These will not only have an effect on the results of the variant group, but also on those of the control group.
It all sounds positive, but SEO A/B testing is far from easy, for three main reasons. Firstly, content management systems (CMS) generally work with a fixed template per page type, which makes A/B testing of a page’s layout and design a challenge. Some of the better-quality software makes it possible to apply adjustments outside of the CMS, but there does not yet seem to be a product on the market that can handle every one of the recommendations in this article. Secondly, SEO split testing cannot be applied to every type of website, because the test is carried out only on pages with the same template; service websites are therefore less suitable for this type of testing, while e-commerce websites will get the most out of it. Thirdly, a lot of organic traffic is needed to establish a high-quality predictive model. As a rule, your pages should have at least 1,000 users per day.
What tools and software are available for SEO A/B testing?
High-quality tools which can deal with a wider range of variables and factors are limited, but not impossible to find. Try one of the following:
1. Distilled ODN SaaS/Edge solution.
2. RankScience automated SEO A/B test platform
Increasingly, ‘meta CMS’ solutions such as Spark and Updatable are entering the market, often built on Cloudflare Workers; however, time series analysis (for which open-source code is available) has not yet been integrated into these tools.
Is SEO A/B testing worth it?
Search Engine Optimisation A/B testing is certainly an interesting development, providing tools for ROI predictions of SEO variants. With this form of testing you can then focus on the implementations that actually deliver results, rather than waste significant time and energy on following dead-end trails. SEO A/B testing is not cheap and not yet available as a complete package owing to the limitation listed under the ‘Challenges’ heading. The lack of high-quality tools also means developers can charge high prices. However, for businesses which invest in SEO optimisation and are already generating interest, this recent method of turning a search engine listing into a reason to click is more than worth further research.