
A/B Tests – How to increase conversions in eCommerce?

9 October 2022 | Conversion Rate Optimization, eCommerce analytics

One of the main goals in eCommerce is to boost conversions from traffic already acquired. A/B tests, among other methods, are used for this purpose. There are a number of reasons for this, including the rising costs of PPC marketing, increasing competition in the online sales market and the cost of implementing new technologies, all of which mean the sales process should be optimised as much as possible to get the highest possible return on every penny spent on advertising.


Let us assume that we have a brick-and-mortar shop with traffic of 1,000 customers per day and 30 transactions per day. As owners, we take care of merchandising, i.e., the best possible presentation of the offer and the interior design.


In grocery stores, such as Lidl or Biedronka, we can see shelves arranged in a similar manner, one that is conducive to increasing the value of our carts. We start shopping in the vegetable department, then move on to fruit; the next department is usually dairy. The aim behind such a layout is to get as many people as possible to add 'healthy products' to their cart as soon as they enter the shop. As a result, we are more likely to add further products to the shopping cart afterwards.


Merchandising is a field that aims, in simple terms, at the correct and best converting (selling) arrangement of merchandise in a shop. Similarly, for eCommerce, the correct placement of products on a website is called e-merchandising. On the other hand, improving the customer journey by optimizing website items that enhance trust and increase purchase intent is CRO (Conversion Rate Optimization).


What are A/B tests?


At the core of CRO are A/B tests: marketing-campaign tests (off-site) or eCommerce-website tests (on-site) that take the same element of a creative / page / ad / email, introduce a marginal change in one version, and measure which version converts better.

The main KPIs are CTR (click-through rate), OR (open rate) and CR (conversion rate). A key rule when running A/B tests is to test one item at a time within a single test. The objective is to be able to assess accurately whether that module boosts or lowers conversions.


Types of A/B tests in eCommerce


In eCommerce, there are two main types of A/B tests: off-site and on-site. Off-site tests involve off-page operations, mainly marketing campaigns: organic social media campaigns, pay-per-click campaigns (Google Ads, Facebook Ads) and email marketing.


Off-site A/B tests in email marketing

The main idea of A/B tests in email marketing for eCommerce is to verify and improve key metrics: VR (view rate), OR (open rate), CTR (click-through rate), CTOR (click-to-open rate), and CR (conversion rate). The contact database enables different creatives (versions of mailings) to be sent to different audience groups. Based on the results, we can then compare which version performs better. The most tested items:

  • Subject
  • Message block layout
  • Graphic creative
  • Message text
  • CTA (Call to action) button
  • Product selection
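The metrics above are simple ratios over a campaign's raw counts. As an illustration, a minimal sketch in Python (all counts below are hypothetical, and note that CR is sometimes defined per delivered email rather than per click):

```python
def email_metrics(delivered, opened, clicked, converted):
    """Compute standard email A/B test metrics from raw campaign counts."""
    return {
        "OR": opened / delivered,     # open rate
        "CTR": clicked / delivered,   # click-through rate
        "CTOR": clicked / opened,     # click-to-open rate
        "CR": converted / clicked,    # conversion rate (here: per click)
    }

# Hypothetical results for two subject-line variants of the same mailing:
a = email_metrics(delivered=5000, opened=900, clicked=180, converted=27)
b = email_metrics(delivered=5000, opened=1100, clicked=200, converted=28)
print(a["OR"], b["OR"])  # 0.18 vs 0.22 — variant B's subject line opens better
```

Comparing the same metric across the two variants, while keeping everything else identical, is exactly the one-item-at-a-time rule described earlier.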


A/B tests in PPC campaigns for Facebook Ads / Google Ads

PPC campaigns are another channel in which we can use A/B tests. The ad-set creation options on the Meta advertising platform or in Google Ads are used for this. As above, we test and compare key performance metrics, such as CTR (click-through rate) and CR (conversion rate). The most tested items in PPC campaigns:

  • Graphic creative
  • Video creative
  • Campaign text (copy)
  • CTA (Call to action) button test


On-site A/B tests on an online store website

A/B tests within an online store website are the area we focus on most in this article. These operations combine analysis, hypothesis generation, design and prototyping, programming, and result analysis. An A/B test involves testing at least two variants of the same page: variant A is the original, while variant B contains at least one modified item (for example, a changed CTA colour). A/B tests can also compare completely different versions of a website.
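How visitors are split between variants A and B is handled by the testing tool, but a common approach is deterministic hashing of a user identifier, so a returning visitor always sees the same variant. A minimal sketch of the idea (the test name, user IDs and 50/50 split below are illustrative, not part of any particular tool's API):

```python
import hashlib

def assign_variant(user_id: str, test_name: str, weight_b: float = 0.5) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing (test_name, user_id) yields a stable pseudo-random value in
    [0, 1], so the same user always gets the same variant for a given test.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map first 4 bytes to [0, 1]
    return "B" if bucket < weight_b else "A"

print(assign_variant("user-123", "cta-colour"))  # stable across repeated calls
```

Because the assignment depends only on the user ID and test name, no per-user state needs to be stored, and different tests split users independently of one another.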


The A/B test of one item of an online store website.


Analysis in A/B tests

At this stage, we consider which items / locations of a website should be tested. The main store structure is as follows:

→ Home page,

→ Collection page,

→ Product detail page,

→ Blog listing,

→ Blog page,

→ Cart page/cart drawer, and

→ Checkout.

At each stage, a customer has a different purchase intention. At checkout, the customer is very close to completing the transaction, so important items to test include the shipping insurance option, extended warranties, or displaying positive feedback (to raise trust).

On the category page, we review the most common items that improve offer presentation, i.e., displayed filters, product box design, or additional items (e.g., product ratings, flags) that encourage adding a product to the cart or moving to the product page to learn more about the offer.

The analysis stage also involves an objective assessment of the resources available for test implementation and the time we can allocate to it. To begin with, a quick-wins strategy helps select the simplest enhancements that can be delivered quickly. Examples include changing the colour and shape of the Add To Cart button, adding a progress bar or step counter at the checkout, and adding a sticky menu to improve navigation and product search when the page is scrolled.

Hypothesis formulation in A/B tests

Before starting A/B tests using Google Optimize or an equivalent, it is necessary to prepare hypotheses. A hypothesis is nothing more than a proposed enhancement expected to improve conversions on the website. A list of tools and strategies to help explore items and structure hypotheses:

  • Hotjar
  • Google Analytics
  • Satisfaction Questionnaires
  • List of online stores of industry leaders
Hotjar in A/B test preparation

We can record individual user sessions or create click maps. Monitoring user behaviour in this way is a rich source of A/B test ideas: places where users get confused or abandon the website point to items that should be examined or tested.

Google Analytics in formulating hypotheses

Reports such as product performance, purchase behaviour, bounce rate, and checkout behaviour will be helpful in analysing traffic and user behaviour.

Questionnaires or assistance from friends

Another way to gather hypotheses is to ask a group of friends to complete a transaction, from entering the website to placing an order, and then collect a detailed account of whether they had problems at any stage.

When it comes to a questionnaire, we can send post-purchase questionnaires to customers asking them to share their shopping feedback or place a widget on the page asking them to briefly rate the website performance. To encourage users to share their feedback, we may offer, for example, a discount on subsequent purchases.

Competitive stores / eCommerce leaders

It is very likely that industry leaders are already using proven solutions to enhance conversion rates. It is therefore worth observing which solutions such online stores have implemented. This saves time otherwise spent testing ineffective solutions and lets us focus on approaches that have already proven themselves.

Below are examples of a few hypotheses:

“The addition of sticky menus will enable better navigation on a website. Thus, it will raise the chances of a customer finding the desired information and products, which in turn will translate into higher conversions.”

“Our customers mostly make one-off purchases. The vast majority of our customers do not return for more products. This means that we are a new unknown brand for them. By adding feedback at the cart page, we will attempt to raise trust in our brand, thereby increasing the prospect of a purchase.”

“We have a large product database and an extensive category tree. The addition of a sticky filter button, available at all times and ‘stuck’ to the lower part of the display when scrolling, will improve navigation and thus facilitate finding the right product for a customer.”

Prototyping & design in the A/B test modelling

In the next step, we model the prepared hypothesis graphically, following UX/UI principles and adapting it to the eCommerce technology used. The example below shows the design and layout of the filter button before and after the design phase on the Shopify PLUS platform. The idea behind this test is to see whether changing the design of the 'filters' button on the collection page positively affects the number of clicks on products and increases the number of transactions on the website.

Collection view with filter button before the A/B test.

“Filter” button view after the prototype & design phase ready for implementation on the Shopify PLUS platform.

The A/B test implementation

The next stage is implementing the A/B test: coding it and trialling it on various mobile devices (iOS, Android) and browsers (Chrome, Firefox, Safari). Once it has been correctly coded and the implementation tested, the final step is to set up the A/B test in Google Optimize.

Setting up the test details in Google Optimize:

  • Technology: choice of device type (mobile)
  • Test weight: 50% (the share of traffic directed to the new, modified module)
  • Report selection and customisation (transactions / products viewed)



Once the test is complete, we can decide whether it meets the criteria adopted in the hypothesis and whether the change will be implemented. In the case discussed here, the test covers approximately 90,000 sessions and lasts 14 days. The transactions and products-viewed metrics are taken into account. The increase in transactions is 14%.

Thus, the hypothesis has been confirmed and can be implemented permanently on a store website once the test has been completed.
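Whether a lift like this is statistically significant, rather than noise, can be checked with a two-proportion z-test. A sketch in Python: only the ~90,000-session total and the 14% relative lift come from the example above, while the 2.0% baseline conversion rate and the exact conversion counts are hypothetical:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    return (p_b - p_a) / se

# Hypothetical: 2.0% baseline CR vs a 14% relative lift (2.28%),
# with 45,000 sessions per variant (90,000 in total).
z = two_proportion_z(conv_a=900, n_a=45_000, conv_b=1026, n_b=45_000)
print(round(z, 2), "significant at 95%" if abs(z) > 1.96 else "not significant")
```

With these illustrative numbers the z-statistic exceeds the 1.96 threshold, so the difference would pass a 95% confidence check; tools such as Google Optimize run an equivalent analysis for you.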

A/B test result: sticky collection page after 12 days

Find out how to analyse the data in Google Optimize and the report types here.

The most common errors in A/B tests

Testing several related items simultaneously

By definition, we do not run two A/B tests on the same page element at the same time, e.g., in the shopping cart; if we did, we would not know which cart item improves conversion. We can, however, test two or more changes across a website: the redirect test is designed for this and compares the performance of pages under different URLs.

Insufficient number of sessions

Acquiring too little traffic makes it impossible to tell whether a variant actually converts better or whether the difference is simply statistical noise.
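The number of sessions needed per variant can be estimated up front from the baseline conversion rate and the smallest lift you want to detect. A standard two-proportion approximation at 95% confidence and 80% power (the 2.0% baseline and 2.3% target below are hypothetical):

```python
from math import ceil, sqrt

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sessions needed per variant to detect a change from
    conversion rate p1 to p2 (95% confidence, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a lift from 2.0% to 2.3% conversion:
n = sample_size_per_variant(0.020, 0.023)
print(n, "sessions per variant")
```

For a small lift on a ~2% baseline this comes out in the tens of thousands of sessions per variant, broadly consistent with the 90,000-session test described earlier; the smaller the expected lift, the more traffic is required.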

Ending tests too quickly

Ending a test prematurely can skew the result, as there is insufficient data and time to verify the hypothesis. Often it is only by analysing the final days of a test that we can tell whether the result is genuinely better.

Focusing on low effort items

Testing pages that have a very low conversion rate, i.e., locations on a website where users are in the early stages of the sales funnel, still learning or researching, wastes time that could be spent improving conversion for visitors with much higher purchase intent, such as on a product page, cart page or checkout.

Hypothesis formation based solely on gut feeling/intuition

The most effective hypotheses for A/B tests combine industry experience, competitor research, data analysis (Google Analytics) and customer behaviour (heat maps). Relying solely on our own intuition may lead us to choose hypotheses for A/B tests that have a low chance of boosting conversions.

Focusing solely on the A/B tests without considering the environment

Sometimes we forget that conversion is also affected by many other factors. On the technology side, conversion is affected by page load speed. On the marketing side, it is the quality of the leads delivered (how well they match the brand profile), the presentation and appeal of the offer, and currently active promotional campaigns. When a test runs alongside discount codes, promotions, or limited-edition product releases (highly desirable to customers), conversions rise and results can be significantly higher than the same test run in a standard environment.

A list of some popular eCommerce testing programs

Selected Conversion Rate Optimization and A/B test programs:

  • Google Optimize
  • Optimizely
  • A/B Tasty
  • VWO
  • Omniconvert



A/B tests are one of the essential tools in eCommerce Conversion Rate Optimization. They are divided into off-site and on-site tests. Off-site tests mainly involve marketing campaigns (PPC campaigns and email marketing). On-site tests are all the operations aimed at modifying and improving items of the customer journey on an online store's website. The A/B testing process can be described as follows: analysis, hypothesis generation and formulation, prototyping and design, test implementation, testing, and result analysis. The required test duration depends on where the test is performed and on the number of sessions. For calculations, you can use this calculator here.


Grzegorz Sękowski

eCommerce Manager at Paul Rich, a multimillion premium watch and jewellery brand. Shopify & Shopify PLUS consultant, blogger and ex-founder of napnell.pl
