How to improve your startup’s conversion rate through A/B testing.

We’ve run several A/B tests, and here are some of the things we’ve learned.

A/B testing is the science of testing changes to see whether they improve the conversion of your landing pages, whether in email or on your website. In this short article we cover:

  1. Deciding what to A/B test
  2. Prioritizing valuable tests
  3. Tracking and recording your results

Testing makes or breaks growth. We’ve worked with companies that were failing to convert their traffic, but after three months of landing page A/B testing, they got traction. What changed is that they consistently made their messaging clearer and their offer more compelling to the user. To grow your brand, you should have an A/B test running every day to maximize the traffic you’re getting. It’s important to note that A/B testing isn’t about perfecting each page; it’s about repetition. Repetition until you find what works.

A/B testing process

  • Decide on and prioritize high-leverage changes.
  • Show a percentage of your visitors the change.
  • Run it until you reach a statistically significant sample size (see the significance-check sketch after this list).
  • Implement changes that improve conversion.
  • Save the test design to a log to help you with future tests.

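To make the “statistically significant” step concrete, here is a minimal sketch of a two-proportion z-test. It assumes you already have visitor and conversion counts per variant (most testing tools report these for you); the numbers in the example are illustrative only.

```python
# A minimal sketch of a two-proportion z-test: is variant B's conversion rate
# significantly different from control A's? Example numbers are illustrative.
import math

def z_test(conv_a, visits_a, conv_b, visits_b):
    """Return the z-score and two-sided p-value for variant B vs. control A."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)      # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal CDF tail
    return z, p_value

# 10,000 visits per variant; control converts at 2.0%, variant at 2.6%
z, p = z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ 2.83, p ~ 0.005 -> significant at 95% confidence
```
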
How to find test ideas

  1. Survey users. Ask them what they love about your product.
  2. Use tools like Hotjar or FullStory to find engagement patterns: what are visitors clicking versus ignoring?
  3. Your best ads have value props, text, and imagery that can be repurposed for A/B tests.

How to source content

  1. Mine competitors’ sites for inspiration. Do they structure their content differently? Do they talk to visitors differently? 
  2. Your support/sales teams interact with customers & know best what appeals to them.
  3. Revisit past A/B tests for new ideas.

Two ways to prioritize your tests

  • Micro variants are small, quick changes, for example changing a CTA button color.
  • Macro variants are significant changes, such as completely rewriting your landing page. Prioritize macro changes: they give you higher leverage and may result in a much bigger conversion lift.

Why you should A/B test earlier parts of the funnel more frequently

  1. Earlier steps have larger sample sizes and you need a sufficient sample size to finish a test.
  2. It’s easier to change ads, pages, and emails than down-funnel assets like the in-product experience.

The two most important elements of setting up A/B tests 

  1. Run one A/B test at a time. Otherwise, visitors can criss-cross through multiple tests when changing devices (e.g. mobile to desktop) across sessions.
  2. Run A/B variants in parallel. Otherwise, varying traffic sources will invalidate your results.
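One way to keep a visitor in the same variant across sessions and devices is to bucket them deterministically. This is a minimal sketch assuming you have a stable visitor identifier (a user ID or hashed email); the function and test names are hypothetical.

```python
# A minimal sketch of deterministic variant assignment. Assumes a stable
# visitor identifier (user ID, hashed email, etc.); names here are hypothetical.
# Hashing the ID together with the test name keeps a visitor in the same
# variant across sessions and devices while both variants run in parallel.
import hashlib

def assign_variant(visitor_id: str, test_name: str) -> str:
    """Deterministically bucket a visitor into 'A' or 'B' for a given test."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100           # 0-99, stable for this visitor/test pair
    return "B" if bucket < 50 else "A"       # 50/50 split; change the threshold to shift traffic

print(assign_variant("user_1234", "landing_page_headline"))  # same answer on every call
```
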

Most important tools for running tests

  1. Google Optimize. It’s a free A/B testing tool that integrates with Google Analytics and Google Ads.
  2. Optimizely. It gives you more flexibility and deeper insights. We suggest starting with Google Optimize, then getting a demo from Optimizely to see if it’s worth the upgrade.

Here is what you need to validate your tests

  1. 1,000+ visits to validate a 6.3%+ conversion increase
  2. 10,000+ visits to validate a 2%+ increase

Without lots of traffic, focus on macro over micro variants: macros can produce 10-20%+ improvements versus the 1-5% increases typical of micros. A sample-size sketch follows.
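If you want to estimate the traffic you need before launching, here is a rough sketch using the standard two-proportion sample-size formula at 95% confidence and 80% power. The baseline conversion rate and target lift in the example are assumptions; plug in your own numbers.

```python
# A rough sketch of the standard two-proportion sample-size formula at
# 95% confidence (z = 1.96) and 80% power (z = 0.84). The baseline rate and
# target lift below are assumed example values; substitute your own.
def visits_per_variant(baseline_rate: float, relative_lift: float,
                       z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline conversion, aiming to detect a 20% relative lift (5% -> 6%)
print(visits_per_variant(0.05, 0.20))  # ~8,146 visits per variant
```
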

Here are some important questions you need to ask yourself when setting up an A/B test

  1. How confident are you the test will succeed?
  2. If a test succeeds, will it significantly increase conversion?
  3. How easy is it to implement?
  4. Is your test similar to an old test that failed?

Start with low-effort, high-leverage changes. A simple way to score these questions against each other is sketched below.
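One way to weigh these questions is an ICE-style score (confidence × impact × ease). The weighting below is an assumption rather than a prescribed formula; treat it as a starting point for ranking ideas.

```python
# A minimal sketch of ranking test ideas with an ICE-style score.
# The 1-5 ratings and the multiplication are assumptions, not a fixed rule.
def priority_score(confidence: int, impact: int, ease: int) -> int:
    """Each input is a 1-5 rating; higher products get tested first."""
    return confidence * impact * ease

ideas = {
    "Rewrite landing page hero copy (macro)": priority_score(confidence=4, impact=5, ease=3),
    "Change CTA button color (micro)": priority_score(confidence=2, impact=2, ease=5),
}
for name, score in sorted(ideas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>3}  {name}")
```
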

Here is something worth noting

The closer an experiment’s conversion objective is to revenue, the more worthwhile it may be to confirm small conversion boosts. For example, a 2% improvement in purchase conversion is more impactful than a 2% improvement in “learn more” CTA clicks.

To track tests, record the following (a log-entry sketch follows the list)

  • Conversion target you’re optimizing for: Clicks, views, etc.
  • Before & after: Screenshots + descriptions of what’s being tested.
  • Reasoning: Why is this test worth running? Use your prioritization framework here.
  • Start & end dates.
  • Sample size reported by your tool.
  • Results: The change in conversion, and whether the result was neutral, a success, or a failure. If it was a success, note whether the variant was implemented.
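A spreadsheet row per test works fine for this. If you prefer to keep the log in code, here is a minimal sketch of one entry covering the fields above; the field names and example values are assumptions, not a prescribed schema.

```python
# A minimal sketch of one test-log entry covering the fields above.
# Field names and the example values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class ABTestRecord:
    name: str                 # short description of the test
    conversion_target: str    # clicks, views, sign-ups, purchases...
    before_after: str         # link to screenshots + description of the change
    reasoning: str            # why the test was worth running
    start_date: date
    end_date: date
    sample_size: int          # as reported by your testing tool
    conversion_change: float  # e.g. 0.03 for a 3-point lift
    outcome: str              # "success", "failure", or "neutral"
    implemented: bool         # whether the winning variant shipped

# Hypothetical example entry
log = [
    ABTestRecord(
        name="Hero headline rewrite",
        conversion_target="sign-ups",
        before_after="screenshots/hero-before-after.png",
        reasoning="Macro change; high leverage per our prioritization framework",
        start_date=date(2024, 1, 8),
        end_date=date(2024, 2, 5),
        sample_size=12_000,
        conversion_change=0.03,
        outcome="success",
        implemented=True,
    ),
]
```
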

Finally, ask the following

What can we learn from the test?
  1. Use heatmaps to figure out why your variant won. For example, maybe users were distracted by a misleading CTA in one variant.
  2. Survey customers who were impacted by the test. Figuring out why each variant won will help you with future tests.