
Imagine running an A/B test where 100,000 users see your new design … or do they?  

Are Your A/B Tests Telling the Truth?

Many A/B tests count impressions, views, or sessions on page load, regardless of whether the user ever scrolls to the change being tested. 

This can lead to:  

  • Misleading or over-inflated data – Counting users who never actually saw the change. 
  • Slower decision-making – Making choices based on incomplete data. 
  • Longer time to reach statistical significance – Delaying test conclusions. 

Recommended Approach

Here at Fabric Analytics, we recommend view-based tracking, which ensures that only the users who could actually have been influenced by the test are counted. 

By doing this, you: 

  • Improve data accuracy – You’re measuring real engagement, not just visits. 
  • Make better decisions – More reliable data leads to more confident optimisations. 
  • Potentially get faster results – Less overinflated data may help reach statistical significance sooner. 

We implement view-based tracking manually into each test using the JavaScript Intersection Observer API, which detects when an element enters the viewport and then fires the tracking event. 

Example code: 
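A minimal sketch of this pattern, assuming a hypothetical element selector and tracking callback (the selector, event name, and threshold below are illustrative, not production values):

```javascript
// Decide whether the tracking event should fire: the element must be
// visible and the event must not have fired already (count each user once).
function shouldFire(isIntersecting, alreadyFired) {
  return isIntersecting && !alreadyFired;
}

// Observe an element and invoke the tracking callback the first time it
// becomes sufficiently visible in the viewport.
function trackWhenVisible(selector, onView) {
  let fired = false;
  const element = document.querySelector(selector);
  if (!element) return; // element not on this page

  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (shouldFire(entry.isIntersecting, fired)) {
        fired = true;
        onView();               // send the view event to your analytics tool
        observer.disconnect();  // stop observing after the first view
      }
    });
  }, { threshold: 0.5 });       // element must be at least 50% visible

  observer.observe(element);
}

// Hypothetical usage: fire a "variant_viewed" event when the tested
// banner scrolls into view.
// trackWhenVisible('#new-hero-banner', () =>
//   window.dataLayer.push({ event: 'variant_viewed' })
// );
```

The `threshold` option controls how much of the element must be visible before the callback runs; raising it makes the "viewed" signal stricter, at the cost of undercounting partially visible placements.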

Better Tracking = Better Results

If you use A/B testing in your work, or know colleagues who do, I would highly recommend implementing view-based tracking as an additional event. You'll soon find out how much that extra noise has been affecting your results.