Before you get started with your Conversion Rate Optimisation programme (and we hope Nudge is helping you with it!), it's really important that you understand what conclusions you can draw from your experiments, and why you should tread cautiously when it comes to analysing your data.

That's where this funny old thing called statistical significance comes in. As a non-mathematician and non-data scientist, I won't get bogged down in the maths of it all, because I'll confuse myself and that's no good for anyone. There's a fantastic calculator that can work it all out for you, and there isn't a single graph in sight.

## Size matters

What's really important to understand is that when you're optimising conversion rates, you need a large enough sample size to be confident that the results you get are actually significant. Here's an example:

Imagine you have a sample size of 20 people, split into two equal groups. After testing two different marketing messages on the two groups, group A has a 10% conversion rate while group B has 20%. You'd be forgiven for exclaiming '20% conversion rate?! We have a winner!', but, I'm sorry to say, you'd be wrong. You see, 10% only represents one person, and 20% represents two. Out of 10 people in each group, one conversion versus two really isn't enough to know for certain which marketing message was the true winner. In fact, even if group B had converted at 40% (an excellent rate by any industry's standards!), a standard significance test still wouldn't let you declare victory with groups that small - that's how little ten-person groups can tell you.
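A significance calculator will do this sum for you, but if you're curious what's going on under the hood, here's a minimal sketch of one standard approach - a pooled two-proportion z-test. (This is an illustrative method, not necessarily the exact calculation any particular calculator uses.)

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a pooled two-proportion z-test.

    A p-value below 0.05 corresponds to the usual 95% significance bar.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Convert the z-score to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# The example above: 1 conversion out of 10 vs 2 out of 10
print(two_proportion_p_value(1, 10, 2, 10))        # ~0.53: nowhere near 0.05
# The same 10% vs 20% split, but with 1,000 people in each group
print(two_proportion_p_value(100, 1000, 200, 1000))  # far below 0.05
```

Notice that the conversion rates are identical in both calls - the only thing that changes is the sample size, and that alone is the difference between 'meaningless' and 'conclusive'.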

This is because the conventional bar for statistical significance is a 95% confidence level - roughly speaking, that if there were really no difference between your two messages, a result as extreme as the one you saw would crop up fewer than 5 times in 100 purely by chance. Whilst this isn't an absolute guarantee of certainty, it is generally accepted amongst statisticians that in most cases this is as good as certain.

It is so crucial to understand this basic concept of statistics and probability because it is all too easy to make huge decisions about changes to your website based on insignificant data. So how do you ensure statistical significance on any experiment you set out to do? The most important thing is to let the test run for long enough for sufficient data to accumulate - how long that takes depends on the volume of traffic to your website.
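How much data is 'enough'? The textbook two-proportion sample-size formula gives a ballpark. In this sketch, the 10% baseline rate, the 12% target, and the 500 visitors a day are all made-up illustrative numbers - plug in your own:

```python
from math import ceil

# Standard z-values for 95% confidence (two-sided) and 80% power
Z_ALPHA, Z_BETA = 1.96, 0.8416

def sample_size_per_group(base_rate, target_rate):
    """Visitors needed in EACH group to reliably detect the given lift."""
    variance = base_rate * (1 - base_rate) + target_rate * (1 - target_rate)
    effect = (target_rate - base_rate) ** 2
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / effect)

n = sample_size_per_group(0.10, 0.12)
print(n)  # a little under 4,000 visitors per variant
print(f"~{ceil(2 * n / 500)} days at 500 visitors/day")
```

Note how sensitive the answer is to the size of the lift you're hoping to detect: spotting a small improvement takes far more traffic than spotting a large one, which is exactly why low-traffic sites need to run tests for longer.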

**In summary**, you must remember not to rush. It can be really exciting to find out what little changes to your website can unlock massive potential - but you need to make sure the changes you implement permanently are the right ones. You'll be really glad that you read this post and took a bit of time to understand it - it could save you some serious headaches down the line!