A/B Test Significance Calculator
Find out if your A/B test results are statistically significant. Stop guessing and make data-driven decisions.
[Interactive calculator: enter your test data for Control (A) and Variation (B) to see the results.]
What is Statistical Significance?
Statistical significance tells you whether the difference between your control and variation is real or just due to random chance. When a result is statistically significant, you can be confident that the variation actually performs differently from the control.
A 95% confidence level means that if there were truly no difference, a result this large would occur by chance less than 5% of the time. This is the industry standard for most business decisions.
The Formula
The calculator uses a two-proportion z-test:

z = (p₂ − p₁) / √(p(1 − p)(1/n₁ + 1/n₂))

where:
- p₁ = control conversion rate
- p₂ = variation conversion rate
- p = pooled conversion rate (both groups combined)
- n₁, n₂ = sample sizes for each group

The z-score is then converted to a p-value using the standard normal distribution.
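As a rough sketch, here is how that test can be computed in Python. The function name `ab_test_significance` and the example numbers are illustrative, not part of the calculator itself:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test, following the formula above."""
    p1 = conv_a / n_a                              # control conversion rate
    p2 = conv_b / n_b                              # variation conversion rate
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return z, p_value

# Hypothetical example: 1,000 visitors per arm; 100 vs. 130 conversions.
z, p = ab_test_significance(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # z ≈ 2.10, p ≈ 0.036: significant at 95%
```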
Frequently Asked Questions
How many visitors do I need?
It depends on your baseline conversion rate and the size of the difference you want to detect. Generally, you need hundreds to thousands of visitors per variation. Smaller differences require larger sample sizes.
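A back-of-the-envelope way to estimate this is the standard power calculation for a two-proportion test. The sketch below (with a hypothetical `sample_size_per_variation` helper) assumes a two-sided test at 95% confidence and 80% power:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(p1: float, p2: float,
                              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variation (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Smaller lifts need far more traffic:
print(sample_size_per_variation(0.10, 0.15))  # ~680 per variation
print(sample_size_per_variation(0.10, 0.12))  # ~3,800 per variation
print(sample_size_per_variation(0.10, 0.11))  # ~14,700 per variation
```

Note how halving the detectable lift roughly quadruples the required sample size, which is why small expected effects demand so much traffic.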
What does the p-value mean?
The p-value is the probability of seeing a difference at least this large if there were actually no real difference between the variants. For example, a p-value of 0.03 means a gap this big would show up by chance alone only 3% of the time. A p-value below 0.05 (for 95% confidence) means the result is statistically significant.
Why isn't my test reaching significance?
Common reasons include: not enough traffic yet, the real difference is too small to detect, or there genuinely is no difference. Try running the test longer or testing bolder changes.
Should I stop when it reaches significance?
No — this is called "peeking" and can lead to false positives. Decide on your sample size before the test starts and run it to completion.
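A quick simulation makes the danger concrete. In an A/A test, where the "variation" is identical to the control, every significant result is a false positive by construction; checking repeatedly and stopping at the first significant reading flags far more than 5% of such tests, while a single planned look stays near 5%. This sketch is illustrative and its exact output varies with the random seed:

```python
import random
from math import sqrt
from statistics import NormalDist

def z_stat(c_a, n_a, c_b, n_b):
    """Absolute z-statistic for the two-proportion test above."""
    p = (c_a + c_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return 0.0 if se == 0 else abs(c_b / n_b - c_a / n_a) / se

Z_CRIT = NormalDist().inv_cdf(0.975)  # 1.96, i.e. 95% confidence
random.seed(42)
trials, total_n, looks = 2000, 5000, 10
peeking_fp = fixed_fp = 0

for _ in range(trials):
    # A/A test: both arms share the same true 10% rate, so every
    # "significant" result is a false positive.
    a = [random.random() < 0.10 for _ in range(total_n)]
    b = [random.random() < 0.10 for _ in range(total_n)]
    # Peeking: test after every batch, stop at the first significant reading.
    for i in range(1, looks + 1):
        n = total_n * i // looks
        if z_stat(sum(a[:n]), n, sum(b[:n]), n) > Z_CRIT:
            peeking_fp += 1
            break
    # Fixed horizon: a single test at the planned sample size.
    if z_stat(sum(a), total_n, sum(b), total_n) > Z_CRIT:
        fixed_fp += 1

print(f"False positive rate with peeking: {peeking_fp / trials:.1%}")  # well above 5%
print(f"False positive rate, single look: {fixed_fp / trials:.1%}")    # close to 5%
```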