Free Tool

A/B Test Statistical Significance Calculator

Compare a control against a variant with a two-proportion z-test. Enter visitors and conversions to see conversion rates, lift, p-value, confidence level, and whether the result is a winner, loser, or still inconclusive.

Z-test formulas

p_pool = (c1 + c2) / (n1 + n2)

SE = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

Z = (p2 - p1) / SE

p = 2 * (1 - normcdf(|Z|))
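The formulas above translate directly into a few lines of Python. This is a minimal stdlib-only sketch (the normal CDF is built from `math.erf`); the visitor and conversion counts in the example are made up for illustration.

```python
from math import sqrt, erf

def normcdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(n1: int, c1: int, n2: int, c2: int):
    """Return (z, p_value) for control (n1 visitors, c1 conversions)
    vs variant (n2 visitors, c2 conversions)."""
    p1, p2 = c1 / n1, c2 / n2
    p_pool = (c1 + c2) / (n1 + n2)                      # pooled baseline rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
    z = (p2 - p1) / se
    p_value = 2 * (1 - normcdf(abs(z)))                 # two-sided test
    return z, p_value

# Illustrative counts: 1,000 visitors per arm.
z, p = two_proportion_z_test(1000, 45, 1000, 60)
```

With these example counts the rates are 4.5% and 6.0%, but the p-value still lands above 0.05, which is exactly the kind of result the calculator would flag as inconclusive.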

Experiment inputs

Use whole numbers only. Results recalculate instantly as you edit either arm of the test.

Control (baseline)

Variant (challenger)

Results

PageDuel uses a two-sided z-test for two proportions and updates the verdict in real time.

Control conversion rate
4.50%

Baseline conversion rate from the control visitors and conversions.

Variant conversion rate
5.19%

Observed conversion rate for the challenger variant.

Relative lift
15.25%

Percentage increase or decrease of the variant relative to the control.

P-value
0.0136

Probability of seeing a difference this large if there is no true effect.

Confidence level
98.64%

Calculated as (1 - p-value) × 100 for a quick significance readout.

Interpretation
Winner

The variant is significantly outperforming the control.
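The readout above can be reproduced with a small helper. The 0.05 cutoff and the Winner/Loser/Inconclusive labels below are a common convention and an assumption about the classification rule, not PageDuel's documented internals.

```python
def verdict(p1: float, p2: float, p_value: float, alpha: float = 0.05) -> str:
    """Classify a test result: hypothetical rule using a 0.05 threshold."""
    if p_value >= alpha:
        return "Inconclusive"
    return "Winner" if p2 > p1 else "Loser"

def relative_lift(p1: float, p2: float) -> float:
    """Percentage change of the variant relative to the control."""
    return (p2 - p1) / p1 * 100

# Values close to the example readout above (rates as decimals).
print(verdict(0.0450, 0.0519, 0.0136))   # → Winner
print(round((1 - 0.0136) * 100, 2))      # confidence level → 98.64
```

Note that the lift computed from the rounded, displayed rates can differ slightly from the lift computed on the raw counts, which is why the calculator works from the underlying visitors and conversions.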

What statistical significance means in A/B testing

Statistical significance is the guardrail that keeps A/B testing from turning into wishful thinking. A variant can look better for a few hours or even a few days just because of random variation in who happened to see each version. Significance testing asks a stricter question: if there were no real difference between the control and the variant, how likely is it that you would still observe a gap this large? In this calculator, that question is answered with a two-proportion z-test, which is a standard frequentist method for comparing conversion rates between two groups.

The z-test starts by estimating each conversion rate, then pooling the data to calculate a shared baseline probability. From there, it measures the distance between the two observed rates relative to the amount of random noise expected from the sample sizes. Larger samples reduce noise. Bigger performance gaps increase the z-score. The resulting p-value summarizes how surprising your result would be under the null hypothesis. Low p-values suggest the difference is unlikely to be pure chance, which is why teams often use a threshold such as 0.05 before calling a result statistically significant.

How to interpret p-values and confidence levels

A p-value is often misunderstood. It does not mean there is a 95% chance your variant is better, and it does not prove the size of the lift will hold forever. It means that if the null hypothesis were true, the observed difference or a more extreme one would happen with probability p. So a p-value of 0.0320 means the result would be this extreme roughly 3.2% of the time if there were no true difference. This calculator also shows a simple confidence-level view by converting the p-value into (1 - p) * 100, because many marketers think in terms of 90%, 95%, or 99% confidence.

Confidence is useful, but it should not be confused with certainty. A result at 95% confidence can still be wrong. It simply passes a commonly accepted decision threshold. If the business impact is large, the safest approach is to combine significance with effect size, sample quality, and practical context. A tiny lift with a huge sample can be statistically significant and still not matter enough to justify rollout work. On the other hand, a large lift from a very small sample can look exciting while remaining too noisy to trust.

Type I and Type II errors

Every significance threshold creates tradeoffs. A Type I error is a false positive: you conclude there is a meaningful difference when the variant is not actually better. If you use a 0.05 cutoff, you are accepting that false positives will happen sometimes. A Type II error goes the other way. You fail to detect a real effect because the sample is too small or the test is underpowered. Teams that stop tests early, split traffic across too many experiments, or expect tiny uplifts from low-traffic pages often run straight into Type II errors and mistake them for evidence that the idea did not work.

This is why sample size discipline matters. The more traffic you have, the easier it is to separate signal from noise. If your conversion rate is low, you usually need more visitors than you expect before the math becomes decisive. That does not mean the test is broken. It means the uncertainty is still too high. A calculator can quantify that uncertainty, but it cannot fix weak experimental design. Clear hypotheses, clean traffic allocation, and enough time to gather representative data still matter more than any single statistic on the screen.
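To put a number on "enough traffic," the standard approximate power formula for a two-proportion test gives a rough per-arm sample size. The baseline rate, target lift, and 80% power below are illustrative assumptions, not recommendations.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per arm to detect p1 -> p2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = nd.inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a lift from a 4.5% to a 5.19% conversion rate:
n = sample_size_per_arm(0.045, 0.0519)   # roughly 15,000 visitors per arm
```

The quadratic dependence on the rate difference is the key takeaway: halving the lift you want to detect roughly quadruples the traffic you need.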

Common significance pitfalls

The most common mistake is peeking too early and declaring victory as soon as one variant crosses 95% confidence. Repeatedly checking results inflates the chance of false positives unless your methodology accounts for sequential testing. Another common problem is running an A/B test on biased traffic. If one version receives a different audience mix, a different device distribution, or a different time window, the p-value cannot rescue the analysis. The underlying data is already distorted. Poor event tracking causes the same issue. If conversions are missing or duplicated, the significance output becomes a precise answer to the wrong question.
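The peeking problem is easy to demonstrate with a small Monte Carlo sketch. This simulates A/A tests (no true difference) and compares the false-positive rate when significance is checked at several interim points versus once at the end; all parameters are illustrative.

```python
import random
from math import sqrt, erf

random.seed(7)

def p_value(n1, c1, n2, c2):
    """Two-sided p-value from a two-proportion z-test."""
    p_pool = (c1 + c2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (c2 / n2 - c1 / n1) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

RATE, PER_ARM, CHECKPOINTS, SIMS = 0.05, 2000, 5, 500
peek_hits = final_hits = 0
for _ in range(SIMS):
    # Both arms convert at the same true rate: any "winner" is a false positive.
    a = [random.random() < RATE for _ in range(PER_ARM)]
    b = [random.random() < RATE for _ in range(PER_ARM)]
    looks = [PER_ARM * k // CHECKPOINTS for k in range(1, CHECKPOINTS + 1)]
    ps = [p_value(n, sum(a[:n]), n, sum(b[:n])) for n in looks]
    peek_hits += any(p < 0.05 for p in ps)   # stop at the first significant look
    final_hits += ps[-1] < 0.05              # a single look at the end

print(f"peeking: {peek_hits / SIMS:.1%}, single look: {final_hits / SIMS:.1%}")
```

The single-look rate stays near the nominal 5%, while the peeking rate is substantially higher, because the early-stopping rule gets five chances to be fooled by noise instead of one.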

It is also easy to ignore practical significance. A statistically significant lift of 0.3% may be interesting, but it might not move revenue, pipeline, or activation enough to matter. The reverse mistake is treating inconclusive tests as failures. Inconclusive only means you do not yet have enough evidence for a reliable decision. It may be worth collecting more traffic, increasing the contrast between variants, or changing the primary metric to something closer to business value. Good experimentation programs treat significance as one part of a decision system, not the whole system.

Use this calculator as a fast validation step when you need a clean read on an A/B test. If you are running experiments regularly, the manual workflow eventually becomes the bottleneck.

Want this automated? PageDuel calculates significance in real-time.

Launch tests, track conversions, and see statistically grounded winners without copying numbers into a spreadsheet or calculator.