How this sample size calculator works
A/B test sample size determines how many visitors you need in both the control and the variant before you can trust the result. If your sample is too small, noise can look like a win. If your sample is large enough, you gain a realistic chance of detecting a meaningful improvement without reacting to random swings. This calculator uses the standard two-variant sample size formula for conversion rate experiments. You enter your current conversion rate, choose the minimum lift worth detecting, and select the confidence settings that match your testing standards.
The current conversion rate is your baseline, often called p1. The minimum detectable effect, or MDE, is the relative improvement you care about finding. If your current rate is 5% and your MDE is 10%, the tool assumes your variant needs to reach 5.5% for the test to detect that lift reliably. The significance level controls how strict you are about false positives. A 5% significance level (equivalently, 95% confidence) is the default for many teams because it keeps the risk of declaring a winner too early reasonably low. Power controls the chance of detecting a true lift when it actually exists. Higher power means more certainty, but it also means you need more traffic.
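The planning math above can be sketched in a few lines. This is a minimal Python version of the standard two-proportion sample size formula for a two-sided test, not the calculator's actual implementation; the function name and default settings are illustrative.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p1, mde_relative, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant for a two-sided two-proportion test."""
    p2 = p1 * (1 + mde_relative)            # variant rate implied by the relative MDE
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2                   # pooled rate under the null hypothesis
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 5% baseline, 10% relative MDE, 95% confidence, 80% power
n = sample_size_per_variant(0.05, 0.10)
print(n)  # roughly 31,000 visitors per variant
```

Note that the result is per variant: a standard 50/50 test needs twice this many visitors in total.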
The math matters because sample size is not linear. Small gains are much harder to detect than large ones. If you want to detect a 2% relative lift instead of a 10% lift, the traffic requirement can jump dramatically. The same is true for low baseline conversion rates. A page converting at 1% usually needs far more visitors than a page converting at 15% because the signal is weaker. That is why experienced growth teams size tests before launch instead of waiting to see how the numbers look after a few days.
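To see that nonlinearity concretely, here is a common rule-of-thumb approximation (valid at roughly 95% confidence and 80% power): n per variant is about 16 × p(1−p) / δ², where δ is the absolute lift. The function name is ours, and this is a back-of-the-envelope sketch rather than the exact formula the calculator uses.

```python
def approx_n(p1, mde_rel):
    """Rule-of-thumb visitors per variant at ~95% confidence and 80% power."""
    delta = p1 * mde_rel                    # absolute lift implied by the relative MDE
    return round(16 * p1 * (1 - p1) / delta ** 2)

print(approx_n(0.05, 0.10))  # 10% relative lift on a 5% baseline
print(approx_n(0.05, 0.02))  # 2% relative lift: 25x the traffic (5x smaller delta, squared)
print(approx_n(0.01, 0.10))  # 1% baseline: weaker signal, far more visitors
print(approx_n(0.15, 0.10))  # 15% baseline: far fewer visitors needed
```

Halving the MDE quadruples the traffic requirement, which is why "detect any lift at all" is rarely a realistic target.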
Why sample size matters in A/B testing
Good experimentation is not just about building variants. It is about running tests that can answer a business question with enough confidence to act. If you stop a test after a handful of conversions, you are mostly measuring volatility. Early spikes are common, especially on low-traffic pages or on days when audience quality shifts. Sample size planning protects you from interpreting those temporary swings as product insight.
A clear sample size target also helps with prioritization. Before you launch, you can estimate whether a test is realistic given your traffic. If a pricing page only gets a few hundred visitors per week, a tiny MDE may require months of runtime. In that case, you can either raise the MDE threshold, test a bolder change, or move to a higher-traffic page first. This prevents teams from filling their roadmap with tests that will never collect enough data to matter.
The optional daily traffic field adds a practical planning layer. Once you know the total visitor requirement, you can estimate how many days the experiment will need under a typical 50/50 split. That estimate is useful for launch calendars, stakeholder expectations, and deciding whether a seasonal campaign will run long enough to complete a valid test.
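The runtime estimate described above is simple division. A sketch, assuming an even 50/50 split and that all daily visitors enter the experiment (the function name is illustrative, not the calculator's internals):

```python
from math import ceil

def estimated_runtime_days(n_per_variant, daily_visitors):
    """Days to reach the sample size target under a 50/50 traffic split."""
    total_needed = 2 * n_per_variant        # control + variant fill up together
    return ceil(total_needed / daily_visitors)

# ~31,000 visitors per variant on a page receiving 2,000 visitors/day
print(estimated_runtime_days(31000, 2000))  # 31 days
```

In practice it is worth padding this estimate to cover full weekly cycles, since weekday and weekend traffic often convert differently.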
Common mistakes to avoid
The most common mistake is choosing an unrealistically small MDE. Teams often say they want to detect any lift at all, but the traffic cost of proving a tiny improvement can be too high. Choose a lift that is meaningful for the business. If a 3% relative gain would not change a decision, do not size the test around it. Another mistake is using the wrong baseline. Pull recent conversion data from a representative period rather than a single good week; otherwise, the sample size estimate will be distorted from the start.
It is also easy to confuse relative and absolute lift. A 10% relative lift on a 5% baseline means moving to 5.5%, not to 15%. This calculator uses relative MDE because that is how most growth teams discuss expected improvement. Finally, do not treat the output as permission to peek every day and stop once the graph looks favorable. Sample size is the minimum traffic requirement for a sound read, not a shortcut around statistical discipline.
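The relative-to-absolute conversion is worth writing down explicitly, because it is where most sizing mistakes start. A one-line sketch (the function name is hypothetical):

```python
def target_rate(baseline, relative_mde):
    """Absolute variant conversion rate implied by a relative MDE."""
    return round(baseline * (1 + relative_mde), 4)

print(target_rate(0.05, 0.10))  # 0.055: a 10% relative lift on a 5% baseline
```

If you accidentally treat the 10% as an absolute target (5% to 15%), you will size the test for a triple-digit relative lift and dramatically underestimate the traffic you need.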
When to use this calculator
Use this tool before launching any conversion-focused A/B test on landing pages, pricing pages, checkout flows, sign-up forms, or paid campaign destinations. It is especially useful when you need to answer basic planning questions quickly: How much traffic do we need, how long will the test run, and is the experiment worth doing with our current volume? If you already know the control conversion rate and the smallest lift you care about, you can get a practical estimate in a few seconds.
This calculator is best for standard binary conversion outcomes where users either convert or they do not. It is a strong fit for sign-up rate, purchase rate, lead form completion, or click-through to a next step. For more complex situations like revenue-per-visitor, multi-armed tests, sequential methods, or heavy user segmentation, you may need a different model. But for most website A/B testing workflows, this formula is a solid planning baseline.
Run the experiment once you know the numbers
PageDuel helps you launch variants fast, keep experiments organized, and move from sample size planning to live A/B tests without enterprise overhead.