Definition
An A/B test compares two variants (A and B) to measure whether a change improves an outcome (e.g., conversion rate).
Example
Variant A converts at 2.0% and Variant B at 2.3%; the test checks whether this 0.3-percentage-point lift (a 15% relative lift) reflects a real improvement or just random noise.
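The lift check in the example can be sketched as a two-proportion z-test. This is a minimal stdlib-only sketch; the per-variant visitor counts (10,000 each) are illustrative assumptions, not part of the example above.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return z, p_value

# Illustrative: 2.0% vs 2.3% with an assumed 10,000 visitors per variant.
z, p = two_proportion_z_test(200, 10_000, 230, 10_000)
```

At these illustrative sample sizes the p-value lands well above 0.05, so the 0.3-point lift would not yet be distinguishable from noise; this is exactly why sample size is planned up front.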
How to use it
- Define a primary metric and a fixed test duration/sample size before starting.
- Avoid peeking and stopping early based on noisy intermediate results.
- Randomize exposure so groups are comparable.
- Run the test long enough to cover conversion lag.
- Use the same traffic allocation rules across variants to avoid bias.
- Keep landing pages and offers consistent so only one variable changes.
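The first step above (fixing sample size before starting) can be sketched with the standard two-proportion sample-size formula. The baseline CVR, MDE, alpha, and power values below are illustrative assumptions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect an absolute
    lift of `mde` over `baseline` with a two-sided test."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)       # e.g. 1.96 at alpha=0.05
    z_beta = NormalDist().inv_cdf(power)                # e.g. 0.84 at power=0.80
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return ceil(n)

# Illustrative: 2.0% baseline, detect an absolute lift to 2.3%.
n = sample_size_per_variant(0.020, 0.003)
```

Note how sensitive the answer is to the MDE: halving the detectable lift roughly quadruples the required sample, which is why the MDE should be chosen before the test starts.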
Common mistakes
- Changing multiple variables at once and losing causal clarity.
- Declaring a winner without enough sample size.
- Letting traffic allocation drift mid-test.
- Running overlapping tests that contaminate the same audience.
- Switching metrics after seeing early results.
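To see why peeking and early stopping inflate false positives, here is a minimal Monte Carlo sketch of an A/A test (both variants truly identical, so any declared "winner" is a false positive). All parameters (conversion rate, batch sizes, number of looks) are illustrative assumptions chosen to keep the simulation fast.

```python
import random
from math import sqrt
from statistics import NormalDist

def false_positive_rate(peeks, n_per_peek, p_true=0.5,
                        alpha=0.05, trials=1000, seed=0):
    """Simulate A/A tests and report how often a tester who checks
    significance after every batch declares a (spurious) winner."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(trials):
        conv_a = conv_b = n = 0
        for _ in range(peeks):
            conv_a += sum(rng.random() < p_true for _ in range(n_per_peek))
            conv_b += sum(rng.random() < p_true for _ in range(n_per_peek))
            n += n_per_peek
            p_pool = (conv_a + conv_b) / (2 * n)
            se = sqrt(p_pool * (1 - p_pool) * 2 / n)
            if abs(conv_b / n - conv_a / n) / se > z_crit:
                hits += 1        # peeker stops early and declares a "winner"
                break
    return hits / trials

# One look at the end vs. ten interim looks at the same total sample.
single_look = false_positive_rate(peeks=1, n_per_peek=2_000)
many_looks = false_positive_rate(peeks=10, n_per_peek=200)
```

With a single look the false-positive rate stays near the nominal 5%; checking ten times on the same traffic pushes it several times higher, even though nothing about the variants changed.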
Why this matters
A/B testing matters because it determines whether an observed lift is real before you commit budget to it. If definitions, metrics, or measurement windows shift mid-test, ROAS/CPA can look "better" while profit actually gets worse.
Practical checklist
- Write a 1-line definition for "A/B Test" that your team will use consistently.
- Keep the time window consistent (weekly/monthly/quarterly) when comparing trends.
- Segment results (channel/plan/cohort) before drawing big conclusions from blended averages.
- Use a calculator that references this term (e.g., A/B Test Sample Size Calculator) to sanity-check assumptions.
- Read the related guide (e.g., A/B test sample size: how to plan conversion experiments) for context and common pitfalls.
Where to use this on MetricKit
Calculators
- A/B Test Sample Size Calculator: Estimate sample size per variant for a conversion rate A/B test given baseline CVR, MDE, significance, and power.
Guides
- A/B test sample size: how to plan conversion experiments: A practical guide to A/B test planning: baseline CVR, MDE, alpha, power, sample size, and common pitfalls like peeking.