Statistical Power

Updated 2026-01-23

Definition

Statistical power is the probability of detecting an effect of a given size if it truly exists (1 - beta). Higher power requires larger sample sizes.

Formula

Power = 1 - beta, where beta is the Type II error rate (the probability of missing a real effect).

Example

At 80% power, you have an 80% chance of detecting the target lift, assuming the true effect is at least that large.
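As a sketch of where that 80% figure comes from, the normal-approximation power of a two-proportion z-test can be computed with the standard library alone. The baseline rate, lift, and sample size below are illustrative assumptions, not MetricKit defaults:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_proportions(p1: float, p2: float, n_per_arm: int) -> float:
    """Approximate power of a two-sided two-proportion z-test
    at a 5% significance level (normal approximation)."""
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z_alpha = 1.959964  # critical value for alpha = 0.05, two-sided
    return norm_cdf(abs(p2 - p1) / se - z_alpha)

# Illustrative numbers: 5% baseline conversion, 20% relative lift,
# 8,000 users per variant.
print(round(power_two_proportions(0.05, 0.06, 8000), 2))  # ≈ 0.79
```

With these assumed inputs, 8,000 users per variant lands very close to the conventional 80% power target.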

How to use it

  • Typical targets are 80% or 90% power depending on decision risk.
  • Higher power reduces false negatives but increases required sample size.
  • Power depends on baseline rate, effect size, and variance.
  • Plan power before running experiments to avoid underpowered tests.
  • Use power analysis to set realistic test duration targets.
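The planning step in the last two bullets can be sketched as a required-sample-size calculation. The 5% baseline, 20% relative lift, and 1,000 visitors per day per variant are made-up inputs for illustration:

```python
from math import ceil, erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p: float) -> float:
    """Inverse standard normal CDF via bisection (plenty accurate for planning)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-variant sample size for a two-sided two-proportion z-test
    (normal approximation)."""
    z_a = norm_ppf(1 - alpha / 2)
    z_b = norm_ppf(power)
    p_bar = (p1 + p2) / 2
    num = z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return ceil((num / (p2 - p1)) ** 2)

n = sample_size_per_arm(0.05, 0.06)  # 5% baseline, 20% relative lift
days = ceil(n / 1000)                # assuming 1,000 visitors per day per arm
print(n, days)                       # ≈ 8158 per arm, about 9 days
```

Running the same calculation before launch gives you a realistic duration target instead of an open-ended test.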

Common mistakes

  • Using too low power and missing real improvements.
  • Choosing an unrealistically small effect size without enough traffic.
  • Stopping early before reaching the planned sample size.
  • Ignoring seasonality that changes baseline conversion rates.
  • Using power targets without checking data quality or bot traffic.
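To illustrate the cost of stopping early (the third mistake above), here is a sketch comparing power at a planned sample size versus half of it, again using an assumed 5% baseline with a 20% relative lift:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_proportions(p1: float, p2: float, n_per_arm: int) -> float:
    """Power of a two-sided two-proportion z-test at alpha = 0.05
    (normal approximation)."""
    se = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    return norm_cdf(abs(p2 - p1) / se - 1.959964)

planned, halted = 8000, 4000  # hypothetical planned n per arm vs stopping halfway
print(round(power_two_proportions(0.05, 0.06, planned), 2))  # ≈ 0.79
print(round(power_two_proportions(0.05, 0.06, halted), 2))   # ≈ 0.50
```

Under these assumptions, stopping at half the planned sample turns a roughly 80% power plan into about a coin flip for detecting the lift.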

Why this matters

This term matters because it affects how you interpret experiment results and make budget decisions. An underpowered test frequently misses real improvements, so a "no significant difference" readout can hide a winning variant and push you toward keeping the worse option.

Practical checklist

  • Write a 1-line definition for "Statistical Power" that your team will use consistently.
  • Keep the time window consistent (weekly/monthly/quarterly) when comparing trends.
  • Segment results (channel/plan/cohort) before drawing big conclusions from blended averages.
  • Use a calculator that references this term (e.g., A/B Test Sample Size Calculator) to sanity-check assumptions.
  • Read the related guide (e.g., A/B test sample size: how to plan conversion experiments) for context and common pitfalls.
