Free sample size calculator. Find required n for estimating a mean or proportion at any confidence level and margin of error.
Use before data collection to determine how many observations are needed to achieve a desired precision. Essential for surveys, clinical trials, and quality control sampling plans.
Sample size has diminishing returns — halving the margin of error requires 4× as many samples. Higher confidence also requires more samples (a larger critical value z*). If σ is unknown, use a pilot study or literature estimate.
Example: want ME ≤ 3 points with σ = 18 at 95% confidence → n = (1.96 × 18 / 3)² = (11.76)² = 138.3 → round up to n = 139.
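A minimal Python sketch of the same calculation (variable names and values are illustrative, not part of the calculator):

```python
import math

z = 1.959964    # z* for 95% confidence
sigma = 18      # population SD from a pilot study or the literature
me = 3          # desired margin of error

n = (z * sigma / me) ** 2   # 138.3
print(math.ceil(n))         # 139 (always round up)
```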
Ready to Calculate?
Open the free Sample Size Calculator instantly — no login, no download, works in any browser.
▶ Launch Sample Size Calculator Free
Sample Size Calculator is trusted by:
- Students — verify homework, understand formulas, and learn statistics step-by-step
- Researchers & Academics — run quick calculations during data analysis or peer review
- Data Analysts & Scientists — validate results and generate exportable reports
- Teachers & Professors — generate worked examples for class demonstrations
- Professionals — surveys, clinical trials, market research, opinion polling, quality control
No software installation needed. Works in Chrome, Firefox, Safari, and Edge on desktop and mobile.
📚 Also explore:
- Confidence Interval Calculator — Estimation
- Hypothesis Testing Step-by-Step Guide — Guide
- Standard Error Calculator — Estimation
- Sample Size Determination — Guide
- What Is a P-Value? Explained — Guide
- Type I and Type II Errors — Guide
Sample Size Planning: Comprehensive Guide for All Study Types
Sample size planning is one of the most important — and most frequently mishandled — aspects of research design. An underpowered study wastes resources and can produce misleading results; an overpowered study also wastes resources and may flag trivially small effects as "significant." This section provides a comprehensive framework for getting it right.
The Four Pillars of Sample Size Calculation
- Effect size (δ or d): The minimum clinically/practically important difference. This is the most critical and most subjective input. Base it on domain expertise or minimum important change thresholds, not on what you hope to find.
- Population variability (σ): Estimate from pilot data, published studies, or conservative bounds. A rough σ is usually easier to defend than a rough effect size, but since n scales with σ², prefer conservative (larger) estimates.
- Significance level (α): The maximum acceptable false positive rate. Usually 0.05; use 0.01 or lower for high-stakes decisions.
- Power (1−β): Minimum acceptable probability of detecting the effect. Usually 0.80; use 0.90 for critical studies.
Sample Size Formulas for Common Scenarios
| Test | Formula (n per group) | Notes |
|---|---|---|
| One-sample t-test | n = (z_α/2 + z_β)²σ²/δ² | δ = μ−μ₀ |
| Two-sample t-test | n = 2(z_α/2 + z_β)²σ²/δ² | Equal groups assumed |
| One proportion | n = p(1−p)(z_α/2/E)² | E = margin of error |
| Two proportions | Complex formula (use calculator) | Arcsine approximation |
| Correlation (r) | n = (z_α/2 + z_β)²/ζ² + 3 | ζ = Fisher Z of ρ |
| One-way ANOVA | Use F-distribution power | Depends on k groups, f² |
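As a sanity check on the two-sample row, here is a hedged Python sketch comparing the table's normal-approximation formula with statsmodels' exact t-based solver; σ and δ are illustrative values, not from any real study:

```python
from math import ceil
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

alpha, power = 0.05, 0.80
sigma, delta = 10.0, 5.0          # illustrative; standardized d = delta/sigma = 0.5

# Normal-approximation formula from the table (n per group)
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
n_formula = 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
print(ceil(n_formula))            # 63

# Exact computation based on the noncentral t distribution
n_exact = TTestIndPower().solve_power(effect_size=delta / sigma,
                                      alpha=alpha, power=power)
print(ceil(n_exact))              # 64 (the exact answer is slightly larger)
```

The small gap between 63 and 64 is typical: the z-based formulas slightly understate n because they ignore the extra uncertainty from estimating σ.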
Adjustments for Real-World Studies
- Expected dropout: Inflate n by 1/(1−dropout rate). If 20% dropout expected: n_enrolled = n_analysis / 0.80.
- Unequal group sizes: If k = n₁/n₂, total n increases by factor (1+k)²/(4k). Equal groups are most efficient.
- Multiple primary outcomes: Use Bonferroni-adjusted α = 0.05/m for m outcomes.
- Clustering: Multiply n by the design effect DE = 1 + (average cluster size − 1) × ICC, where ICC is the intraclass correlation.
- Unequal variance assumption: Welch's correction increases required n by a small factor. A sketch applying these adjustments follows below.
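A short sketch applying the dropout inflation and the design effect to a per-group n from a power calculation; all numbers are illustrative:

```python
import math

n_analysis = 63                     # n per group from the power calculation

# Dropout: inflate by 1 / (1 - dropout rate)
dropout = 0.20
n_enrolled = math.ceil(n_analysis / (1 - dropout))
print(n_enrolled)                   # 79

# Clustering: multiply by DE = 1 + (avg cluster size - 1) * ICC
m_bar, icc = 20, 0.02
de = 1 + (m_bar - 1) * icc          # 1.38
print(math.ceil(n_analysis * de))   # 87
```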
A Priori vs. Post-Hoc Power Analysis
A priori power analysis (before data collection) is the only valid use of power analysis — it tells you how many subjects you need. Post-hoc (observed) power analysis (after data collection, using the observed effect size) is mathematically circular and scientifically uninformative: if p < 0.05, observed power is always > 50%; if p > 0.05, observed power is always < 50%. Never report post-hoc power — instead, report confidence intervals, which directly show the precision of your estimate and the range of effect sizes consistent with your data.
Sequential and Adaptive Designs
Classical fixed-n designs commit to a sample size before data collection. Sequential designs allow early stopping for efficacy (if effect is clear) or futility (if effect is clearly absent). Group sequential methods (O'Brien-Fleming, Pocock boundaries) pre-specify interim analyses with adjusted significance thresholds to maintain overall α. Adaptive designs can modify sample size mid-study based on blinded interim data. These designs are particularly valuable in clinical trials where recruiting too many patients to an inferior treatment is ethically problematic.
Worked Examples: Sample Size Calculator Step by Step
Practice is essential for mastering statistical methods. The following worked examples cover a range of scenarios — from simple textbook cases to realistic research situations — building your confidence and intuition through active application of the concepts above.
Example 1: Basic Application
Consider a standard scenario for the Sample Size Calculator. Begin by identifying the research question and the quantity to be estimated, then choose the confidence level and target margin of error, supply an estimate of σ (or of p for a proportion), compute the required n, round up, and adjust for expected dropout, stating the final answer in the context of the problem.
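For instance, a minimal sketch for a one-proportion survey, using the conservative p = 0.5 and a ±4-point margin at 95% confidence (values are illustrative):

```python
import math

z, p, E = 1.959964, 0.5, 0.04    # p = 0.5 maximizes p(1-p), the safe default
n = p * (1 - p) * (z / E) ** 2   # 600.2
print(math.ceil(n))              # 601 respondents
```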
Example 2: Applied Research Scenario
In applied research, data rarely arrives perfectly formatted. You may encounter missing values, measurement error, borderline assumption violations, and multiple candidate analytical approaches. Working through realistic examples builds the judgment needed to navigate these situations correctly.
Example 3: Interpreting Computer Output
Statistical software (R, Python, SPSS, Stata, SAS) produces rich output including test statistics, p-values, confidence intervals, and diagnostic information. Learning to read and critically evaluate this output — identifying what is essential, what is supplementary, and what might indicate problems — is a critical skill for any data analyst.
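As a concrete illustration, here is a hedged Python sketch with synthetic data; the essential pieces of scipy's t-test output are the statistic and the p-value, while the effect size is computed separately:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(72, 8, size=25)     # synthetic "experimental" scores
b = rng.normal(68, 9, size=25)     # synthetic "control" scores

res = stats.ttest_ind(a, b)
print(res.statistic, res.pvalue)   # the essential numbers to report

# Cohen's d from the pooled SD (supplementary but reportable)
sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print((a.mean() - b.mean()) / sp)
```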
Key Formulas Summary
For quick reference, the table above collects the essential formulas and the conditions under which they are valid; the sketch below maps each scenario to the Python commands used to compute it. Having these organized and accessible accelerates your workflow and reduces the risk of applying the wrong formula in a high-pressure situation.
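A hedged Python quick reference using statsmodels' power classes (the R analogues live in the pwr package, e.g. pwr.t.test and pwr.anova.test); effect sizes below are illustrative placeholders:

```python
from statsmodels.stats.power import (TTestPower, TTestIndPower,
                                     NormalIndPower, FTestAnovaPower)
from statsmodels.stats.proportion import proportion_effectsize

kw = dict(alpha=0.05, power=0.80)

TTestPower().solve_power(effect_size=0.5, **kw)      # one-sample t
TTestIndPower().solve_power(effect_size=0.5, **kw)   # two-sample t (n per group)

# Two proportions via the arcsine-transformed effect size
es = proportion_effectsize(0.60, 0.50)
NormalIndPower().solve_power(effect_size=es, **kw)

# One-way ANOVA: Cohen's f and k groups (returns total n)
FTestAnovaPower().solve_power(effect_size=0.25, k_groups=3, **kw)
```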
Practice Problems with Solutions
The best way to solidify your understanding is to work through problems yourself before checking the solution. Start with simpler cases to build confidence, then tackle more complex scenarios that require judgment about assumptions, multiple testing, and effect size interpretation. Our free online calculator handles the computation — focus your energy on the setup, interpretation, and critical evaluation of results.
Connection to Other Statistical Concepts
Statistical methods do not exist in isolation. This procedure connects to hypothesis testing principles, the sampling distribution theory established by the Central Limit Theorem, effect size measures, confidence interval construction, and the broader framework of statistical inference. Understanding these connections makes you a more versatile and insightful analyst.
Frequently Confused Concepts
Certain pairs of concepts are persistently confused even by experienced practitioners. Clearing up these confusions transforms your statistical reasoning.
Statistical Significance vs. Clinical/Practical Significance
A result can be statistically significant (p < 0.05) but clinically trivial (effect size near zero with enormous sample size), or clinically important but not statistically significant (large effect size in an underpowered small study). Always assess both dimensions. The confidence interval is the key tool: it shows both whether the result is significant (excludes the null value) and the magnitude of the effect (the range of plausible values).
One-Tailed vs. Two-Tailed Tests
A one-tailed test is justified only when the research hypothesis specifies the direction of the effect before data collection. If you specify a one-tailed test after seeing the data direction (to halve a borderline p-value), this is p-hacking and produces inflated false positive rates. When in doubt, use a two-tailed test — it is the more conservative and generally accepted default.
The P-Value Is Not the Probability H₀ Is True
The p-value = P(data this extreme | H₀ is true). It is NOT P(H₀ true | this data). Computing the latter requires Bayes' theorem with a prior on H₀. With a high prior probability that H₀ is true (common in exploratory research), even p = 0.001 may correspond to only modest posterior probability that H₁ is true. This is one reason many statisticians advocate for Bayesian methods or effect size reporting over binary significance testing.
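A back-of-envelope Bayes calculation makes this concrete. Treating "p < α" as the observed event and assuming, purely for illustration, a 10% prior and 80% power:

```python
# Illustrative numbers, not from any real study
prior_h1 = 0.10   # exploratory setting: most tested hypotheses are false
power    = 0.80   # P(significant | H1 true)
alpha    = 0.05   # P(significant | H0 true)

posterior = (power * prior_h1) / (power * prior_h1 + alpha * (1 - prior_h1))
print(round(posterior, 2))   # ~0.64, far from the 0.95 people often assume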
Statistical Reasoning: Building Intuition Through Examples
Statistical mastery comes from seeing the same concepts applied across many different contexts. The following worked examples and case studies reinforce the core principles while showing their breadth of application across medicine, social science, business, engineering, and natural science.
Case Study 1: Healthcare Research Application
A clinical researcher wants to evaluate whether a new physical therapy protocol reduces recovery time after knee surgery. The study design, data collection, statistical analysis, and interpretation each require careful thought. The researcher must choose appropriate sample sizes, select the right statistical test, verify all assumptions, compute the test statistic and p-value, report the effect size with confidence interval, and interpret the result in terms patients and clinicians can understand. Each step builds on a solid understanding of statistical theory.
Case Study 2: Business Analytics Application
An e-commerce company wants to know if customers who see a new product recommendation algorithm spend more money per session. They have access to data from 50,000 user sessions split evenly between the old and new algorithms. The statistical question is clear, but practical considerations — multiple testing across different metrics, confounding by device type and geography, and the distinction between statistical and business significance — require careful navigation. Understanding the underlying statistical framework guides every analytical decision.
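With n fixed at 25,000 sessions per arm, the interesting question becomes the minimum detectable effect rather than the required n. A hedged sketch, assuming statsmodels and treating spend per session as approximately continuous:

```python
from statsmodels.stats.power import TTestIndPower

# Smallest standardized effect detectable with 80% power at alpha = 0.05
mde = TTestIndPower().solve_power(nobs1=25_000, alpha=0.05, power=0.80)
print(round(mde, 3))   # ~0.025 SDs: tiny effects are detectable at this scale
```

This is exactly where the statistical-vs-business-significance distinction bites: at this sample size, effects far too small to matter commercially will still reach p < 0.05.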
Case Study 3: Educational Assessment
A school district implements a new math curriculum and wants to evaluate its effectiveness using standardized test scores. Before-after comparisons, control group selection, and the inevitable regression-to-the-mean effect must all be addressed. Measuring whether changes are genuine improvements or statistical artifacts requires the full toolkit: descriptive statistics, assumption checking, appropriate tests for the design, effect size calculation, and honest acknowledgment of limitations.
Understanding Output from Statistical Software
When you run this analysis in R, Python, SPSS, or Stata, the software produces detailed output with more numbers than you need for any single analysis. Knowing which numbers are essential (test statistic, df, p-value, CI, effect size) vs. diagnostic vs. supplementary is a critical skill. Our calculator extracts the key results and presents them in a clear, interpretable format — but understanding what each number means, where it comes from, and what would make it change is what separates a statistician from a button-pusher.
Integrating Multiple Analyses
Real research rarely involves a single statistical test in isolation. Typically, a full analysis includes: (1) data quality checks and outlier investigation, (2) descriptive statistics for all key variables, (3) visualization of distributions and relationships, (4) assumption verification for planned inferential tests, (5) primary inferential analysis with effect size and CI, (6) sensitivity analyses testing robustness to assumption violations, and (7) subgroup analyses if pre-specified. This holistic approach produces more trustworthy and complete results than any single test alone.
Statistical Software Commands Reference
For those implementing these analyses computationally: R provides comprehensive implementations through base R and packages like stats, car, lme4, and ggplot2 for visualization. Python users rely on scipy.stats, statsmodels, and pingouin for statistical testing. Both languages offer excellent power analysis tools (R: pwr package; Python: statsmodels.stats.power). SPSS and Stata provide menu-driven interfaces alongside powerful command syntax for reproducible analyses. Learning at least one of these tools is essential for any applied statistician or data scientist.
Frequently Asked Questions: Advanced Topics
These questions address subtle points that often confuse even experienced analysts:
Can I use this test with non-normal data?
For large samples (generally n ≥ 30 per group), the Central Limit Theorem ensures that test statistics based on sample means are approximately normally distributed regardless of the population distribution. For small samples with clearly non-normal data, use a non-parametric alternative or bootstrap methods. The key question is not "is my data normal?" but "is the sampling distribution of my test statistic approximately normal?" These are different questions with different answers.
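For the small-sample, non-normal case, here is a minimal percentile-bootstrap sketch for a mean, using synthetic skewed data (scipy.stats.bootstrap offers a more refined BCa version):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=20)   # small, clearly skewed sample

# Resample with replacement and collect the bootstrap distribution of the mean
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)   # 95% percentile-bootstrap CI for the mean
```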
How do I handle missing data?
Missing data is ubiquitous in real research. Complete case analysis (listwise deletion) is the default in most software but can introduce bias if data is not Missing Completely At Random (MCAR). Better approaches: multiple imputation (creates several complete datasets, analyzes each, and pools results using Rubin's rules) and maximum likelihood methods (FIML/EM algorithm). The choice depends on the missing data mechanism and the nature of the analysis. Never delete variables with many missing values without considering the implications.
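Rubin's rules themselves are simple enough to sketch by hand. Given m point estimates and their within-imputation variances from the imputed-data analyses (the numbers below are invented for illustration):

```python
import numpy as np

# Illustrative output from m = 5 analyses of imputed datasets
estimates = np.array([2.10, 2.25, 1.95, 2.18, 2.05])      # point estimates
within    = np.array([0.040, 0.042, 0.039, 0.041, 0.040]) # squared SEs

m = len(estimates)
q_bar = estimates.mean()          # pooled estimate: average of the m estimates
w     = within.mean()             # average within-imputation variance
b     = estimates.var(ddof=1)     # between-imputation variance
total = w + (1 + 1 / m) * b       # Rubin's total variance
print(q_bar, np.sqrt(total))      # pooled estimate and its standard error
```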
What is the difference between a one-sided and two-sided test?
A two-sided test rejects H₀ if the test statistic is extreme in either direction. A one-sided test rejects only in the pre-specified direction. The one-sided p-value is half the two-sided p-value for symmetric test statistics. Use a one-sided test only if: (1) the research question is inherently directional, (2) the direction was specified before data collection, and (3) results in the opposite direction would have no practical meaning. Never switch from two-sided to one-sided after seeing which direction the data points — this doubles the effective false positive rate.
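A quick synthetic-data demonstration of the halving relationship (requires scipy ≥ 1.6 for the alternative argument; the data are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(0.4, 1, size=30)
b = rng.normal(0.0, 1, size=30)

two_sided = stats.ttest_ind(a, b).pvalue
one_sided = stats.ttest_ind(a, b, alternative='greater').pvalue
# With a positive t statistic, one_sided equals two_sided / 2;
# had the data gone the other way, one_sided would exceed 0.5.
print(two_sided, one_sided)
```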
How should I report results in a research paper?
Follow APA 7th edition: report the test statistic with its symbol (t, F, χ², z, U), degrees of freedom in parentheses (except for z-tests), exact p-value to two to three decimal places (write "p = .032" not "p < .05"), effect size with confidence interval, and the direction of the effect. Example for a t-test: "The experimental group (M = 72.4, SD = 8.1) scored significantly higher than the control group (M = 68.1, SD = 9.3), t(48) = 2.18, p = .034, d = 0.50, 95% CI for difference [0.34, 8.26]." This one sentence communicates the complete statistical story.