What Is Effect Size In Statistics

Learn about effect size, a statistical measure quantifying the strength of a relationship or the magnitude of a difference between groups, crucial for interpreting research findings.


What is Effect Size?

Effect size is a statistical metric that quantifies the magnitude of the difference between two groups, or the strength of a relationship between two variables. Unlike p-values, which only indicate whether a result is statistically significant (i.e., unlikely to have occurred by chance), effect size tells us how large or meaningful that difference or relationship actually is. It provides a standardized measure that can be compared across different studies.

Key Principles and Types of Effect Size

Effect sizes are generally categorized into two main families: difference-based and correlation-based. Difference-based measures, such as Cohen's d, quantify the difference between group means in standard deviation units. Correlation-based measures, like Pearson's r, describe the strength and direction of a linear relationship between two continuous variables. Other types, such as odds ratios and relative risks, are commonly used with categorical data.
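The two families above can be illustrated with a short sketch. This is a minimal standard-library implementation, with the usual formulas: Cohen's d divides the mean difference by the pooled standard deviation, and Pearson's r divides the covariance by the product of the standard deviations (the function names here are just illustrative).

```python
# Sketch: computing two common effect sizes with only the standard library.
from math import sqrt
from statistics import mean, variance

def cohens_d(a, b):
    """Standardized mean difference between groups a and b,
    using the pooled standard deviation as the unit."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)

def pearsons_r(x, y):
    """Strength and direction of the linear relationship between x and y."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / sqrt(sum((xi - mx) ** 2 for xi in x)
                      * sum((yi - my) ** 2 for yi in y))
```

For example, `cohens_d([2, 4, 6], [1, 3, 5])` returns 0.5 (a one-point mean difference against a pooled standard deviation of 2), and `pearsons_r([1, 2, 3], [2, 4, 6])` returns 1.0, a perfect positive linear relationship.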

A Practical Example

Imagine two teaching methods (A and B) for mathematics. A study finds that students using method A scored 5 points higher on average. A p-value might tell you this difference is statistically significant. However, the effect size (e.g., Cohen's d = 0.8) would tell you that the 5-point difference amounts to 0.8 standard deviations, which is conventionally considered a large, practically important improvement.
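The arithmetic behind the example can be made concrete. The group means and standard deviation below are hypothetical, chosen so that a 5-point difference yields d = 0.8:

```python
# Hypothetical numbers for the teaching-method example: a 5-point mean
# difference measured against a pooled standard deviation of 6.25 points.
mean_a, mean_b = 75.0, 70.0   # assumed average scores for methods A and B
pooled_sd = 6.25              # assumed pooled standard deviation of scores

d = (mean_a - mean_b) / pooled_sd
print(d)  # 0.8 -- a "large" effect by Cohen's conventional benchmarks
```

Note that the same 5-point difference would be a small effect if scores varied widely (say, a pooled SD of 25 would give d = 0.2), which is exactly why the difference must be standardized before judging its practical importance.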

Importance and Applications

Effect size is critical for evaluating the practical significance of research findings beyond just statistical significance. It helps researchers and practitioners understand the real-world impact of an intervention, treatment, or relationship. It is also essential for meta-analyses, allowing for the quantitative synthesis of results from multiple studies, and for power analysis when designing new experiments.
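The power-analysis use mentioned above can be sketched as follows. This is a rough a priori sample-size calculation using the normal approximation to the two-sample t-test (dedicated tools such as G*Power or statsmodels give slightly more exact answers); the function name is illustrative.

```python
# Sketch: approximate sample size per group needed to detect a given
# Cohen's d, via the normal approximation to the two-sample t-test.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Participants needed per group for a two-sided test at the
    given significance level and statistical power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # ~63 per group for a medium effect
print(n_per_group(0.8))  # ~25 per group for a large effect
```

The inverse-square dependence on d is the practical takeaway: halving the expected effect size roughly quadruples the required sample, so an honest effect-size estimate is essential before data collection begins.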

Frequently Asked Questions

How is effect size different from a p-value?
What is a 'small,' 'medium,' or 'large' effect size?
Why is effect size important for meta-analysis?
Can a non-significant result have a large effect size?