What is the difference between the standard deviation and the standard error? Is the standard error a standard deviation?

The terms “standard error” and “standard deviation” are often confused [1]. The contrast between these two terms reflects the important distinction between data description and inference, one that all researchers should appreciate.

The standard deviation (often SD) is a measure of variability. When we calculate the standard deviation of a sample, we are using it as an estimate of the variability of the population from which the sample was drawn. For data with a normal distribution [2], about 95% of individuals will have values within 2 standard deviations of the mean, the other 5% being equally scattered above and below these limits. Contrary to popular misconception, the standard deviation is a valid measure of variability regardless of the distribution: about 95% of observations of any distribution usually fall within the 2 standard deviation limits, though those outside may all be at one end. We may choose a different summary statistic, however, when data have a skewed distribution [3].
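As a quick illustration (not part of the original note), the following sketch simulates normally distributed weights, using an arbitrary mean of 70 kg and SD of 10 kg, and checks the 95% figure:

```python
# Illustrative sketch: roughly 95% of normally distributed values
# fall within 2 standard deviations of the mean.
import random
import statistics

random.seed(1)
values = [random.gauss(mu=70, sigma=10) for _ in range(100_000)]

mean = statistics.mean(values)
sd = statistics.stdev(values)

within = sum(1 for v in values if abs(v - mean) <= 2 * sd)
print(f"mean = {mean:.2f}, SD = {sd:.2f}")
print(f"within mean +/- 2 SD: {within / len(values):.1%}")  # about 95%
```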

When we calculate the sample mean we are usually interested not in the mean of this particular sample, but in the mean for individuals of this type—in statistical terms, of the population from which the sample comes. We usually collect data in order to generalise from them and so use the sample mean as an estimate of the mean for the whole population. Now the sample mean will vary from sample to sample; the way this variation occurs is described by the “sampling distribution” of the mean. We can estimate how much sample means will vary from the standard deviation of this sampling distribution, which we call the standard error (SE) of the estimate of the mean. As the standard error is a type of standard deviation, confusion is understandable. Another way of considering the standard error is as a measure of the precision of the sample mean.

The standard error of the sample mean depends on both the standard deviation and the sample size, by the simple relation SE = SD/√(sample size). The standard error falls as the sample size increases, as the extent of chance variation is reduced—this idea underlies the sample size calculation for a controlled trial, for example. By contrast the standard deviation will not tend to change as we increase the size of our sample.
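A minimal sketch of this relation, again using simulated normal data with arbitrary parameters (mean 70, SD 10); note how the SE shrinks as the sample size grows while the SD stays near 10:

```python
# SE = SD / sqrt(n) falls as the sample size grows; the SD does not.
import math
import random
import statistics

random.seed(2)
for n in (10, 100, 1_000, 10_000):
    sample = [random.gauss(70, 10) for _ in range(n)]
    sd = statistics.stdev(sample)   # estimate of the population SD (~10)
    se = sd / math.sqrt(n)          # standard error of the sample mean
    print(f"n={n:>6}  SD={sd:6.2f}  SE={se:6.3f}")
```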

So, if we want to say how widely scattered some measurements are, we use the standard deviation. If we want to indicate the uncertainty around the estimate of the mean measurement, we quote the standard error of the mean. The standard error is most useful as a means of calculating a confidence interval. For a large sample, a 95% confidence interval is obtained as the values 1.96×SE either side of the mean. We will discuss confidence intervals in more detail in a subsequent Statistics Note. The standard error is also used to calculate P values in many circumstances.
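A sketch of that large-sample confidence interval calculation; the sample here is simulated, and with real data you would substitute your own measurements:

```python
# Large-sample 95% confidence interval for a mean: mean +/- 1.96 * SE.
import math
import random
import statistics

random.seed(3)
sample = [random.gauss(70, 10) for _ in range(400)]  # a "large" sample

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))

lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, SE = {se:.3f}")
print(f"95% CI: {lower:.2f} to {upper:.2f}")
```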

The principle of a sampling distribution applies to other quantities that we may estimate from a sample, such as a proportion or regression coefficient, and to contrasts between two samples, such as a risk ratio or the difference between two means or proportions. All such quantities have uncertainty due to sampling variation, and for all such estimates a standard error can be calculated to indicate the degree of uncertainty.
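For illustration, here are the textbook standard error formulas for two such quantities, a proportion and a difference between two independent means; the counts and group SEs used below are hypothetical:

```python
# Standard errors for two other common estimates (textbook formulas).
import math

# SE of a proportion: sqrt(p * (1 - p) / n)
events, n = 30, 200              # hypothetical: 30 events in 200 patients
p = events / n
se_prop = math.sqrt(p * (1 - p) / n)
print(f"proportion = {p:.3f}, SE = {se_prop:.4f}")

# SE of a difference between two independent means: sqrt(SE1^2 + SE2^2)
se1, se2 = 0.9, 1.1              # hypothetical SEs of the two group means
se_diff = math.sqrt(se1**2 + se2**2)
print(f"SE of difference = {se_diff:.3f}")
```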

In many publications a ± sign is used to join the standard deviation (SD) or standard error (SE) to an observed mean (for example, 69.4±9.3 kg). That notation gives no indication whether the second figure is the standard deviation or the standard error (or indeed something else). A review of 88 articles published in 2002 found that 12 (14%) failed to identify which measure of dispersion was reported (and three failed to report any measure of variability) [4]. The policy of the BMJ and many other journals is to remove ± signs and request authors to indicate clearly whether the standard deviation or standard error is being quoted. All journals should follow this practice.

Notes

Competing interests: None declared.

References

1. Nagele P. Misuse of standard error of the mean (SEM) when reporting variability of a sample. A critical evaluation of four anaesthesia journals. Br J Anaesth 2003;90:514-6.

4. Olsen CH. Review of the use of statistics in Infection and Immunity. Infect Immun 2003;71:6689-92.

When it comes to statistics, it's important to be able to distinguish between these two quantities. Standard deviation is a measure of how spread out data are, while standard error is a measure of how precisely a sample statistic, such as the mean, estimates the corresponding population value. Here's a closer look at the difference between standard deviation and standard error.

What is Standard Deviation?

Standard deviation is a measure of how spread out data are. It is calculated by taking the square root of the variance. The variance is the sum of the squared differences between each data point and the mean, divided by the number of data points (or by one less than that number when estimating from a sample; see below). The population standard deviation is represented by the Greek letter sigma (σ).

What is Standard Error?

Standard error is a measure of how precisely a sample statistic, usually the sample mean, estimates the corresponding population value. It is calculated by dividing the standard deviation of the sample by the square root of the number of data points in the sample: SE = SD/√n. It is usually written simply as SE, or SEM for the standard error of the mean.

How to Calculate Standard Deviation

To calculate the standard deviation, you need the mean and the variance. The mean is simply the sum of all the data points divided by the number of data points. The variance is the sum of the squared differences between each data point and the mean, divided by the number of data points for a whole population, or by one less than the number of data points when estimating from a sample. The standard deviation is then the square root of the variance, as in the sketch below.
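A from-scratch sketch of this calculation on a small made-up data set, showing both denominators:

```python
# Standard deviation from scratch, on made-up data.
import math

data = [4.0, 7.0, 6.0, 5.0, 8.0]
n = len(data)
mean = sum(data) / n

sum_sq = sum((x - mean) ** 2 for x in data)
var_population = sum_sq / n       # divide by n: whole-population variance
var_sample = sum_sq / (n - 1)     # divide by n - 1: estimate from a sample

print(f"mean = {mean}")                                     # 6.0
print(f"population SD = {math.sqrt(var_population):.3f}")   # ~1.414
print(f"sample SD     = {math.sqrt(var_sample):.3f}")       # ~1.581
```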

How to Calculate Standard Error

To calculate the standard error of a sample mean, you need the standard deviation of the sample and the sample size. First calculate the sample standard deviation as above. The standard error is then the standard deviation divided by the square root of the number of data points in the sample, as in the sketch below.
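A sketch of that calculation on the same made-up data set:

```python
# Standard error of the mean: SE = SD / sqrt(n).
import math
import statistics

data = [4.0, 7.0, 6.0, 5.0, 8.0]
n = len(data)

sd = statistics.stdev(data)    # sample SD (n - 1 denominator), ~1.581
se = sd / math.sqrt(n)         # standard error of the mean
print(f"SD = {sd:.3f}, SE = {se:.3f}")   # SE ~ 0.707
```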

What is the Difference Between Standard Deviation and Standard Error?

The main difference between standard deviation and standard error is that standard deviation is a measure of how spread out data are, while standard error is a measure of how precisely a sample statistic estimates the population value. The standard deviation is the square root of the variance, while the standard error of the mean is the standard deviation divided by the square root of the sample size.

What is the difference between standard deviation and standard error?

Standard error and standard deviation are both measures of variability. The standard deviation reflects variability within a sample, while the standard error estimates the variability across samples of a population.

How are standard deviation and standard error defined in terms of distributions?

A standard deviation is a measure of variability for a distribution of scores in a single sample or in a population of scores. A standard error is the standard deviation in a distribution of means of all possible samples of a given size from a particular population of individual scores.

What is the difference between standard deviation and standard error of the mean?

Standard deviation measures how much individual observations vary from one another, while the standard error of the mean measures how precisely the mean of a sample estimates the true population mean.

What is the relationship between standard error and standard deviation?

The standard error of the sample mean depends on both the standard deviation and the sample size, by the simple relation SE = SD/√(sample size).