# What is the Standard Deviation?

The standard deviation is a so-called dispersion measure, which makes a statement about how far the data points in a data set are from the mean value. In practice, the Greek letter σ (sigma) is used as a symbol.

### What is the Standard Deviation?

In statistics, there are various characteristic values that describe a data set or a distribution of values more precisely. Often, for example, the expected value is used for this purpose, which in the case of a probability distribution indicates the average value one can expect over many repetitions.

 $E(X) = x_1 \cdot P(X = x_1) + x_2 \cdot P(X = x_2) + … + x_n \cdot P(X = x_n)$
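The formula above can be sketched in a few lines of Python. The fair six-sided die used here is a made-up example, not taken from the article:

```python
# Expected value of a discrete random variable:
# E(X) = x_1 * P(X = x_1) + ... + x_n * P(X = x_n).
# Hypothetical example: a fair six-sided die, each face with p = 1/6.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

expected_value = sum(x * p for x, p in zip(values, probs))
print(expected_value)  # roughly 3.5
```

Each outcome is weighted by its probability, so the expected value is a probability-weighted average rather than the most likely single outcome.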

However, the value alone is not sufficient to provide detailed information about a data set. Suppose we want to compare two school classes that have written the same exam and, after grading, have achieved the same grade point average, i.e., the same expected value, of 2.5. Would we now assume that the students of both classes have approximately the same knowledge?

Probably only if the students in the two classes have achieved similar grades. In class A, however, the average of 2.5 comes from the fact that some strong students wrote a 1.0, while other, weaker students could only achieve a 4.0 on the exam. In Class B, on the other hand, students were much closer together, mainly scoring 2s and 3s. In contrast, there were no outliers up or down at all.

In statistics, these characteristic values are called measures of dispersion. They describe how far the individual values, in our case the students' grades, are from the expected value, i.e. the grade point average. Two data sets can have the same expected value but very different measures of dispersion.
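The two-class scenario above can be reproduced with Python's built-in `statistics` module. The grade lists are made-up numbers chosen to match the story (a German-style scale where 1.0 is the best grade):

```python
import statistics

# Two hypothetical classes with the same grade average but different spread.
class_a = [1.0, 1.0, 4.0, 4.0]   # strong and weak students, large spread
class_b = [2.0, 2.5, 2.5, 3.0]   # grades close together, no outliers

print(statistics.mean(class_a))    # 2.5
print(statistics.mean(class_b))    # 2.5
print(statistics.pstdev(class_a))  # 1.5
print(statistics.pstdev(class_b))  # roughly 0.354
```

Both classes share the mean of 2.5, yet the dispersion measure immediately reveals how differently the grades are distributed.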

### What is variance and how do you calculate it?

The variance is a measure of dispersion from statistics. For each data point, the difference from the mean value is squared, and these squared differences are then averaged (or, for a probability distribution, weighted by their probabilities and summed). By squaring, positive and negative deviations from the mean are both included and cannot cancel each other out. In addition, squaring makes large deviations weigh much more heavily than small ones.

 $\sigma^2 = \sum_{i=1}^{n}(x_{i} - E(X))^2 \cdot p_{i}$

If you have been paying attention up to this point, you will notice that the variance does not have its own symbol or Greek letter but is denoted by $\sigma^2$. As we said before, σ stands for the standard deviation. Thus, the variance is the squared standard deviation.
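The variance formula can be written out directly. The four equally likely grades below are a hypothetical toy distribution:

```python
# Variance of a discrete distribution, following
# sigma^2 = sum_i (x_i - E(X))^2 * p_i.
values = [1.0, 2.0, 3.0, 4.0]
probs = [0.25, 0.25, 0.25, 0.25]

mean = sum(x * p for x, p in zip(values, probs))                    # 2.5
variance = sum((x - mean) ** 2 * p for x, p in zip(values, probs))
print(variance)  # 1.25
```

Note how the squaring step keeps the deviations below the mean (negative differences) from cancelling the deviations above it.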

### How to calculate the standard deviation?

Now that we already know the relationship between variance and standard deviation, the associated formula is fairly simple to set up, since it is simply the root of the variance:

 $\sigma = \sqrt{\sigma^2} = \sqrt{\sum_{i=1}^{n}(x_{i} - E(X))^2 \cdot p_{i}}$
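Continuing the same hypothetical distribution, the standard deviation is simply the square root of the variance computed above:

```python
import math

values = [1.0, 2.0, 3.0, 4.0]
probs = [0.25, 0.25, 0.25, 0.25]

mean = sum(x * p for x, p in zip(values, probs))
variance = sum((x - mean) ** 2 * p for x, p in zip(values, probs))
std_dev = math.sqrt(variance)  # back in the original unit
print(std_dev)  # roughly 1.118
```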

### How to interpret the value?

As we have already explained, with variance it makes perfect sense to square the difference between the data point and the expected value. However, this also makes the variance much harder to interpret, because it is expressed in the squared unit of the original data (for example, squared grade points) and therefore has no direct practical meaning.

With the standard deviation, on the other hand, it's different, because taking the square root brings us back to the original unit. For our exam example, a standard deviation of 1.2 would mean that the grades deviate from the grade point average of 2.5 by 1.2 on average. This value therefore spans an interval from 1.3 to 3.7, since the direction of the deviation is not specified.

Thus, a lower standard deviation generally means that the data points lie relatively close to the expected value and deviate from it only slightly.

### When do you use the standard deviation for the population and when for the sample?

In some literature, two different standard deviations are distinguished: that of the population, denoted by σ, and that of the sample, denoted by s. The two differ in the underlying quantity studied:

• The sample contains a selection of the units of investigation (e.g. individual members of a society) from which data are collected in a study. These data can then be used for statistical analysis.
• The population is the entirety of all units of investigation. It is this group about which one wants to make statements with the help of statistical analysis.

In practice, it is often not possible, or simply not practicable, to survey the entire population. Therefore, an attempt is made to draw a sample that is as representative as possible and that allows generalization to the population.

In the formula for the standard deviation, the two variants differ only in the denominator: for the population, one divides by the number of values n, while for the standard deviation of the sample, one divides by n − 1 (the so-called Bessel's correction).
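The `statistics` module implements both variants directly, `pstdev` for the population formula (divide by n) and `stdev` for the sample formula (divide by n − 1). The data below are made-up grades:

```python
import statistics

sample = [2.0, 2.5, 2.5, 3.0]  # hypothetical sample of grades

# Population standard deviation: divides by n.
pop_std = statistics.pstdev(sample)
# Sample standard deviation: divides by n - 1 (Bessel's correction).
sample_std = statistics.stdev(sample)

print(pop_std)     # roughly 0.354
print(sample_std)  # roughly 0.408
```

Dividing by n − 1 always yields a slightly larger value, which compensates for the fact that a sample tends to underestimate the spread of the population it was drawn from.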

### What is the empirical rule of normal distribution?

The empirical rule, also known as the 68-95-99.7 rule, is a statistical guideline for normal distribution. It states that:

• Approximately 68% of the data are within one standard deviation of the mean.
• Approximately 95% of the data are within two standard deviations of the mean.
• Approximately 99.7% of the data are within three standard deviations of the mean.

This rule can be helpful in interpreting and understanding data that follows a normal distribution. For example, if we know that a data set is normally distributed and we calculate its mean and standard deviation, we can use the empirical rule to estimate the proportion of the data that falls within certain ranges.
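The 68-95-99.7 percentages can be checked empirically by simulating normally distributed data. This is a minimal sketch using Python's `random.gauss`; the seed and sample size are arbitrary choices:

```python
import random

random.seed(42)
# Simulate 100,000 draws from a standard normal distribution
# (mean 0, standard deviation 1).
data = [random.gauss(0, 1) for _ in range(100_000)]

n = len(data)
within_1 = sum(abs(x) <= 1 for x in data) / n
within_2 = sum(abs(x) <= 2 for x in data) / n
within_3 = sum(abs(x) <= 3 for x in data) / n

print(within_1, within_2, within_3)  # roughly 0.68, 0.95, 0.997
```

The simulated shares land close to the theoretical 68%, 95%, and 99.7%, with small deviations due to sampling noise.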

It is important to note that the empirical rule holds exactly only for data that actually follow a normal distribution; for data that are merely approximately normal, the percentages are only an approximation, and for skewed or categorical data the rule does not apply at all. Nevertheless, the empirical rule can be a useful tool for gaining insight into normally distributed data.

### What is often misunderstood about the standard deviation?

There are several common misconceptions about standard deviation that can lead to misinterpretation of data. Some of these misconceptions are:

• The belief that a small value indicates that the data is accurate or precise.
• The belief that a high standard deviation means that the data are unreliable or flawed.
• Assuming that the normal distribution is the only distribution to which this parameter can be applied.
• Misunderstanding the difference between the population standard deviation and the sample standard deviation and when to use the two values.

To avoid these misunderstandings, it is important to understand the underlying concepts and their limitations. Here are some tips to avoid common misunderstandings:

• Don’t equate a small deviation with accuracy or precision. A low standard deviation only indicates that the data points are close to the mean, not necessarily that they are accurate or precise.
• A high deviation does not necessarily mean that the data is unreliable or in error. It just means that the data points are further from the mean.
• This parameter can be used for any distribution, not just the normal distribution. However, it is important to know the characteristics of the distribution in question before using the standard deviation.
• If you are working with a sample, use the sample standard deviation instead of the population standard deviation. The sample standard deviation provides a better estimate of the population standard deviation.

If analysts and researchers are aware of these misconceptions and know how to avoid them, they can ensure that they use the standard deviation correctly.

### How are standard deviation, hypothesis tests and confidence intervals related?

The standard deviation plays a crucial role in hypothesis testing and confidence intervals. Hypothesis testing is a statistical method used to determine whether a hypothesis about a population parameter is confirmed by the data. Confidence intervals are used to estimate the range of values into which the true value of a population parameter is likely to fall.

In hypothesis testing, the standard deviation is used to calculate the test statistic, which is then compared to a critical value to determine whether the null hypothesis can be rejected. The test statistic is calculated as the difference between the sample mean and the hypothesized population mean divided by the standard error of the mean. The standard error of the mean is the standard deviation of the sampling distribution of the mean, which represents the variability of the sample means when the sampling procedure is repeated multiple times.
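As a concrete instance of this calculation, a one-sample z-test statistic can be computed as described. All numbers below (sample mean, hypothesized mean, standard deviation, sample size) are hypothetical:

```python
import math

sample_mean = 2.7        # observed mean of the sample
hypothesized_mean = 2.5  # mean claimed under the null hypothesis
std_dev = 1.2            # known standard deviation
n = 36                   # sample size

# Standard error of the mean: std_dev / sqrt(n).
standard_error = std_dev / math.sqrt(n)  # 0.2
# Test statistic: difference of means divided by the standard error.
z = (sample_mean - hypothesized_mean) / standard_error
print(z)  # roughly 1.0
```

The resulting z-value would then be compared against a critical value (e.g. 1.96 for a two-sided test at the 5% level) to decide whether to reject the null hypothesis.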

Confidence intervals are constructed using the sample mean and the standard deviation. The confidence interval is calculated as the sample mean plus or minus a margin of error, determined by multiplying the standard error of the mean by a critical value based on the desired confidence level. The standard deviation therefore plays a key role in determining the width of the confidence interval.
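The construction described above can be sketched with made-up numbers; 1.96 is the standard critical value for a 95% confidence level:

```python
import math

sample_mean = 2.5  # hypothetical sample mean
std_dev = 1.2      # hypothetical standard deviation
n = 36             # hypothetical sample size
z_critical = 1.96  # critical value for 95% confidence

# Margin of error: critical value times the standard error of the mean.
margin = z_critical * std_dev / math.sqrt(n)  # roughly 0.392
lower, upper = sample_mean - margin, sample_mean + margin
print(lower, upper)  # roughly 2.108 and 2.892
```

A larger standard deviation or a smaller sample size widens the interval, directly reflecting greater uncertainty about the population mean.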

By using the standard deviation in hypothesis testing and confidence intervals, analysts can make more informed decisions about the population parameters they are examining. The standard deviation provides a better understanding of the variability of the data, which in turn leads to more accurate estimates and conclusions.

### This is what you should take with you

• The standard deviation is a so-called measure of the dispersion from statistics.
• It provides information about how far away the individual data points are from the expected value on average. A low standard deviation indicates that the data points are relatively close to the expected value and vice versa.
• The standard deviation is closely related to the variance, as it is simply the square root of the variance.

### Other Articles on the Topic of Standard Deviation

Statista offers a detailed article on the topic.