
What is the z-score?

The z-score, also known as the standard score, is a statistical concept used for data analysis and hypothesis testing. It measures how many standard deviations an individual observation or data point is away from the mean of the distribution. The z-score can be used to identify outliers and to understand the variability of the data. This metric is used in a wide variety of applications, such as finance, engineering, and medicine.

This article deals with the basic concept of the z-score, the mathematical calculation, and the advantages and disadvantages of its use.

What is the z-score?

The z-score is a statistical measure, also known as the standard score, which indicates how many standard deviations a data point lies from the mean of the data set. Simply put, this metric measures how far a data point is from the mean. It is used to standardize data and make data sets comparable with each other, since a raw data point on its own is often not meaningful enough for such a comparison.

This becomes particularly clear in the following example from school. Two students from different classes at the same level have received their exam grades in mathematics. Student A received a B+, while student B got her exam back with a grade of C. Student A now claims to be better at math, as a B+ is a better grade than a C. Student B, on the other hand, replies that her exam was particularly difficult and that she is, therefore, the better math student.

Figure: Example of the Standard Deviation of a Dataset | Source: Author

To support this claim, she argues that the exam in her class had an average grade of D, while the average grade in student A's class was B. Relative to the class average, she therefore performed significantly better than student A. Although student B's argument is reasonable, a neutral comparison must take into account not only the class average but also the standard deviation. She can use the z-score for this. How this is calculated is explained in more detail in the next chapter.

How to calculate the z-score?

The formula for calculating the z-score is relatively straightforward. First, the deviation of the data point from the mean value of the data set is calculated. To standardize this difference and thus make it comparable, it is then divided by the standard deviation. This results in the formula:

\[z = \frac{x - \mu}{\sigma}\]

Where:

  • z is the z-score
  • x is the data point or value
  • μ is the mean value of the data set
  • σ is the standard deviation of the data set

To calculate the z-score in practice, the following steps must be carried out (a short code sketch follows this list):

  1. Calculate the mean value (μ) of the data set: The arithmetic mean is used for this. This means that all data points are added together and then divided by the total number of data points.
  2. Calculate the standard deviation (σ) of the data set: The standard deviation measures the average deviation of a data point from the mean value. To do this, the difference between each data point and the mean is calculated and then squared. These squared differences are added together and divided by the total number of data points. Finally, the square root of this result is taken so that the value is again in the same unit as the data set.
  3. Select a specific data point: The z-score is always calculated for a specific data point and not for the entire data set. A specific point must therefore be selected for a calculation.
  4. Calculate the difference between the data point and the mean value (x - μ).
  5. Divide the result by the standard deviation (σ).
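
The following Python snippet is a minimal sketch of these five steps; the grades and the selected data point are made-up example values, not taken from the school example above.

```python
# Minimal sketch of the five calculation steps above.
grades = [55, 60, 48, 72, 65, 58, 70, 52]  # hypothetical exam scores in points
x = 72                                      # the data point of interest

# Step 1: arithmetic mean
mu = sum(grades) / len(grades)

# Step 2: population standard deviation
sigma = (sum((g - mu) ** 2 for g in grades) / len(grades)) ** 0.5

# Steps 3-5: deviation of the chosen point, divided by the standard deviation
z = (x - mu) / sigma
print(f"mean = {mu:.2f}, standard deviation = {sigma:.2f}, z-score = {z:.2f}")
```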

A positive z-score means that the selected data point is above the mean, while a negative value means that it is below the mean. The higher the absolute value, the further the data point deviates from the mean. This value can therefore be used to determine the relative position of a data point within the distribution.

How is it involved in the standardization of data?

The z-score is a statistical measure that expresses the deviation of a data point from the mean in units of the standard deviation. It can be used to standardize data by abstracting the data set from its underlying scale and describing each data point only by its z-score. Otherwise, different variables may introduce distortions because of scale differences, i.e. because they are measured in different units. A data set of household incomes in a country, for example, shows much larger deviations from its mean, since it is measured in thousands of euros, than a data set of bread prices in different cities.

Standardization using the z-score makes it easier to compare and contrast data sets and also makes it easier to identify patterns and relationships in the data. In addition, z-scores are useful for detecting outliers or other unusual observations that are very far from the mean.
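
As a small illustration of this idea, the sketch below standardizes two variables measured on very different scales; the income and bread-price values are purely hypothetical.

```python
import numpy as np

# Purely hypothetical values on very different scales
incomes = np.array([32000, 45000, 51000, 38000, 120000], dtype=float)  # euros per year
bread_prices = np.array([2.10, 2.50, 1.90, 3.20, 2.80])                # euros per loaf

def standardize(values: np.ndarray) -> np.ndarray:
    """Return the z-score of every element (population standard deviation)."""
    return (values - values.mean()) / values.std()

# After standardization, both variables live on the same, unitless scale
print(standardize(incomes))
print(standardize(bread_prices))
```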

What are the applications of the score in hypothesis testing and statistical inference?

The z-score plays an important role in various statistical tests. It is mainly used in the following areas:

  1. Hypothesis testing: A hypothesis test investigates whether a sample deviates significantly from an assumed population distribution. The z-score can be used to calculate how far the sample mean deviates from the population mean. This value can then be compared with a critical value of the standard normal distribution to determine whether the sample differs significantly from the population. If this is not the case, the null hypothesis is not rejected (see the sketch after this list).
  2. Confidence intervals: The z-score is also used to calculate confidence intervals. Together with the sample mean, it can be used to estimate the range of values into which the population parameter falls with a certain probability.
  3. Detection of outliers: By defining a threshold, outliers can be easily detected by calculating the z-score for each data point. Data whose values lie outside a defined range can be classified as outliers.
  4. Normality tests: The z-score also makes it possible to check whether data follows a normal distribution. This calculation and the visualization of the distribution in a normal probability diagram can be used to assess whether a normal distribution is present.
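
As an illustration of the first point, the following sketch runs a simple one-sample z-test; the sample values, the hypothesized mean, and the assumed known population standard deviation are all made up for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical one-sample z-test: does the sample mean differ from mu_0?
# This assumes the population standard deviation sigma is known.
sample = np.array([102.0, 98.5, 105.2, 101.1, 99.8, 103.4, 100.7, 104.9])
mu_0 = 100.0   # hypothesized population mean
sigma = 3.0    # assumed known population standard deviation

z = (sample.mean() - mu_0) / (sigma / np.sqrt(len(sample)))

# Two-sided p-value from the standard normal distribution
p_value = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # reject H0 if, e.g., p < 0.05
```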

The z-score forms the basis for many statistical applications. It is mainly used for hypothesis tests and for testing the normal distribution.

How to interpret the z-score?

The z-score is relatively easy to interpret. To do this, two variables are considered, firstly the sign and secondly the actual value. A positive value means that the data point is above the mean value, while a negative value means that the data point is below the mean value. The size of the z-score in turn measures how far away the data point is from the mean value. For better comparability, this is measured in standard deviations and not in absolute numbers.

A z-score of 1 therefore means that the point is above the mean for the data set and is also one standard deviation away. A value of 2 can be interpreted in the same way. This data point is therefore two standard deviations above the mean. Finally, a data point with a z-score of -1 is below the mean and one standard deviation away.
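
A tiny helper like the one below, an illustrative sketch rather than anything from the article itself, puts this interpretation into words:

```python
def interpret(z: float) -> str:
    """Express the sign and magnitude of a z-score in words."""
    if z == 0:
        return "exactly at the mean"
    side = "above" if z > 0 else "below"
    return f"{abs(z):.1f} standard deviation(s) {side} the mean"

for z in (1.0, 2.0, -1.0):
    print(f"z = {z:+.1f}: {interpret(z)}")
```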

What are the advantages and disadvantages of using the z-score?

The z-score is a popular tool in statistics that is used to standardize data from different sources and thus make data sets comparable. However, this metric has not only advantages but also some limitations. In this section, we therefore look at the advantages and disadvantages of the z-score so that it can be used effectively depending on the application.

Advantages:

  • The z-score provides a simple and standardized measure to assess how far a data point is from the average of the data set.
  • It allows different data sets to be compared on a standardized scale, independent of their original units.
  • The z-score is also a useful way of identifying outliers or extreme values in a data set.
  • This metric can be used in numerous applications, such as hypothesis testing, to draw statistical conclusions and decide whether a hypothesis should be rejected.

Disadvantages:

  • The z-score assumes that the data set follows a normal distribution. In some cases, this assumption may not hold.
  • The metric is based on the mean, which can be influenced by outliers. This means that the z-score itself is also indirectly influenced by outliers.
  • Care must be taken when using this metric for small data sets, as the estimates of the mean and standard deviation may not be reliable, so the resulting z-scores are not very meaningful.

What are alternative statistical measures that can be used instead or combined?

Depending on the application, the z-score may not be the optimal measure or should be supplemented by other metrics for a balanced analysis. The most popular measures for this are:

  • T-score: This is a measure similar to the z-score that also counts the number of standard deviations from the mean. It is generally used for smaller samples, or when the population standard deviation is unknown, where the z-score may not provide reliable results.
  • Percentiles: Percentiles categorize data points based on their relative position within the distribution. For example, the 75th percentile is the value below which 75% of the data points fall (see the sketch after this list).
  • Effect sizes: Effect sizes quantify the extent of the difference between two variables. These are used to compare results between data sets with different measures or scales.
  • Confidence intervals: Confidence intervals indicate a range within which the true parameter of the population is expected to fall. These can be used to evaluate estimates and compare the results of studies.
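
To show how z-scores and percentiles relate, the small sketch below converts a few z-scores to approximate percentiles via the standard normal CDF; it assumes the data is normally distributed.

```python
from scipy import stats

# Under a normal distribution, a z-score maps directly to a percentile
# through the standard normal cumulative distribution function.
for z in (-1.0, 0.0, 1.0, 2.0):
    percentile = stats.norm.cdf(z) * 100
    print(f"z = {z:+.1f} -> roughly percentile {percentile:.1f}")
```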

The z-score is a widely used tool, especially for hypothesis testing. However, it can also be replaced by other measures to compare data sets.

This is what you should take with you

  • The z-score is a metric in statistics that is used to determine how far a data point is from the mean value of a data set. This distance is measured in standard deviations.
  • This metric is used to standardize data and can also be used to compare data sets on different scales.
  • This measure is often used in hypothesis tests and for statistical inference, especially when the population mean and standard deviation are known.
  • The z-score is particularly easy to calculate and interpret. However, it also has its disadvantages, as it is, for example, very dependent on the population parameter and is also sensitive to outliers.
  • Other statistical measures can also be used, such as the t-score or confidence intervals. These can be used either as an alternative or in conjunction with the z-score.

The measure can also be calculated in Python using a library such as SciPy, for example with the scipy.stats.zscore function; please refer to the SciPy documentation for details.
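
A minimal example with arbitrary values could look like this:

```python
import numpy as np
from scipy import stats

data = np.array([4.0, 7.0, 9.0, 3.0, 6.0, 8.0])  # arbitrary example values

# scipy.stats.zscore standardizes along an axis; by default it uses
# the population standard deviation (ddof=0)
z_scores = stats.zscore(data)
print(z_scores)
```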

