TensorFlow, or TF for short, is a framework for Deep Learning and Artificial Intelligence that was developed by Google and initially only used internally. For several years now, however, it has been open-source and can be used in many programming languages, such as Python.
What is TensorFlow?
TensorFlow is an open-source framework from Google for creating Machine Learning models. Although the software itself is written in C++, it is language-independent and can therefore be used very easily from various programming languages. For many users, the library has become the standard for Machine Learning, since common models can be built comparatively simply. In addition, state-of-the-art ML models, such as various transformers, can also be used via TF.
Via the Keras high-level API, custom neural networks can additionally be built without having to program the respective layers by hand. This makes TF usable and customizable for a wide variety of applications. In addition, the TensorFlow website offers a variety of free introductory courses and examples, which further facilitates work with the framework.
What are Tensors?
The name TensorFlow may seem a bit strange at first since there is no direct connection to Machine Learning. However, the name comes from the so-called tensors, which are used to train Deep Learning models and therefore form the core of TF.
The tensor is a mathematical object from linear algebra that maps a selection of vectors to a numerical value. The concept has its roots in mathematics and physics; probably the most prominent example that uses tensors is general relativity.
In the field of Machine Learning, tensors are used as representations for many applications, such as images or videos. In this way, a lot of information, some of it multidimensional, can be represented in one object. An image, for example, consists of a large number of individual pixels whose color value in turn is composed of the superposition of three color layers (at least in the case of RGB images). This complex construction can be represented compactly with a tensor.
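As a small illustration, such an image can be represented directly as a three-dimensional tensor in TensorFlow; the pixel values below are made up purely for demonstration:

# A tiny 2x2 pixel RGB image: height x width x color channels
import tensorflow as tf

image = tf.constant(
    [[[255, 0, 0], [0, 255, 0]],
     [[0, 0, 255], [255, 255, 255]]],
    dtype=tf.uint8)

print(image.shape)  # (2, 2, 3): 2 rows, 2 columns, 3 color channels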
How does TensorFlow work?
Now that we have understood what tensors are and what role they play in Machine Learning, let us turn to the second part of the name, namely “Flow”. In TF, the built models are represented as a data flow, more precisely as a directed graph. This means that every model we build in TensorFlow can be converted into a graph that is directed, i.e. each edge can only be traversed in one direction.
A computational operation is performed at each node in the graph. In the example of a neural network, this means, for example, that the nodes are the individual layers at which computational operations take place. The edges, on the other hand, are the tensors already described, which move from one node to the next.
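This dataflow idea can be made visible with the tf.function decorator, which traces a normal Python function into such a directed graph; the toy function below is purely illustrative:

# Trace a simple computation into a TensorFlow graph
import tensorflow as tf

@tf.function
def scale_and_shift(x):
    # Multiplication and addition each become a node in the graph,
    # while the tensors flow along the edges between the nodes
    return x * 2.0 + 1.0

x = tf.constant([1.0, 2.0, 3.0])
print(scale_and_shift(x))  # tf.Tensor([3. 5. 7.], shape=(3,), dtype=float32)

# Inspect the operations (nodes) of the traced graph
concrete = scale_and_shift.get_concrete_function(x)
print([op.name for op in concrete.graph.get_operations()])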
However, this structure has changed somewhat since 2019, when the second version was released, which changed some, even basic, functionalities. Since then, the high-level API Keras has been an integral part of TensorFlow, whereas it was still a separate module in the first version. It provides a fairly simple way to build a neural network by stacking the individual layers, which makes TF more user-friendly.
# Building a neural network by defining the single layers
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
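Such a model can then be compiled and trained with only a few more calls. The following lines are just a sketch that continues the example above; the training data (x_train, y_train) and the hyperparameters are placeholders:

# Compile the model with an optimizer, a loss function and a metric
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Train on your own data (x_train and y_train are placeholders)
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)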
In addition, the update also made TF significantly more performant, so that it can handle the more complex, modern Machine Learning models. However, the two versions are not compatible with each other, so old code must be rewritten to work with TF 2.0. The first version is nevertheless still supported, so existing code can continue to run on it, albeit without the new features.
How to install the TensorFlow Python version?
TensorFlow can be installed in Python relatively easily, like many other modules, with a terminal command. In a Jupyter notebook, an additional “!” must be placed in front of the command so that it is recognized as a terminal call:
!pip install tensorflow
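Whether the installation was successful can then be checked with a short import; the version number printed will of course depend on your setup:

import tensorflow as tf
print(tf.__version__)  # e.g. 2.x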
However, there are still some peculiarities with TF, which is why this general installation command may not work or be sub-optimal.
macOS
On the Apple operating system macOS, there can be problems with the normal installation, especially on machines with the newer Apple Silicon processors (M1 or M2). To install TF so that it is optimized for these Mac processors, the following command is used:
!pip install tensorflow-macos
In addition, the tensorflow-metal plugin is required for GPU acceleration. Detailed instructions for installing TF on macOS can be found on the Apple website.
Windows & Ubuntu without GPU
On Windows and Ubuntu, the basic installation already described works. However, this version includes CUDA support and is therefore optimized for dedicated (external) graphics cards. If you do not want to use a GPU or simply do not have one installed, you can instead install only the CPU version of TF:
!pip install tensorflow-cpu
However, training neural networks on a CPU naturally comes with performance penalties compared to a powerful GPU.
What is the Architecture of TF?
TensorFlow’s architecture supports many systems and applications, so models can also be used in web or mobile environments.
For training, TF offers the possibility of reading in your own data sets and converting them into optimized data formats. Additionally or alternatively, prepared data sets can be obtained from TensorFlow Hub, or an entire, already trained model can be loaded. When building a model, you can either construct your own model with the Keras high-level API or use so-called premade estimators, which provide predefined models for specific use cases.
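A typical way to read in your own data is the tf.data API, which turns raw arrays or files into an optimized, batched input pipeline. The arrays below are only placeholders for a real data set:

# Build an input pipeline from placeholder data
import numpy as np
import tensorflow as tf

features = np.random.rand(100, 8).astype("float32")   # 100 samples, 8 features
labels = np.random.randint(0, 2, size=(100,))          # binary labels

# Shuffle, batch and prefetch for efficient training
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=100).batch(32).prefetch(tf.data.AUTOTUNE)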
The training itself can then be distributed relatively flexibly across the available hardware. TensorFlow supports the training of neural networks on the processor (CPU), even if this is not very performant in most cases. If possible, a model can instead be trained on a Graphics Processing Unit (GPU) to keep training times as short as possible. If you want to train the Machine Learning model in Google Colab, there is also the possibility to use a so-called Tensor Processing Unit (TPU). These are special processors that have been optimized for tensor calculations and are therefore ideal for use in Machine Learning.
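Which of these accelerators TensorFlow actually sees, and how training can be distributed across them, can be checked with a few lines; the tiny model below only serves to keep the sketch self-contained:

# List the accelerators TensorFlow has detected
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))
print(tf.config.list_physical_devices('TPU'))

# Distribute training across all available GPUs (uses the CPU if none are found)
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    small_model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    small_model.compile(optimizer='adam', loss='mse')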
After the model has been trained, it can be saved and used for new predictions. TensorFlow offers a wide range of deployment options so that it can be used in many use cases. Among other things, it can be used from other programming languages, such as C or Java. With TensorFlow Serving, a powerful system is provided that offers the option to run the models either in the cloud or on on-premise servers. Additionally, there is the option to make a model available on mobile devices via TensorFlow Lite, which we will take a closer look at in the next chapter.
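Saving a trained Keras model for later use, for example with TensorFlow Serving, is a single call. In the sketch below, the small model and the path are only placeholders for your own trained model:

# 'model' stands for any trained Keras model, e.g. the CNN built earlier
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Save the model in TensorFlow's SavedModel format
model.save("saved_models/my_model")

# Load it again later, e.g. on a server, to make new predictions
restored_model = tf.keras.models.load_model("saved_models/my_model")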
What can TensorFlow Lite be used for?
Mobile devices rarely have enough power to compute predictions from already trained neural networks. The limited installation space makes it impossible to fit powerful processors, let alone external GPUs. However, many users nowadays use their cell phones more often than a laptop or desktop computer, so for many companies it is almost inevitable to deploy their Machine Learning models on mobile devices as well.
The fingerprint sensor or the facial recognition of smartphones, for example, use Machine Learning models to perform the classification. These features must also work without Internet access so that the cell phone can be used even in flight mode. Therefore, manufacturers are forced to perform the calculation of the model on the device.
This is where TensorFlow Lite comes in, which provides special Machine Learning models optimized for mobile devices. For this purpose, any model is trained in TF as usual and then converted into a mobile-friendly version using TensorFlow Lite. In doing so, the model size and complexity can also be reduced to ensure that it can be computed quickly. While this simplification leads to some loss of accuracy, this is accepted in many cases in return for faster computation.
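The conversion of a trained Keras model into such a mobile-friendly version follows this pattern; the tiny model is again only a placeholder, and the optimization flag is one way to reduce the model size as described above:

# 'model' stands for the trained Keras model to be deployed on mobile devices
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert the Keras model into a TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

# Write the converted model to disk so it can be shipped with a mobile app
with open("model.tflite", "wb") as f:
    f.write(tflite_model)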
What Models does TensorFlow have?
Among some other advantages, TensorFlow is also used so often because it comes with many state-of-the-art Machine Learning models that can be used with only a few lines of code. Some of these are already pre-trained and can be used for predictions without further training (although this rarely makes sense in practice).
Among the most famous models are:
- Various Deep Neural Network Layers: The Keras API provides the most common layers to build many types of Deep Neural Networks quickly and easily. These include, for example, Convolutional Neural Networks or Long Short-Term Memory (LSTM) models.
- Transformer Models: For Natural Language Processing there is currently no way around Transformer models such as BERT. However, building these from scratch requires a lot of data and a lot of computing power. A large number of these models are already available and pre-trained via TensorFlow. They can be “fine-tuned” to the application at hand with comparatively little data and considerably less effort.
- Residual Networks (ResNet): These models are used in image recognition and are also available pre-trained via TensorFlow (see the short example after this list).
- Big Transfer: Similar to Transformers, these are complex models that have already been pre-trained on a large amount of data and are then adapted to specific applications with significantly less data. This allows very good results to be achieved in various areas of image processing.
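As a small example of how such pre-trained models can be loaded, a ResNet50 with ImageNet weights is available directly via the Keras applications module; the concrete model choice here is only an illustration:

# Load a ResNet50 that was pre-trained on ImageNet
import tensorflow as tf

resnet = tf.keras.applications.ResNet50(weights="imagenet")

# Print the architecture; the model expects preprocessed 224x224 RGB images
resnet.summary()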
TensorFlow vs PyTorch
TensorFlow and PyTorch are two possible Machine Learning frameworks in Python that differ in some ways but offer fundamentally similar functionalities. PyTorch was developed and used by Facebook, while TensorFlow comes from Google. This is another reason why the choice between the two alternatives is more of a matter of taste in many cases.
We’ll save a detailed comparison of the two frameworks for a separate post. In a nutshell, however, the choice of TensorFlow vs PyTorch can be broken down into these three main points:
Availability of New Models
In many areas, such as image recognition or natural language processing, building a model completely from scratch is rarely practical anymore. Due to the complexity of the applications, pre-trained models have to be used. PyTorch is very strong in research and development and has provided researchers with a good framework for training their models for years. As a result, new models and findings are mostly shared on PyTorch first. Therefore, PyTorch is ahead on this point.
Deployment
In an industrial setting, however, what matters is not the very last percentage points of accuracy that might be extracted with a new model, but rather that the model can be easily and quickly deployed and then made available to employees or customers.
At this point, TensorFlow is the better alternative, especially due to the additional components TensorFlow Lite and TensorFlow Serving, and offers many possibilities to easily deploy trained models. The framework focuses on the end-to-end Deep Learning process, i.e. the steps from the initial data set to a usable and accessible model.
Ecosystem
Both TensorFlow and PyTorch offer different platforms in which repositories with working and pre-trained models can be shared and evaluated. The different platforms are separated primarily by the topics of the models. Overall, the comparison on this point is very close, but TensorFlow has a bit of a lead here, as it offers a working end-to-end solution for almost all thematic areas.
For an even more detailed overview of these three points, we recommend this article by AssemblyAI.
For which Applications can you use TensorFlow?
TensorFlow has already become the standard in many industries when it comes to training Machine Learning models for a specific use case. The TensorFlow website lists many companies that use TF, and some case studies explain where exactly the framework is used:
- Twitter: The social network uses the Machine Learning framework to populate users’ timelines. It must be ensured that only the most relevant new tweets are displayed, even if the user follows a large number of accounts. To do this, TF was used to build a model that suggests only the best tweets.
- Sinovation Ventures: This company uses TensorFlow to train image classifiers that diagnose different types of diseases on images of the retina. Such image classifications are needed in many applications, including many outside of medicine.
- Spotify: The streaming service provider uses the advanced version of TensorFlow (TFX) to provide personalized song recommendations to its customers. Compared to Twitter's application, the input data in particular poses a major challenge, since parameters such as the genre, rhythm, or tempo of the songs must also match. These values are much more difficult to represent numerically than the text of tweets.
- PayPal: The payment service provider has built a complex model to detect fraudulent payments at an early stage. It was particularly important that the model classifies legitimate payments as fraudulent as rarely as possible, so as not to worsen the user experience.
How are TensorFlow and Keras connected?
TensorFlow and Keras are two popular Machine Learning libraries that are often used together when building and training deep learning models. TensorFlow is an open-source software library for dataflow and differentiable programming across a range of tasks, while Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow. Keras provides a simple and intuitive interface for building neural networks, while TensorFlow offers low-level APIs for building and training machine learning models.
The connection between TensorFlow and Keras can be understood in two ways. First, Keras can be used as a high-level interface to build and train deep learning models in TensorFlow. TensorFlow provides the computational backend for Keras, allowing users to easily build complex neural networks with minimal code. Keras runs on top of TensorFlow as a module and provides a set of user-friendly functions and classes for building neural networks. By using Keras with TensorFlow, users can take advantage of TensorFlow’s powerful features for optimizing and scaling models, while enjoying the simplicity and ease of use of Keras.
Second, Keras can also be used as a standalone library, independent of TensorFlow. In this case, Keras uses other backend libraries such as Theano or CNTK to perform the computations. When Keras is used with TensorFlow, however, it offers several benefits, including faster computation times and better compatibility with other TensorFlow tools and frameworks. Additionally, Keras can take advantage of TensorFlow’s distributed computing capabilities, allowing users to train large-scale deep learning models across multiple machines.
Overall, the connection between TensorFlow and Keras provides a powerful framework for building and training deep learning models. By using the combination, users can take advantage of the best features of both libraries, including Keras’ user-friendly interface and TensorFlow’s scalability and optimization capabilities. This makes it easier for developers to build and deploy deep learning models for a variety of applications, from image recognition and natural language processing to predictive analytics and autonomous systems.
This is what you should take with you
- TensorFlow, TF for short, is a framework for Deep Learning and Artificial Intelligence developed by Google and initially only used internally.
- It offers a comprehensive and powerful platform for developing new Machine Learning models or using existing models.
- With the help of various components, such as TensorFlow Lite or Serving, the deployment of the models is particularly easy.
- Many large and well-known companies rely on the functionalities of TensorFlow.
Other Articles on the Topic of TensorFlow
The following articles provide a practical guide to model creation in TensorFlow:
- Convolutional Neural Network for Image Classification
- Sentiment Analysis with Transformers for Text Classification