Using TensorBoard with Machine Learning

TensorBoard is a web-based tool that provides the visualizations and metrics needed during the machine learning process. It is tightly integrated with TensorFlow and can be used seamlessly with it. It is highly effective for tracking metrics such as loss and accuracy, visualizing the model graph, inspecting histograms, and much more. Let's look at how to collect metrics in TensorBoard and analyze them.

TensorBoard as a callback

Consider the animal-building classification scenario from our post, Exploring CNN with TensorFlow & Keras, where we included TensorBoard as one of the callbacks while training the model. A callback is a tool to customize and extend the behavior of a model during training, evaluation, or inference. Common examples include model checkpointing (periodically saving your model during training), learning rate scheduling (dynamically adjusting the learning rate during training based on certain conditions), and TensorBoard itself (logging metrics, visualizations, and other information for monitoring the training process).
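The following is a minimal sketch of wiring the TensorBoard callback into training. MNIST and the small dense model stand in for the animal-building dataset and CNN of the original post, and the logs/fit directory name is an illustrative choice:

```python
import datetime
import tensorflow as tf

# Illustrative data: MNIST stands in for the animal-building dataset
# used in the original post.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# Illustrative model; the original post builds a CNN instead.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A timestamped log directory keeps separate runs apart in TensorBoard.
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

# The TensorBoard callback writes scalars, the model graph, and
# (with histogram_freq=1) per-epoch weight histograms to log_dir.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=log_dir,
    histogram_freq=1,
)

model.fit(x_train, y_train,
          epochs=5,
          validation_split=0.2,
          callbacks=[tensorboard_callback])
```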


As per the above code, metrics will be saved in the specified log directory during the training process.

Visualizing the metrics

You then run TensorBoard from a terminal with the command: tensorboard --logdir=<your_log_location>. By default, TensorBoard starts on port 6006 (as mentioned before, TensorBoard is a tool that runs as a web application on a server).
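TensorBoard can also be launched programmatically from Python instead of the shell. A small sketch, assuming the same logs/fit directory as above:

```python
from tensorboard import program

# Start a TensorBoard server pointing at the training logs.
tb = program.TensorBoard()
tb.configure(argv=[None, "--logdir", "logs/fit"])
url = tb.launch()  # e.g. http://localhost:6006/
print(f"TensorBoard listening on {url}")
```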


Below are the augmented images and the accuracy and loss graphs captured in TensorBoard.
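For reference, image summaries such as those augmented samples are typically written with tf.summary.image. A minimal sketch, using a random tensor as a stand-in for a real augmented batch:

```python
import tensorflow as tf

# Stand-in for a real augmented batch: shape [batch, height, width,
# channels] with float values in [0, 1].
img_batch = tf.random.uniform([4, 64, 64, 3])

# Writer pointing at a subfolder of the directory TensorBoard watches.
file_writer = tf.summary.create_file_writer("logs/fit/images")

with file_writer.as_default():
    tf.summary.image("Augmented training samples", img_batch,
                     step=0, max_outputs=4)
```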



TensorBoard can also be used to display metrics obtained by experimenting with different hyperparameters, such as learning rate, optimizer, dropout rate, and batch size. You may need to try various combinations of hyperparameters in order to achieve the best model accuracy.
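One way to record such experiments is the HParams plugin that ships with TensorBoard, which logs each run's hyperparameters alongside its final metric. A minimal sketch, in which the hyperparameter names, values, and the train_and_evaluate helper are all illustrative assumptions:

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

HP_DROPOUT = hp.HParam("dropout", hp.Discrete([0.1, 0.3]))
HP_OPTIMIZER = hp.HParam("optimizer", hp.Discrete(["adam", "sgd"]))

def train_and_evaluate(hparams):
    # Hypothetical helper: a real version would build and train a
    # model with these hyperparameters and return its accuracy.
    return 0.9  # placeholder value

session = 0
for dropout in HP_DROPOUT.domain.values:
    for optimizer in HP_OPTIMIZER.domain.values:
        hparams = {HP_DROPOUT: dropout, HP_OPTIMIZER: optimizer}
        run_dir = f"logs/hparam_tuning/run-{session}"
        with tf.summary.create_file_writer(run_dir).as_default():
            hp.hparams(hparams)  # record the hyperparameter values
            accuracy = train_and_evaluate(hparams)
            tf.summary.scalar("accuracy", accuracy, step=1)
        session += 1
```

Each run then appears as a row in the HParams dashboard, making it easy to compare combinations at a glance.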



For more information about experimenting with different hyperparameters and visualizing the results on the HParams dashboard in TensorBoard, please refer to the post, Tuning Hyperparameters and visualizing on TensorBoard.

There are various other scenarios where TensorBoard can be very helpful, such as viewing model checkpoints and visualizing histograms.
