
Finetuning of Transformers in Natural Language Processing



Transformers are an essential building block of modern deep neural networks and are widely used in natural language processing tasks. They power a wide variety of real-world applications, such as translation, text generation, question answering, and many other NLP tasks. One of the most widely known examples of a transformer-based system is ChatGPT. More information about the transformer architecture and its mechanism can be found on the page Understanding Transformers (BERT & GPT).

One of the most important processes when working with transformers is fine-tuning. Fine-tuning is the way of adapting an out-of-the-box (OOB) pre-trained model to your specific task. In other words, it is the process of training a pre-trained model on your own dataset so that it absorbs the knowledge contained in that new data. During fine-tuning, the parameters of the pre-trained model are adjusted based on the task-specific dataset. The goal is to adapt the model's knowledge so that it performs well on the particular task of interest.

Let's understand how fine-tuning works. Below is an example of fine-tuning a pre-trained BERT transformer from the Hugging Face library on the SQuAD dataset. SQuAD (Stanford Question Answering Dataset) is an open-source question-answering dataset published by Stanford University and widely used for experimenting with and training NLP deep learning models.

Loading and importing modules

You have to install and import modules such as transformers and datasets for fine-tuning transformers. This example uses TensorFlow, so install TensorFlow as well if you have not already done so.
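A minimal sketch of this setup, assuming the standard Hugging Face and TensorFlow packages (the install command is shown as a comment):

# pip install transformers datasets tensorflow

import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering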



Loading Dataset

SQuAD is a very large dataset and is more than enough to train your NLP model. I have used only 4,000 entries from it for demonstration purposes. Notice that I have also loaded the pre-trained BERT tokenizer from the Hugging Face library in the same cell, which we will use to tokenize the SQuAD data into a format compatible with BERT. Different models have different tokenizers in the library, and you should load the tokenizer compatible with the model you are fine-tuning; for example, BertTokenizer is for BERT models and GPT2Tokenizer is for GPT-2 models. The AutoTokenizer class loads the correct tokenizer for you based on the model name you pass.

The train_test_split operation has been invoked on the dataset to split it in an 80/20 ratio. You can change the test size according to your needs, but holding out 20% of the data for validation is a common and reasonable choice. A sketch of this step is shown below.
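The following sketch loads the 4,000-entry slice, the matching tokenizer, and performs the split; the checkpoint name 'bert-base-uncased' is the one used later in this walkthrough:

from datasets import load_dataset
from transformers import AutoTokenizer

# Load a 4,000-entry slice of SQuAD for demonstration purposes
squad = load_dataset("squad", split="train[:4000]")

# AutoTokenizer resolves the tokenizer that matches the checkpoint name
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Split the slice into 80% training and 20% validation data
squad = squad.train_test_split(test_size=0.2)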


Each SQuAD entry contains five fields: id, title, context, question, and answers. Context, question, and answers are the important fields needed to train or fine-tune the model for question answering.


Preprocessing the Dataset

For question-answering tasks, you have to preprocess your data. The function below will preprocess the data: it tokenizes the text, truncates and pads the sequences, adds the special tokens ([CLS], [SEP]), and creates attention masks. The model-specific tokenizer (in this case, the BERT tokenizer) performs all of these steps. To learn more about tokenization, please refer to the Data Preprocessing in NLPs section on the Data Pre-processing with Datasets and Data Loaders page.

While truncating the data, be careful about texts that exceed the maximum input length of the model. Because the question and context are passed to the tokenizer as a pair, you can deal with longer sequences by truncating only the context (the second sequence) with truncation='only_second'.

Map the start and end character positions of the answer back to token positions in the context by setting return_offsets_mapping=True. Once the mapping is available, you can find the start and end tokens of the answer. Use the sequence_ids method to determine which part of the offsets corresponds to the question and which corresponds to the context.

Below is the detailed preprocessing function, which you then apply to the entire dataset.
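A sketch of such a preprocessing function, closely following the standard Hugging Face question-answering recipe; max_length=384 and labeling answers that fall outside the truncated context as (0, 0) are assumptions not stated above:

def preprocess_function(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions,
        examples["context"],
        max_length=384,               # assumed maximum sequence length
        truncation="only_second",     # truncate only the context, never the question
        return_offsets_mapping=True,  # map tokens back to character positions
        padding="max_length",
    )

    offset_mapping = inputs.pop("offset_mapping")
    answers = examples["answers"]
    start_positions = []
    end_positions = []

    for i, offsets in enumerate(offset_mapping):
        answer = answers[i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])

        # sequence_ids tells us which tokens belong to the question (0)
        # and which belong to the context (1)
        sequence_ids = inputs.sequence_ids(i)

        # Find the first and last token of the context
        idx = 0
        while sequence_ids[idx] != 1:
            idx += 1
        context_start = idx
        while idx < len(sequence_ids) and sequence_ids[idx] == 1:
            idx += 1
        context_end = idx - 1

        # If the answer is not fully inside the (possibly truncated) context,
        # label it (0, 0)
        if offsets[context_start][0] > start_char or offsets[context_end][1] < end_char:
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Otherwise locate the start and end token indices of the answer
            idx = context_start
            while idx <= context_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)

            idx = context_end
            while idx >= context_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)

    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs


# Apply the function to every split, dropping the original text columns
tokenized_squad = squad.map(
    preprocess_function,
    batched=True,
    remove_columns=squad["train"].column_names,
)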


Training the model

The next step is to train the model on the preprocessed dataset. For fine-tuning, I have chosen one of the most popular pre-trained BERT checkpoints, 'bert-base-uncased'. I have also converted the preprocessed dataset into a TensorFlow-compatible format using the prepare_tf_dataset method from the Transformers library, which produces batched tf.data.Dataset objects of TensorFlow tensors.
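A sketch of this step; the batch size of 16 is an assumed value for illustration:

from transformers import TFAutoModelForQuestionAnswering

# Load the pre-trained BERT checkpoint with a question-answering head
model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

# Convert the tokenized splits into tf.data.Dataset objects
tf_train_set = model.prepare_tf_dataset(
    tokenized_squad["train"],
    shuffle=True,
    batch_size=16,      # assumed batch size
    tokenizer=tokenizer,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_squad["test"],
    shuffle=False,
    batch_size=16,
    tokenizer=tokenizer,
)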

Define the optimizer and loss, compile the model, and train it on the training dataset for the specified number of epochs. Here, I have chosen 4 epochs for demonstration purposes. As the output below shows, the loss keeps decreasing with each epoch, which is our goal.
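A sketch of the training step, assuming a typical learning rate of 2e-5 and relying on the model's built-in question-answering loss (computed from the start and end positions) rather than a separately defined Keras loss:

import tensorflow as tf

# Adam with a small learning rate (2e-5 is an assumed, typical value for BERT)
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)

# Transformers question-answering models compute the start/end-position loss
# internally when labels are present, so no explicit Keras loss is passed here
model.compile(optimizer=optimizer)

history = model.fit(
    tf_train_set,
    validation_data=tf_validation_set,
    epochs=4,
)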


The model's loss curves, plotted with matplotlib, are shown below.
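A minimal sketch of such a plot, drawn from the Keras History object returned by fit:

import matplotlib.pyplot as plt

# Plot training and validation loss per epoch
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()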



This is just for demonstration purposes and may not represent the best achievable performance. You may need to adjust the hyperparameters to obtain optimal results.

For more information about the training and validation process, please refer to the page Training and Validation Process of Model.

