Sequence-to-Sequence Modeling with nn.Transformer and TorchText
This is a tutorial on how to train a sequence-to-sequence model that uses the nn.Transformer <https://pytorch.org/docs/master/nn.html?highlight=nn%20transformer#torch.nn.Transformer> module. The PyTorch 1.2 release includes a standard transformer module based on the paper Attention Is All You Need <https://arxiv.org/pdf/1706.03762.pdf>. The transformer model has been shown to be superior in quality for many sequence-to-sequence problems while being more parallelizable. The nn.Transformer module relies entirely on an attention mechanism (another recently implemented module, nn.MultiheadAttention <https://pytorch.org/docs/master/nn.html?highlight=multiheadattention#torch.nn.MultiheadAttention>) to draw global dependencies between input and output. nn.Transformer is highly modularized, so a single component (such as nn.TransformerEncoder <https://pytorch.org/docs/master/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder> in this tutorial) can easily be adapted or composed.
Define the model
In this tutorial, we train an nn.TransformerEncoder model on a language modeling task. The language modeling task is to assign a probability for how likely a given word (or a sequence of words) is to follow a sequence of words. A sequence of tokens is passed to the embedding layer first, followed by a positional encoding layer to account for the order of the words (see the next paragraph for more details). The nn.TransformerEncoder consists of multiple layers of nn.TransformerEncoderLayer <https://pytorch.org/docs/master/nn.html?highlight=transformerencoderlayer#torch.nn.TransformerEncoderLayer>. Along with the input sequence, a square attention mask is required because the self-attention layers in nn.TransformerEncoder are only allowed to attend to earlier positions in the sequence; for the language modeling task, any tokens in future positions must be masked. To obtain output probabilities over the vocabulary, the output of the nn.TransformerEncoder model is sent to a final Linear layer, followed by a log-softmax function.
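The full TransformerModel class is not reproduced in this section, so here is a minimal sketch consistent with the description above and with the constructor call used later in the tutorial. The mask caching, the default dropout, and the omission of weight initialization are assumptions; the PositionalEncoding module it uses is described next.

import math
import torch
import torch.nn as nn

class TransformerModel(nn.Module):
    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
        super(TransformerModel, self).__init__()
        self.ninp = ninp
        self.encoder = nn.Embedding(ntoken, ninp)            # token embedding
        self.pos_encoder = PositionalEncoding(ninp, dropout)  # defined below
        encoder_layers = nn.TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = nn.TransformerEncoder(encoder_layers, nlayers)
        self.decoder = nn.Linear(ninp, ntoken)                # back to vocabulary size
        self.src_mask = None

    def _generate_square_subsequent_mask(self, sz):
        # Square mask with 0.0 on and below the diagonal and -inf above it,
        # so position i can only attend to positions <= i.
        mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
        return mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))

    def forward(self, src):
        # Build (and cache) the attention mask for the current sequence length.
        if self.src_mask is None or self.src_mask.size(0) != len(src):
            self.src_mask = self._generate_square_subsequent_mask(len(src)).to(src.device)
        src = self.encoder(src) * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, self.src_mask)
        # The final Linear layer returns unnormalized scores over the vocabulary;
        # the log-softmax mentioned above is applied implicitly by the
        # CrossEntropyLoss used during training, so raw scores are returned here.
        return self.decoder(output)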
The PositionalEncoding module injects some information about the relative or absolute position of the tokens in the sequence. The positional encodings have the same dimension as the embeddings so that the two can be summed. Here, we use sine and cosine functions of different frequencies.
def forward(self, x):
    x = x + self.pe[:x.size(0), :]
    return self.dropout(x)
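Only the forward method is shown above; it adds a precomputed buffer self.pe to the input and applies dropout. A sketch of how that buffer can be built in the module's __init__, following the sine/cosine formulation just described (the default dropout of 0.1 and the maximum length of 5000 are assumptions), is:

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        # Even embedding indices get sine, odd indices get cosine, with the
        # wavelengths forming a geometric progression.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() *
                             (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)  # shape: (max_len, 1, d_model)
        self.register_buffer('pe', pe)        # saved with the module, not trained

    # forward(self, x) is the method shown above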
Load and batch data
The training process uses the Wikitext-2 dataset from torchtext. The vocab object is built from the training dataset and is used to numericalize tokens into tensors. Starting from sequential data, the batchify() function arranges the dataset into columns, trimming off any tokens left over after the data has been divided into batches of size batch_size. For instance, with the alphabet as the sequence (total length of 26) and a batch size of 4, we would divide the alphabet into 4 sequences of length 6:
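Concretely, dividing the 26 letters into 4 sequences of length 6 trims off the letters Y and Z, and batchify() would arrange the four sequences as columns:

    A  G  M  S
    B  H  N  T
    C  I  O  U
    D  J  P  V
    E  K  Q  W
    F  L  R  X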
These columns are treated as independent by the model, which means the dependence of, say, G on F cannot be learned, but it allows more efficient batch processing.
import torch
import torchtext
from torchtext.data.utils import get_tokenizer

TEXT = torchtext.data.Field(tokenize=get_tokenizer("basic_english"),
                            init_token='<sos>',
                            eos_token='<eos>',
                            lower=True)
train_txt, val_txt, test_txt = torchtext.datasets.WikiText2.splits(TEXT)
TEXT.build_vocab(train_txt)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def batchify(data, bsz):
    data = TEXT.numericalize([data.examples[0].text])
    # Divide the dataset into bsz parts.
    nbatch = data.size(0) // bsz
    # Trim off any extra elements that wouldn't cleanly fit (remainders).
    data = data.narrow(0, 0, nbatch * bsz)
    # Evenly divide the data across the bsz batches.
    data = data.view(bsz, -1).t().contiguous()
    return data.to(device)
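The training and evaluation code later in the tutorial refers to train_data, val_data, and test_data, which are produced by applying batchify() to the three splits. The batch sizes below (20 for training, 10 for evaluation) are assumptions, since this section does not show them; they are consistent with the roughly 2981 batches per epoch reported in the training log further below.

batch_size = 20       # assumed training batch size
eval_batch_size = 10  # assumed evaluation batch size
train_data = batchify(train_txt, batch_size)
val_data = batchify(val_txt, eval_batch_size)
test_data = batchify(test_txt, eval_batch_size)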
Functions to generate input and target sequence
The get_batch() function generates the input and target sequences for the transformer model. It subdivides the source data into chunks of length bptt. For the language modeling task, the model needs the following words as the target. For example, with a bptt value of 2, we would get the following two tensors for i = 0:
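Using the alphabet batching from the previous section as the source, the input chunk is the first two rows, and the target for each input position is the next token in its column, flattened into a 1-D tensor (matching the flattened targets consumed by the evaluation code below):

    input:   A  G  M  S
             B  H  N  T
    target:  B  H  N  T  C  I  O  U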
It should be noted that the chunks are along dimension 0, consistent with the S dimension in the Transformer model. The batch dimension N is along dimension 1.
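get_batch() itself is not reproduced in this section; a minimal sketch matching the behaviour described above is given here. The bptt value of 35 is an assumption (it is consistent with the roughly 2981 batches per epoch in the training log below).

bptt = 35  # chunk length; assumed value

def get_batch(source, i):
    # source has shape (sequence length, batch size); take up to bptt rows
    # starting at i as the input, and the same rows shifted down by one as
    # the target, flattened to 1-D.
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i + seq_len]
    target = source[i + 1:i + 1 + seq_len].view(-1)
    return data, target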
The model is set up with the hyperparameters below. The vocab size is equal to the length of the vocab object.
ntokens = len(TEXT.vocab.stoi)  # the size of vocabulary
emsize = 200  # embedding dimension
nhid = 200  # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 2  # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 2  # the number of heads in the multiheadattention models
dropout = 0.2  # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
model = nn.DataParallel(model)  # optional multi-GPU wrapper; must be applied after the model is constructed
Run the model
CrossEntropyLoss <https://pytorch.org/docs/master/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss> is used as the loss function, and SGD <https://pytorch.org/docs/master/optim.html?highlight=sgd#torch.optim.SGD> implements stochastic gradient descent as the optimizer. The initial learning rate is set to 5.0. StepLR <https://pytorch.org/docs/master/optim.html?highlight=steplr#torch.optim.lr_scheduler.StepLR> is applied to adjust the learning rate over the epochs. During training, we use the nn.utils.clip_grad_norm_ <https://pytorch.org/docs/master/nn.html?highlight=nn%20utils%20clip_grad_norm#torch.nn.utils.clip_grad_norm_> function to scale all the gradients together to prevent them from exploding.
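The loss, optimizer, scheduler, and train() function are not reproduced in this section; a sketch consistent with the description above follows. The StepLR decay factor and the gradient-clipping threshold are assumptions (the logged learning rates suggest a per-epoch decay of roughly 0.95), and the per-200-batch progress logging that produced the output further below is omitted for brevity.

import time

criterion = nn.CrossEntropyLoss()
lr = 5.0  # initial learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)  # decay factor assumed

def train():
    model.train()  # Turn on the train mode
    total_loss = 0.
    ntokens = len(TEXT.vocab.stoi)
    for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
        data, targets = get_batch(train_data, i)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output.view(-1, ntokens), targets)
        loss.backward()
        # Scale all gradients together so their total norm stays below a
        # threshold (0.5 here is an assumption) to prevent exploding gradients.
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
        optimizer.step()
        total_loss += loss.item()
        # (Per-200-batch progress logging, which produced the output shown
        # further below, is omitted from this sketch.)
    return total_loss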
def evaluate(eval_model, data_source):
    eval_model.eval()  # Turn on the evaluation mode
    total_loss = 0.
    ntokens = len(TEXT.vocab.stoi)
    with torch.no_grad():
        for i in range(0, data_source.size(0) - 1, bptt):
            data, targets = get_batch(data_source, i)
            output = eval_model(data)
            output_flat = output.view(-1, ntokens)
            total_loss += len(data) * criterion(output_flat, targets).item()
    return total_loss / (len(data_source) - 1)
Loop over epochs. Save the model if the validation loss is the best we’ve seen so far. Adjust the learning rate after each epoch.
best_val_loss = float("inf")
epochs = 100  # The number of epochs
best_model = None

for epoch in range(1, epochs + 1):
    epoch_start_time = time.time()
    train()
    val_loss = evaluate(model, val_data)
    print('-' * 89)
    print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
          'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
                                     val_loss, math.exp(val_loss)))
    print('-' * 89)

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        best_model = model

    scheduler.step()
| epoch 1 | 200/ 2981 batches | lr 4.07 | ms/batch 9.71 | loss 5.39 | ppl 218.21
| epoch 1 | 400/ 2981 batches | lr 4.07 | ms/batch 9.50 | loss 5.39 | ppl 220.00
| epoch 1 | 600/ 2981 batches | lr 4.07 | ms/batch 9.55 | loss 5.20 | ppl 181.36
| epoch 1 | 800/ 2981 batches | lr 4.07 | ms/batch 9.60 | loss 5.26 | ppl 193.18
| epoch 1 | 1000/ 2981 batches | lr 4.07 | ms/batch 9.55 | loss 5.23 | ppl 186.05
| epoch 1 | 1200/ 2981 batches | lr 4.07 | ms/batch 9.55 | loss 5.26 | ppl 192.45
| epoch 1 | 1400/ 2981 batches | lr 4.07 | ms/batch 9.55 | loss 5.29 | ppl 197.86
| epoch 1 | 1600/ 2981 batches | lr 4.07 | ms/batch 9.60 | loss 5.33 | ppl 206.42
| epoch 1 | 1800/ 2981 batches | lr 4.07 | ms/batch 9.59 | loss 5.27 | ppl 193.88
| epoch 1 | 2000/ 2981 batches | lr 4.07 | ms/batch 9.70 | loss 5.30 | ppl 200.64
| epoch 1 | 2200/ 2981 batches | lr 4.07 | ms/batch 9.64 | loss 5.17 | ppl 176.64
| epoch 1 | 2400/ 2981 batches | lr 4.07 | ms/batch 9.62 | loss 5.26 | ppl 192.57
| epoch 1 | 2600/ 2981 batches | lr 4.07 | ms/batch 9.63 | loss 5.28 | ppl 195.69
| epoch 1 | 2800/ 2981 batches | lr 4.07 | ms/batch 9.64 | loss 5.21 | ppl 182.35
-----------------------------------------------------------------------------------------
| end of epoch 1 | time: 30.36s | valid loss 5.55 | valid ppl 256.32
-----------------------------------------------------------------------------------------
| epoch 2 | 200/ 2981 batches | lr 3.87 | ms/batch 9.73 | loss 5.26 | ppl 192.78
| epoch 2 | 400/ 2981 batches | lr 3.87 | ms/batch 9.65 | loss 5.27 | ppl 194.78
| epoch 2 | 600/ 2981 batches | lr 3.87 | ms/batch 9.68 | loss 5.08 | ppl 160.59
| epoch 2 | 800/ 2981 batches | lr 3.87 | ms/batch 9.67 | loss 5.14 | ppl 171.46
| epoch 2 | 1000/ 2981 batches | lr 3.87 | ms/batch 9.68 | loss 5.10 | ppl 164.78
| epoch 2 | 1200/ 2981 batches | lr 3.87 | ms/batch 9.70 | loss 5.14 | ppl 171.25
| epoch 2 | 1400/ 2981 batches | lr 3.87 | ms/batch 9.71 | loss 5.18 | ppl 177.40
| epoch 2 | 1600/ 2981 batches | lr 3.87 | ms/batch 9.70 | loss 5.23 | ppl 186.59
| epoch 2 | 1800/ 2981 batches | lr 3.87 | ms/batch 9.70 | loss 5.16 | ppl 173.63
| epoch 2 | 2000/ 2981 batches | lr 3.87 | ms/batch 9.61 | loss 5.19 | ppl 179.61
| epoch 2 | 2200/ 2981 batches | lr 3.87 | ms/batch 9.61 | loss 5.06 | ppl 158.22
| epoch 2 | 2400/ 2981 batches | lr 3.87 | ms/batch 9.66 | loss 5.14 | ppl 170.97
| epoch 2 | 2600/ 2981 batches | lr 3.87 | ms/batch 9.63 | loss 5.16 | ppl 173.44
| epoch 2 | 2800/ 2981 batches | lr 3.87 | ms/batch 9.62 | loss 5.10 | ppl 163.57
-----------------------------------------------------------------------------------------
| end of epoch 2 | time: 30.54s | valid loss 5.44 | valid ppl 231.52
-----------------------------------------------------------------------------------------
| epoch 3 | 200/ 2981 batches | lr 3.68 | ms/batch 9.74 | loss 5.15 | ppl 172.66
| epoch 3 | 400/ 2981 batches | lr 3.68 | ms/batch 9.81 | loss 5.16 | ppl 174.57
| epoch 3 | 600/ 2981 batches | lr 3.68 | ms/batch 9.76 | loss 4.98 | ppl 145.22
| epoch 3 | 800/ 2981 batches | lr 3.68 | ms/batch 9.69 | loss 5.04 | ppl 154.24
| epoch 3 | 1000/ 2981 batches | lr 3.68 | ms/batch 9.92 | loss 5.02 | ppl 150.68
| epoch 3 | 1200/ 2981 batches | lr 3.68 | ms/batch 9.75 | loss 5.05 | ppl 156.65
| epoch 3 | 1400/ 2981 batches | lr 3.68 | ms/batch 9.81 | loss 5.08 | ppl 161.32
| epoch 3 | 1600/ 2981 batches | lr 3.68 | ms/batch 9.87 | loss 5.13 | ppl 168.46
| epoch 3 | 1800/ 2981 batches | lr 3.68 | ms/batch 9.73 | loss 5.06 | ppl 158.11
| epoch 3 | 2000/ 2981 batches | lr 3.68 | ms/batch 9.78 | loss 5.09 | ppl 162.57
| epoch 3 | 2200/ 2981 batches | lr 3.68 | ms/batch 9.80 | loss 4.97 | ppl 143.40
| epoch 3 | 2400/ 2981 batches | lr 3.68 | ms/batch 9.84 | loss 5.05 | ppl 156.10
| epoch 3 | 2600/ 2981 batches | lr 3.68 | ms/batch 9.78 | loss 5.07 | ppl 158.92
| epoch 3 | 2800/ 2981 batches | lr 3.68 | ms/batch 9.80 | loss 5.01 | ppl 149.25
-----------------------------------------------------------------------------------------
| end of epoch 3 | time: 30.91s | valid loss 5.46 | valid ppl 234.33
-----------------------------------------------------------------------------------------
Evaluate the model with the test dataset
Apply the best model to check the result with the test dataset.
test_loss = evaluate(best_model, test_data)
print('=' * 89)
print('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(
    test_loss, math.exp(test_loss)))
print('=' * 89)
=========================================================================================
| End of training | test loss 5.48 | test ppl 238.72
=========================================================================================