
Training a Transformer Language Model

%matplotlib inline

Sequence-to-Sequence Modeling with nn.Transformer and TorchText

This is a tutorial on how to train a sequence-to-sequence model
that uses the
nn.Transformer <https://pytorch.org/docs/master/nn.html?highlight=nn%20transformer#torch.nn.Transformer> module.
The PyTorch 1.2 release includes a standard transformer module based on the
paper Attention is All You Need <https://arxiv.org/pdf/1706.03762.pdf>.
The transformer model has been proven to be superior in quality for many sequence-to-sequence
problems while being more parallelizable. The nn.Transformer module
relies entirely on an attention mechanism (another module recently
implemented as
nn.MultiheadAttention <https://pytorch.org/docs/master/nn.html?highlight=multiheadattention#torch.nn.MultiheadAttention>)
to draw global dependencies between input and output. The nn.Transformer module is now highly modularized, so that a single component (like nn.TransformerEncoder <https://pytorch.org/docs/master/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder> in this tutorial) can be easily adapted or composed.
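
The snippet below is a minimal, self-contained sketch (not part of the original tutorial; the hyperparameter values are arbitrary) showing how an encoder stack is composed from individual layers:

import torch
import torch.nn as nn

# One encoder layer: multi-head self-attention followed by a feed-forward block.
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048)
# Stack six identical layers into a full encoder.
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

src = torch.rand(10, 32, 512)  # (sequence length, batch size, d_model)
out = encoder(src)             # output has the same shape as the input
print(out.shape)               # torch.Size([10, 32, 512])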

[Figure: the Transformer model architecture.]

Define the model

In this tutorial, we train an nn.TransformerEncoder model on a
language modeling task. The language modeling task is to assign a
probability for the likelihood of a given word (or a sequence of words)
to follow a sequence of words. A sequence of tokens is passed to the embedding
layer first, followed by a positional encoding layer to account for the order
of the words (see the next paragraph for more details). The
nn.TransformerEncoder consists of multiple layers of
nn.TransformerEncoderLayer <https://pytorch.org/docs/master/nn.html?highlight=transformerencoderlayer#torch.nn.TransformerEncoderLayer>. Along with the input sequence, a square
attention mask is required because the self-attention layers in
nn.TransformerEncoder are only allowed to attend to earlier positions in
the sequence. For the language modeling task, any tokens in future
positions should be masked. To obtain the actual word predictions, the output
of the nn.TransformerEncoder model is sent to a final Linear
layer, which is followed by a log-softmax function.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerModel(nn.Module):

    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
        super(TransformerModel, self).__init__()
        from torch.nn import TransformerEncoder, TransformerEncoderLayer
        self.model_type = 'Transformer'
        self.src_mask = None
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.encoder = nn.Embedding(ntoken, ninp)   # token embedding (called "encoder" here)
        self.ninp = ninp
        self.decoder = nn.Linear(ninp, ntoken)      # projects hidden states back to vocabulary size

        self.init_weights()

    def _generate_square_subsequent_mask(self, sz):
        # Lower-triangular mask: 0.0 on allowed positions, -inf on future positions.
        mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
        return mask

    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)

    def forward(self, src):
        # Build (or rebuild) the causal mask whenever the sequence length changes.
        if self.src_mask is None or self.src_mask.size(0) != len(src):
            device = src.device
            mask = self._generate_square_subsequent_mask(len(src)).to(device)
            self.src_mask = mask

        src = self.encoder(src) * math.sqrt(self.ninp)  # embed tokens and scale by sqrt(d_model)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, self.src_mask)
        output = self.decoder(output)
        return output
# A quick look at the square subsequent mask for a sequence of length 10:
sz = 10
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask
mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
tensor([[0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., 0., -inf, -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., 0., 0., -inf, -inf, -inf, -inf],
        [0., 0., 0., 0., 0., 0., 0., -inf, -inf, -inf],
        [0., 0., 0., 0., 0., 0., 0., 0., -inf, -inf],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., -inf],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
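
Note that forward() above returns the raw scores from the final Linear layer; during training the log-softmax is effectively folded into the loss (CrossEntropyLoss below operates on these raw scores). A minimal sketch, assuming a trained model and an input batch data shaped (seq_len, batch) as built later in this tutorial, of how log-probabilities and predicted token ids could be recovered at inference time:

import torch.nn.functional as F

with torch.no_grad():
    output = model(data)                       # (seq_len, batch, ntokens) raw scores
    log_probs = F.log_softmax(output, dim=-1)  # log-probabilities over the vocabulary
    predictions = log_probs.argmax(dim=-1)     # most likely token id at each position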

The PositionalEncoding module injects some information about the
relative or absolute position of the tokens in the sequence. The
positional encodings have the same dimension as the embeddings so that
the two can be summed. Here, we use sine and cosine functions of
different frequencies.
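
For reference, the sinusoidal formulation from Attention is All You Need, which the code below implements, is

\[
PE_{(pos,\,2i)} = \sin\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right),
\qquad
PE_{(pos,\,2i+1)} = \cos\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right)
\]

where pos is the token position and i indexes the embedding dimensions.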

class PositionalEncoding(nn.Module):

    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        # Precompute the sinusoidal encodings once, for up to max_len positions.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        # Register as a buffer: moves with the module but is not a learnable parameter.
        self.register_buffer('pe', pe)

    def forward(self, x):
        # Add the encoding for each position, then apply dropout.
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)

Load and batch data

The training process uses the Wikitext-2 dataset from torchtext. The
vocab object is built from the training dataset and is used to numericalize
tokens into tensors. Starting from sequential data, the batchify()
function arranges the dataset into columns, trimming off any tokens remaining
after the data has been divided into batches of size batch_size.
For instance, with the alphabet as the sequence (total length of 26)
and a batch size of 4, we would divide the alphabet into 4 sequences of
length 6, arranged as columns:

A  G  M  S
B  H  N  T
C  I  O  U
D  J  P  V
E  K  Q  W
F  L  R  X

These columns are treated as independent by the model, which means that
the dependence of, say, G on F cannot be learned, but this allows more
efficient batch processing.
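
As a quick sanity check of this column layout (an illustrative sketch only, independent of the Wikitext-2 data), the same trim-and-reshape steps can be tried on a toy sequence of 26 token ids standing in for the alphabet:

toy_data = torch.arange(26)                     # stand-in for the tokenized alphabet A..Z
bsz = 4
nbatch = toy_data.size(0) // bsz                # 6 full rows per column
toy_data = toy_data.narrow(0, 0, nbatch * bsz)  # drop the 2 leftover tokens (Y, Z)
print(toy_data.view(bsz, -1).t())
# tensor([[ 0,  6, 12, 18],
#         [ 1,  7, 13, 19],
#         [ 2,  8, 14, 20],
#         [ 3,  9, 15, 21],
#         [ 4, 10, 16, 22],
#         [ 5, 11, 17, 23]])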

import torchtext
from torchtext.data.utils import get_tokenizer
TEXT = torchtext.data.Field(tokenize=get_tokenizer("basic_english"),
                            init_token='<sos>',
                            eos_token='<eos>',
                            lower=True)
train_txt, val_txt, test_txt = torchtext.datasets.WikiText2.splits(TEXT)
TEXT.build_vocab(train_txt)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def batchify(data, bsz):
    data = TEXT.numericalize([data.examples[0].text])
    # Divide the dataset into bsz parts.
    nbatch = data.size(0) // bsz
    # Trim off any extra elements that wouldn't cleanly fit (remainders).
    data = data.narrow(0, 0, nbatch * bsz)
    # Evenly divide the data across the bsz batches.
    data = data.view(bsz, -1).t().contiguous()
    return data.to(device)

batch_size = 20
eval_batch_size = 10
train_data = batchify(train_txt, batch_size)
val_data = batchify(val_txt, eval_batch_size)
test_data = batchify(test_txt, eval_batch_size)
/home/bool_tbb/miniconda3/envs/pytorch/lib/python3.8/site-packages/torchtext/data/field.py:150: UserWarning: Field class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.
  warnings.warn('{} class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.'.format(self.__class__.__name__), UserWarning)
/home/bool_tbb/miniconda3/envs/pytorch/lib/python3.8/site-packages/torchtext/data/example.py:78: UserWarning: Example class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.
  warnings.warn('Example class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.', UserWarning)
data = TEXT.numericalize([train_txt.examples[0].text])

Functions to generate input and target sequence

The get_batch() function generates the input and target sequence for
the transformer model. It subdivides the source data into chunks of
length bptt. For the language modeling task, the model needs the
following words as the target. For example, with a bptt value of 2,
we would get the following two tensors for i = 0:

[Figure: for i = 0, the data chunk holds the first bptt rows of the source, and the target holds the same rows shifted one position ahead, flattened.]

It should be noted that the chunks are along dimension 0, consistent
with the S dimension in the Transformer model. The batch dimension
N is along dimension 1.

bptt = 35
def get_batch(source, i):
    seq_len = min(bptt, len(source) - 1 - i)
    data = source[i:i+seq_len]
    target = source[i+1:i+1+seq_len].view(-1)
    return data, target
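
The following toy example (not the real data) manually replicates get_batch()'s slicing for a bptt of 2 and i = 0, to make the shift-by-one relationship between data and target concrete:

toy_source = torch.arange(8).unsqueeze(1)                 # shape (8, 1): ids 0..7 in a single column
i, seq_len = 0, 2                                         # pretend bptt were 2 instead of 35
toy_data = toy_source[i:i + seq_len]                      # rows 0 and 1 -> the input chunk
toy_target = toy_source[i + 1:i + 1 + seq_len].view(-1)   # rows 1 and 2, flattened -> the targets
print(toy_data.squeeze(1).tolist(), toy_target.tolist())  # [0, 1] [1, 2]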
get_batch(train_data,0)[0][:10]
tensor([[    3,    25,  1849,   570,     7,     5,     5,  9258,     4,    56,
             0,     7,     6,  6634,     4,  6603,     6,     5,    65,    30],
        [   12,    66,    13,  4889,   458,     8,  1045,    21, 19094,    34,
           147,     4,     0,    10,  2280,  2294,    58,    35,  2438,  4064],
        [ 3852, 13667,  2962,    68,     6, 28374,    39,   417,     0,  2034,
            29,    88, 27804,   350,     7,    17,  4811,   902,    33,    20],
        [ 3872,     5,     9,     4,   155,     8,  1669,    32,  2634,   257,
             4,     5,     5,    11,  4568,  8205,    78,  5258,  7723, 12009],
        [  884,    91,   963,   294,     4,   548,    29,   279,    37,     4,
           391,    31,     4,  2614,   948, 13583,   405,   545,    15,    16],
        [   12,    25,     5,     5,  1688,     0,    39,    59,  8785,     0,
             6,    13,  3026,    43,    11,     6,     0,   349,  3134,  4538],
        [    3,     6,    82,  1780,    21,     6,  2158,     4,     8,     8,
            27,  1485,     0,   194,    96,   195,  3545,   101,  1150,  3486],
        [    3,    25,    13,   885,     4,  6360,    15,   670,     0,    13,
            26,    17,     5,   417,   894,    10,     5,     5,  2998,    27],
        [20003,   190,    33,  1516,  1085,    34,   680,  3597,  2475,   664,
            47,    11,   127,    63,     6,    46, 24995,    72, 10190,    26],
        [   86,  9076, 10540,     6,     9,    74,   198,     7,     6,    17,
          3134,  5312,     4,     4,     3, 25509,     5,  2034,     5,    86]])

Initiate an instance

The model is set up with the hyperparameters below. The vocab size is
equal to the length of the vocab object.

ntokens = len(TEXT.vocab.stoi) # the size of vocabulary
emsize = 200 # embedding dimension
nhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 2 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 2 # the number of heads in the multiheadattention models
dropout = 0.2 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
# model = nn.DataParallel(model)  # optional multi-GPU wrapping; it must come after the model is constructed

Run the model

CrossEntropyLoss <https://pytorch.org/docs/master/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss>
is applied to track the loss and
SGD <https://pytorch.org/docs/master/optim.html?highlight=sgd#torch.optim.SGD>
implements stochastic gradient descent method as the optimizer. The initial
learning rate is set to 5.0. StepLR <https://pytorch.org/docs/master/optim.html?highlight=steplr#torch.optim.lr_scheduler.StepLR>is
applied to adjust the learn rate through epochs. During the
training, we use
nn.utils.clip_grad_norm\_ <https://pytorch.org/docs/master/nn.html?highlight=nn%20utils%20clip_grad_norm#torch.nn.utils.clip_grad_norm_>
function to scale all the gradient together to prevent exploding.
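
Since the scheduler below uses a step size of 1 and gamma=0.95, the learning rate simply shrinks by 5% after every epoch; a quick sketch of the first few values starting from 5.0:

print([round(5.0 * 0.95 ** epoch, 4) for epoch in range(4)])
# [5.0, 4.75, 4.5125, 4.2869]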

criterion = nn.CrossEntropyLoss()
lr = 5.0 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)

import time
def train():
    model.train() # Turn on the train mode
    total_loss = 0.
    start_time = time.time()
    ntokens = len(TEXT.vocab.stoi)
    for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
        data, targets = get_batch(train_data, i)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output.view(-1, ntokens), targets)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
        optimizer.step()

        total_loss += loss.item()
        log_interval = 200
        if batch % log_interval == 0 and batch > 0:
            cur_loss = total_loss / log_interval
            elapsed = time.time() - start_time
            print('| epoch {:3d} | {:5d}/{:5d} batches | '
                  'lr {:02.2f} | ms/batch {:5.2f} | '
                  'loss {:5.2f} | ppl {:8.2f}'.format(
                      epoch, batch, len(train_data) // bptt, scheduler.get_lr()[0],
                      elapsed * 1000 / log_interval,
                      cur_loss, math.exp(cur_loss)))
            total_loss = 0
            start_time = time.time()

def evaluate(eval_model, data_source):
    eval_model.eval() # Turn on the evaluation mode
    total_loss = 0.
    ntokens = len(TEXT.vocab.stoi)
    with torch.no_grad():
        for i in range(0, data_source.size(0) - 1, bptt):
            data, targets = get_batch(data_source, i)
            output = eval_model(data)
            output_flat = output.view(-1, ntokens)
            total_loss += len(data) * criterion(output_flat, targets).item()
    return total_loss / (len(data_source) - 1)

Loop over epochs. Save the model if the validation loss is the best
we’ve seen so far. Adjust the learning rate after each epoch.

best_val_loss = float("inf")
epochs = 100 # The number of epochs
best_model = None

for epoch in range(1, epochs + 1):
    epoch_start_time = time.time()
    train()
    val_loss = evaluate(model, val_data)
    print('-' * 89)
    print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
          'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
                                     val_loss, math.exp(val_loss)))
    print('-' * 89)

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        best_model = model

    scheduler.step()
| epoch   1 |   200/ 2981 batches | lr 4.07 | ms/batch  9.71 | loss  5.39 | ppl   218.21
| epoch   1 |   400/ 2981 batches | lr 4.07 | ms/batch  9.50 | loss  5.39 | ppl   220.00
| epoch   1 |   600/ 2981 batches | lr 4.07 | ms/batch  9.55 | loss  5.20 | ppl   181.36
| epoch   1 |   800/ 2981 batches | lr 4.07 | ms/batch  9.60 | loss  5.26 | ppl   193.18
| epoch   1 |  1000/ 2981 batches | lr 4.07 | ms/batch  9.55 | loss  5.23 | ppl   186.05
| epoch   1 |  1200/ 2981 batches | lr 4.07 | ms/batch  9.55 | loss  5.26 | ppl   192.45
| epoch   1 |  1400/ 2981 batches | lr 4.07 | ms/batch  9.55 | loss  5.29 | ppl   197.86
| epoch   1 |  1600/ 2981 batches | lr 4.07 | ms/batch  9.60 | loss  5.33 | ppl   206.42
| epoch   1 |  1800/ 2981 batches | lr 4.07 | ms/batch  9.59 | loss  5.27 | ppl   193.88
| epoch   1 |  2000/ 2981 batches | lr 4.07 | ms/batch  9.70 | loss  5.30 | ppl   200.64
| epoch   1 |  2200/ 2981 batches | lr 4.07 | ms/batch  9.64 | loss  5.17 | ppl   176.64
| epoch   1 |  2400/ 2981 batches | lr 4.07 | ms/batch  9.62 | loss  5.26 | ppl   192.57
| epoch   1 |  2600/ 2981 batches | lr 4.07 | ms/batch  9.63 | loss  5.28 | ppl   195.69
| epoch   1 |  2800/ 2981 batches | lr 4.07 | ms/batch  9.64 | loss  5.21 | ppl   182.35
-----------------------------------------------------------------------------------------
| end of epoch   1 | time: 30.36s | valid loss  5.55 | valid ppl   256.32
-----------------------------------------------------------------------------------------
| epoch   2 |   200/ 2981 batches | lr 3.87 | ms/batch  9.73 | loss  5.26 | ppl   192.78
| epoch   2 |   400/ 2981 batches | lr 3.87 | ms/batch  9.65 | loss  5.27 | ppl   194.78
| epoch   2 |   600/ 2981 batches | lr 3.87 | ms/batch  9.68 | loss  5.08 | ppl   160.59
| epoch   2 |   800/ 2981 batches | lr 3.87 | ms/batch  9.67 | loss  5.14 | ppl   171.46
| epoch   2 |  1000/ 2981 batches | lr 3.87 | ms/batch  9.68 | loss  5.10 | ppl   164.78
| epoch   2 |  1200/ 2981 batches | lr 3.87 | ms/batch  9.70 | loss  5.14 | ppl   171.25
| epoch   2 |  1400/ 2981 batches | lr 3.87 | ms/batch  9.71 | loss  5.18 | ppl   177.40
| epoch   2 |  1600/ 2981 batches | lr 3.87 | ms/batch  9.70 | loss  5.23 | ppl   186.59
| epoch   2 |  1800/ 2981 batches | lr 3.87 | ms/batch  9.70 | loss  5.16 | ppl   173.63
| epoch   2 |  2000/ 2981 batches | lr 3.87 | ms/batch  9.61 | loss  5.19 | ppl   179.61
| epoch   2 |  2200/ 2981 batches | lr 3.87 | ms/batch  9.61 | loss  5.06 | ppl   158.22
| epoch   2 |  2400/ 2981 batches | lr 3.87 | ms/batch  9.66 | loss  5.14 | ppl   170.97
| epoch   2 |  2600/ 2981 batches | lr 3.87 | ms/batch  9.63 | loss  5.16 | ppl   173.44
| epoch   2 |  2800/ 2981 batches | lr 3.87 | ms/batch  9.62 | loss  5.10 | ppl   163.57
-----------------------------------------------------------------------------------------
| end of epoch   2 | time: 30.54s | valid loss  5.44 | valid ppl   231.52
-----------------------------------------------------------------------------------------
| epoch   3 |   200/ 2981 batches | lr 3.68 | ms/batch  9.74 | loss  5.15 | ppl   172.66
| epoch   3 |   400/ 2981 batches | lr 3.68 | ms/batch  9.81 | loss  5.16 | ppl   174.57
| epoch   3 |   600/ 2981 batches | lr 3.68 | ms/batch  9.76 | loss  4.98 | ppl   145.22
| epoch   3 |   800/ 2981 batches | lr 3.68 | ms/batch  9.69 | loss  5.04 | ppl   154.24
| epoch   3 |  1000/ 2981 batches | lr 3.68 | ms/batch  9.92 | loss  5.02 | ppl   150.68
| epoch   3 |  1200/ 2981 batches | lr 3.68 | ms/batch  9.75 | loss  5.05 | ppl   156.65
| epoch   3 |  1400/ 2981 batches | lr 3.68 | ms/batch  9.81 | loss  5.08 | ppl   161.32
| epoch   3 |  1600/ 2981 batches | lr 3.68 | ms/batch  9.87 | loss  5.13 | ppl   168.46
| epoch   3 |  1800/ 2981 batches | lr 3.68 | ms/batch  9.73 | loss  5.06 | ppl   158.11
| epoch   3 |  2000/ 2981 batches | lr 3.68 | ms/batch  9.78 | loss  5.09 | ppl   162.57
| epoch   3 |  2200/ 2981 batches | lr 3.68 | ms/batch  9.80 | loss  4.97 | ppl   143.40
| epoch   3 |  2400/ 2981 batches | lr 3.68 | ms/batch  9.84 | loss  5.05 | ppl   156.10
| epoch   3 |  2600/ 2981 batches | lr 3.68 | ms/batch  9.78 | loss  5.07 | ppl   158.92
| epoch   3 |  2800/ 2981 batches | lr 3.68 | ms/batch  9.80 | loss  5.01 | ppl   149.25
-----------------------------------------------------------------------------------------
| end of epoch   3 | time: 30.91s | valid loss  5.46 | valid ppl   234.33
-----------------------------------------------------------------------------------------
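
A caveat about the loop above: best_model = model only stores a reference to the same object, so it continues to change as training goes on rather than freezing the best weights. A minimal sketch of how the best checkpoint could be snapshotted instead (using copy.deepcopy from the standard library; this is not part of the original code):

import copy

if val_loss < best_val_loss:
    best_val_loss = val_loss
    best_model = copy.deepcopy(model)  # independent copy of the parameters at this point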

Evaluate the model with the test dataset

Apply the best model to check the result with the test dataset.

test_loss = evaluate(best_model, test_data)
print('=' * 89)
print('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(
    test_loss, math.exp(test_loss)))
print('=' * 89)
=========================================================================================
| End of training | test loss  5.48 | test ppl   238.72
=========================================================================================
PATH = './transformer_net.pth'
torch.save(model.state_dict(), PATH)
model_dict = model.load_state_dict(torch.load(PATH))
type(model_dict)
torch.nn.modules.module._IncompatibleKeys
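
load_state_dict() returns this _IncompatibleKeys value to report any missing or unexpected parameter names; when both lists are empty, every saved weight was matched. A minimal sketch, reusing the hyperparameters defined above, of restoring the saved weights into a freshly constructed model (variable names here are illustrative only):

restored = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
result = restored.load_state_dict(torch.load(PATH))
print(result)  # empty missing/unexpected key lists when everything matches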

Reference: Sequence-to-Sequence Modeling with nn.Transformer and TorchText (PyTorch official tutorial).
