Sequence-to-Sequence Learning for Machine Translation
In so-called sequence-to-sequence problems such as machine translation (as discussed in :numref:sec_machine_translation), where inputs and outputs each consist of variable-length unaligned sequences, we generally rely on encoder–decoder architectures (:numref:sec_encoder-decoder). In this section, we will demonstrate the application of an encoder–decoder architecture, where both the encoder and decoder are implemented as RNNs, to the task of machine translation [153], [172].
Here, the encoder RNN will take a variable-length sequence as input and transform it into a fixed-shape hidden state. Later, in :numref:chap_attention-and-transformers, we will introduce attention mechanisms, which allow us to access encoded inputs without having to compress the entire input into a single fixed-length representation.
Then to generate the output sequence, one token at a time, the decoder model, consisting of a separate RNN, will predict each successive target token given both the input sequence and the preceding tokens in the output. During training, the decoder will typically be conditioned upon the preceding tokens in the official "ground truth" label. However, at test time, we will want to condition each output of the decoder on the tokens already predicted. Note that if we ignore the encoder, the decoder in a sequence-to-sequence architecture behaves just like a normal language model. Figure illustrates how to use two RNNs for sequence-to-sequence learning in machine translation.
Sequence-to-sequence learning with an RNN encoder and an RNN decoder.
In Figure, the special "<eos>" token marks the end of the sequence. Our model can stop making predictions once this token is generated. At the initial time step of the RNN decoder, there are two special design decisions to be aware of: First, we begin every input with a special beginning-of-sequence "<bos>" token. Second, we may feed the final hidden state of the encoder into the decoder at every single decoding time step [172]. In some other designs, such as that of Sutskever et al. [153], the final hidden state of the RNN encoder is used to initialize the hidden state of the decoder only at the first decoding step.
using Pkg; Pkg.activate("../../d2lai")
using d2lai
using Flux
using Downloads
using StatsBase
using Plots
using CUDA, cuDNN
import d2lai: StackedRNN, AbstractEncoderDecoder
Teacher Forcing
While running the encoder on the input sequence is relatively straightforward, handling the input and output of the decoder requires more care. The most common approach is sometimes called teacher forcing. Here, the original target sequence (token labels) is fed into the decoder as input. More concretely, the special beginning-of-sequence token and the original target sequence, excluding the final token, are concatenated as input to the decoder, while the decoder output (the label for training) is the original target sequence, shifted by one token: the decoder input is "<bos>", "Ils", "regardent", "." and the corresponding label is "Ils", "regardent", ".", "<eos>".
Our implementation in :numref:subsec_loading-seq-fixed-len prepared training data for teacher forcing, where shifting tokens for self-supervised learning is similar to the training of language models in :numref:sec_language-model. An alternative approach is to feed the predicted token from the previous time step as the current input to the decoder.
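To make the shift concrete, here is a minimal sketch (independent of the d2lai data pipeline, with hypothetical token indices) of how a target sequence is split into decoder input and decoder label under teacher forcing:
# Hypothetical indices: "<bos>" = 1, "Ils" = 5, "regardent" = 8, "." = 3, "<eos>" = 2
tgt = [1, 5, 8, 3, 2]
dec_input = tgt[1:end-1]   # "<bos>", "Ils", "regardent", "."
dec_label = tgt[2:end]     # "Ils", "regardent", ".", "<eos>"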
In the following, we explain the design depicted in Figure in greater detail. We will train this model for machine translation on the English–French dataset as introduced in :numref:sec_machine_translation.
Encoder
Recall that the encoder transforms an input sequence of variable length into a fixed-shape context variable $\mathbf{c}$.

Consider a single sequence example (batch size 1). Suppose the input sequence is $x_1, \ldots, x_T$, such that $x_t$ is the $t^{\textrm{th}}$ token. At time step $t$, the RNN transforms the input feature vector $\mathbf{x}_t$ for $x_t$ and the hidden state $\mathbf{h}_{t-1}$ from the previous time step into the current hidden state $\mathbf{h}_t$. We can use a function $f$ to express this transformation of the RNN's recurrent layer:

$$\mathbf{h}_t = f(\mathbf{x}_t, \mathbf{h}_{t-1}).$$

In general, the encoder transforms the hidden states at all time steps into a context variable through a customized function $q$:

$$\mathbf{c} = q(\mathbf{h}_1, \ldots, \mathbf{h}_T).$$

For example, in Figure, the context variable is just the hidden state $\mathbf{h}_T$, i.e., the encoder RNN's representation after processing the final token of the input sequence.
In this example, we have used a unidirectional RNN to design the encoder, where the hidden state only depends on the input subsequence at and before the time step of the hidden state. We can also construct encoders using bidirectional RNNs. In this case, a hidden state depends on the subsequence before and after the time step (including the input at the current time step), which encodes the information of the entire sequence.
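A minimal sketch of such a customized function $q$ in Julia, assuming encoder outputs of shape (number of hidden units, number of time steps, batch size) as produced by the implementation below:
# Select the hidden state at the final time step as the context variable.
q(hidden_states) = hidden_states[:, end, :]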
Now let's implement the RNN encoder. Note that we use an embedding layer to obtain the feature vector for each token in the input sequence. In Flux, the weight of an embedding layer is a matrix whose number of columns corresponds to the size of the input vocabulary (vocab_size) and whose number of rows corresponds to the feature vector's dimension (embed_size). For any input token index $i$, the embedding layer selects the $i^{\textrm{th}}$ column of the weight matrix and returns it as the token's feature vector.
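As a quick check of this lookup behavior (a small standalone sketch, reusing the Flux Embedding layer already imported above):
emb = Embedding(10 => 4)   # vocabulary of 10 tokens, 4-dimensional feature vectors
size(emb.weight)           # (4, 10): embed_size × vocab_size
size(emb([3, 7]))          # (4, 2): one feature column per input token index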
struct Seq2SeqEncoder{E, R, A} <: AbstractModel
embedding::E
rnn::R
args::A
end
function Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers, dropout=0)
embedding = Embedding(vocab_size => embed_size)
rnn = StackedRNN(embed_size, num_hiddens, num_layers)
args = (; vocab_size, embed_size, num_hiddens, num_layers)
Seq2SeqEncoder(embedding, rnn, args)
end
function (m::Seq2SeqEncoder)(x, args)
# x: (num_steps, batch_size) matrix of token indices;
# embs: (embed_size, num_steps, batch_size) after the embedding lookup.
embs = m.embedding(x)
# out: hidden states of the final layer at all time steps;
# state: the final recurrent state, later used to initialize the decoder.
out, state = m.rnn(embs)
return out, state
end
Let's use a concrete example to illustrate the above encoder implementation. Below, we instantiate a two-layer GRU encoder whose number of hidden units is 16. Given a minibatch of sequence inputs X (batch size: 4; number of time steps: 9), the hidden states of the final layer at all time steps (enc_outputs returned by the encoder's recurrent layers) are a tensor of shape (number of hidden units, number of time steps, batch size).
vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
batch_size, num_steps = 4, 9
encoder = Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
X = ones(Int64, num_steps, batch_size)
enc_outputs, enc_state = encoder(X, nothing)
@assert size(enc_outputs) == (num_hiddens, num_steps, batch_size)
Decoder
Given a target output sequence $y_1, y_2, \ldots, y_{T'}$ (we use $t'$ for decoder time steps to distinguish them from input sequence time steps), the decoder assigns a predicted probability to each possible token occurring at step $t'+1$ conditioned upon the previous tokens in the target, $y_1, \ldots, y_{t'}$, and the context variable $\mathbf{c}$, i.e., $P(y_{t'+1} \mid y_1, \ldots, y_{t'}, \mathbf{c})$.

To predict the subsequent token in the target sequence, at each time step $t'$ the RNN decoder takes the target token from the previous step $y_{t'-1}$, the decoder hidden state from the previous time step $\mathbf{s}_{t'-1}$, and the context variable $\mathbf{c}$ as its input, and transforms them into the hidden state $\mathbf{s}_{t'}$ at the current time step. We can use a function $g$ to express the transformation of the decoder's hidden layer:

$$\mathbf{s}_{t'} = g(y_{t'-1}, \mathbf{c}, \mathbf{s}_{t'-1}).$$
:eqlabel:eq_seq2seq_s_t

After obtaining the hidden state of the decoder, we can use an output layer and the softmax operation to compute the predictive distribution $P(y_{t'+1} \mid y_1, \ldots, y_{t'}, \mathbf{c})$ over the subsequent output token $y_{t'+1}$.
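A minimal sketch of this output step for the hidden state at a single time step (illustrative sizes; Dense and softmax come from the Flux import above):
output_layer = Dense(16, 10)   # num_hiddens => vocab_size
s = randn(Float32, 16)         # decoder hidden state at one time step
p = softmax(output_layer(s))   # predictive distribution over the vocabulary
@assert sum(p) ≈ 1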
Following Figure, when implementing the decoder as follows, we directly use the hidden state at the final time step of the encoder to initialize the hidden state of the decoder. This requires that the RNN encoder and the RNN decoder have the same number of layers and hidden units. To further incorporate the encoded input sequence information, the context variable is concatenated with the decoder input at all the time steps. To predict the probability distribution of the output token, we use a fully connected layer to transform the hidden state at the final layer of the RNN decoder.
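The shapes involved in that concatenation are worth making explicit. Here is a small standalone sketch (toy sizes matching the earlier example) of tiling the context across decoding time steps and concatenating it with the embedded decoder inputs, which is the tensor the decoder below constructs:
embs_demo = randn(Float32, 8, 9, 4)    # (embed_size, num_steps, batch_size)
context_demo = randn(Float32, 16, 4)   # (num_hiddens, batch_size): encoder's final hidden state
# Tile the context along the time dimension, then concatenate feature-wise.
context_rep = repeat(reshape(context_demo, 16, 1, 4), 1, 9, 1)
@assert size(vcat(embs_demo, context_rep)) == (8 + 16, 9, 4)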
struct Seq2SeqDecoder{E, R, D, A} <: AbstractModel
embedding::E
rnn::R
dense::D
args::A
end
function Seq2SeqDecoder(vocab_size::Int, embed_size::Int, num_hiddens, num_layers, dropout=0)
embedding = Embedding(vocab_size => embed_size)
rnn = StackedRNN(embed_size + num_hiddens, num_hiddens, num_layers; rnn = Flux.LSTM)
dense = Dense(num_hiddens, vocab_size)
args = (; vocab_size, embed_size, num_hiddens, num_layers)
Seq2SeqDecoder(embedding, rnn, dense, args)
end
function d2lai.init_state(::Seq2SeqDecoder, enc_all_out, args)
# The decoder state is the encoder output as-is: (per-step outputs, final hidden state).
enc_all_out
end
function (m::Seq2SeqDecoder)(x, state)
# x: (num_steps, batch_size) matrix of target token indices.
embs = m.embedding(x)
enc_output, hidden_state = state
# Use the encoder's hidden state at the final time step as the context variable.
context = enc_output[:, end, :]
# Tile the context along the time dimension: (num_hiddens, batch) -> (num_hiddens, num_steps, batch),
# so that each batch element is paired with its own context at every decoding step.
context_reshaped = repeat(reshape(context, size(context, 1), 1, size(context, 2)), 1, size(embs, 2), 1)
# Concatenate embeddings and context along the feature dimension at every time step.
embs_and_context = vcat(embs, context_reshaped)
rnn_out, new_hidden_state = m.rnn(embs_and_context, hidden_state)
outputs = m.dense(rnn_out)
return outputs, (enc_output, new_hidden_state)
end
To illustrate the implemented decoder, below we instantiate it with the same hyperparameters as the encoder above. As we can see, the output shape of the decoder becomes (vocabulary size, number of time steps, batch size), where the first dimension of the tensor stores the predicted token distribution.
decoder = Seq2SeqDecoder(vocab_size, embed_size, num_hiddens, num_layers)
state = d2lai.init_state(decoder, encoder(X, nothing), nothing)
decoder_out, state = decoder(X, state)
@assert size(decoder_out) == (vocab_size, num_steps, batch_size)
@assert size(state[1]) == (num_hiddens, num_steps, batch_size)
The layers in the above RNN encoder–decoder model are summarized in Figure.
Layers in an RNN encoder–decoder model.
Encoder–Decoder for Sequence-to-Sequence Learning
Putting it all together in code yields the following:
struct Seq2Seq{E, D, T, A} <: d2lai.AbstractEncoderDecoder
encoder::E
decoder::D
tgt_pad::T
args::A
end
function Seq2Seq(encoder::AbstractModel, decoder::AbstractModel, tgt_pad)
return Seq2Seq(encoder, decoder, tgt_pad, (;))
end
The training and validation steps simply run the encoder–decoder forward pass on a batch and evaluate the masked loss (defined in the next subsection) against the target labels:
function d2lai.training_step(m::AbstractEncoderDecoder, batch)
y_pred = d2lai.forward(m, batch[1:end-1]...)
loss_ = d2lai.loss(m, y_pred, batch[end])
return loss_
end
function d2lai.validation_step(m::AbstractEncoderDecoder, batch)
y_pred = d2lai.forward(m, batch[1:end-1]...)
loss_ = d2lai.loss(m, y_pred, batch[end])
return loss_ , nothing
end
Loss Function with Masking
At each time step, the decoder predicts a probability distribution for the output tokens. As with language modeling, we can apply softmax to obtain the distribution and calculate the cross-entropy loss for optimization. Recall from :numref:sec_machine_translation that special padding tokens are appended to the end of sequences so that sequences of varying lengths can be efficiently loaded in minibatches of the same shape. However, the prediction of padding tokens should be excluded from loss calculations. To this end, we can mask irrelevant entries with zero values, so that multiplying any irrelevant prediction by zero removes it from the loss.
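As a toy illustration (assuming a hypothetical padding index of 1), the mask is a 0/1 tensor that is zero wherever the label is a padding token:
y_toy = [3 5; 4 1; 1 1]   # (num_steps, batch_size) token indices, with 1 = "<pad>"
mask_toy = reshape(y_toy, 1, size(y_toy)...) .!= 1
# mask_toy has shape (1, num_steps, batch_size) and broadcasts against the per-token loss.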
function d2lai.loss(model::AbstractEncoderDecoder, y_pred, y)
# Per-token cross-entropy (shape: 1 × num_steps × batch_size after summing over the vocabulary)
target_oh = Flux.onehotbatch(y, 1:model.decoder.args.vocab_size)
loss = Flux.logitcrossentropy(y_pred, target_oh; agg = identity)
# Create the mask from the labels (same element type and device as the loss)
mask = reshape(y, 1, size(y)...) .!= model.tgt_pad
mask = eltype(loss).(mask)
# Apply the mask and average over the non-padding tokens
masked_loss = mask .* loss
return sum(masked_loss) / (sum(mask) + eps(eltype(masked_loss))) # avoid divide-by-zero
end
Training
Now we can create and train an RNN encoder–decoder model for sequence-to-sequence learning on the machine translation dataset.
data = d2lai.MTFraEng(128)
embed_size, num_hiddens, num_layers, dropout = 256, 256, 2, 0.2
encoder = Seq2SeqEncoder(length(data.src_vocab), embed_size, num_hiddens, num_layers)
decoder = Seq2SeqDecoder(length(data.tgt_vocab), embed_size, num_hiddens, num_layers)
model = Seq2Seq(encoder, decoder, data.tgt_vocab["<pad>"])
opt = Flux.Adam(0.01)
trainer = Trainer(model, data, opt; max_epochs = 30, gpu = true, gradient_clip_val = 1.)
m, _ = d2lai.fit(trainer);
[ Info: Train Loss: 3.7725425, Val Loss: 4.8512206
[ Info: Train Loss: 2.9626951, Val Loss: 4.4104686
[ Info: Train Loss: 2.7549093, Val Loss: 4.4569564
[ Info: Train Loss: 2.4077005, Val Loss: 4.207527
[ Info: Train Loss: 2.120146, Val Loss: 4.7731814
[ Info: Train Loss: 1.8031303, Val Loss: 4.5170074
[ Info: Train Loss: 1.5689857, Val Loss: 4.5816607
[ Info: Train Loss: 1.3473984, Val Loss: 4.587737
[ Info: Train Loss: 1.2663335, Val Loss: 4.894618
[ Info: Train Loss: 1.1931585, Val Loss: 4.6093106
[ Info: Train Loss: 0.9825569, Val Loss: 4.663696
[ Info: Train Loss: 0.9533752, Val Loss: 4.8820148
[ Info: Train Loss: 0.8541412, Val Loss: 4.4741316
[ Info: Train Loss: 0.80364215, Val Loss: 4.884055
[ Info: Train Loss: 0.79689586, Val Loss: 4.9315615
[ Info: Train Loss: 0.7644792, Val Loss: 4.8976398
[ Info: Train Loss: 0.7273372, Val Loss: 4.635629
[ Info: Train Loss: 0.6961968, Val Loss: 4.9264054
[ Info: Train Loss: 0.67604417, Val Loss: 4.89318
[ Info: Train Loss: 0.6577363, Val Loss: 4.914153
[ Info: Train Loss: 0.6324462, Val Loss: 5.102131
[ Info: Train Loss: 0.65544385, Val Loss: 5.4484243
[ Info: Train Loss: 0.6060765, Val Loss: 5.093669
[ Info: Train Loss: 0.60152113, Val Loss: 5.0452814
[ Info: Train Loss: 0.5759656, Val Loss: 5.05144
[ Info: Train Loss: 0.6268597, Val Loss: 5.127491
[ Info: Train Loss: 0.56338257, Val Loss: 5.1252966
[ Info: Train Loss: 0.4942387, Val Loss: 4.9550514
[ Info: Train Loss: 0.44558626, Val Loss: 5.0278497
[ Info: Train Loss: 0.47265473, Val Loss: 4.9084997
Prediction
To predict the output sequence at each step, the predicted token from the previous time step is fed into the decoder as an input. One simple strategy is to select, at each step, whichever token the decoder has assigned the highest probability. As in training, at the initial time step the beginning-of-sequence ("<bos>") token is fed into the decoder. This prediction process is illustrated in Figure. When the end-of-sequence ("<eos>") token is predicted, the prediction of the output sequence is complete.
Predicting the output sequence token by token using an RNN encoder–decoder.
In the next section, we will introduce more sophisticated strategies based on beam search (:numref:sec_beam-search).
function predict_step(model::AbstractEncoderDecoder, batch, device, num_steps)
batch = batch |> device
src, tgt, src_valid_len, _ = batch
enc_all_outputs = model.encoder(src, src_valid_len)
dec_state = d2lai.init_state(model.decoder, enc_all_outputs, src_valid_len)
# Start decoding from the "<bos>" token (the first row of the target batch).
outputs, attention_weights = [tgt[1:1, 1:end]], []
for _ in 1:num_steps
Y_t = outputs[end]
Y_t_plus_1, dec_state = model.decoder(Y_t, dec_state)
# Greedy decoding: pick the index of the highest-scoring token at each position.
Y_t_plus_1_index = getindex.(argmax(Y_t_plus_1, dims = 1), 1)
push!(outputs, reshape(Y_t_plus_1_index, 1, :))
end
out = reduce(vcat, outputs)
# Drop the initial "<bos>" row and return the predicted token indices.
return out[2:end, :]
end
Evaluation of Predicted Sequences
We can evaluate a predicted sequence by comparing it with the target sequence (the ground truth). But what precisely is the appropriate measure for comparing similarity between two sequences?
Bilingual Evaluation Understudy (BLEU), though originally proposed for evaluating machine translation results [173], has been extensively used in measuring the quality of output sequences for different applications. In principle, for any $n$-gram (:numref:subsec_markov-models-and-n-grams) in the predicted sequence, BLEU evaluates whether this $n$-gram appears in the target sequence.

Denote by $\textrm{len}_{\textrm{label}}$ and $\textrm{len}_{\textrm{pred}}$ the numbers of tokens in the target sequence and the predicted sequence, respectively. Then, BLEU is defined as

$$\exp\left(\min\left(0, 1 - \frac{\textrm{len}_{\textrm{label}}}{\textrm{len}_{\textrm{pred}}}\right)\right) \prod_{n=1}^k p_n^{1/2^n},$$
:eqlabel:eq_bleu

where $k$ is the longest $n$-gram used for matching and $p_n$ is the precision of $n$-grams, defined as the ratio of the number of matched $n$-grams between the predicted and target sequences to the number of $n$-grams in the predicted sequence. To explain, given a target sequence $A$, $B$, $C$, $D$, $E$, $F$ and a predicted sequence $A$, $B$, $B$, $C$, $D$, we have $p_1 = 4/5$, $p_2 = 3/4$, $p_3 = 1/3$, and $p_4 = 0$.

Based on the definition of BLEU in :eqref:eq_bleu, whenever the predicted sequence is the same as the target sequence, BLEU is 1. Moreover, since matching longer $n$-grams is more difficult, BLEU assigns a greater weight when a longer $n$-gram precision is high: for a fixed $p_n$, the factor $p_n^{1/2^n}$ increases as $n$ grows. Furthermore, since predicting shorter sequences tends to inflate the $p_n$ values, the exponential factor in :eqref:eq_bleu penalizes shorter predicted sequences. For example, when $k=2$, given the target sequence $A$, $B$, $C$, $D$, $E$, $F$ and the predicted sequence $A$, $B$, although $p_1 = p_2 = 1$, the penalty factor $\exp(1-6/2) \approx 0.14$ lowers the BLEU.
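As a quick sanity check by hand, take the toy example above with $k = 2$: the predicted sequence $A$, $B$, $B$, $C$, $D$ against the target $A$, $B$, $C$, $D$, $E$, $F$ gives

$$\exp\left(\min\left(0, 1 - \tfrac{6}{5}\right)\right)\left(\tfrac{4}{5}\right)^{1/2}\left(\tfrac{3}{4}\right)^{1/4} \approx 0.819 \times 0.894 \times 0.931 \approx 0.68.$$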
We implement the BLEU measure as follows.
function bleu(pred_seq::String, label_seq::String, k::Int)
pred_tokens = split(pred_seq)
label_tokens = split(label_seq)
len_pred = length(pred_tokens)
len_label = length(label_tokens)
# Brevity penalty
score = exp(min(0.0, 1 - len_label / len_pred))
for n in 1:min(k, len_pred)
num_matches = 0
label_subs = Dict{String, Int}()
# Build reference n-gram counts
for i in 1:(len_label - n + 1)
ngram = join(label_tokens[i:i+n-1], " ")
label_subs[ngram] = get(label_subs, ngram, 0) + 1
end
# Match predicted n-grams against reference
for i in 1:(len_pred - n + 1)
pred_ngram = join(pred_tokens[i:i+n-1], " ")
if get(label_subs, pred_ngram, 0) > 0
num_matches += 1
label_subs[pred_ngram] -= 1
end
end
# Update score with weighted precision
score *= (num_matches / (len_pred - n + 1))^(0.5^n)
end
return score
end
In the end, we use the trained RNN encoder–decoder to translate a few English sentences into French and compute the BLEU of the results.
engs = ["go .", "i lost .", "he's calm .", "i'm home ."]
fras = ["va !", "j'ai perdu .", "il est calme .", "je suis chez moi ."]
batch = d2lai.build(data, engs, fras)
preds = predict_step(m, batch, cpu, data.args.num_steps)
for (en, fr, p) in zip(engs, fras, eachcol(preds))
translation = []
for token in d2lai.to_tokens(data.tgt_vocab, p)
if token == "<eos>"
break
end
push!(translation, token)
end
bleu_score = bleu(join(translation, " "), fr, 2)
println("$en => $translation", "bleu: $bleu_score")
end
go . => Any["va", "!"]bleu: 1.0
i lost . => Any["j'ai", "perdu", "."]bleu: 1.0
he's calm . => Any["soyez", "calme", "", "!"]bleu: 0.0
i'm home . => Any["je", "suis", "chez", "moi", "."]bleu: 1.0