Long Short-Term Memory (LSTM)
Shortly after the first Elman-style RNNs were trained using backpropagation [163], the problems of learning long-term dependencies (owing to vanishing and exploding gradients) became salient, with Bengio and Hochreiter discussing the problem [164], [165]. Hochreiter had articulated this problem as early as 1991 in his Master's thesis, although the results were not widely known because the thesis was written in German. While gradient clipping helps with exploding gradients, handling vanishing gradients appears to require a more elaborate solution. One of the first and most successful techniques for addressing vanishing gradients came in the form of the long short-term memory (LSTM) model due to Hochreiter and Schmidhuber [166]. LSTMs resemble standard recurrent neural networks but here each ordinary recurrent node is replaced by a memory cell. Each memory cell contains an internal state, i.e., a node with a self-connected recurrent edge of fixed weight 1, ensuring that the gradient can pass across many time steps without vanishing or exploding.
The term "long short-term memory" comes from the following intuition. Simple recurrent neural networks have long-term memory in the form of weights. The weights change slowly during training, encoding general knowledge about the data. They also have short-term memory in the form of ephemeral activations, which pass from each node to successive nodes. The LSTM model introduces an intermediate type of storage via the memory cell. A memory cell is a composite unit, built from simpler nodes in a specific connectivity pattern, with the novel inclusion of multiplicative nodes.
using Pkg; Pkg.activate("../../d2lai")
using d2lai
using Flux
using Downloads
using StatsBase
using Plots
using CUDA, cuDNN
Gated Memory Cell
Each memory cell is equipped with an internal state and a number of multiplicative gates that determine whether (i) a given input should impact the internal state (the input gate), (ii) the internal state should be flushed to 0 (the forget gate), and (iii) the internal state of a given neuron should be allowed to impact the cell's output (the output gate).
Gated Hidden State
The key distinction between vanilla RNNs and LSTMs is that the latter support gating of the hidden state. This means that we have dedicated mechanisms for when a hidden state should be updated and also for when it should be reset. These mechanisms are learned and they address the concerns listed above. For instance, if the first token is of great importance we will learn not to update the hidden state after the first observation. Likewise, we will learn to skip irrelevant temporary observations. Last, we will learn to reset the latent state whenever needed. We discuss this in detail below.
Input Gate, Forget Gate, and Output Gate
The data feeding into the LSTM gates are the input at the current time step and the hidden state of the previous time step, as illustrated in the figure below. Three fully connected layers with sigmoid activation functions compute the values of the input, forget, and output gates. As a result of the sigmoid activation, all values of the three gates are in the range of $(0, 1)$.
Computing the input gate, the forget gate, and the output gate in an LSTM model.
Mathematically, suppose that there are $h$ hidden units, the batch size is $n$, and the number of inputs is $d$. Thus, the input is $\mathbf{X}_t \in \mathbb{R}^{n \times d}$ and the hidden state of the previous time step is $\mathbf{H}_{t-1} \in \mathbb{R}^{n \times h}$. Correspondingly, the gates at time step $t$ are defined as follows: the input gate is $\mathbf{I}_t \in \mathbb{R}^{n \times h}$, the forget gate is $\mathbf{F}_t \in \mathbb{R}^{n \times h}$, and the output gate is $\mathbf{O}_t \in \mathbb{R}^{n \times h}$. They are calculated as follows:

$$
\begin{aligned}
\mathbf{I}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xi} + \mathbf{H}_{t-1} \mathbf{W}_{hi} + \mathbf{b}_i),\\
\mathbf{F}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xf} + \mathbf{H}_{t-1} \mathbf{W}_{hf} + \mathbf{b}_f),\\
\mathbf{O}_t &= \sigma(\mathbf{X}_t \mathbf{W}_{xo} + \mathbf{H}_{t-1} \mathbf{W}_{ho} + \mathbf{b}_o),
\end{aligned}
$$

where $\mathbf{W}_{xi}, \mathbf{W}_{xf}, \mathbf{W}_{xo} \in \mathbb{R}^{d \times h}$ and $\mathbf{W}_{hi}, \mathbf{W}_{hf}, \mathbf{W}_{ho} \in \mathbb{R}^{h \times h}$ are weight parameters and $\mathbf{b}_i, \mathbf{b}_f, \mathbf{b}_o \in \mathbb{R}^{1 \times h}$ are bias parameters. Note that broadcasting (see :numref:subsec_broadcasting) is triggered during the summation. We use sigmoid functions (as introduced in :numref:sec_mlp) to map the input values to the interval $(0, 1)$.
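To make the shapes concrete, here is a minimal sketch that evaluates the three gate equations on a toy batch. The dimensions and random weights are illustrative only (they are not the parameters used later in this chapter), and the column-major layout (features × batch size) matches the from-scratch implementation below.
using Flux  # provides the `sigmoid` activation

# Toy sizes: d inputs, h hidden units, n examples (illustrative only).
d, h, n = 8, 4, 2
Xt, Hprev = randn(d, n), randn(h, n)              # current input and previous hidden state

# Random gate parameters; the bias vectors broadcast across the batch dimension.
W_ix, W_ih, b_i = randn(h, d), randn(h, h), zeros(h)
W_fx, W_fh, b_f = randn(h, d), randn(h, h), zeros(h)
W_ox, W_oh, b_o = randn(h, d), randn(h, h), zeros(h)

It = sigmoid.(W_ix * Xt .+ W_ih * Hprev .+ b_i)   # input gate, entries in (0, 1)
Ft = sigmoid.(W_fx * Xt .+ W_fh * Hprev .+ b_f)   # forget gate
Ot = sigmoid.(W_ox * Xt .+ W_oh * Hprev .+ b_o)   # output gate
size(It)   # (4, 2): one gate value per hidden unit and per example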
Input Node
Next we design the memory cell. Since we have not specified the action of the various gates yet, we first introduce the input node $\tilde{\mathbf{C}}_t \in \mathbb{R}^{n \times h}$. Its computation is similar to that of the three gates described above, but uses a $\tanh$ function with a value range of $(-1, 1)$ as the activation function. This leads to the following equation at time step $t$:

$$\tilde{\mathbf{C}}_t = \tanh(\mathbf{X}_t \mathbf{W}_{xc} + \mathbf{H}_{t-1} \mathbf{W}_{hc} + \mathbf{b}_c),$$

where $\mathbf{W}_{xc} \in \mathbb{R}^{d \times h}$ and $\mathbf{W}_{hc} \in \mathbb{R}^{h \times h}$ are weight parameters and $\mathbf{b}_c \in \mathbb{R}^{1 \times h}$ is a bias parameter.
A quick illustration of the input node is shown in the figure below.
Computing the input node in an LSTM model.
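Continuing in the same spirit, a tiny hedged sketch (toy dimensions and random weights, not the chapter's actual parameters) confirms that the tanh activation keeps every entry of the input node strictly inside (-1, 1).
# Input node on a toy batch; names and sizes are illustrative only.
d, h, n = 8, 4, 2
Xt, Hprev = randn(d, n), randn(h, n)
W_cx, W_ch, b_c = randn(h, d), randn(h, h), zeros(h)
C_tilde = tanh.(W_cx * Xt .+ W_ch * Hprev .+ b_c)   # input node
extrema(C_tilde)   # both extremes lie strictly inside (-1, 1)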
Memory Cell Internal State
In LSTMs, the input gate $\mathbf{I}_t$ governs how much we take new data into account via $\tilde{\mathbf{C}}_t$, while the forget gate $\mathbf{F}_t$ addresses how much of the old cell internal state $\mathbf{C}_{t-1} \in \mathbb{R}^{n \times h}$ we retain. Using the Hadamard (elementwise) product operator $\odot$ we arrive at the following update equation:

$$\mathbf{C}_t = \mathbf{F}_t \odot \mathbf{C}_{t-1} + \mathbf{I}_t \odot \tilde{\mathbf{C}}_t.$$

If the forget gate is always 1 and the input gate is always 0, the memory cell internal state $\mathbf{C}_{t-1}$ will remain constant forever, passing unchanged to each subsequent time step. However, input gates and forget gates give the model the flexibility to learn when to keep this value unchanged and when to perturb it in response to subsequent inputs. In practice, this design alleviates the vanishing gradient problem, resulting in models that are much easier to train, especially when facing datasets with long sequence lengths.
We thus arrive at the flow diagram in the figure below.
Computing the memory cell internal state in an LSTM model.
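The constancy claim above is easy to verify numerically. The following sketch (toy sizes and hand-picked gate values, not part of the model trained below) pins the forget gate at 1 and the input gate at 0 and checks that the cell state passes through many steps unchanged.
# With Ft ≡ 1 and It ≡ 0, the update Ct = Ft .* C .+ It .* C_tilde reduces to Ct = C.
function carry_state(C, steps)
    Ft, It = ones(size(C)), zeros(size(C))   # forget gate = 1, input gate = 0
    for _ in 1:steps
        C_tilde = randn(size(C)...)          # arbitrary candidate values at each step
        C = Ft .* C .+ It .* C_tilde         # elementwise cell-state update
    end
    return C
end
C0 = randn(4, 2)
carry_state(C0, 100) == C0   # true: the state survives 100 steps untouched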
Hidden State
Last, we need to define how to compute the output of the memory cell, i.e., the hidden state $\mathbf{H}_t \in \mathbb{R}^{n \times h}$, as seen by other layers. This is where the output gate comes into play. In LSTMs, the hidden state is simply a gated version of the $\tanh$ of the memory cell internal state, which ensures that the values of $\mathbf{H}_t$ are always in the interval $(-1, 1)$:

$$\mathbf{H}_t = \mathbf{O}_t \odot \tanh(\mathbf{C}_t).$$
Whenever the output gate is close to 1, we allow the memory cell internal state to impact the subsequent layers uninhibited, whereas for output gate values close to 0, we prevent the current memory from impacting other layers of the network at the current time step. Note that a memory cell can accrue information across many time steps without impacting the rest of the network (as long as the output gate takes values close to 0), and then suddenly impact the network at a subsequent time step as soon as the output gate flips from values close to 0 to values close to 1. The figure below gives a graphical illustration of the data flow.
Computing the hidden state in an LSTM model.
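As a final hedged sketch (values chosen by hand purely for illustration), the output gate acts as a valve on the tanh-squashed cell state: near 0 it hides the memory from the rest of the network, near 1 it exposes it.
C = randn(4, 2)                      # some accumulated cell internal state
O_closed = fill(0.01, 4, 2)          # output gate ≈ 0: memory stays hidden
O_open   = fill(0.99, 4, 2)          # output gate ≈ 1: memory exposed downstream
H_closed = O_closed .* tanh.(C)      # entries close to zero
H_open   = O_open   .* tanh.(C)      # entries close to tanh.(C), always in (-1, 1)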
Implementation from Scratch
Now let's implement an LSTM from scratch. As in the experiments of :numref:sec_rnn-scratch, we first load The Time Machine dataset.
Initializing Model Parameters
Next, we need to define and initialize the model parameters. As before, the hyperparameter num_hiddens dictates the number of hidden units. We initialize the weights from a Gaussian distribution scaled by the hyperparameter sigma (0.1 by default in the code below), and we set the biases to 0.
struct LSTMScratch{W, A} <: AbstractModel
w::W
args::A
end
Flux.@layer LSTMScratch trainable = (w,)

function LSTMScratch(num_inputs::Int, num_hiddens::Int; sigma = 0.1)
init_weights() = randn(num_hiddens, num_inputs).*sigma, randn(num_hiddens, num_hiddens).*sigma, zeros(num_hiddens)
W_ix, W_ih, b_i = init_weights() # input gate
W_fx, W_fh, b_f = init_weights() # forget gate
W_cx, W_ch, b_c = init_weights() # input node
W_ox, W_oh, b_o = init_weights() # output gate
w = (input_gate = d2lai.construct_nt_args(;W_ix, W_ih, b_i),
forget_gate = d2lai.construct_nt_args(; W_fx, W_fh, b_f),
input_node = d2lai.construct_nt_args(;W_cx, W_ch, b_c),
output_gate = d2lai.construct_nt_args(;W_ox, W_oh, b_o)
)
args = d2lai.construct_nt_args(; num_inputs, num_hiddens, sigma)
LSTMScratch(w, args)
end

function (m::LSTMScratch)(x, state = nothing)
batchsize = size(x, 3)
device = isa(x, CuArray) ? gpu : cpu
H, C = if isnothing(state)
zeros(m.args.num_hiddens, batchsize), zeros(m.args.num_hiddens, batchsize)
else
state
end |> device
outputs = map(eachslice(x; dims = 2)) do x_
It = sigmoid.(m.w.input_gate.W_ix*x_ + m.w.input_gate.W_ih*H .+ m.w.input_gate.b_i)
Ft = sigmoid.(m.w.forget_gate.W_fx*x_ + m.w.forget_gate.W_fh*H .+ m.w.forget_gate.b_f)
Ot = sigmoid.(m.w.output_gate.W_ox*x_ + m.w.output_gate.W_oh*H .+ m.w.output_gate.b_o)
C_tilde = tanh.(m.w.input_node.W_cx*x_ + m.w.input_node.W_ch*H .+ m.w.input_node.b_c)
C = Ft.*C + It.*C_tilde
H = Ot.*tanh.(C)  # output gate applied to the tanh of the cell internal state
return H
end
outputs = stack(outputs)
permutedims(outputs, [1,3,2]), (H,C)
end
Training and Prediction
Let's train an LSTM model by instantiating the RNNLMScratch class from :numref:sec_rnn-scratch.
data = d2lai.TimeMachine(1024, 32) |> f64
num_hiddens = 32
lstm = LSTMScratch(length(data.vocab), num_hiddens)
model = RNNLMScratch(lstm, length(data.vocab)) |> f64
RNNLMScratch(
LSTMScratch(
(W_ix = …, W_ih = …, b_i = …), # input gate, 1_952 parameters
(W_fx = …, W_fh = …, b_f = …), # forget gate, 1_952 parameters
(W_cx = …, W_ch = …, b_c = …), # input node, 1_952 parameters
(W_ox = …, W_oh = …, b_o = …), # output gate, 1_952 parameters
),
28×32 Matrix{Float64}, # 896 parameters
28-element Vector{Float64}, # 28 parameters (all zero)
) # Total: 14 arrays, 8_732 parameters, 68.977 KiB.
opt = Descent(1.)
trainer = Trainer(model, data, opt; max_epochs = 100, gpu = true, board_yscale = :identity, gradient_clip_val = 1.)
m = d2lai.fit(trainer);
[ Info: Train Loss: 2.8577464, Val Loss: 2.8570354
[ Info: Train Loss: 2.789548, Val Loss: 2.7877047
[ Info: Train Loss: 2.688144, Val Loss: 2.6732988
[ Info: Train Loss: 2.5377173, Val Loss: 2.537118
[ Info: Train Loss: 2.4538963, Val Loss: 2.4457486
[ Info: Train Loss: 2.3772397, Val Loss: 2.3949955
[ Info: Train Loss: 2.3360584, Val Loss: 2.3654156
[ Info: Train Loss: 2.2909172, Val Loss: 2.3251545
[ Info: Train Loss: 2.2559114, Val Loss: 2.3191686
[ Info: Train Loss: 2.2344718, Val Loss: 2.2964644
[ Info: Train Loss: 2.1931727, Val Loss: 2.2701557
[ Info: Train Loss: 2.168284, Val Loss: 2.2416053
[ Info: Train Loss: 2.1241865, Val Loss: 2.224526
[ Info: Train Loss: 2.1009102, Val Loss: 2.1999269
[ Info: Train Loss: 2.0752017, Val Loss: 2.1708565
[ Info: Train Loss: 2.0330355, Val Loss: 2.1414955
[ Info: Train Loss: 2.0177925, Val Loss: 2.1233914
[ Info: Train Loss: 1.9999425, Val Loss: 2.1197312
[ Info: Train Loss: 1.9709194, Val Loss: 2.0945277
[ Info: Train Loss: 1.9374853, Val Loss: 2.094332
[ Info: Train Loss: 1.9336435, Val Loss: 2.0630207
[ Info: Train Loss: 1.91804, Val Loss: 2.0741513
[ Info: Train Loss: 1.9016124, Val Loss: 2.031097
[ Info: Train Loss: 1.8670052, Val Loss: 2.032497
[ Info: Train Loss: 1.8544912, Val Loss: 2.0302908
[ Info: Train Loss: 1.831575, Val Loss: 2.0121994
[ Info: Train Loss: 1.8039266, Val Loss: 2.0143247
[ Info: Train Loss: 1.7944635, Val Loss: 2.003396
[ Info: Train Loss: 1.7696036, Val Loss: 1.9874908
[ Info: Train Loss: 1.751212, Val Loss: 1.9674721
[ Info: Train Loss: 1.7603586, Val Loss: 1.9571958
[ Info: Train Loss: 1.7435517, Val Loss: 1.9905457
[ Info: Train Loss: 1.7058074, Val Loss: 1.961815
[ Info: Train Loss: 1.7169164, Val Loss: 1.9375858
[ Info: Train Loss: 1.6705436, Val Loss: 1.9560782
[ Info: Train Loss: 1.679861, Val Loss: 1.9562875
[ Info: Train Loss: 1.6392078, Val Loss: 1.9134141
[ Info: Train Loss: 1.635678, Val Loss: 1.9301218
[ Info: Train Loss: 1.6286591, Val Loss: 1.9856387
[ Info: Train Loss: 1.6013554, Val Loss: 1.9378722
[ Info: Train Loss: 1.617093, Val Loss: 1.9436615
[ Info: Train Loss: 1.5867039, Val Loss: 1.9401144
[ Info: Train Loss: 1.5967989, Val Loss: 1.9242941
[ Info: Train Loss: 1.5880837, Val Loss: 1.9108937
[ Info: Train Loss: 1.5574433, Val Loss: 1.9368172
[ Info: Train Loss: 1.5503292, Val Loss: 1.9265687
[ Info: Train Loss: 1.5365485, Val Loss: 1.9561881
[ Info: Train Loss: 1.533664, Val Loss: 1.966977
[ Info: Train Loss: 1.5209395, Val Loss: 1.9422178
[ Info: Train Loss: 1.5454886, Val Loss: 1.9531256
[ Info: Train Loss: 1.513706, Val Loss: 1.9142479
[ Info: Train Loss: 1.5376143, Val Loss: 1.8976619
[ Info: Train Loss: 1.5177532, Val Loss: 1.9576881
[ Info: Train Loss: 1.4931737, Val Loss: 1.9219521
[ Info: Train Loss: 1.4751806, Val Loss: 1.950009
[ Info: Train Loss: 1.4812049, Val Loss: 1.9766624
[ Info: Train Loss: 1.4989445, Val Loss: 1.9336927
[ Info: Train Loss: 1.4475642, Val Loss: 1.9463936
[ Info: Train Loss: 1.4587109, Val Loss: 1.9254599
[ Info: Train Loss: 1.441813, Val Loss: 1.9885918
[ Info: Train Loss: 1.4468588, Val Loss: 1.9261206
[ Info: Train Loss: 1.4331188, Val Loss: 1.9755284
[ Info: Train Loss: 1.4292498, Val Loss: 1.9617558
[ Info: Train Loss: 1.4354277, Val Loss: 1.9441824
[ Info: Train Loss: 1.4217302, Val Loss: 2.00037
[ Info: Train Loss: 1.434056, Val Loss: 1.9373636
[ Info: Train Loss: 1.4313787, Val Loss: 1.9979906
[ Info: Train Loss: 1.3996192, Val Loss: 1.9893837
[ Info: Train Loss: 1.3944763, Val Loss: 1.9426534
[ Info: Train Loss: 1.404267, Val Loss: 2.0085268
[ Info: Train Loss: 1.4112734, Val Loss: 1.9628971
[ Info: Train Loss: 1.4130398, Val Loss: 1.9723268
[ Info: Train Loss: 1.4022796, Val Loss: 1.9902713
[ Info: Train Loss: 1.3957345, Val Loss: 1.9562278
[ Info: Train Loss: 1.3724736, Val Loss: 1.9985989
[ Info: Train Loss: 1.3800858, Val Loss: 1.9412769
[ Info: Train Loss: 1.3721211, Val Loss: 1.9895244
[ Info: Train Loss: 1.3690766, Val Loss: 1.9767517
[ Info: Train Loss: 1.362259, Val Loss: 1.9564062
[ Info: Train Loss: 1.3640676, Val Loss: 2.000539
[ Info: Train Loss: 1.3771462, Val Loss: 2.002624
[ Info: Train Loss: 1.357882, Val Loss: 1.9988647
[ Info: Train Loss: 1.3579962, Val Loss: 2.026906
[ Info: Train Loss: 1.3600618, Val Loss: 2.0099378
[ Info: Train Loss: 1.3557875, Val Loss: 1.9709011
[ Info: Train Loss: 1.3204275, Val Loss: 1.9751427
[ Info: Train Loss: 1.3581452, Val Loss: 2.0131629
[ Info: Train Loss: 1.3186963, Val Loss: 1.985362
[ Info: Train Loss: 1.333647, Val Loss: 1.9873333
[ Info: Train Loss: 1.3187902, Val Loss: 2.0392253
[ Info: Train Loss: 1.322236, Val Loss: 2.0185454
[ Info: Train Loss: 1.3284179, Val Loss: 2.0056393
[ Info: Train Loss: 1.3288062, Val Loss: 2.0357358
[ Info: Train Loss: 1.3189505, Val Loss: 2.0276403
[ Info: Train Loss: 1.3097466, Val Loss: 2.020081
[ Info: Train Loss: 1.3055674, Val Loss: 2.0383105
[ Info: Train Loss: 1.3121166, Val Loss: 2.0296302
[ Info: Train Loss: 1.3159087, Val Loss: 2.0466354
[ Info: Train Loss: 1.2757726, Val Loss: 2.0666091
[ Info: Train Loss: 1.2954105, Val Loss: 2.0443974
prefix = "it has"
d2lai.prediction(prefix, m[1], data.vocab, 20)
"it has and some that is al"
Concise Implementation
Using high-level APIs, we can directly instantiate an LSTM model. This encapsulates all the configuration details that we made explicit above. The code is significantly faster.
lstm_concise = LSTM(length(data.vocab) => num_hiddens; return_state = true)
model = RNNModelConcise(lstm_concise, num_hiddens, length(data.vocab)) |> f64
opt = Descent(1.)
trainer = Trainer(model, data, opt; max_epochs = 100, gpu = true, board_yscale = :identity, gradient_clip_val = 1.)
m = d2lai.fit(trainer);
[ Info: Train Loss: 2.8427107, Val Loss: 2.8369534
[ Info: Train Loss: 2.76487, Val Loss: 2.7668047
[ Info: Train Loss: 2.6583545, Val Loss: 2.6500497
[ Info: Train Loss: 2.5179682, Val Loss: 2.5197139
[ Info: Train Loss: 2.431327, Val Loss: 2.4431098
[ Info: Train Loss: 2.3572307, Val Loss: 2.391342
[ Info: Train Loss: 2.3227513, Val Loss: 2.3556664
[ Info: Train Loss: 2.2947817, Val Loss: 2.3331885
[ Info: Train Loss: 2.2521603, Val Loss: 2.2913945
[ Info: Train Loss: 2.2128947, Val Loss: 2.2687695
[ Info: Train Loss: 2.1813984, Val Loss: 2.2456841
[ Info: Train Loss: 2.1330736, Val Loss: 2.2201614
[ Info: Train Loss: 2.107634, Val Loss: 2.193372
[ Info: Train Loss: 2.067568, Val Loss: 2.15843
[ Info: Train Loss: 2.0382428, Val Loss: 2.1482017
[ Info: Train Loss: 2.0414839, Val Loss: 2.1229594
[ Info: Train Loss: 1.9991355, Val Loss: 2.112071
[ Info: Train Loss: 1.9926142, Val Loss: 2.0838833
[ Info: Train Loss: 1.9585189, Val Loss: 2.0609097
[ Info: Train Loss: 1.9337728, Val Loss: 2.049715
[ Info: Train Loss: 1.8995695, Val Loss: 2.041491
[ Info: Train Loss: 1.87231, Val Loss: 2.0025415
[ Info: Train Loss: 1.8709, Val Loss: 2.007211
[ Info: Train Loss: 1.8434366, Val Loss: 1.9976096
[ Info: Train Loss: 1.8393289, Val Loss: 2.0047002
[ Info: Train Loss: 1.8241904, Val Loss: 1.9923693
[ Info: Train Loss: 1.8032255, Val Loss: 1.994213
[ Info: Train Loss: 1.7696857, Val Loss: 1.9697965
[ Info: Train Loss: 1.7758065, Val Loss: 1.9684645
[ Info: Train Loss: 1.7555944, Val Loss: 1.9513144
[ Info: Train Loss: 1.7414378, Val Loss: 1.9620013
[ Info: Train Loss: 1.7296154, Val Loss: 1.9514403
[ Info: Train Loss: 1.71898, Val Loss: 1.9280633
[ Info: Train Loss: 1.709265, Val Loss: 1.9368947
[ Info: Train Loss: 1.699403, Val Loss: 1.9414907
[ Info: Train Loss: 1.6675162, Val Loss: 1.9390253
[ Info: Train Loss: 1.676482, Val Loss: 1.9241304
[ Info: Train Loss: 1.6794927, Val Loss: 1.9257721
[ Info: Train Loss: 1.6586671, Val Loss: 1.9032117
[ Info: Train Loss: 1.6457124, Val Loss: 1.9131901
[ Info: Train Loss: 1.6280394, Val Loss: 1.9078672
[ Info: Train Loss: 1.6233665, Val Loss: 1.900517
[ Info: Train Loss: 1.6127323, Val Loss: 1.8871064
[ Info: Train Loss: 1.604043, Val Loss: 1.905726
[ Info: Train Loss: 1.6070231, Val Loss: 1.8944578
[ Info: Train Loss: 1.5934694, Val Loss: 1.8868828
[ Info: Train Loss: 1.5822835, Val Loss: 1.8936948
[ Info: Train Loss: 1.5772934, Val Loss: 1.897214
[ Info: Train Loss: 1.5587626, Val Loss: 1.8838716
[ Info: Train Loss: 1.5576969, Val Loss: 1.9061204
[ Info: Train Loss: 1.5581809, Val Loss: 1.8890705
[ Info: Train Loss: 1.5537266, Val Loss: 1.8935368
[ Info: Train Loss: 1.5498881, Val Loss: 1.9078213
[ Info: Train Loss: 1.5434625, Val Loss: 1.9040521
[ Info: Train Loss: 1.517248, Val Loss: 1.9126192
[ Info: Train Loss: 1.5158942, Val Loss: 1.9081383
[ Info: Train Loss: 1.5224184, Val Loss: 1.9113809
[ Info: Train Loss: 1.5104607, Val Loss: 1.8958948
[ Info: Train Loss: 1.508267, Val Loss: 1.9130298
[ Info: Train Loss: 1.4817677, Val Loss: 1.9148769
[ Info: Train Loss: 1.501207, Val Loss: 1.9040751
[ Info: Train Loss: 1.4847144, Val Loss: 1.8919543
[ Info: Train Loss: 1.5002887, Val Loss: 1.9020813
[ Info: Train Loss: 1.4769074, Val Loss: 1.9049815
[ Info: Train Loss: 1.4697545, Val Loss: 1.8896354
[ Info: Train Loss: 1.4565266, Val Loss: 1.9191732
[ Info: Train Loss: 1.4550897, Val Loss: 1.9008762
[ Info: Train Loss: 1.4565283, Val Loss: 1.9195735
[ Info: Train Loss: 1.4566759, Val Loss: 1.9123726
[ Info: Train Loss: 1.4473749, Val Loss: 1.9118538
[ Info: Train Loss: 1.4360607, Val Loss: 1.9222747
[ Info: Train Loss: 1.4182155, Val Loss: 1.9195168
[ Info: Train Loss: 1.4331613, Val Loss: 1.9151908
[ Info: Train Loss: 1.4168733, Val Loss: 1.9315561
[ Info: Train Loss: 1.4212273, Val Loss: 1.9126766
[ Info: Train Loss: 1.4102596, Val Loss: 1.9049134
[ Info: Train Loss: 1.4269556, Val Loss: 1.9387856
[ Info: Train Loss: 1.4188406, Val Loss: 1.9388701
[ Info: Train Loss: 1.3939968, Val Loss: 1.9117727
[ Info: Train Loss: 1.4007958, Val Loss: 1.9260702
[ Info: Train Loss: 1.4008294, Val Loss: 1.9048063
[ Info: Train Loss: 1.4162264, Val Loss: 1.9182342
[ Info: Train Loss: 1.3896556, Val Loss: 1.9164705
[ Info: Train Loss: 1.3912114, Val Loss: 1.935259
[ Info: Train Loss: 1.4087954, Val Loss: 1.9066732
[ Info: Train Loss: 1.3773736, Val Loss: 1.933785
[ Info: Train Loss: 1.3843387, Val Loss: 1.9180298
[ Info: Train Loss: 1.3781508, Val Loss: 1.915618
[ Info: Train Loss: 1.3738914, Val Loss: 1.9218712
[ Info: Train Loss: 1.3691036, Val Loss: 1.9033444
[ Info: Train Loss: 1.3789481, Val Loss: 1.9576943
[ Info: Train Loss: 1.3722394, Val Loss: 1.9286522
[ Info: Train Loss: 1.3392986, Val Loss: 1.9551451
[ Info: Train Loss: 1.35313, Val Loss: 1.9495249
[ Info: Train Loss: 1.36132, Val Loss: 1.9364525
[ Info: Train Loss: 1.358688, Val Loss: 1.9494243
[ Info: Train Loss: 1.3130082, Val Loss: 1.9404734
[ Info: Train Loss: 1.338081, Val Loss: 1.9542209
[ Info: Train Loss: 1.3178873, Val Loss: 1.9131705
[ Info: Train Loss: 1.3307419, Val Loss: 1.9667456
prefix = "it has"
d2lai.prediction(prefix, m[1], data.vocab, 20; state = (zeros(num_hiddens), zeros(num_hiddens)))
"it has of the then the tim"
LSTMs are the prototypical latent variable autoregressive model with nontrivial state control. Many variants thereof have been proposed over the years, e.g., multiple layers, residual connections, and different types of regularization. However, training LSTMs and other sequence models (such as GRUs) is quite costly because of the long-range dependencies of the sequence. Later we will encounter alternative models such as Transformers that can be used in some cases.
Summary
While LSTMs were published in 1997, they rose to great prominence with some victories in prediction competitions in the mid-2000s, and became the dominant models for sequence learning from 2011 until the rise of Transformer models, starting in 2017. Even Transformers owe some of their key ideas to architecture design innovations introduced by the LSTM.
LSTMs have three types of gates: input gates, forget gates, and output gates that control the flow of information. The hidden layer output of LSTM includes the hidden state and the memory cell internal state. Only the hidden state is passed into the output layer while the memory cell internal state remains entirely internal. LSTMs can alleviate vanishing and exploding gradients.
Exercises
Adjust the hyperparameters and analyze their influence on running time, perplexity, and the output sequence.
How would you need to change the model to generate proper words rather than just sequences of characters?
Compare the computational cost for GRUs, LSTMs, and regular RNNs for a given hidden dimension. Pay special attention to the training and inference cost.
Since the candidate memory cell ensures that the value range is between $-1$ and $1$ by using the $\tanh$ function, why does the hidden state need to use the $\tanh$ function again to ensure that the output value range is between $-1$ and $1$?
Implement an LSTM model for time series prediction rather than character sequence prediction.