# Concise Implementation of Linear Regression
Deep learning has witnessed a sort of Cambrian explosion over the past decade. The sheer number of techniques, applications and algorithms by far surpasses the progress of previous decades. This is due to a fortuitous combination of multiple factors, one of which is the powerful free tools offered by a number of open-source deep learning frameworks. Theano [14], DistBelief [15], and Caffe [16] arguably represent the first generation of such frameworks that found widespread adoption. In contrast to earlier (seminal) works like SN2 (Simulateur Neuristique) [17], which provided a Lisp-like programming experience, modern frameworks offer automatic differentiation and the convenience of Python. These frameworks allow us to automate and modularize the repetitive work of implementing gradient-based learning algorithms.
In :numref:sec_linear_scratch, we relied only on (i) tensors for data storage and linear algebra; and (ii) automatic differentiation for calculating gradients. In practice, because data iterators, loss functions, optimizers, and neural network layers are so common, modern libraries implement these components for us as well. In this section, we will show you how to implement the linear regression model from :numref:sec_linear_scratch concisely by using high-level APIs of deep learning frameworks.
```julia
using Pkg
Pkg.activate("../../d2lai")
using d2lai
using Flux
```

```
Activating project at `/workspace/workspace/d2l-julia/d2lai`
```

## Defining the Model
When we implemented linear regression from scratch in :numref:sec_linear_scratch, we defined our model parameters explicitly and coded up the calculations to produce output using basic linear algebra operations. You should know how to do this. But once your models get more complex, and once you have to do this nearly every day, you will be glad of the assistance. The situation is similar to coding up your own blog from scratch. Doing it once or twice is rewarding and instructive, but you would be a lousy web developer if you spent a month reinventing the wheel.
For standard operations, we can use a framework's predefined layers, which allow us to focus on the layers used to construct the model rather than worrying about their implementation. Recall the architecture of a single-layer network, as described in the previous section. The layer is called *fully connected*, since each of its inputs is connected to each of its outputs by means of a matrix–vector multiplication.
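To make the matrix–vector picture concrete, here is a minimal sketch in plain Julia of what a fully connected layer computes; the weights and input below are illustrative values, not taken from the book's model. Flux's `Dense` layer performs the same affine map, storing `W` and `b` as trainable parameters:

```julia
# A fully connected (dense) layer is an affine map: y = W*x .+ b
W = [2.0 -3.4]     # 1×2 weight matrix: two inputs, one output
b = [4.3]          # bias vector, one entry per output
x = [1.0, 2.0]     # a single input with two features

y = W * x .+ b     # matrix–vector product plus bias  # ≈ [-0.5]
```

Stacking inputs as columns of a matrix lets the same expression process an entire minibatch at once, which is why the framework stores data as features × samples.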
```julia
struct LinearRegressionConcise{N} <: d2lai.AbstractModel
    net::N
end

function d2lai.forward(lr::LinearRegressionConcise, x)
    lr.net(x)
end
```

## Defining the Loss Function
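`Flux.Losses.mse` computes the mean of the squared differences between predictions and targets. As a quick sanity check, the same quantity can be computed by hand in plain Julia (the toy values here are illustrative):

```julia
# mse(ŷ, y) is mean((ŷ .- y).^2): average squared error over all entries
ŷ = [0.5, 1.0, 2.0]                        # predictions
y = [0.0, 1.0, 1.0]                        # targets
manual_mse = sum((ŷ .- y) .^ 2) / length(y)  # (0.25 + 0 + 1) / 3
```

Note that this is the *average* over the minibatch, not the sum; that choice interacts with the learning rate, as one of the exercises below explores.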
```julia
function d2lai.loss(lr::LinearRegressionConcise, y_pred, y)
    Flux.Losses.mse(y_pred, y)
end
```

## Defining the Optimization Algorithm
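Flux's `Descent(η)` optimizer implements plain gradient descent: each parameter is updated as θ ← θ − η∇θ. A single update step can be sketched in plain Julia (the parameter and gradient values here are made up for illustration):

```julia
# One step of gradient descent: θ ← θ - η * ∇θ
η = 0.03               # learning rate, as used below
θ = [1.0, 2.0]         # current parameters
∇θ = [0.5, -1.0]       # gradient of the loss w.r.t. θ
θ .-= η .* ∇θ          # in-place update  # ≈ [0.985, 2.03]
```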
```julia
opt = Descent(0.03)
```

```
Descent(0.03)
```

## Training
You might have noticed that expressing our model through high-level APIs of a deep learning framework requires fewer lines of code. We did not have to allocate parameters individually, define our loss function, or implement minibatch SGD. Once we start working with much more complex models, the advantages of the high-level API will grow considerably.
Now that we have all the basic pieces in place, the training loop itself is the same as the one we implemented from scratch.
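As a reminder of what that loop does, here is a minimal from-scratch sketch in plain Julia: forward pass, mean-squared-error loss, analytic gradients, and minibatch SGD updates. The `train!` helper and the noiseless synthetic data are our own illustrative assumptions, not part of `d2lai`:

```julia
using Random
Random.seed!(42)

# Minibatch SGD for linear regression, written out by hand.
function train!(w, b, X, Y; η = 0.03, epochs = 3, batchsize = 32)
    for _ in 1:epochs
        for i in 1:batchsize:size(X, 2)
            cols = i:min(i + batchsize - 1, size(X, 2))
            x, y = X[:, cols], Y[:, cols]
            err = (w' * x .+ b) .- y             # ŷ - y, a 1×n row
            n = length(cols)
            w .-= η .* vec(2 .* (x * err') ./ n)  # ∇w of mean((ŷ-y)²)
            b -= η * 2 * sum(err) / n             # ∇b likewise
        end
    end
    return w, b
end

w_true, b_true = [2.0, -3.4], 4.3
X = randn(2, 1000)                 # features × samples
Y = w_true' * X .+ b_true          # noiseless synthetic targets

w, b = train!(zeros(2), 0.0, X, Y)  # recovers roughly [2.0, -3.4] and 4.3
```

The high-level `Trainer` below performs the same sequence of steps, but with the gradients obtained by automatic differentiation rather than derived by hand.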
```julia
model = LinearRegressionConcise(Dense(2 => 1))
opt = Descent(0.03)
data = SyntheticRegressionData([2 -3.4], 4.3)
trainer = Trainer(model, data, opt; max_epochs = 3)
d2lai.fit(trainer)
```

```
┌ Warning: Layer with Float32 parameters got Float64 input.
│   The input will be converted, but any earlier layers may be very slow.
│   layer = Dense(2 => 1)  # 3 parameters
│   summary(x) = "2×32 Matrix{Float64}"
└ @ Flux ~/.julia/packages/Flux/3711C/src/layers/stateless.jl:60
[ Info: Train Loss: 0.5285112035993975, Val Loss: 0.3674669802544852
[ Info: Train Loss: 0.009801724255686355, Val Loss: 0.007229238263031851
[ Info: Train Loss: 0.00017841425281289386, Val Loss: 0.0002817269356486329
(LinearRegressionConcise{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}(Dense(2 => 1)), (val_loss = [0.00029441198543635357, 0.00032685511703581856, 0.00034953779412912495, 0.0003339858449917922, 0.000363652779279678, 0.0003238800810076286, 0.0003354708605756992, 0.0002933743959921633, 0.0003247936637506058, 0.0002479933867462904 … 0.0002817704456362504, 0.00021349322630722425, 0.00019684240958221702, 0.00030345155823706025, 0.000214075675876277, 0.00036942509393279513, 0.0002903322578708252, 0.00034681583857292767, 0.00027830931508335635, 0.0002817269356486329], val_acc = nothing))
```

Below, we compare the model parameters learned by training on finite data with the true parameters that generated our dataset. To access the parameters, we retrieve the weights and bias of the layer that we need. As in our implementation from scratch, note that our estimated parameters are close to their true counterparts.
```julia
get_w(model::LinearRegressionConcise) = model.net.weight
get_b(model::LinearRegressionConcise) = model.net.bias
```

```
get_b (generic function with 1 method)
```

```julia
get_w(model)
```

```
1×2 Matrix{Float32}:
 1.99837  -3.39384
```

```julia
get_b(model)
```

```
1-element Vector{Float32}:
 4.28833
```

## Summary
This section contains the first implementation of a deep network (in this book) to tap into the conveniences afforded by modern deep learning frameworks, such as MXNet [18], JAX [19], PyTorch [20], and Tensorflow [21]. We used framework defaults for loading data, defining a layer, a loss function, an optimizer and a training loop. Whenever the framework provides all necessary features, it is generally a good idea to use them, since the library implementations of these components tend to be heavily optimized for performance and properly tested for reliability. At the same time, try not to forget that these modules can be implemented directly. This is especially important for aspiring researchers who wish to live on the leading edge of model development, where you will be inventing new components that cannot possibly exist in any current library.
## Exercises
1. How would you need to change the learning rate if you replace the aggregate loss over the minibatch with an average over the loss on the minibatch?
2. Review the framework documentation to see which loss functions are provided. In particular, replace the squared loss with Huber's robust loss function. That is, use the loss function
   $$l(y, y') = \begin{cases} |y - y'| - \frac{\sigma}{2}, & \text{if } |y - y'| > \sigma \\ \frac{1}{2\sigma}(y - y')^2, & \text{otherwise.} \end{cases}$$
3. How do you access the gradient of the weights of the model?
4. What is the effect on the solution if you change the learning rate and the number of epochs? Does it keep on improving?
5. How does the solution change as you vary the amount of data generated?
6. Plot the estimation error for $\hat{\mathbf{w}} - \mathbf{w}$ and $\hat{b} - b$ as a function of the amount of data. Hint: increase the amount of data logarithmically rather than linearly, i.e., 5, 10, 20, 50, ..., 10,000 rather than 1000, 2000, ..., 10,000. Why is the suggestion in the hint appropriate?