Introduction to Recurrent Neural Networks in PyTorch

This tutorial is intended for someone who wants to understand how a Recurrent Neural Network (RNN) works; no prior knowledge of RNNs is required. We will implement the simplest RNN model, the Elman Recurrent Neural Network. To get a better understanding of RNNs, we will build it from scratch using the PyTorch tensor package and the autograd library.

I assume that you have some understanding of feed-forward neural networks. If you are new to PyTorch and the autograd library, check out my tutorial.

Elman Recurrent Neural Network

The Elman network was introduced by Jeff Elman and first described in the paper Finding Structure in Time. It is just a three-layer feed-forward network: in our case, the input layer consists of one input neuron \(x_{1}\) and additional units called context neurons \(c_{1}, \ldots, c_{n}\). The context neurons receive their input from the hidden layer neurons of the previous time step, and there is one context neuron per neuron in the hidden layer. Since the state from the previous time step is provided as part of the input, the network has a form of memory; the context neurons represent that memory.

Predicting the sine wave

We will train our RNN to learn the sine function. During training we will feed the model one data point at a time, which is why we need only one input neuron \(x_{1}\), and we will ask it to predict the value at the next time step. Our input sequence x consists of 20 data points, and the target sequence is the same as the input sequence, but shifted by one time step into the future.

Implementing the model

We start by importing the necessary packages.
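The exact imports depend on your setup; a minimal sketch, assuming the old-style Variable API from the autograd package and numpy/matplotlib for data generation and plotting, could look like this:

import torch
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt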

Next, we’ll set the model hyperparameters. The size of the input layer is set to 7, which means that we will have 6 context neurons and 1 input neuron; seq_length defines the length of our input and target sequences.
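Concretely, the settings could look as follows; the learning rate and the number of epochs are illustrative values, not the only reasonable choices:

dtype = torch.FloatTensor
input_size, hidden_size, output_size = 7, 6, 1
seq_length = 20
lr = 0.1          # learning rate (illustrative value)
epochs = 300      # number of passes over the training data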

Now we will generate the training data, where x is an input sequence and y is a target sequence.
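One way to generate the data is to sample seq_length + 1 points of the sine wave and shift them by one step; the interval [2, 10] used below is an arbitrary choice:

# sample 21 points of a sine wave; x is everything but the last point,
# y is the same series shifted one step into the future
data_time_steps = np.linspace(2, 10, seq_length + 1)
data = np.sin(data_time_steps)
data = data.reshape((seq_length + 1, 1))

x = Variable(torch.Tensor(data[:-1]).type(dtype), requires_grad=False)
y = Variable(torch.Tensor(data[1:]).type(dtype), requires_grad=False)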

We need to create two weight matrices: w1 of size (input_size, hidden_size) for the input-to-hidden connections, and w2 of size (hidden_size, output_size) for the hidden-to-output connections. The weights are initialized using a normal distribution with zero mean.
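For example, using scaled draws from a standard normal distribution (the scale factors 0.4 and 0.3 below are arbitrary small values):

w1 = Variable(torch.randn(input_size, hidden_size).type(dtype) * 0.4,
              requires_grad=True)
w2 = Variable(torch.randn(hidden_size, output_size).type(dtype) * 0.3,
              requires_grad=True)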

We can now define the forward method. It takes the input vector, the context_state vector, and the two weight matrices as arguments. We create a vector xh by concatenating the input vector with the context_state vector. We perform a dot product between the xh vector and the weight matrix w1, then apply the tanh function as the nonlinearity, which works better with RNNs than sigmoid. We then perform another dot product between the new context_state and the weight matrix w2. Since we want to predict a continuous value, we do not apply any nonlinearity at this stage.

Note that the context_state vector will be used to populate the context neurons at the next time step. That is why we return the context_state vector along with the output of the network.
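Putting the description above into code, a sketch of the forward method could look like this:

def forward(input, context_state, w1, w2):
    # concatenate the current input with the context neurons
    xh = torch.cat((input, context_state), 1)
    # input-plus-context to hidden, with tanh as the nonlinearity
    context_state = torch.tanh(xh.mm(w1))
    # hidden to output; no nonlinearity, since we predict a continuous value
    out = context_state.mm(w2)
    return out, context_state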

This is where the magic happens: the context_state vector summarizes the history of the sequence the network has seen so far.

Training

Our training loop will be structured as follows.

  • The outer loop iterates over the epochs. An epoch is defined as one pass over all of the training data. At the beginning of each epoch, we initialize the context_state vector with zeros.
  • The inner loop runs through each element of the sequence. We call the forward method to perform the forward pass, which returns the prediction and the context_state to be used at the next time step. Then we compute the Mean Squared Error (MSE), a natural choice when we want to predict continuous values. By calling the backward() method on the loss we compute the gradients, and then we update the weights. We have to clear the gradients at each iteration by calling the zero_() method, otherwise the gradients will accumulate. The last thing we do is wrap the context_state vector in a new Variable, to detach it from its history. The full loop is sketched below.
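A sketch of the training loop under these assumptions (using the old Variable API from the imports above, and reporting the summed squared error per epoch; your exact loss values will vary with the random initialization):

for i in range(epochs):
    total_loss = 0
    # reset the memory at the start of each epoch
    context_state = Variable(torch.zeros((1, hidden_size)).type(dtype),
                             requires_grad=True)
    for j in range(x.size(0)):
        input = x[j:(j + 1)]
        target = y[j:(j + 1)]
        pred, context_state = forward(input, context_state, w1, w2)
        loss = (pred - target).pow(2).sum()
        total_loss += loss.data[0]
        loss.backward()
        # gradient descent step
        w1.data -= lr * w1.grad.data
        w2.data -= lr * w2.grad.data
        # clear the gradients, otherwise they accumulate
        w1.grad.data.zero_()
        w2.grad.data.zero_()
        # detach the hidden state from its history
        context_state = Variable(context_state.data)
    if i % 10 == 0:
        print("Epoch: {} loss {}".format(i, total_loss))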

The output generated during training shows how the loss decreases over the epochs, which is a good sign: a decaying loss means that our model is learning.

Epoch: 0 loss 2.777482271194458
Epoch: 10 loss 0.10264662653207779
Epoch: 20 loss 0.1178232803940773

Epoch: 280 loss 0.005524573381990194
Epoch: 290 loss 0.005174985621124506

Making Predictions

Once our model is trained, we can make predictions. At each step of the sequence we feed the model a single data point and ask it to predict the value at the next time step.
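A prediction pass simply reuses the forward method with the trained weights; plotting the actual versus predicted values at the end is one way to inspect the result:

context_state = Variable(torch.zeros((1, hidden_size)).type(dtype))
predictions = []

for i in range(x.size(0)):
    input = x[i:(i + 1)]
    pred, context_state = forward(input, context_state, w1, w2)
    predictions.append(pred.data.numpy().ravel()[0])

# plot the target sequence against the model's predictions
plt.scatter(data_time_steps[:-1], x.data.numpy().ravel(), s=90, label="actual")
plt.scatter(data_time_steps[1:], predictions, label="predicted")
plt.legend()
plt.show()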

As you can see, our model did a pretty good job.

Conclusion

In this post, we implemented a basic RNN model from scratch using PyTorch. We learned how to apply an RNN to a simple sequence prediction problem. The full code is available on Github.
