In this notebook, we're going to build a neural network using naught but pure numpy and steel nerves. It's going to be fun, I promise!
Here goes our main class: a layer that can .forward() and .backward().
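One possible sketch of that base class (the dummy implementation is an identity layer: forward returns the input unchanged, and backward applies the chain rule through it):

```python
import numpy as np

class Layer:
    """A building block: each layer can process input and propagate gradients."""

    def forward(self, input):
        # A dummy layer just returns the input unchanged.
        return input

    def backward(self, input, grad_output):
        # Chain rule: d loss / d input = (d loss / d output) @ (d output / d input).
        # For the identity layer, d output / d input is the identity matrix.
        num_units = input.shape[1]
        d_layer_d_input = np.eye(num_units)
        return np.dot(grad_output, d_layer_d_input)
```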
We're going to build a neural network that classifies MNIST digits. To do so, we'll need a few building blocks:
- a nonlinearity layer (ReLU),
- a dense (fully connected) layer,
- a loss function (crossentropy, computed via log-softmax),
- a training loop that runs minibatch SGD.
Let's approach them one at a time.
This is the simplest layer you can get: it simply applies a nonlinearity to each element of its input.
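For example, a ReLU layer might look like this (a sketch building on the Layer base class above; backward passes the gradient through only where the input was positive):

```python
class ReLU(Layer):
    def forward(self, input):
        # Elementwise max(0, x).
        return np.maximum(0, input)

    def backward(self, input, grad_output):
        # The derivative of ReLU is 1 where input > 0 and 0 elsewhere.
        relu_grad = input > 0
        return grad_output * relu_grad
```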
Now let's build something more complicated. Unlike a nonlinearity, a dense layer actually has something to learn.
A dense layer applies an affine transformation. In vectorized form, it can be described as $f(X) = X \cdot W + \vec{b}$, where $X$ is a batch of inputs (one row per sample), $W$ is the weight matrix, and $\vec{b}$ is the bias vector.
Both W and b are initialized during layer creation and updated each time backward is called.
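Here's a sketch consistent with those definitions (the 0.01 weight scale and the learning_rate argument are assumptions; the layer performs its SGD step right inside backward, as described above):

```python
class Dense(Layer):
    def __init__(self, input_units, output_units, learning_rate=0.1):
        self.learning_rate = learning_rate
        # Small random weights; see the Xavier experiment below for a better scheme.
        self.weights = np.random.randn(input_units, output_units) * 0.01
        self.biases = np.zeros(output_units)

    def forward(self, input):
        # f(X) = X W + b
        return np.dot(input, self.weights) + self.biases

    def backward(self, input, grad_output):
        # d loss / d input = grad_output @ W^T
        grad_input = np.dot(grad_output, self.weights.T)
        # Gradients w.r.t. parameters, accumulated over the batch.
        grad_weights = np.dot(input.T, grad_output)
        grad_biases = grad_output.sum(axis=0)
        # Plain SGD step, performed each time backward is called.
        self.weights -= self.learning_rate * grad_weights
        self.biases -= self.learning_rate * grad_biases
        return grad_input
```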
Here we have a few tests to make sure your dense layer works properly. You can just run them, get 3 "well done"s and forget they ever existed.
... or not get 3 "well done"s and go fix stuff. If that is the case, here are some tips for you:
Since we want to predict probabilities, it would be logical for us to define softmax nonlinearity on top of our network and compute loss given predicted probabilities. However, there is a better way to do so.
If you write down the expression for crossentropy as a function of softmax logits $a$, you'll see:

$$\text{loss} = -\log\left(\frac{e^{a_y}}{\sum_i e^{a_i}}\right),$$

where $y$ is the index of the correct class.
If you take a closer look, you'll see that it can be rewritten as:

$$\text{loss} = -a_y + \log\sum_i e^{a_i}$$
It's called log-softmax, and it's better than the naive log(softmax(a)) in all respects: it's more numerically stable (no overflow when exponentiating large logits), its derivative is easier to get right, and it's marginally faster to compute.
So why not just use log-softmax throughout the computation and never actually bother to estimate probabilities?
Here you are! We've defined both loss functions for you so that you can focus on the neural network part.
Let's find a stable version of crossentropy
Let's find a stable version of the crossentropy gradient
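For reference, a numerically stable sketch of both functions (names are illustrative; stability comes from the standard max-subtraction trick inside logsumexp and softmax):

```python
def softmax_crossentropy_with_logits(logits, reference_answers):
    # loss = -a[y] + logsumexp(a); subtracting the row-wise max keeps exp from overflowing.
    logits_for_answers = logits[np.arange(len(logits)), reference_answers]
    max_logits = logits.max(axis=-1)
    log_sum_exp = max_logits + np.log(np.exp(logits - max_logits[:, None]).sum(axis=-1))
    return -logits_for_answers + log_sum_exp

def grad_softmax_crossentropy_with_logits(logits, reference_answers):
    # d loss / d logits = softmax(logits) - one_hot(y), averaged over the batch.
    ones_for_answers = np.zeros_like(logits)
    ones_for_answers[np.arange(len(logits)), reference_answers] = 1
    exp_shifted = np.exp(logits - logits.max(axis=-1, keepdims=True))
    softmax = exp_shifted / exp_shifted.sum(axis=-1, keepdims=True)
    return (softmax - ones_for_answers) / logits.shape[0]
```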
Now let's combine what we've just built into a working neural network. As we announced, we're gonna use this monster to classify handwritten digits, so let's get them loaded.
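One way to get the data (keras's built-in MNIST loader is an assumption here; any loader producing flat 784-dimensional float vectors works just as well):

```python
from tensorflow.keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Flatten 28x28 images into 784-dim vectors and scale pixels to [0, 1].
X_train = X_train.reshape(len(X_train), -1).astype('float32') / 255.
X_test = X_test.reshape(len(X_test), -1).astype('float32') / 255.
# Carve a validation set out of the training data.
X_train, X_val = X_train[:-10000], X_train[-10000:]
y_train, y_val = y_train[:-10000], y_train[-10000:]
```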
We'll define the network as a list of layers, each applied on top of the previous one. In this setting, computing predictions and training becomes trivial.
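A sketch of that setting (the layer sizes are illustrative):

```python
network = [
    Dense(784, 100),
    ReLU(),
    Dense(100, 200),
    ReLU(),
    Dense(200, 10),
]

def forward(network, X):
    # Collect the activations of every layer; activations[-1] are the logits.
    activations = []
    input = X
    for layer in network:
        input = layer.forward(input)
        activations.append(input)
    return activations

def predict(network, X):
    # The class with the highest logit wins.
    logits = forward(network, X)[-1]
    return logits.argmax(axis=-1)
```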
Instead of tests, we provide you with a training loop that prints training and validation accuracies on every epoch.
If your implementations of forward and backward are correct, your accuracy should grow from 90–93% to above 97% with the default network.
As usual, we split the data into minibatches, feed each minibatch into the network, and update the weights.
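A sketch of the loop's core pieces (batch size 32 is an assumption):

```python
def train_step(network, X, y):
    # Forward pass: activations of every layer, with the raw input prepended,
    # so that layer_inputs[i] is the input to network[i].
    layer_activations = forward(network, X)
    layer_inputs = [X] + layer_activations
    logits = layer_activations[-1]

    loss = softmax_crossentropy_with_logits(logits, y)
    loss_grad = grad_softmax_crossentropy_with_logits(logits, y)

    # Backward pass: propagate gradients from the last layer to the first;
    # Dense layers update their weights inside backward().
    grad = loss_grad
    for layer, layer_input in zip(reversed(network), reversed(layer_inputs[:-1])):
        grad = layer.backward(layer_input, grad)
    return np.mean(loss)

def iterate_minibatches(X, y, batch_size=32):
    # Reshuffle every epoch, then yield consecutive minibatches.
    indices = np.random.permutation(len(X))
    for start in range(0, len(X) - batch_size + 1, batch_size):
        batch = indices[start:start + batch_size]
        yield X[batch], y[batch]
```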
Congratulations, you managed to get this far! There is just one quest left undone, and this time you'll get to choose what to do.
To pass this assignment, you must conduct an experiment showing how Xavier initialization compares to the default initialization on deep networks (5+ layers).
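For reference, Xavier (Glorot) initialization scales the weight variance by the layer's fan-in and fan-out; a hypothetical helper shows the difference from the fixed-0.01 scheme used above:

```python
import numpy as np

def xavier_init(input_units, output_units):
    # Glorot-normal: Var(W) = 2 / (fan_in + fan_out), versus the fixed
    # 0.01 standard deviation in the Dense sketch above.
    return np.random.randn(input_units, output_units) * \
        np.sqrt(2.0 / (input_units + output_units))
```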
To pass this assignment, you must conduct an experiment showing whether regularization mitigates overfitting when the number of neurons is abundantly large. Consider tuning the regularization strength $\lambda$ for better results.
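One simple sketch: treat L2 regularization as weight decay inside Dense.backward() (here `self.l2_lambda` is a hypothetical attribute you'd set in the constructor, playing the role of $\lambda$):

```python
# Inside Dense.backward(): the penalty 0.5 * l2_lambda * ||W||^2
# contributes l2_lambda * W to the weight gradient (a.k.a. weight decay).
grad_weights = np.dot(input.T, grad_output) + self.l2_lambda * self.weights
```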
Finally, you may implement a fancier optimization method, e.g. SGD with momentum, RMSprop, or Adam. Most of those methods require persistent parameters, like the momentum direction or a moving average of the gradient norm, but you can easily store those params inside your layers.
To pass this assignment, you must conduct an experiment showing how your chosen method performs compared to vanilla SGD.
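For example, SGD with momentum keeps one persistent velocity buffer per parameter; here's a sketch of storing that state inside the Dense layer (the 0.9 momentum coefficient is an assumed default):

```python
# In Dense.__init__(), allocate persistent optimizer state:
#     self.velocity_w = np.zeros((input_units, output_units))
#     self.velocity_b = np.zeros(output_units)
# In Dense.backward(), replace the plain SGD step with:
self.velocity_w = 0.9 * self.velocity_w - self.learning_rate * grad_weights
self.velocity_b = 0.9 * self.velocity_b - self.learning_rate * grad_biases
self.weights += self.velocity_w
self.biases += self.velocity_b
```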
Please read the peer-review guidelines before starting this part of the assignment.
In short, a good solution is one that:
Formally, we can't ban you from writing boring reports, but if you bore your reviewer to death, there's no one left alive to give you the grade you want.
As a bonus assignment (no points, just swag), consider implementing Batch Normalization (guide) or Dropout (guide). Note, however, that those "layers" behave differently at training time and when predicting on the test set.
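To illustrate that train/test difference, here is a sketch of inverted dropout (a training flag switches behaviour; at test time the layer is an identity):

```python
class Dropout(Layer):
    def __init__(self, p=0.5):
        self.p = p            # probability of dropping a unit
        self.training = True  # flip to False before predicting on the test set
        self.mask = None

    def forward(self, input):
        if self.training:
            # Inverted dropout: rescale at train time so test time needs no change.
            self.mask = (np.random.rand(*input.shape) > self.p) / (1.0 - self.p)
            return input * self.mask
        return input  # identity at test time

    def backward(self, input, grad_output):
        # Gradient flows only through the units that were kept.
        return grad_output * self.mask if self.training else grad_output
```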
For a small network, no noticeable improvement is observed for any learning rate.
Let's try a bigger network
For low learning rates, networks learn faster and achieve higher validation accuracy when trained with Xavier initialization.
For higher learning rates, networks with 'normal' initialization simply do not learn at all, while networks with Xavier initialization perform very well (setting aside the network with the highest learning rate, where numerical overflow occurred).
Let's try an even bigger network!
For big networks the effect is even more pronounced: with 'normal' initialization, networks do not learn at all for any learning rate, while networks with Xavier initialization show very good learning curves (the last network again experienced numerical overflow).
Xavier initialization lets networks learn faster and achieve higher accuracies than initialization from a normal distribution with a fixed standard deviation (0.01 in our experiments). For large networks, Xavier initialization even makes it possible to successfully train networks that do not learn at all with 'normal' initialization.
TODO: rerun without lr = 0.5 and with alpha = 0.003 or higher.