In this project, we will be visualizing and manipulating AlexNet [1]:
For this project, we are using PyTorch, an open-source deep learning library that allows an efficient implementation of CNNs. Other similar libraries include Torch, Theano, Caffe, and TensorFlow.
Some parts of this assignment were adapted/inspired from a Stanford cs231n assignment. The parts that are similar have been modified and ported to PyTorch. Thanks are due to the assignment's original creators from Stanford, as well as Noah Snavely, Kavita Bala, and various TAs who have further developed and refined this assignment.
The assignment is contained in an IPython Notebook; see below.
[1] Krizhevsky et al, "ImageNet Classification with Deep Convolutional Neural Networks", NIPS 2012
Google Colab (Colaboratory) is a useful tool for machine learning that runs entirely in the cloud and is thus easy to set up. It is very similar to the Jupyter notebook environment. Notebooks are stored in the user's Google Drive and, just like Docs/Sheets files on Drive, can be shared among users for collaboration (you'll need to share your notebooks, since you'll be doing this in a team of 2).
To be done in teams of 2.
There are many pieces to the assignment, but each piece is just a few lines of code. You should expect to write fewer than 10 lines of code for each TODO.
Tests: to verify the correctness of your solutions, you can run the tests at the very end of the notebook.
Detailed instructions for each TODO are given in the notebooks.
597P students must design and train their own neural network for the MNIST dataset. You will be given an example network; your aim is to improve its accuracy while staying under the specified parameter limit. See the MNIST notebook for skeleton code and further instructions.
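To illustrate the kind of design loop involved, here is a hypothetical small CNN for 28x28 MNIST images together with a parameter counter for checking against the limit. The layer sizes and the counting helper are our own illustration, not the skeleton code; use the limit specified in the MNIST notebook.

```python
import torch
import torch.nn as nn

# A hypothetical starting point for the MNIST task: a small two-conv CNN.
class SmallMNISTNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # 14x14 -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def count_parameters(model):
    # Total number of trainable parameters, for checking against the limit.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

net = SmallMNISTNet()
out = net(torch.zeros(1, 1, 28, 28))  # one MNIST-sized dummy input
print(out.shape, count_parameters(net))
```

Counting parameters like this after every architecture change makes it easy to trade capacity between the convolutional and fully connected layers while staying under the budget.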
This section contains images to illustrate what kinds of qualitative results we expect. If your images do not match these perfectly, do not panic. If your code passes the tests at the bottom of the notebook, it is considered correct. In many cases, better images can be achieved by simply training for more iterations.
Saliency: we expect pixels related to the class to have higher values. Left: input image. Right: saliency map.
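The usual recipe for such a saliency map is the gradient of the correct-class score with respect to the input image, taken elementwise-absolute and maxed over color channels. A minimal sketch, using a tiny stand-in model rather than AlexNet so it is self-contained:

```python
import torch
import torch.nn as nn

def compute_saliency(model, X, y):
    # Gradient of the correct class score w.r.t. the input pixels.
    model.eval()
    X = X.clone().requires_grad_(True)
    scores = model(X)                                 # (N, num_classes)
    correct = scores.gather(1, y.view(-1, 1)).sum()   # sum of correct scores
    correct.backward()                                # populates X.grad
    # Absolute value, then max over the channel dimension -> (N, H, W).
    return X.grad.abs().max(dim=1).values

# Tiny stand-in "classifier" so the sketch runs end to end.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
X = torch.randn(2, 3, 8, 8)
y = torch.tensor([0, 3])
sal = compute_saliency(model, X, y)
print(sal.shape)  # torch.Size([2, 8, 8])
```

Summing the correct-class scores over the batch is a common trick: each image's score depends only on its own pixels, so one backward pass yields all the per-image gradients at once.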
Fooling image
These images look nearly identical, yet AlexNet classifies the middle image as "snail". If you look really closely you can notice some tiny visual differences. The right image shows the difference magnified by 5x (with 0 re-centered at gray).
Class visualization
AlexNet classifies each of these images as belonging 100% to a different class. If you run the optimization for longer or adjust the hyperparameters, you may see more salient results.
Many classes don't give very good results; here we show some of the better classes.
| strawberry | throne | mushroom |
| tarantula | flamingo | king penguin |
| goblet | sax | llama |
| cloak | moped | indigo bunting |
| bulbul | squirrel monkey | cock |
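Class visualizations like these are typically produced by gradient ascent: start from random noise and repeatedly push the image toward a higher target-class score, with an L2 regularizer on the image. A minimal sketch, with illustrative hyperparameters and a stand-in linear model rather than the assignment's actual settings:

```python
import torch
import torch.nn as nn

def class_visualization(model, target_y, num_iters=100, lr=1.0, l2_reg=1e-3,
                        shape=(1, 3, 64, 64)):
    # Start from random noise and ascend the regularized class score.
    img = torch.randn(*shape, requires_grad=True)
    for _ in range(num_iters):
        score = model(img)[0, target_y] - l2_reg * (img ** 2).sum()
        score.backward()
        with torch.no_grad():
            img += lr * img.grad   # gradient *ascent* step
            img.grad.zero_()
    return img.detach()

# Tiny stand-in "classifier" so the sketch runs end to end.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
img = class_visualization(model, target_y=5, num_iters=10)
print(img.shape)  # torch.Size([1, 3, 64, 64])
```

Without the L2 term the optimizer tends to drive pixel values to extremes; richer regularizers (periodic blurring, jitter) are what produce the cleaner images shown above.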
Feature inversion (597P Only)
Note that we could probably obtain higher-quality reconstructions by running the optimization for longer or adding a better regularizer. To keep things simple, your images only need to be mostly converged.
| original | conv1 | conv2 |
| conv3 | conv4 | conv5 |
| fc6 | fc7 | fc8 |
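Feature inversion can be sketched as optimizing a random image so that its features at a chosen layer match the target image's features, plus a simple L2 regularizer on the image. The `extract_features` callable and all hyperparameters below are illustrative stand-ins; the notebook specifies the actual layers and regularization.

```python
import torch

def invert_features(extract_features, target_feats, num_iters=200, lr=0.1,
                    l2_reg=1e-4, shape=(1, 3, 64, 64)):
    # Optimize the image (not the network) to reproduce the target features.
    img = torch.randn(*shape, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(num_iters):
        opt.zero_grad()
        feats = extract_features(img)
        loss = ((feats - target_feats) ** 2).sum() + l2_reg * (img ** 2).sum()
        loss.backward()
        opt.step()
    return img.detach()

# Stand-in feature extractor: a single fixed conv layer.
conv = torch.nn.Conv2d(3, 4, kernel_size=3)
target = conv(torch.randn(1, 3, 64, 64)).detach()  # detach: target is fixed
img = invert_features(conv, target, num_iters=10)
print(img.shape)  # torch.Size([1, 3, 64, 64])
```

Detaching the target features matters: they are a fixed optimization target, not something gradients should flow back through.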