Thanks to getting glmnet working last week, I was able to spend Week 5 improving the accuracy of my model. The model as a whole has an accuracy of around 96%, which is very high. However, I bumped into an issue when calculating the accuracy of the output from the glmnet function – it was around 75%, which is pretty low.
The glmnet function fits a sparse model on the output of the first layer of my neural network (there are 10 layers in total) and tells us how accurately the first layer's features alone can predict the digit. Naturally, a classification made by the whole network will be more accurate than one based on a single layer. I increased the accuracy of the first layer's predictions by tweaking glmnet's lambda parameter (the sparsity control parameter).
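To make the idea concrete, here's a minimal sketch of this kind of "layer probe." My actual setup uses glmnet (an R package); this version uses Python with scikit-learn's L1-penalized logistic regression, which is analogous to glmnet's lasso. The dataset, the one-hidden-layer network, and the hidden size of 32 are all stand-in assumptions, not my real 10-layer model.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# Small 8x8 digit images as a stand-in dataset
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a tiny network (my real model has 10 layers; this sketch uses one)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# Compute the first layer's activations by hand (ReLU of the first affine map)
def first_layer(X):
    return np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Fit an L1-penalized (lasso-style) logistic regression on those activations,
# the scikit-learn analogue of glmnet; C is roughly the inverse of lambda
probe = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
probe.fit(first_layer(X_train), y_train)
acc = probe.score(first_layer(X_test), y_test)
print(f"first-layer probe accuracy: {acc:.3f}")
```

The probe's score is what I mean by "the accuracy of the first layer's predictions": it measures how much digit information the first layer's features carry on their own.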
The optimal lambda was around 0.00009, which is extremely low and would produce images that, while accurate, aren't interpretable. Interpretability is important because it helps us, as users, better understand how the neural network makes its decisions. To strike a balance between accuracy and interpretability, I found that a lambda of 0.02 works best.
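The tradeoff behind that choice can be sketched with a small sweep. Again, this is a hedged Python/scikit-learn analogue of glmnet's lambda path, run on a stand-in dataset: scikit-learn's C parameter acts like 1/lambda, so a small C corresponds to a large lambda (a sparser, more interpretable model) and a large C to a tiny lambda (denser but often more accurate).

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Sweep the penalty strength; small C = strong sparsity penalty (large lambda)
results = {}
for C in (0.01, 0.1, 1.0, 100.0):
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X_train, y_train)
    acc = clf.score(X_test, y_test)
    nnz = int(np.count_nonzero(clf.coef_))  # surviving (nonzero) coefficients
    results[C] = (acc, nnz)
    print(f"C={C:>6}: accuracy={acc:.3f}, nonzero coefficients={nnz}")
```

The sparser end of the sweep keeps far fewer coefficients, which is exactly the interpretability I'm trading a little accuracy for.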
Here are some of the images I produced with that lambda.
In the images above, the pixelated regions on the left highlight the most significant areas of each number (the areas of the image the computer relies on to decide which digit it is looking at). The images on the right are samples of what those handwritten digits look like.
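Those "significant area" pictures come from mapping the sparse model's coefficients back onto the pixel grid. Here's a minimal sketch of the idea in Python with scikit-learn (my real images come from the glmnet fit on my own data; the dataset and penalty value below are assumptions for illustration).

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)  # 8x8 images flattened to 64 pixels

# A fairly strong L1 penalty so only the most useful pixels survive
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.2)
clf.fit(X, y)

# Each class's coefficient vector maps back onto the 8x8 pixel grid;
# the nonzero entries are the pixels the sparse model uses for that digit
coef_img = np.abs(clf.coef_[0]).reshape(8, 8)  # coefficients for digit "0"
significant = coef_img > 0
print(f"pixels used for digit 0: {int(significant.sum())} of 64")
```

Plotting `coef_img` as a heatmap gives an image like the ones on the left: bright wherever the model looks, blank wherever the penalty zeroed the pixel out.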
Thanks for reading and see you next week!