Over the course of this week, I worked on classifying an interesting handwriting sample with my neural network. The dataset I’ve been using, MNIST, contains thousands of images that are each 28×28 pixels, so the pictures have extremely low resolution. The images I got from the interesting handwriting sample from Ms. Bhattacharya were of much higher resolution, so I used an online photo editor to cut them down to 28×28 pixels each. I spent the majority of this week experimenting with numerous R packages to determine the best way to import the pictures into my program. After hours of trial and error, my mentor and I determined that OpenImageR was our best option. In Week 7, I’ll be using that package to put the image through my neural network and, hopefully, classify it.
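To give a sense of what that import step might look like, here is a minimal sketch using OpenImageR. The file name `sample_digit.png` is a placeholder for the actual scan, and the exact preprocessing (grayscale conversion, interpolation method) is an assumption about how the pipeline could work rather than the final version.

```r
library(OpenImageR)

# Hypothetical file name; in practice this would be the scanned sample
img <- readImage("sample_digit.png")   # pixel values scaled to [0, 1]

# If the scan is RGB, collapse it to a single grayscale channel
if (length(dim(img)) == 3) {
  img <- rgb_2gray(img)
}

# Shrink to the 28x28 size the MNIST-trained network expects
img <- resizeImage(img, width = 28, height = 28, method = "bilinear")

# Flatten to a 784-element vector, one entry per pixel
x <- as.vector(t(img))
```

The flattening step matters because MNIST-style networks usually take each image as one long vector of 784 pixel intensities rather than a 28×28 matrix.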
In addition to attempting to classify the sample handwriting, I spent some time this week making the images I produced last week more interpretable. The pixelated areas of those images show the regions the computer looks at to determine which number it is. However, the computer looks at both empty and filled space. For example, in a 3, the computer may look at the two curves themselves, but it may also look at the empty space between the two semicircles. The pixelated regions that respond to empty space correspond to negative beta coefficients, so to make the images more interpretable, I removed all the negative betas and kept only the positive ones. This ensures that the highlighted regions of the image correspond to actual parts of the number.
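The clipping step above can be sketched in a couple of lines of base R. The beta values here are made-up toy numbers just to illustrate the idea, not coefficients from my actual model.

```r
# Hypothetical pixel weights: negative betas respond to empty space,
# positive betas respond to filled (inked) parts of the digit
beta <- c(-0.8, 0.3, 0.0, 1.2, -0.1, 0.5)

# Clip every negative beta to zero, keeping only the positive ones
beta_pos <- pmax(beta, 0)
```

After this step, only pixels with positive weight light up, so the visualization highlights actual strokes of the number rather than the blank space around them.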
Thanks for reading this week’s update!