Week 8: Hybrid

Apr 12, 2019

Hello! This week, I worked on classifying more numbers from the interesting data sample. Since the handwritten sample I'm trying to classify is written on graph paper, I wanted to minimize distractions for the neural network so it could focus only on the number. The gridlines on the paper were blue, so to dilute them I reduced the "B" (blue) value of each pixel's RGB values. By doing this, I was able to successfully classify a couple more numbers from the interesting data sample.
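As a rough sketch of that preprocessing step (the file name, the 0.3 scaling factor, and the use of the Images.jl package are my illustrative choices here, not necessarily exactly what my code does):

```julia
using Images   # assumed package for loading and manipulating images

# Load the scanned digit (hypothetical file name).
img = load("digit_on_graph_paper.png")

# Scale down each pixel's blue channel so the blue gridlines fade,
# leaving the darker pen strokes as the dominant feature.
diluted = map(c -> RGB(red(c), green(c), 0.3 * blue(c)), img)

# Convert to grayscale before feeding the image to the network.
gray = Gray.(diluted)
```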

In addition to this, I talked through my approach for the hybrid model with my external advisor. I'll be coding in Julia (similar to MATLAB), and this code will determine whether the linear model (higher interpretability, lower accuracy) or the black-box model (lower interpretability, higher accuracy) should be used to classify a number. Essentially, if a number is easier to classify, it'll go to the linear model, and if not, it'll go to the black box. Each image will have a loss value, computed by a loss function, which will be used to decide which model classifies it: if the loss is higher than a certain threshold, the image will be sent to the black-box model, and if not, it'll go to the linear one.
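As a rough sketch, the routing rule could look something like this (the threshold value and all the function names here are placeholders to illustrate the idea, not the project's actual code):

```julia
# Illustrative threshold; the real one would be tuned on validation data.
const LOSS_THRESHOLD = 0.5

# Route one image: if the linear model's loss on it is above the threshold,
# fall back to the black-box model. The three callables are placeholders.
function hybrid_classify(image, linear_loss_fn, linear_predict_fn, blackbox_predict_fn)
    loss = linear_loss_fn(image)
    return loss > LOSS_THRESHOLD ? blackbox_predict_fn(image) : linear_predict_fn(image)
end

# Toy usage with dummy stand-ins:
dummy_loss(x) = 0.8                                # pretend this image is hard
hybrid_classify(nothing, dummy_loss, x -> "linear guess", x -> "black-box guess")
# => "black-box guess"
```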

To show that this hybrid approach can be used with all types of black-box models, and to determine which black-box model works best, I'm experimenting with three of them: random forest, XGBoost, and AdaBoost. I'll be using both the MNIST and Covtype datasets to make these determinations.
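For two of the three, here's a hedged sketch of what fitting could look like in Julia. I'm assuming the DecisionTree.jl package (which provides both random forests and AdaBoost with decision stumps), toy random data in place of MNIST/Covtype, and illustrative hyperparameters; the XGBoost model would come from the separate XGBoost.jl package, which I've left out here:

```julia
using DecisionTree   # assumed package; provides random forests and AdaBoost stumps
using Statistics     # for mean

# Toy stand-in data so the sketch runs; the real experiments use MNIST and Covtype.
features = rand(100, 4)             # 100 samples, 4 features
labels   = rand(["a", "b"], 100)    # random binary labels

# Random forest: 2 candidate features per split, 10 trees (illustrative settings).
forest = build_forest(labels, features, 2, 10)
forest_preds = apply_forest(forest, features)

# AdaBoost with decision stumps, 10 boosting rounds.
stumps, coeffs = build_adaboost_stumps(labels, features, 10)
ada_preds = apply_adaboost_stumps(stumps, coeffs, features)

println("forest training accuracy:   ", mean(forest_preds .== labels))
println("AdaBoost training accuracy: ", mean(ada_preds .== labels))
```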

Thanks for reading, and come back next week for more updates!

Best,

Arshia

2 Replies to “Week 8: Hybrid”

  1. Vidur Gupta says:

    What are the differences between the three black-box models?

    1. Arshia S. says:

      They're all strong models, and I'm testing which one is the best fit. They build and combine their predictors differently: random forest is a bagging ensemble of decision trees, while XGBoost and AdaBoost are boosting methods, so they can make different predictions on the same data.
