I’ve pretty much finished with the Kaggle data set I was using. A few of the algorithms were previously underperforming, but after testing them with different parameters I eventually found settings that raised the classification accuracy to fairly good levels, around 70–80% correctly classified so far. The algorithms are also now being run on all of the data points, so none are being left out.
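The parameter search described above can be sketched roughly as follows. This is a minimal illustration using scikit-learn's `GridSearchCV` on a synthetic stand-in data set, since the actual Kaggle data and algorithms aren't named here; the random-forest model and parameter grid are hypothetical examples, not the ones actually used.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the Kaggle data (the real set isn't specified here).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hypothetical parameter grid; in practice each algorithm gets its own grid.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 5, 10]},
    cv=5,  # 5-fold cross-validation so every data point is used
)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)
print("Held-out accuracy:", grid.score(X_test, y_test))
```

Cross-validating inside the search also guarantees that every training point contributes to the parameter choice, which matches the goal of not leaving any data points out.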
There are a few more data sets I will move on to next; an important one is the UCI Student Performance Data Set. I hope to obtain the final results soon.