Week 11: New VOR Data
This week began on Wednesday for me; I started running experiments that day and have continued through the weekend. Each one still takes about an hour and 40 minutes to run. I will be completely done by May 4, the end of this week (which, I guess, is the last week of Senior Projects), so I’ve just barely made it in some sense.
Either way, I’ll be analyzing the data as it comes in to speed up the process and be ready to draw a conclusion by Monday. There are still a lot of confounding variables to consider, such as the time of day each experiment was run, the gap between consecutive experiments, etc. However, the order of the experiments doesn’t need to be accounted for, because I randomized it earlier (see Week 5)!
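For the curious, that kind of order randomization amounts to just a few lines. Here is a minimal sketch in Python; the experiment names and the seed are made up for illustration and aren't the actual Week 5 schedule:

```python
import random

# Hypothetical list of experiment conditions (names are illustrative only)
experiments = ["mouse_A_treatment", "mouse_A_control",
               "mouse_B_treatment", "mouse_B_control"]

random.seed(5)            # fix the seed so the schedule is reproducible
schedule = experiments.copy()
random.shuffle(schedule)  # run order is now independent of condition

print(schedule)
```

Because the order is fixed before any data come in, it can’t correlate with how the experiments turn out.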
Today, we can talk about the documentation that each experiment receives. I make a few different notes on each experiment in my lab notebook.
- First, I judge the signal of the sensor, choosing between “good,” “OK,” and “bad.” This refers to how well the sensor is able to record eye movements: can I infer the mouse’s eye movements from the sensor recordings? In the case of sinusoidal eye movements, better signal means a greater change in the sensor’s voltage output for the same amount of eye rotation. Each sensor typically has two channels (if one goes bad, the other serves as a backup so that we can still get data), so I make sure to record which channel (Channel 1 or 2) gives the better signal from the eye movements (i.e. has the larger amplitude).
- Second, I judge the quality of the eye movements themselves. How good is the eye at moving sinusoidally? This is importantly different from the first note, because it judges the ability of the eye muscles, whereas the first note judges the quality of the sensor. It can be hard to separate the two, but if I can see saccades (erratic eye movements) very well yet sinusoids not so well, it is safe to assume the eye muscles are bad at sinusoids, rather than the sensor being bad. I also record how the eye movement amplitudes changed over the experiment (any change in amplitude is almost certainly due to the mouse learning, since the sensor’s quality generally remains constant over the experiment).
- Third, I record how well the video was able to track pupil and reflection position. If the cornea of the eye is uneven, it can disrupt the movement of the reflection or cause the pupil’s position to be misread.
- Fourth, I write down how well the cameras were able to record the mouse’s eye position during calibration. Calibration is by far the most delicate part of the experiment — this is where the most things go wrong.
- Finally, I keep track of the “metadata” of the experiment — what mouse was run, what time the experiment ended, which treatment the mouse received, what folder the experiment was saved in, and so on. There’s always a “Notes” column in case of emergency. 🙂
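Taken together, each notebook entry is really a small structured record. Here’s a minimal sketch of what one could look like in Python; the field names and values are my own invention for illustration, not the actual notebook layout:

```python
from dataclasses import dataclass

@dataclass
class ExperimentNotes:
    """One lab-notebook entry per experiment (illustrative fields only)."""
    mouse_id: str
    end_time: str              # when the experiment finished
    treatment: str             # which treatment the mouse received
    data_folder: str           # where the experiment was saved
    sensor_signal: str         # "good", "OK", or "bad"
    best_channel: int          # 1 or 2, whichever has the larger amplitude
    eye_movement_quality: str  # how sinusoidal the eye movements were
    video_quality: str         # pupil/reflection tracking quality
    calibration_quality: str   # the most delicate step
    notes: str = ""            # the emergency column

entry = ExperimentNotes(
    mouse_id="M07", end_time="16:10", treatment="control",
    data_folder="exp_M07_control", sensor_signal="good", best_channel=2,
    eye_movement_quality="OK", video_quality="good",
    calibration_quality="good",
)
```

Keeping the entries uniform like this is what makes it possible to look things up quickly later, during analysis.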
These notes really come in handy during the data analysis portion of the experiment, because they help me understand why data may look weird. Oftentimes, there are very few sections of good data if the sensor had bad signal. This way, I can justify throwing out certain experiments without biasing the conclusions I draw from the data.
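The key point is that the exclusion decision is based on notes written down *before* the analysis, so it can’t be influenced by how the results look. As a hypothetical sketch (the dicts and ratings below are made up):

```python
# Each experiment carries the quality rating recorded in the notebook.
# These entries are illustrative, not real data.
experiments = [
    {"id": "exp01", "sensor_signal": "good"},
    {"id": "exp02", "sensor_signal": "bad"},   # sensor failed: exclude
    {"id": "exp03", "sensor_signal": "OK"},
]

# Keep only experiments whose sensor gave usable signal; because the
# rating predates the analysis, exclusion can't bias the conclusions.
usable = [e for e in experiments if e["sensor_signal"] != "bad"]

print([e["id"] for e in usable])  # → ['exp01', 'exp03']
```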