Despite some likely flaws in our procedure, I am finishing up the last few experiments this week and will begin analyzing the data. I wanted to go over how we do calibration and analysis.
As a quick refresher, our data come from the movement of a magnet above the eye, which moves with the eye. A sensor on top of the head detects the magnet's movements: as the magnet moves, the magnetic field at the sensor changes, which induces a voltage in the sensor (good old electromagnetic induction). The sensor actually contains two sensing elements, which we call channels. If you look at the Spike2 files (in the Week 2 and Week 4 posts), you'll see two traces picking up eye movements. This lets us use whichever channel is better; if one channel is terrible and doesn't pick up a proper signal from the magnet, the other can be used.
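For picking the better channel, one simple heuristic is to compare how much each trace actually varies: a channel that is picking up the magnet will swing much more than one recording mostly noise. This is just a sketch of that idea with made-up toy traces, not our actual selection procedure:

```python
import numpy as np

def pick_better_channel(ch1, ch2):
    """Return the trace with the larger variance, a rough proxy
    for which channel picked up the magnet's movement better."""
    return ch1 if np.var(ch1) >= np.var(ch2) else ch2

# Toy example: channel 1 carries a clear sinusoid, channel 2 is mostly noise.
t = np.linspace(0, 2, 2000)
ch1 = np.sin(2 * np.pi * 1.0 * t)        # strong, clean signal
ch2 = 0.05 * np.random.randn(t.size)     # weak, noisy signal

best = pick_better_channel(ch1, ch2)
print(np.array_equal(best, ch1))  # True: channel 1 is the better trace
```

In practice you would eyeball both traces in Spike2 as well, since a channel can have high variance for bad reasons (artifacts, drift) too.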
But how, one might ask, can we convert this into actual degrees of eye movement? The answer, my friends, is calibration.
At the end of each experiment, we perform a calibration: a short segment during which only the platform under the mouse moves sinusoidally (rather than both the platform and the image on top of it). During this time, we collect eye movement data from the magnet while also recording dual-angle video-oculography (a fancy phrase for two videos from two different angles). Because of the camera placement and the two views, we can determine the actual eye movement in degrees using some nice trigonometry. In the graph below, the red line traces the position of the center of the pupil over time. As the eye moves, the center of the pupil moves, so plotting it over time gives a nice red sinusoid (with some breaks here and there).
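To give a flavor of the trigonometry, here is a minimal sketch of the last step: once the two camera views have yielded an estimate of the radius about which the pupil center rotates (call it `Rp`; the value below is made up), the eye's angular position follows from an arcsine of the pupil-center displacement. This is an illustration of the geometry, not our exact analysis pipeline:

```python
import numpy as np

# Hypothetical rotation radius of the pupil center about the eye's
# center, as estimated from the two camera angles (made-up value).
Rp = 1.2  # mm

# Horizontal pupil-center displacement from rest, same units as Rp.
x = np.array([0.0, 0.3, 0.6])  # mm

# The displacement is the chord/opposite side of a right triangle with
# hypotenuse Rp, so the rotation angle is arcsin(x / Rp).
theta_deg = np.degrees(np.arcsin(x / Rp))
print(theta_deg)  # a 0.6 mm displacement on a 1.2 mm radius is exactly 30 degrees
```

The real calibration also has to locate the eye's rotation center from the two views, which is where the dual angles earn their keep.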
Because the eye movements are sinusoidal, the signal we record is sinusoidal (see the Spike2 files in the Week 2 and Week 4 posts). The video analysis likewise shows the eye position tracing out a sine wave. Since we want to map the signal's sine wave onto the sine wave of actual eye movements (as measured from the two videos), all we need is a scaling factor: a number by which we multiply the signal to get the actual eye movement in degrees.
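The scaling factor is just the ratio of the two sine waves' amplitudes, which you can get with a least-squares fit of one trace against the other. A minimal sketch, assuming both traces are clean, phase-matched sinusoids at the platform frequency (all numbers here are made up):

```python
import numpy as np

fs = 1000                    # sample rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)

# Made-up traces: sensor output in volts, video-derived eye position in degrees.
signal_v = 0.8 * np.sin(2 * np.pi * 1.0 * t)
eye_deg = 12.0 * np.sin(2 * np.pi * 1.0 * t)

# Least-squares fit of eye_deg ~= k * signal_v gives the scaling
# factor k in degrees per volt.
k = np.dot(signal_v, eye_deg) / np.dot(signal_v, signal_v)
print(k)  # 15.0 degrees per volt (= 12.0 / 0.8)
```

Multiplying the whole experiment's signal trace by `k` then converts it into actual degrees of eye movement.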
Hope this was clear, but if it wasn’t, please feel free to ask questions below!