Week 2: OKR Data Collection

Feb 25, 2019

Last week was very busy. Each experiment takes about 1.5 hours to run, and I had 17. So that’s 25.5 hours, not including any of my other duties in the lab. There were some pretty long days.

The experiments I ran last week were OKR experiments, testing the ability to follow objects moving in the visual field. With two exceptions, all the experiments are finished, and data analysis will begin today. Once I spend enough time cleaning up the raw data (it tends to be quite noisy), I’ll make graphs comparing the reflexes of the LTD-impaired mice to the wild type, or normal mice.
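To give a feel for what "cleaning up" a noisy trace can involve, here's a minimal Python sketch. This is not the lab's actual pipeline; the sampling rate, frequencies, and noise level are all made up for illustration. It just smooths a synthetic noisy position trace with a simple moving average:

```python
import numpy as np

# Synthetic "eye position" trace: a slow sinusoid plus measurement noise.
# All parameter values here are invented illustration numbers.
rng = np.random.default_rng(0)
fs = 1000                                  # samples per second (assumed)
t = np.arange(0, 5, 1 / fs)                # 5 seconds of data
clean = 8 * np.sin(2 * np.pi * 0.5 * t)    # underlying movement, in degrees
raw = clean + rng.normal(0, 1.5, t.size)   # add recording noise

# Smooth with a 50 ms moving average (boxcar filter).
window = int(0.05 * fs)
kernel = np.ones(window) / window
smoothed = np.convolve(raw, kernel, mode="same")

# The smoothed trace should sit much closer to the true signal than the raw one.
print(np.abs(raw - clean).mean(), np.abs(smoothed - clean).mean())
```

Real eye traces also have blinks and saccades that need to be detected and cut out, so the actual cleanup is messier than this, but the idea is the same: separate the slow tracking movement from everything layered on top of it.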

Our stimulus moves left and right, back and forth, in a sinusoidal fashion. The stimulus doesn’t change at all over the experiment, so any increase in the amount that the mice move their eyes is from their own learning. They adjust over time to follow the stimulus with their eyes. I’ll be checking whether the LTD-impaired or wild type mice learned more. There’s a video below for your convenience.
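Because the stimulus is a pure sinusoid, one common way to quantify how well the eyes follow it is to fit a sine at the stimulus frequency to both the drum trace and the eye trace, then take the ratio of their amplitudes (the "gain"). Here's a hypothetical sketch on synthetic data; this is not our actual analysis code, and every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                      # sampling rate, Hz (assumed)
f = 1.0                          # stimulus frequency, Hz (assumed)
t = np.arange(0, 10, 1 / fs)     # 10 seconds of data

# Drum moves with a 10-degree sine; the eyes track it imperfectly (6 degrees)
# and the eye recording is noisy.
drum = 10 * np.sin(2 * np.pi * f * t)
eye = 6 * np.sin(2 * np.pi * f * t) + rng.normal(0, 1, t.size)

def sine_amplitude(signal, t, f):
    """Least-squares fit of a sine/cosine pair at frequency f;
    returns the amplitude of the fitted sinusoid."""
    X = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return np.hypot(coef[0], coef[1])

gain = sine_amplitude(eye, t, f) / sine_amplitude(drum, t, f)
print(gain)  # close to 0.6 for this synthetic data
```

Learning would then show up as the gain creeping upward over the course of the session, and the comparison between LTD-impaired and wild type mice is a comparison of how much their gains change.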


The data comes in through this awesome program called Spike2, which takes in live data about the position of the drum (the black and white cylinder that moves in the video) and the chair (the platform at the bottom that the mouse sits on; yes, it can move too! More on that next week). It also collects data about the mouse's eye movements. Here's a labeled sample trace of OKR.

In case you're wondering what the last four traces show: that's velocity data. It's not incredibly important for our experiments, but the machines record it anyway, so it's good to have. If you're really interested, you can always talk to me between 2 and 5 PM on Thursdays at BASIS. I'm pretty much always in 337, 341, 343, or 344, somewhere in the Bio section of the school.

I've learned a lot about the lab environment, and I'm happy to say that our lab (the Raymond Lab) is one of the most communicative labs I've seen. Generally speaking, researchers don't talk to each other or share results, because they worry about others stealing their ideas. Where I worked in the summer of 2017, other researchers were very reluctant to share what they were studying. I'm glad to be in an environment where we can all trust each other. I've learned a lot from other lab members, and I'll be going over my data with them; they're unafraid to teach me what their results show and what that means.


5 Replies to “Week 2: OKR Data Collection”

  1. anuradhamurthy says:

    I am so happy you found the right lab environment to thrive in. Sounds like a wonderful place to grow, learn and contribute.

    You are working really hard and I am sure this will pay off in terms of some publications! Go for it.

    1. Jaydev B. says:

      Thank you so much! Fingers crossed… 🙂

  2. Harleen D. says:

    I’m glad that you love your lab! I’m excited to learn more about your data.

  3. Beryl Z. says:

This is super cool! I'd love to pop by sometime and check out the data; sounds super interesting. What is the purpose of tracking the real-time position of the chair again? Aren't you mainly focused on the eye movements?

    1. Jaydev B. says:

      We use the same rig to run both VOR and OKR. For OKR the chair doesn’t move, but for VOR it does, so really, there’s not a huge need to regulate chair position here, but it’s recorded anyway. These files aren’t really that large, so it doesn’t kill our storage.
