Hippocampal data
Gulli et al., 2020, Nature Neuroscience
Roberto A. Gulli, Lyndon R. Duong, Benjamin W. Corrigan, Guillaume Doucet, Sylvain Williams, Stefano Fusi, & Julio C. Martinez-Trujillo
DOI: 10.1038/s41593-019-0548-3
Download the full dataset on GitHub.
Context-dependent representations of objects and space in the primate hippocampus during virtual navigation
Overview
The GitHub repository hosts the data that were analyzed in Gulli et al. (2020).
Spatial position and firing rate vectors
Figs. 1, 2, & 3 all include analyses that start with subjects’ trial-to-trial position in the X-Maze during each task.
These data can be downloaded from the Position and Spike Rasters directory.
In Fig. 1d, six example neurons are shown with spikes overlaid on trajectories through the virtual reality environment. Plots akin to these can be quickly generated for any neuron using the data provided here.
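As a minimal sketch of such a plot: the snippet below overlays spike locations on a session's trajectory. The file names and array layout (per-session x/y position samples plus a spike-count vector on the same time base) are assumptions for illustration, not the repository's actual format.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed layout (hypothetical file names): position is (n_timebins, 2)
# x/y coordinates in the maze; spikes is (n_timebins,) spike counts for
# one neuron, aligned to the same time bins.
position = np.load("position.npy")
spikes = np.load("spike_raster.npy")

fig, ax = plt.subplots(figsize=(5, 5))
# Trajectory through the virtual environment.
ax.plot(position[:, 0], position[:, 1], color="0.7", lw=0.5)
# Overlay the positions at which the neuron fired.
fired = spikes > 0
ax.scatter(position[fired, 0], position[fired, 1], s=4, c="crimson")
ax.set_xlabel("x (maze units)")
ax.set_ylabel("y (maze units)")
ax.set_aspect("equal")
plt.show()
```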
Spatial information content and spatial firing fields
Fig. 1e reports spatial information content for the population of hippocampal neurons recorded in each task. For these analyses, firing rate for each neuron was computed in 104 × 104 unit pixels.
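A pixellated rate map of this kind is an occupancy-normalized 2D histogram. The sketch below assumes the same position/spike layout as above; the function name and `extent` argument are illustrative.

```python
import numpy as np

PIXEL = 104  # pixel edge length in maze units (as in the Fig. 1e analyses)

def rate_map(position, spikes, dt, extent):
    """Occupancy-normalized firing rate map on a grid of PIXEL-unit pixels.

    position: (n, 2) x/y samples; spikes: (n,) spike counts per time bin;
    dt: time-bin duration in seconds; extent: (xmin, xmax, ymin, ymax).
    """
    xmin, xmax, ymin, ymax = extent
    bins = [np.arange(xmin, xmax + PIXEL, PIXEL),
            np.arange(ymin, ymax + PIXEL, PIXEL)]
    # Time spent in each pixel (occupancy), in seconds ...
    occ, _, _ = np.histogram2d(position[:, 0], position[:, 1], bins=bins)
    occ *= dt
    # ... and total spikes emitted in each pixel.
    spk, _, _ = np.histogram2d(position[:, 0], position[:, 1], bins=bins,
                               weights=spikes)
    with np.errstate(divide="ignore", invalid="ignore"):
        rm = np.where(occ > 0, spk / occ, np.nan)  # NaN for unvisited pixels
    return rm, occ
```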
The specificity of each neuron’s spatial response map was quantified using spatial information content. Each neuron’s information content (\(I\); in bits) is defined as
\[I=\sum_{i=1}^{L} P_i \frac{\lambda_i}{\bar{\lambda}}\log_2 \frac{\lambda_i}{\bar{\lambda}}\]where \(L\) is the total number of pixels, \(P_i\) is the proportion of time spent in the \(i^{th}\) pixel, \(\lambda_i\) is the average firing rate in the \(i^{th}\) pixel, and \(\bar{\lambda} = \sum_{i=1}^{L} P_i \lambda_i\) is the overall mean firing rate.
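A direct translation of this formula, reusing the `rate_map` sketch above; the handling of empty pixels and the \(0 \log_2 0 = 0\) convention are assumptions of this sketch.

```python
import numpy as np

def spatial_information(rate_map_vals, occupancy):
    """Spatial information content I (in bits) of one neuron's rate map."""
    valid = (occupancy > 0) & np.isfinite(rate_map_vals)
    p = occupancy[valid] / occupancy[valid].sum()  # P_i: proportion of time
    lam = rate_map_vals[valid]                     # lambda_i: rate per pixel
    lam_bar = np.sum(p * lam)                      # overall mean firing rate
    if lam_bar == 0:
        return 0.0                                 # silent neuron: no information
    ratio = lam / lam_bar
    nz = ratio > 0                                 # 0 * log2(0) taken as 0
    return np.sum(p[nz] * ratio[nz] * np.log2(ratio[nz]))
```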
Note that the manuscript reports normalized spatial information content values, obtained by subtracting from each neuron's observed value the mean of its permutation-derived null distribution.
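One common way to build such a null, shown here purely as an illustration, is to circularly shift the spike train relative to the trajectory and recompute the information content; the exact permutation scheme used in the paper is described in the Methods. This reuses `rate_map` and `spatial_information` from the sketches above.

```python
import numpy as np

def normalized_si(position, spikes, dt, extent, n_perm=1000, seed=0):
    """Observed spatial information minus the mean of a circular-shift null."""
    rng = np.random.default_rng(seed)
    rm, occ = rate_map(position, spikes, dt, extent)
    observed = spatial_information(rm, occ)
    null = np.empty(n_perm)
    for k in range(n_perm):
        # Circularly shift spikes relative to position, then recompute I.
        shifted = np.roll(spikes, rng.integers(1, len(spikes)))
        rm_k, occ_k = rate_map(position, shifted, dt, extent)
        null[k] = spatial_information(rm_k, occ_k)
    return observed - null.mean()
```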
Fig. 2 shows the number of neurons with a significantly elevated firing rate in each pixel of the maze in each task, using the same pixel size. Pixellated firing rate maps are shown for six example neurons in Extended Data Figure 4. The analyses in Fig. 2 are replicated using 208 × 208 unit pixels in Extended Data Figure 5.
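A per-pixel significance test in the same spirit (illustrative only; the paper's Methods define the actual test) compares each pixel's observed rate with the corresponding pixels of shuffled rate maps. Summing the resulting Boolean maps across neurons gives a Fig. 2-style count.

```python
import numpy as np

def elevated_pixels(position, spikes, dt, extent, n_perm=1000,
                    alpha=0.05, seed=0):
    """Boolean map: pixels where the observed rate exceeds the
    (1 - alpha) quantile of a circular-shift null, pixel by pixel."""
    rng = np.random.default_rng(seed)
    rm, _ = rate_map(position, spikes, dt, extent)
    null = np.empty((n_perm,) + rm.shape)
    for k in range(n_perm):
        shifted = np.roll(spikes, rng.integers(1, len(spikes)))
        null[k], _ = rate_map(position, shifted, dt, extent)
    thresh = np.nanquantile(null, 1 - alpha, axis=0)
    with np.errstate(invalid="ignore"):
        return rm > thresh  # NaN (unvisited) pixels compare as False
```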
Spatial decoding analyses
Fig. 3 shows spatial decoding accuracy in both an allocentric and a direction-dependent reference frame, using hippocampal data collected while the monkeys completed both tasks in the X-Maze. For these analyses, firing rates were computed from the position and spike rasters in either 9 or 5 distinct maze areas (corresponding to the allocentric and directional reference frames, respectively). Please see the Methods for more detail.
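As a minimal sketch of such a decoder (the classifier choice, file names, and cross-validation scheme here are assumptions; the Methods describe the actual procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: X is (n_samples, n_neurons) firing-rate vectors,
# y is (n_samples,) labels for the 9 allocentric (or 5 directional)
# maze areas.
X = np.load("firing_rates.npy")  # placeholder file name
y = np.load("maze_areas.npy")    # placeholder file name

# Cross-validated classification of maze area from population activity.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```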
Associative memory task trial type encoding and decoding analyses
Figs. 4, 5 & 6 use regression and classification to examine sensory and mnemonic coding for objects and context in hippocampal neurons recorded during the associative memory task.
All data for these analyses can be accessed in the Associative Memory Trial Data directory.
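For a flavor of the regression side of these analyses, the sketch below fits an ordinary least-squares model of one neuron's epoch firing rate on dummy-coded trial variables. The file names and design-matrix layout are assumptions; the paper's Methods specify the actual models.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: rate is (n_trials,) one neuron's firing rate in a
# task epoch; design is (n_trials, k) dummy-coded context and object
# regressors, one row per trial.
rate = np.load("neuron_epoch_rate.npy")  # placeholder file name
design = np.load("trial_design.npy")     # placeholder file name

# Per-neuron linear regression of firing rate on trial variables; the
# fitted coefficients index context and object coding strength.
model = sm.OLS(rate, sm.add_constant(design)).fit()
print(model.summary())
```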
More details
For more information about these data or the code, for questions or comments regarding the analyses, or for a PDF of the manuscript and supplementary materials, feel free to email me.
This page by Roberto Gulli is licensed under a Creative Commons Attribution 4.0 International License.