Abstract
Studies on the neural correlates of navigation in 3D environments are
plagued by several unresolved issues. For example,
experimental studies show markedly different place cell responses in
rats and bats, both navigating in 3D environments. To understand this
divergence, we propose a deep autoencoder network to model the place
cells and grid cells in a simulated agent navigating in a 3D
environment. We also explore the possibility that Head Direction (HD)
tuning plays a vital role in determining the isotropic or anisotropic
nature of the place fields observed in different species. The input
layer of the autoencoder network is the HD layer, which encodes the
agent’s HD in terms of azimuth (θ) and pitch (ϕ) angles. The output of
this layer is given as input to the Path Integration (PI) layer, which
integrates velocity information into the phase of oscillating neural
activity. The output of the PI layer is modulated and passed through a
low-pass filter to make it purely a function of space before being
passed to the autoencoder. The bottleneck layer of the autoencoder encodes
the spatial cell-like responses. Both grid cell-like and place cell-like
responses are observed. The proposed model is validated against two
experimental studies conducted in two 3D environments. This model paves the way
for a holistic approach to using deep neural networks to model spatial
cells in 3D navigation.