Using generative models of naturalistic scenes to sample neural
population tuning manifolds
Abstract
Investigations of sensory coding in the visual system have typically
relied on either simple, artificial visual stimuli or natural images.
Simple stimuli such as Gabor patches have been effective for
characterizing single neurons in early visual areas such as V1, but they
seldom drive large responses from mid-level visual neurons or from
neural populations with diverse tuning. Many “naturalistic” image models
have recently been developed that bridge the gap between overly simple
stimuli and experimentally infeasible natural images.
These stimuli can vary along a large number of feature dimensions,
introducing new challenges in mapping those features to neural activity.
This “curse of dimensionality” is exacerbated when the neural responses
are themselves high-dimensional, as when recording from neural
populations with implanted multielectrode arrays. We propose a method
that searches high-dimensional stimulus spaces to characterize neural
population tuning manifolds in a closed-loop experimental design. In
each block, stimuli were generated with a deep neural network, using
neural responses to stimuli from previous blocks to predict the
relationship between the image model's latent space and neural activity.
We found that the latent variables of the deep generative image model
exhibited stronger linear relationships with neural activity than
several alternative forms of image compression. This result reinforces
the potential of deep generative image models for efficiently
characterizing the high-dimensional tuning manifolds of visual neural
populations.
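
A minimal sketch of the closed-loop design described above is given
below. It assumes hypothetical stand-ins for pieces the abstract does
not specify: decode_latents (the deep generative image model),
record_responses (the multielectrode recording pipeline), and an
SVD-based rule for choosing the next block's latents. Population
responses are simulated as a noisy linear function of the latents purely
so the loop runs end to end; none of these choices are the paper's
actual implementation.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    LATENT_DIM, N_NEURONS, BLOCK_SIZE, N_BLOCKS = 64, 32, 50, 4

    # Ground-truth latent-to-response map, used only by the simulation.
    W_TRUE = 0.2 * rng.standard_normal((LATENT_DIM, N_NEURONS))

    def decode_latents(z):
        # Hypothetical: a trained deep generative model would map latent
        # vectors z to naturalistic images here.
        return z  # placeholder; the simulation never inspects the images

    def record_responses(z):
        # Hypothetical: present the decoded images and record population
        # spike counts; simulated as noisy linear tuning to the latents.
        _ = decode_latents(z)
        return z @ W_TRUE + rng.standard_normal((len(z), N_NEURONS))

    # Block 1: sample latents broadly, with no model of the population yet.
    z = rng.standard_normal((BLOCK_SIZE, LATENT_DIM))
    Z, R = z, record_responses(z)

    for block in range(2, N_BLOCKS + 1):
        # Fit a regularized linear map from latents to responses using
        # all stimuli shown so far.
        model = Ridge(alpha=1.0).fit(Z, R)
        # Concentrate the next block along the latent directions the
        # fitted model predicts will modulate the population most (top
        # right singular vectors of the coefficient matrix), plus
        # exploration noise.
        _, _, vt = np.linalg.svd(model.coef_, full_matrices=False)
        top_dirs = vt[:8]
        z = (2.0 * rng.standard_normal((BLOCK_SIZE, 8))) @ top_dirs + (
            0.3 * rng.standard_normal((BLOCK_SIZE, LATENT_DIM))
        )
        r_new = record_responses(z)
        print(f"block {block}: R^2 on new block = {model.score(z, r_new):.3f}")
        Z, R = np.vstack([Z, z]), np.vstack([R, r_new])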
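
The abstract's comparison, that generator latents bear stronger linear
relationships to neural activity than alternative image compressions,
can be sketched in the same way. Here the data are simulated, and PCA on
pixels at matched dimensionality is only one plausible stand-in for the
“alternative forms of image compression”; none of these choices come
from the source.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_stim, latent_dim, n_pix, n_neurons = 400, 64, 1024, 32

    # Simulated experiment: responses are linear in the generator
    # latents, while the rendered "images" are a noisy nonlinear
    # function of them.
    Z = rng.standard_normal((n_stim, latent_dim))
    images = np.tanh(Z @ rng.standard_normal((latent_dim, n_pix)))
    images += 0.5 * rng.standard_normal(images.shape)  # pixel noise
    R = Z @ (0.2 * rng.standard_normal((latent_dim, n_neurons)))
    R += rng.standard_normal((n_stim, n_neurons))      # response noise

    # Alternative compression: PCA of pixels at matched dimensionality.
    Z_pca = PCA(n_components=latent_dim).fit_transform(images)

    for name, X in [("generator latents", Z), ("pixel PCA", Z_pca)]:
        r2 = cross_val_score(Ridge(alpha=1.0), X, R, cv=5).mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.3f}")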