“Latent Statistical Structure in Large-scale Neural Data: How to Find It, and When to Believe It”
John P. Cunningham, Ph.D.
Department of Statistics
One central challenge in neuroscience is to understand how neural populations represent information and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding latent structure in these data, and (ii) methods for statistically validating that structure. First, I will discuss our machine learning research that combines latent variable modeling, deep learning, dynamical systems, and dimensionality reduction, and I will describe how we have applied those models to advance understanding of the computational structure in various neural systems, including, in particular, the primate and rodent motor cortices. Second, I will detail a problem of growing importance throughout unsupervised learning: how to understand when these analysis techniques artificially create structure, rather than recover structure that is a genuine feature of the data. I will review our recent work in this space, which uses deep neural network architectures in the style of implicit generative models, and describe our current application of these methods to a number of active debates in the neuroscience community about the triviality of certain results.