Monday, April 29, 2019 at 11:00am to 12:00pm
Cornell Tech, Bloomberg Center, Room 165, 2 West Loop Road, New York, NY 10044
"Deeply-Sparse Signal Representations, Artificial Neural Networks and Hierarchical Processing in the Brain"
Two important problems in neuroscience are to understand (1) how the brain represents sensory signals hierarchically and (2) how populations of neurons encode stimuli, and how this encoding relates to behavior. I have developed theoretical and computational tools to answer these questions and, in collaboration with experimental colleagues, have applied them to elucidate the circuit mechanisms behind the observational learning of fear in rodents.

My talk will focus on the tools I have developed to answer the first question, for two reasons. First, they provide theoretical answers regarding the complexity of learning deep neural networks. Second, the framework behind these tools has implications for the principles of hierarchical processing in the brain. I will show a strong parallel between deep neural network architectures and sparse recovery and estimation: a deep neural network architecture with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models whose outputs, except for the last element in the cascade, are sparse and unobservable. I have shown that if the measurement matrices in the cascaded sparse coding model (a) satisfy the restricted isometry property (RIP) and (b) all have sparse columns except for the last, they can be recovered with high probability, in the absence of noise, using a sequential alternating optimization algorithm. The method of choice in deep learning for this problem is to train a deep auto-encoder. My main result states that the complexity of learning this deeply-sparse coding model is given by the maximum, across layers, of the product of the number of active neurons (sparsity) and the embedding dimension of the sparse vector. I will demonstrate the usefulness of the relationship between sparse coding and auto-encoders through an architecture that performs convolutional dictionary learning, applied to spike sorting, a popular source-separation problem in computational neuroscience.
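The parallel between sparse coding and ReLU networks can be illustrated with a minimal NumPy sketch: each iteration of nonnegative ISTA (iterative soft-thresholding) for the sparse coding model x ≈ Dz, z ≥ 0, is an affine map followed by a ReLU, i.e. exactly one layer of a weight-tied ReLU network. All dimensions, parameters, and variable names below are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 20-dim signal, 50-atom dictionary, 3 active atoms.
n, m, k = 20, 50, 3
D = rng.normal(size=(n, m)) / np.sqrt(n)  # dictionary, columns roughly unit-norm

# Sparse nonnegative code and the (noise-free) observed signal.
z_true = np.zeros(m)
z_true[rng.choice(m, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
x = D @ z_true

def relu(v):
    return np.maximum(v, 0.0)

# Nonnegative ISTA: z <- ReLU(z + eta * D^T (x - D z) - eta * lam).
# Unrolling T iterations yields a T-layer ReLU network with tied weights.
lam = 0.1                                  # sparsity penalty (illustrative)
eta = 0.5 / np.linalg.norm(D, 2) ** 2      # step size from the spectral norm
z = np.zeros(m)
for _ in range(200):
    z = relu(z + eta * (D.T @ (x - D @ z)) - eta * lam)
```

After a few hundred unrolled "layers," `z` is nonnegative, sparse (the soft threshold zeroes most coordinates), and reconstructs `x` up to the small bias introduced by the penalty `lam`.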
Bio: Demba Ba received the B.Sc. degree in electrical engineering from the University of Maryland, College Park, MD, USA, in 2004, and the M.Sci. and Ph.D. degrees in electrical engineering and computer science, with a minor in mathematics, from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 2006 and 2011, respectively. In 2006 and 2009, he was a Summer Research Intern with the Communication and Collaboration Systems Group, Microsoft Research, Redmond, WA, USA. From 2011 to 2014, he was a Postdoctoral Associate with the MIT/Harvard Neuroscience Statistics Research Laboratory, where he developed theory and efficient algorithms to assess synchrony among large assemblies of neurons. He is currently an Assistant Professor of electrical engineering and bioengineering at Harvard University, where he directs the CRISP group. His research interests lie at the intersection of high-dimensional statistics, optimization, and dynamic modeling, with applications to neuroscience and multimedia signal processing. Recently, he has taken a keen interest in the connection between neural networks, sparse signal processing, and hierarchical representations of sensory signals in the brain, as well as the implications of this connection for the design of data-adaptive digital signal processing hardware. In 2016, he received a Research Fellowship in Neuroscience from the Alfred P. Sloan Foundation.