Our brains perform extremely sparse computation, which buys them great speed and energy savings. Deep neural networks can likewise be made highly sparse without significant accuracy loss, and as their size grows it is becoming imperative that we exploit sparsity to improve their efficiency. This is a challenging task because the memory systems and SIMD operations that dominate today's CPUs and GPUs do not lend themselves easily to the irregular data patterns sparsity introduces.
This talk will survey the role of sparsity in neural network computation, and the parallel algorithms and hardware features that nevertheless allow us to make effective use of it.
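To make the irregularity problem concrete, here is a minimal sketch (not from the talk itself) of a sparse matrix-vector product over the standard Compressed Sparse Row (CSR) format. The function name spmv_csr and the tiny example matrix are illustrative choices; the point is the data-dependent gather x[col_idx[j]], which is exactly the kind of access pattern that resists straightforward SIMD vectorization and cache prefetching.

```c
#include <stdio.h>

/* Sparse matrix-vector product y = A*x, with A in CSR form.
 * The indirect load x[col_idx[j]] is a data-dependent gather:
 * it is the irregular access pattern that makes sparsity hard
 * to exploit on SIMD-oriented CPUs and GPUs. */
void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
              const double *vals, const double *x, double *y) {
    for (int i = 0; i < n_rows; i++) {
        double sum = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++) {
            sum += vals[j] * x[col_idx[j]];  /* irregular gather */
        }
        y[i] = sum;
    }
}

int main(void) {
    /* 3x3 matrix with 4 nonzeros (illustrative example):
     *   [ 1 0 2 ]
     *   [ 0 0 3 ]
     *   [ 4 0 0 ]  */
    int row_ptr[] = {0, 2, 3, 4};
    int col_idx[] = {0, 2, 2, 0};
    double vals[] = {1.0, 2.0, 3.0, 4.0};
    double x[] = {1.0, 1.0, 1.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, vals, x, y);
    for (int i = 0; i < 3; i++) printf("y[%d] = %.1f\n", i, y[i]);
    return 0;  /* prints 3.0, 3.0, 4.0 */
}
```

Note how the inner loop's trip count and memory addresses both depend on the sparsity pattern rather than on a fixed stride, which is why dense-oriented hardware features alone do not capture sparsity's potential savings.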
Nir Shavit is the co-founder of Neural Magic. An award-winning computer scientist, patented inventor, professor, and author, Nir is a tech veteran with more than three decades of experience.
He currently serves as a professor in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology and spent more than 25 years as a professor of computer science at Tel Aviv University.
Nir has received numerous awards, including the 2004 Gödel Prize in theoretical computer science for his work applying tools from algebraic topology to model shared-memory computability, and the 2012 Dijkstra Prize for the introduction and first implementation of software transactional memory.
Nir is also a co-author of the book The Art of Multiprocessor Programming and an ACM Fellow. He received B.Sc. and M.Sc. degrees in Computer Science from the Technion – Israel Institute of Technology, and a Ph.D. in Computer Science from the Hebrew University of Jerusalem.