## Deep Neural Network Mathematics Framework

### Mathematics/Machine Learning Research

I am researching applications of harmonic analysis, topology, Riemannian geometry, and dynamical mean field theory to the development of a rigorous mathematical framework for deep neural networks. The following papers provide background on the harmonic analysis side of this program, along with details on my specific interest in this area: the scattering transform and its applications. For more information (and software!) on the scattering transform, visit the scattering home page.

- E. J. Candès, "Harmonic Analysis of Neural Networks," Applied and Computational Harmonic Analysis 6(2): 197-218, 1999.
- U. Shaham, A. Cloninger, and R. R. Coifman, "Provable Approximation Properties for Deep Neural Networks," arXiv preprint, Mar. 2016.
- J. Bruna and S. Mallat, "Invariant Scattering Convolution Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8): 1872-86, 2013.
- J. Bruna, "Scattering Representations for Recognition," Ph.D. thesis, École Polytechnique, Palaiseau, France, Nov. 2012.
- S. Mallat, "Group Invariant Scattering," Communications on Pure and Applied Mathematics 65(10): 1331-98, 2012.
- J. Bruna and S. Mallat, "Classification with Scattering Operators," Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on: 1561-66, 2011.
- J. Andén and S. Mallat, "Deep Scattering Spectrum," submitted to IEEE Transactions on Signal Processing, 2011.
- J. Bruna, S. Chintala, Y. LeCun, S. Piantino, A. Szlam, and M. Tygert, "A Mathematical Motivation for Complex-valued Convolutional Networks," Neural Computation 28(5): 1-11, 2016.
- J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, "Spectral Networks and Deep Locally Connected Networks on Graphs," arXiv preprint, May 2014.
- X. Chen, X. Cheng, and S. Mallat, "Unsupervised Deep Haar Scattering on Graphs," Conference on Neural Information Processing Systems (NIPS), Dec. 2014.
- W. Czaja and W. Li, "Uniform Covering Frames for Scattering," arXiv preprint, Jun. 2016.
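To give a flavor of the invariance these scattering papers study, the sketch below computes first-order scattering-style coefficients (wavelet modulus followed by local averaging) in plain NumPy and checks that they barely change under a small translation. The Morlet-like filters, their parameters, and the crude pooling scheme are illustrative choices of mine, not the construction from any particular paper above.

```python
import numpy as np

def morlet(n, xi, sigma):
    """A Morlet-like complex wavelet of length n: a Gaussian window
    modulated to center frequency xi (radians per sample)."""
    t = np.arange(-(n // 2), n // 2)
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * xi * t)

def first_order_scattering(x, wavelets, pool):
    """S1[lambda] ~ local average of |x * psi_lambda|: wavelet modulus
    followed by crude low-pass pooling over non-overlapping windows."""
    coeffs = []
    for psi in wavelets:
        u = np.abs(np.convolve(x, psi, mode="same"))  # wavelet modulus
        s = u[: len(u) // pool * pool].reshape(-1, pool).mean(axis=1)
        coeffs.append(s)
    return np.array(coeffs)

# a noisy sinusoid and a copy shifted by far less than the pooling window
rng = np.random.default_rng(0)
n = 1024
x = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.1 * rng.standard_normal(n)
x_shift = np.roll(x, 8)

filters = [morlet(64, xi, sigma=8.0) for xi in (0.2, 0.5, 1.0)]
s = first_order_scattering(x, filters, pool=128)
s_shift = first_order_scattering(x_shift, filters, pool=128)

# the coefficients move only slightly under the small translation
print(np.max(np.abs(s - s_shift)) / np.max(np.abs(s)))
```

The scattering home page's software implements the real construction (proper wavelet filter banks, higher-order paths); this sketch only mirrors the modulus-then-average structure of the first order.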

My interest in applying Riemannian geometry and dynamical mean field theory to this problem was piqued by the following papers:

- B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli, "Exponential Expressivity in Deep Neural Networks Through Transient Chaos," arXiv preprint, Jun. 2016.
- M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. Sohl-Dickstein, "On the Expressive Power of Deep Neural Networks," arXiv preprint, Jun. 2016.
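The mean-field analysis in these papers tracks how the variance q of a layer's pre-activations propagates through depth via the recursion q_next = σ_w² E_z[φ(√q · z)²] + σ_b² with z ~ N(0, 1). A minimal numerical sketch of that recursion, assuming a tanh nonlinearity and a simple quadrature grid (both choices mine, for illustration):

```python
import numpy as np

def variance_map(q, sigma_w2, sigma_b2):
    """One layer of the mean-field recursion:
    q_next = sigma_w^2 * E_z[tanh(sqrt(q) * z)^2] + sigma_b^2, z ~ N(0, 1),
    with the Gaussian expectation taken by a simple Riemann sum."""
    z = np.linspace(-8.0, 8.0, 2001)
    dz = z[1] - z[0]
    weight = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    expectation = np.sum(np.tanh(np.sqrt(q) * z) ** 2 * weight) * dz
    return sigma_w2 * expectation + sigma_b2

def fixed_point(sigma_w2, sigma_b2, q0=1.0, iters=200):
    """Iterate the variance map until it settles numerically."""
    q = q0
    for _ in range(iters):
        q = variance_map(q, sigma_w2, sigma_b2)
    return q

# small weight variance: the pre-activation variance contracts toward zero;
# large weight variance: a non-trivial fixed point survives
print(fixed_point(0.5, 0.0))
print(fixed_point(2.0, 0.0))
```

The qualitative transition between these two regimes (an "ordered" contracting phase versus a phase with a non-trivial fixed point) is the starting point for the transient-chaos analysis in Poole et al.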

The following papers also focus on developing a rigorous mathematical theory of deep neural networks:

- T. Wiatowski and H. Bölcskei, "A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction," arXiv preprint, Dec. 2015.
- S. Mallat, "Understanding Deep Convolutional Networks," arXiv preprint, Jan. 2016.
- G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, "On the Number of Linear Regions of Deep Neural Networks," Advances in Neural Information Processing Systems 27: 2924-32, 2014.
- T. Wiatowski, M. Tschannen, A. Stanic, P. Grohs, and H. Bölcskei, "Discrete Deep Feature Extraction: A Theory and New Architectures," arXiv preprint, May 2016.
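Montufar et al. bound the number of linear regions a ReLU network carves input space into. For a scalar input those regions can be estimated empirically: each distinct on/off pattern of the ReLUs along the input line corresponds to one linear piece of the network. The tiny random network below is a hypothetical example of mine, not an architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# a small random two-layer ReLU network on a scalar input x:
# h1 = relu(W1 x + b1), h2 = relu(W2 h1 + b2)
n1, n2 = 4, 4
W1, b1 = rng.standard_normal((n1, 1)), rng.standard_normal(n1)
W2, b2 = rng.standard_normal((n2, n1)), rng.standard_normal(n2)

# evaluate pre-activations on a dense grid of inputs (vectorized over the grid)
xs = np.linspace(-10, 10, 200001)
H1 = W1 @ xs[None, :] + b1[:, None]         # (n1, N) first-layer pre-activations
H2 = W2 @ np.maximum(H1, 0) + b2[:, None]   # (n2, N) second-layer pre-activations

# each distinct boolean sign pattern of the ReLUs is one linear region
patterns = np.vstack([H1 > 0, H2 > 0])      # (n1 + n2, N)
n_regions = len({tuple(col) for col in patterns.T})
print(n_regions)
```

For a 1-D input, each second-layer unit is piecewise linear with at most n1 + 1 pieces, so the count observed here stays well below the grid size; Montufar et al. make the analogous counting argument precise in higher dimensions, where regions can grow exponentially with depth.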

If you are interested in hearing more about this research, feel free to contact me.