The author presents a theory of invertible and injective deep neural networks for likelihood estimation and uncertainty quantification.
In this presentation, an applied mathematician from South Dakota State University presents his research on the mathematical foundations of deep learning, particularly at the intersection of geometry, topology, and universality. The talk introduces some of the mathematical theory behind deep learning, aiming to give a taste of how deep learning can be formalized and how these formalisms yield interesting mathematics.

Machine learning tries to let machines 'learn' patterns in data. Deep learning is a subset of machine learning and is best defined through example: it composes simple functions to build more complex ones. Four steps still guide deep learning to this day: 1) collect and clean data; 2) choose an architecture F that depends on a parameter θ; 3) train the network to find the 'right' θ ∈ Θ; and 4) measure how well F performs on data it wasn't trained on. Statistical Learning Theory (SLT) gives us a useful formalism for understanding these questions.
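The four-step recipe above can be sketched in code. The following is a minimal illustration (not the speaker's own code, and the toy dataset, layer sizes, and learning rate are assumptions chosen for the example): a network F_θ built by composing two affine maps with a tanh nonlinearity, trained by gradient descent and then evaluated on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: "collect" data — noisy samples of y = sin(x), split train/test.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)
x_train, y_train = x[:150], y[:150]
x_test, y_test = x[150:], y[150:]

# Step 2: choose an architecture — F_theta = affine ∘ tanh ∘ affine,
# with parameters theta = (W1, b1, W2, b2).
h = 32  # hidden width (an arbitrary choice for this sketch)
W1 = rng.normal(scale=0.5, size=(1, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)

def F(x, W1, b1, W2, b2):
    """The composed network: simple functions building a complex one."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Step 3: train — gradient descent on the mean squared error to find
# the "right" theta.
lr = 0.05
for _ in range(2000):
    a = np.tanh(x_train @ W1 + b1)      # hidden-layer activations
    err = (a @ W2 + b2) - y_train       # prediction error
    # Backpropagate the squared-error loss through the composition.
    gW2 = a.T @ err / len(x_train); gb2 = err.mean(axis=0)
    da = (err @ W2.T) * (1 - a**2)      # tanh'(z) = 1 - tanh(z)^2
    gW1 = x_train.T @ da / len(x_train); gb1 = da.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Step 4: measure how well F works on data it wasn't trained on.
test_mse = np.mean((F(x_test, W1, b1, W2, b2) - y_test) ** 2)
print(f"held-out test MSE: {test_mse:.4f}")
```

The held-out evaluation in step 4 is what SLT formalizes: the gap between performance on the training sample and performance on unseen data drawn from the same distribution.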