
Theory of Invertible and Injective Deep Neural Networks for Likelihood Estimation and Uncertainty Quantification

NCJ Number
308222
Author(s)
Michael Puthawala; Matti Lassas; Ivan Dokmanic; Pekka Pankka; Maarten de Hoop
Date Published
June 2023
Length
91 pages
Annotation

The author presents a theory of invertible and injective deep neural networks for likelihood estimation and uncertainty quantification.

Abstract

In this presentation, an applied mathematician from South Dakota State University presents his research on the mathematical foundations of deep learning, especially at the intersection of geometry, topology, and universality. The presentation introduces some of the mathematical theory behind deep learning, aiming to provide a taste of how deep learning can be formalized and how these formalisms yield interesting mathematics. Machine learning tries to let machines ‘learn’ patterns in data. Deep learning is a subset of machine learning and is best defined through example: it composes simple functions to build more complex ones. Four steps still guide deep learning to this day: 1) collect and clean data; 2) choose an architecture F that depends on a parameter θ; 3) train the network to find the ‘right’ θ ∈ Θ; and 4) measure how well F works on data that it wasn’t trained on. Statistical Learning Theory (SLT) gives us a useful formalism for understanding such questions, especially how well a trained network generalizes.
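
To make the SLT formalism concrete, the following is the standard textbook formulation of population and empirical risk; it is general background, not notation taken from the presentation itself. For a loss ℓ, a data distribution P, and a network F_θ,

\[
R(\theta) \;=\; \mathbb{E}_{(x,y)\sim P}\bigl[\ell(F_\theta(x),\, y)\bigr],
\qquad
\widehat{R}_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell(F_\theta(x_i),\, y_i).
\]

Training (step 3) picks a θ̂ ∈ Θ that minimizes the empirical risk R̂ₙ(θ), while step 4 probes the generalization gap R(θ̂) − R̂ₙ(θ̂).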
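
The four-step recipe also maps directly onto a short program. Below is a minimal, hypothetical sketch of the workflow in NumPy on a synthetic regression task; the architecture, toy data, and every name in it are illustrative assumptions, not code from the presentation.

# Minimal, illustrative sketch of the four-step workflow described
# above. The toy task, architecture, and all names are assumptions
# for illustration, not the presenter's actual setup.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: collect and clean data (here: a synthetic regression task).
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.shape)
x_train, y_train = x[:150], y[:150]   # training split
x_test, y_test = x[150:], y[150:]     # held-out split

# Step 2: choose an architecture F that depends on a parameter theta.
# F composes simple functions: affine map -> tanh -> affine map.
hidden = 16
theta = {
    "W1": rng.normal(scale=0.5, size=(1, hidden)),
    "b1": np.zeros(hidden),
    "W2": rng.normal(scale=0.5, size=(hidden, 1)),
    "b2": np.zeros(1),
}

def F(theta, x):
    h = np.tanh(x @ theta["W1"] + theta["b1"])  # simple function 1
    return h @ theta["W2"] + theta["b2"]        # composed with function 2

# Step 3: train the network to find the 'right' theta, here by plain
# gradient descent on mean squared error with hand-derived gradients.
lr = 0.1
n = len(x_train)
for step in range(2000):
    h = np.tanh(x_train @ theta["W1"] + theta["b1"])
    err = (h @ theta["W2"] + theta["b2"]) - y_train
    dh = (err @ theta["W2"].T) * (1 - h**2)     # backprop through tanh
    theta["W2"] -= lr * (2 / n) * (h.T @ err)
    theta["b2"] -= lr * 2 * err.mean(axis=0)
    theta["W1"] -= lr * (2 / n) * (x_train.T @ dh)
    theta["b1"] -= lr * 2 * dh.mean(axis=0)

# Step 4: measure how well F works on data it wasn't trained on.
test_mse = np.mean((F(theta, x_test) - y_test) ** 2)
print(f"held-out MSE: {test_mse:.4f}")

A hand-derived gradient keeps the sketch dependency-free; in practice, steps 2 and 3 would be written against an autodiff framework so that the architecture can change without rederiving gradients.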