Ernesto De Vito: Understanding Neural Networks with Reproducing Kernel Banach Spaces
Characterizing the function spaces associated with neural networks can provide a way to understand their properties. This talk shows how the theory of reproducing kernel Banach spaces can be used to characterize the function spaces corresponding to neural networks. In particular, I will present a representer theorem for a class of reproducing kernel Banach spaces that includes one-hidden-layer neural networks of possibly infinite width. Furthermore, I will prove that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure.
The talk is based on joint work with F. Bartolucci, L. Rosasco and S. Vigogna.
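As a rough sketch of the setting (the symbols and normalizations below are illustrative, following the standard formulation of infinite-width one-hidden-layer networks as integrals against measures, not taken from the talk itself):

```latex
% A one-hidden-layer network of possibly infinite width: neurons (w, b)
% are aggregated by a signed measure \mu rather than a finite sum.
f_\mu(x) = \int \sigma(\langle w, x \rangle - b)\, d\mu(w, b),
\qquad \sigma(t) = \max(t, 0),
% with the candidate Banach norm given by the least total variation
% over all measures representing the same function:
\|f\| = \inf \{\, \|\mu\|_{TV} \;:\; f = f_\mu \,\}.
```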
For further information please contact email@example.com