I like the very realistic approach here; I wish my students would articulate this someday!
Quote: “Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters…
I’m presenting Barron’s Theorem in the Machine Learning reading group today.
Abstract: I will state and prove Barron’s Theorem, which shows that single-hidden-layer neural networks can evade the curse of dimensionality. Barron’s Theorem bounds the error of the best neural net approximation to a function, in terms of the number of hidden nodes and the smoothness of the function, independently of the dimension.
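For reference, here is the shape of the bound, as a minimal sketch of the standard statement from Barron’s 1993 paper; the constants, norm, and network form below are my paraphrase of that statement, not quoted from the talk abstract:

```latex
% Barron's Theorem (approximation bound), sketched from the standard 1993 statement.
% Assume f : \mathbb{R}^d \to \mathbb{R} has a Fourier transform \hat{f} with finite
% first moment
%   C_f = \int_{\mathbb{R}^d} \|\omega\| \, |\hat{f}(\omega)| \, d\omega < \infty,
% which is the smoothness measure the theorem uses.
% Then for every n there is a one-hidden-layer sigmoidal network with n hidden nodes,
%   f_n(x) = \sum_{k=1}^{n} c_k \, \sigma(a_k \cdot x + b_k) + c_0,
% such that, for any probability measure \mu on the ball B_r = \{x : \|x\| \le r\},
\[
  \int_{B_r} \bigl( f(x) - f_n(x) \bigr)^2 \, d\mu(x)
  \;\le\; \frac{(2 r C_f)^2}{n}.
\]
% The right-hand side depends on the number of hidden nodes n and the smoothness
% constant C_f, but not on the dimension d; that is the sense in which the
% network "evades the curse of dimensionality".
```

The O(1/n) rate is the point of the theorem: it holds with a constant that does not grow with d, in contrast to approximation by linear combinations of fixed basis functions, where the number of terms needed typically blows up with the dimension.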