Barron’s theorem: Neural networks evade the curse of dimensionality

I like the very realistic approach here; I wish my students would articulate this some day!

Quote: “Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters)…”

Mental Wilderness

I’m presenting Barron’s Theorem in the Machine Learning reading group today.

Abstract: I will state and prove Barron’s Theorem, which shows that 1-layer neural networks can evade the curse of dimensionality. Barron’s Theorem bounds the error of the best neural net approximation to a function, in terms of the number of hidden nodes and the smoothness of the function, independently of the dimension.
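For reference, here is roughly how the statement goes (this is my recollection of Barron’s 1993 result; see the notes below for the precise constants and hypotheses). Suppose f has a Fourier representation with

C_f = \int_{\mathbb{R}^d} |\omega|\, |\hat{f}(\omega)|\, d\omega < \infty.

Then for every n \ge 1, every r > 0, and every probability measure \mu on the ball B_r = \{x : |x| \le r\}, there is a single-hidden-layer sigmoidal network f_n with n hidden units such that

\int_{B_r} \big(f(x) - f_n(x)\big)^2\, \mu(dx) \le \frac{(2 r C_f)^2}{n}.

The rate in n on the right-hand side does not depend on the dimension d; the dimension enters only through C_f.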

Notes are available here (source).
Update: Thanks to Alex Anderson for pointing out limitations of Barron’s Theorem. In his words:
In the context of deep learning, Barron’s Theorem has been a huge red herring for the neural network community. Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters), and these approaches are never used in practice now. There are some things happening that are much more subtle than can be treated in this…
