I like the very realistic approach here; I wish my students would articulate this someday!

Quote: “Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters…”

I’m presenting Barron’s Theorem in the Machine Learning reading group today.

**Abstract:** I will state and prove Barron’s Theorem, which shows that 1-layer neural networks can evade the curse of dimensionality. Barron’s Theorem bounds the error of the best neural net approximation to a function, in terms of the number of hidden nodes and the smoothness of the function, independently of the dimension.
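For readers who want the shape of the result, the bound behind the abstract can be sketched as follows (the notation here is my own gloss, not taken from the talk): for a function $f$ on a ball $B_r \subset \mathbb{R}^d$ whose Fourier transform decays fast enough that the smoothness measure $C_f$ below is finite, the best one-hidden-layer sigmoidal network $f_n$ with $n$ hidden nodes satisfies an $L^2$ error bound whose rate is independent of the dimension $d$:

```latex
% Sketch of Barron's bound: the squared L^2 approximation error of the
% best n-node sigmoidal network decays like 1/n, with a constant that
% depends on the smoothness measure C_f but not on the dimension d.
\[
  C_f \;=\; \int_{\mathbb{R}^d} \lVert \omega \rVert \,
            \lvert \hat f(\omega) \rvert \, d\omega ,
  \qquad
  \inf_{f_n} \int_{B_r} \bigl( f(x) - f_n(x) \bigr)^2 \, \mu(dx)
  \;\le\; \frac{(2 r C_f)^2}{n}.
\]
```

The dimension only enters through $C_f$, which for some function classes grows with $d$ — which is exactly the loophole the update below points at.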

**Update:** Thanks to Alex Anderson for pointing out limitations of Barron’s Theorem. In his words:

In the context of deep learning, Barron’s Theorem has been a huge red herring for the neural network community. Even if a single hidden-layer network can do arbitrary function approximation, that doesn’t mean that it does it efficiently (in terms of number of parameters), and these approaches are never used in practice now. There are some things happening that are much more subtle than can be treated in this…


This was intriguing.


Thank you, nimitode! It means so much to have someone see the value of neural networks. 🙂


Not sure if it is my brain that is somehow failing me right now from overload and fatigue, but I read the post three times and it went over my head. :(


I am sure the same thing happened to me in grad school! Thank you so much for your support! I merely reblogged this! 🙂 🙂 🙂


LOL, happens to the best of us. :p We need to give our brains a little break sometimes.
