Reading notes: http://neuralnetworksanddeeplearning.com/chap4.html
The chapter makes the universality theorem very easy to understand.
-
What is the universality theorem? https://en.wikipedia.org/wiki/Universal_approximation_theorem
Neural networks can compute any function.
Suppose we're given a function f(x) which we'd like to compute to within some desired accuracy ϵ > 0. The guarantee is that by using enough hidden neurons we can always find a neural network whose output g(x) satisfies |g(x) − f(x)| < ϵ for all inputs x.
- Construct the base neuron
z = w*x + b (x is input)
σ(z) = 1/(1+exp(-z))
By using a big enough weight, e.g. w = 1000 with b = -400, the sigmoid output becomes effectively a step function (see the charts in the chapter).
The step occurs at s = -b/w (0.4 in this example), so s is the natural parameter to describe the hidden neuron by.
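A minimal sketch of this step: with a large weight, σ(wx + b) jumps from ~0 to ~1 at s = -b/w. The values w = 1000, b = -400 follow the note above; the helper name `sigmoid` is my own.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Large weight -> sigmoid behaves like a step function.
w, b = 1000.0, -400.0
s = -b / w  # step point: 0.4

print(sigmoid(w * 0.39 + b))  # just below the step: close to 0
print(sigmoid(w * 0.41 + b))  # just above the step: close to 1
```

The larger w is, the sharper the step; b only moves the step's location via s = -b/w.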
- Construct a pair of neurons
Set up two step points s1 and s2, with different weights w1 and w2 on the two outgoing connections.
Setting h = w1 = -w2, we get a "bump" of height h on the interval (s1, s2) and roughly 0 elsewhere.
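The pair construction can be sketched as follows; the helper `bump` and the default w = 1000 are my own illustrative choices, not the book's code.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bump(x, s1, s2, h, w=1000.0):
    # Two hidden step neurons with step points s1 < s2.  Output
    # weights w1 = h and w2 = -h make the two steps cancel everywhere
    # except on (s1, s2), leaving a bump of height h.
    return h * sigmoid(w * (x - s1)) - h * sigmoid(w * (x - s2))

print(bump(0.5, 0.4, 0.6, h=2.0))  # inside the interval: close to 2
print(bump(0.9, 0.4, 0.6, h=2.0))  # outside the interval: close to 0
```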
- Use lots of hidden neurons to cover the function.
Divide the input range into many intervals, one bump per interval, and adjust each bump's height h to fit the target function f(x).
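Putting the steps above together: a sketch of the bump-by-bump approximation on [0, 1]. The function `approximate`, the interval count n, and the weight w are my own illustrative choices; the sigmoid is written in the numerically safe form since |wx| gets large.

```python
import math

def sigmoid(z):
    # Numerically safe sigmoid: avoids overflow in exp for large |z|.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def approximate(f, n=50, w=10000.0):
    # One-hidden-layer network with 2*n step neurons: split [0, 1]
    # into n intervals and place one bump per interval, with height
    # equal to f at the interval's midpoint.
    width = 1.0 / n
    bumps = []
    for i in range(n):
        s1, s2 = i * width, (i + 1) * width
        h = f((s1 + s2) / 2.0)
        bumps.append((s1, s2, h))

    def g(x):
        return sum(h * (sigmoid(w * (x - s1)) - sigmoid(w * (x - s2)))
                   for s1, s2, h in bumps)
    return g

g = approximate(lambda x: math.sin(5 * x))
print(g(0.31), math.sin(5 * 0.31))  # the two values nearly match
```

More intervals (larger n) and taller/narrower bumps shrink the error, which is the informal content of the theorem: |g(x) - f(x)| < ϵ for any desired ϵ, given enough hidden neurons.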