All images are from the official slides site: Index of /slides/2022
Code section adapted from: "Two-layer network (affine layer + ReLU layer)" — iwill323's blog, CSDN
Table of Contents

Neural Networks

Network Architecture

When counting the number of layers in a neural network, the input layer is not included.
“3-layer Neural Net”, or “2-hidden-layer Neural Net”
# forward pass of a 3-layer neural network:
import numpy as np

f = lambda x: 1.0/(1.0 + np.exp(-x))  # activation function (sigmoid)
x = np.random.randn(3, 1)             # random input vector of three numbers (3x1)
W1, b1 = np.random.randn(4, 3), np.random.randn(4, 1)  # first hidden layer parameters
W2, b2 = np.random.randn(4, 4), np.random.randn(4, 1)  # second hidden layer parameters
W3, b3 = np.random.randn(1, 4), np.random.randn(1, 1)  # output layer parameters
h1 = f(np.dot(W1, x) + b1)   # first hidden layer activations (4x1)
h2 = f(np.dot(W2, h1) + b2)  # second hidden layer activations (4x1)
out = np.dot(W3, h2) + b3    # output neuron (1x1)
On setting the capacity of a neural network

The larger a neural network is, the more easily it overfits.

Regularization can be used to suppress overfitting.
Advice from the official notes: The regularization strength is the preferred way to control the overfitting of a neural network. The takeaway is that you should not be using smaller networks because you are afraid of overfitting. Instead, you should use as big of a neural network as your computational budget allows, and use other regularization techniques to control overfitting.
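As a minimal sketch of what "regularization strength" means here: a standard L2 penalty adds half the regularization coefficient times the sum of squared weights to the data loss, so a larger `reg` penalizes large weights more and effectively shrinks the model's capacity. The weight shapes and the `data_loss` value below are hypothetical, chosen only to illustrate the formula.

```python
import numpy as np

np.random.seed(0)
W1 = np.random.randn(4, 3)   # hypothetical first-layer weights
W2 = np.random.randn(4, 4)   # hypothetical second-layer weights
data_loss = 1.25             # assume this came from e.g. a softmax loss

def regularized_loss(data_loss, weights, reg):
    # L2 regularization: add 0.5 * reg * sum of squared weights
    reg_loss = 0.5 * reg * sum(np.sum(W * W) for W in weights)
    return data_loss + reg_loss

loss_weak = regularized_loss(data_loss, [W1, W2], reg=1e-4)
loss_strong = regularized_loss(data_loss, [W1, W2], reg=1e-1)
# A stronger penalty always yields a loss at least as large,
# pushing the optimizer toward smaller weights.
```

Tuning `reg` is thus the knob the notes recommend: keep the big network, and increase the penalty when the validation gap indicates overfitting.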