3-layer neural network:
The input layer (layer 0) has 2 neurons x1, x2; the 1st hidden layer (layer 1) has 3 neurons a1, a2, a3; the 2nd hidden layer (layer 2) has 2 neurons a1, a2; the output layer (layer 3) has 2 neurons y1, y2. Each layer also has a bias neuron fixed at 1 (with its weight b).
The two subscript digits on a weight w, e.g. 12, index the connection: to neuron 1 of the next layer, from neuron 2 of the previous layer. Order: next-layer neuron index first, then previous-layer neuron index.
The superscript (1) on weight and neuron symbols denotes the layer number (layer-1 weights, layer-1 neurons).
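The indexing convention above can be checked against the NumPy array used later (the mapping of subscripts to array positions is my reading of the convention, not stated explicitly in the notes): with W1 stored so that rows correspond to previous-layer neurons and columns to next-layer neurons, w12 (to next-layer neuron 1, from previous-layer neuron 2) lands at W1[1, 0].

```python
import numpy as np

# Layer-1 weights: row = previous-layer (source) neuron, column = next-layer (target) neuron.
W1 = np.array([[0.1, 0.3, 0.5],
               [0.2, 0.4, 0.6]])

# w12^(1): weight TO layer-1 neuron 1 FROM input neuron 2 -> W1[1, 0]
print(W1[1, 0])  # → 0.2
```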
Signal propagation formula: a1 = w11*x1 + w12*x2 + b1
A = XW + B
A = (a1 a2 a3)
X = (x1 x2)
B = (b1 b2 b3)
W = ( w11 w21 w31
      w12 w22 w32 )
Code:
X = np.array([1.0 , 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1,0.2,0.3])
print(W1.shape) #(2,3)
print(X.shape) #(2,)
print(B1.shape) #(3,)
A1 = np.dot(X, W1) + B1
In the hidden layers, the weighted sum (the sum of weighted signals and the bias) is denoted a, the signal after conversion by the activation function is denoted z, and h() denotes the activation function — here, the sigmoid function.
Z1 = sigmoid(A1)
print(A1) #[0.3 , 0.7 , 1.1]
print(Z1) #[0.57 , 0.67 , 0.75]
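The snippets above call sigmoid without defining it; a minimal elementwise definition consistent with the printed values:

```python
import numpy as np

def sigmoid(x):
    # h(x) = 1 / (1 + e^(-x)), applied elementwise to a NumPy array
    return 1 / (1 + np.exp(-x))

A1 = np.array([0.3, 0.7, 1.1])
print(sigmoid(A1))  # ~[0.5744 0.6682 0.7503]
```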
=============
def init_network():
    network = {}
    network['W1'] = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
    network['b1'] = np.array([0.1, 0.2, 0.3])
    network['W2'] = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
    network['b2'] = np.array([0.1, 0.2])
    network['W3'] = np.array([[0.1, 0.3], [0.2, 0.4]])
    network['b3'] = np.array([0.1, 0.2])
    return network
def forward(network, x):
    W1, W2, W3 = network['W1'], network['W2'], network['W3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']
    a1 = np.dot(x, W1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, W2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, W3) + b3
    y = identity_function(a3)
    return y
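The output-layer activation here is the identity function, which the notes use but never define; a minimal definition so the script runs:

```python
def identity_function(x):
    # Output-layer "activation": returns its input unchanged.
    return x
```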
network = init_network()
x = np.array([1.0, 0.5])
y = forward(network, x)
print(y) # [0.31682708 0.69627909]
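The three hand-unrolled layers in forward can also be written as a loop over layer indices. A sketch under my own naming (forward_loop and the inline network dict are illustrative; the weights and the sigmoid/identity split match the notes above):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forward_loop(network, x, n_layers=3):
    """Same computation as forward(), iterating over layer indices."""
    z = x
    for i in range(1, n_layers + 1):
        a = np.dot(z, network['W%d' % i]) + network['b%d' % i]
        # sigmoid on hidden layers, identity on the final (output) layer
        z = sigmoid(a) if i < n_layers else a
    return z

network = {
    'W1': np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]]), 'b1': np.array([0.1, 0.2, 0.3]),
    'W2': np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]]), 'b2': np.array([0.1, 0.2]),
    'W3': np.array([[0.1, 0.3], [0.2, 0.4]]), 'b3': np.array([0.1, 0.2]),
}
print(forward_loop(network, np.array([1.0, 0.5])))  # ~[0.3168 0.6963]
```

Writing it as a loop makes the layer count a parameter instead of hard-coded code, which scales better once networks grow beyond three layers.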