Implementing a Feedforward Neural Network from Scratch with numpy

This article shows how to use numpy to implement a three-layer feedforward neural network from scratch, and walks through the basic principles and implementation steps of multi-layer neural networks.

The three-layer version:

import numpy as np

def sigmoid(x):
    """
    Compute the sigmoid of x
    Arguments:
    x -- A scalar or numpy array of any size.
    Return:
    s -- sigmoid(x)
    """
    s = 1/(1+np.exp(-x))
    return s

def relu(x):
    """
    Compute the relu of x
    Arguments:
    x -- A scalar or numpy array of any size.
    Return:
    s -- relu(x)
    """
    s = np.maximum(0, x)
    return s
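As a quick sanity check, both activations apply elementwise to a numpy array (the input values below are arbitrary examples, not from the article):

```python
# Arbitrary sample inputs to exercise both activations
x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))  # [0.11920292 0.5        0.95257413]
print(relu(x))     # [0. 0. 3.]
```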

def initialize_parameters(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network
    
    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
                    
    Tips:
    - For example: the layer_dims for the "Planar Data classification model" would have been [2,2,1]. 
    This means W1's shape was (2,2), b1 was (2,1), W2 was (1,2) and b2 was (1,1). Now you have to generalize it!
    - In the for loop, use parameters['W' + str(l)] to access Wl, where l is the iterative integer.
    """
    
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims) # number of layers in the network, including the input layer

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        
        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

        
    return parameters
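Calling initialize_parameters with an illustrative layer_dims (the sizes [2, 4, 1] here are my own example) shows that every Wl follows the (layer_dims[l], layer_dims[l-1]) convention:

```python
# Illustrative layer sizes: 2 inputs, 4 hidden units, 1 output
parameters = initialize_parameters([2, 4, 1])
for name, value in parameters.items():
    print(name, value.shape)
# W1 (4, 2)
# b1 (4, 1)
# W2 (1, 4)
# b2 (1, 1)
```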

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the three-layer model:
    LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    
    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (layer_dims[1], layer_dims[0])
                    b1 -- bias vector of shape (layer_dims[1], 1)
                    W2 -- weight matrix of shape (layer_dims[2], layer_dims[1])
                    b2 -- bias vector of shape (layer_dims[2], 1)
                    W3 -- weight matrix of shape (layer_dims[3], layer_dims[2])
                    b3 -- bias vector of shape (layer_dims[3], 1)
    
    Returns:
    A3 -- the sigmoid output of the last layer
    cache -- tuple of intermediate values needed for backpropagation
    """
        
    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]
    
    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)
    
    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
    
    return A3, cache
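The article stops after the forward pass. For completeness, below is a minimal sketch of the matching cross-entropy cost and backward pass, assuming the cache layout returned above; the names compute_cost and backward_propagation are my own, not from the original:

```python
def compute_cost(A3, Y):
    """
    Cross-entropy cost for the sigmoid output A3 against labels Y
    of shape (1, number of examples).
    """
    m = Y.shape[1]
    cost = -np.sum(Y * np.log(A3) + (1 - Y) * np.log(1 - A3)) / m
    return cost

def backward_propagation(X, Y, cache):
    """
    Gradients of the cost with respect to each parameter, using the
    cache (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) from the
    forward pass.
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    # Sigmoid output + cross-entropy cost simplifies to dZ3 = A3 - Y
    dZ3 = A3 - Y
    dW3 = np.dot(dZ3, A2.T) / m
    db3 = np.sum(dZ3, axis=1, keepdims=True) / m

    # relu'(Z2) is 1 where Z2 > 0 and 0 elsewhere
    dA2 = np.dot(W3.T, dZ3)
    dZ2 = dA2 * (Z2 > 0)
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = dA1 * (Z1 > 0)
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m

    return {"dW1": dW1, "db1": db1, "dW2": dW2,
            "db2": db2, "dW3": dW3, "db3": db3}
```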
Below is a basic example of a feedforward neural network implemented with numpy:

```python
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size

        # Initialize the weights randomly
        self.weights1 = np.random.randn(self.input_size, self.hidden_size)
        self.weights2 = np.random.randn(self.hidden_size, self.output_size)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def forward(self, X):
        # Forward propagation
        self.hidden = self.sigmoid(np.dot(X, self.weights1))
        self.output = self.sigmoid(np.dot(self.hidden, self.weights2))
        return self.output

    def sigmoid_derivative(self, x):
        # Expects the activation value, i.e. x = sigmoid(z)
        return x * (1 - x)

    def backward(self, X, y, output):
        # Backpropagation
        self.output_error = y - output
        self.output_delta = self.output_error * self.sigmoid_derivative(output)

        self.hidden_error = self.output_delta.dot(self.weights2.T)
        self.hidden_delta = self.hidden_error * self.sigmoid_derivative(self.hidden)

        # Weight updates (implicit learning rate of 1)
        self.weights1 += X.T.dot(self.hidden_delta)
        self.weights2 += self.hidden.T.dot(self.output_delta)

    def train(self, X, y, epochs):
        for i in range(epochs):
            output = self.forward(X)
            self.backward(X, y, output)

    def predict(self, X):
        return self.forward(X)
```

This network has one input layer, one hidden layer, and one output layer. The weights are randomly initialized, and the sigmoid function serves as the activation for the forward pass.

During backpropagation, we first compute the output error, then the deltas for the output and hidden layers, and update the weights according to those deltas. The train function runs forward and backward passes for the given number of epochs, and the predict function uses the trained weights to make predictions.
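A minimal way to exercise this class is shown below; the data shapes, layer sizes, and epoch count are illustrative, not from the original:

```python
np.random.seed(0)  # reproducible toy run

# Toy inputs: 4 samples with 3 features each (rows are samples here,
# unlike the column-per-example convention used earlier in the article)
X = np.random.rand(4, 3)
y = np.array([[0], [1], [1], [0]])

nn = NeuralNetwork(input_size=3, hidden_size=5, output_size=1)
nn.train(X, y, epochs=1000)
print(nn.predict(X))  # values in (0, 1), one row per sample
```

Note that this class keeps things deliberately simple: there are no bias terms and the updates use an implicit learning rate of 1, both of which you would typically add for real data.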