[Deep Learning] 1.2: Implementing a Simple Neural Network in Python

References:
[1] Zhihu column summary: https://zhuanlan.zhihu.com/p/21423252
[2] The "Neural Networks Demystified" video playlist on YouTube
The network below follows those videos, but the weight matrices here are the transpose of the ones used in the YouTube playlist.
Choosing the hyperparameters (a minimal tuning sketch follows this list):
learning_rate: if the step size is too large, the update easily jumps past the optimum; if it is too small, training easily gets stuck in a local optimum.
hidden_nodes: too many hidden nodes tend to overfit; too few tend to underfit.
epochs: too many iterations tend to overfit and lengthen training; too few tend to underfit.
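
As a rough illustration of these trade-offs, here is a minimal tuning sketch using the NeuralNetwork class defined in the code block below. The data arrays (train_features, train_targets, val_features, val_targets), the candidate values, and the epoch count are hypothetical placeholders, not part of the original post:

import numpy as np

# Hypothetical data; replace with real arrays of shape [n, 56] and [n, 1].
train_features = np.random.rand(80, 56)
train_targets = np.random.rand(80, 1)
val_features = np.random.rand(20, 56)
val_targets = np.random.rand(20, 1)

for hidden_nodes in (2, 4, 8):
    for learning_rate in (0.01, 0.1):
        network = NeuralNetwork(56, hidden_nodes, 1, learning_rate)
        for epoch in range(100):
            for x, y in zip(train_features, train_targets):
                network.train(x, y)
        # Validation MSE: improving training error with worsening validation
        # error signals overfitting (too many hidden nodes or epochs).
        preds = np.array([network.run(x) for x in val_features]).reshape(-1)
        print(hidden_nodes, learning_rate, np.mean((preds - val_targets.reshape(-1)) ** 2))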

Code:

import numpy as np

# Example network: 56 input nodes, 2 hidden nodes, 1 output node.
# The hidden layer uses a sigmoid activation; the output layer is the identity f(x) = x.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights from a normal distribution scaled by fan-in.
        # Note the shapes: [hidden_nodes, input_nodes] and [output_nodes, hidden_nodes].
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
                                       (self.hidden_nodes, self.input_nodes))

        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                       (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

        # Activation function for the hidden layer: the sigmoid function.
        def sigmoid(x):
            return 1 / (1 + np.exp(-x))
        self.activation_function = sigmoid

    def train(self, inputs_list, targets_list):
        # Convert the input list to a 2d column vector.
        # inputs has shape [feature_dimension, 1] = [56, 1]; the 1 means one training example.
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T  # shape [1, 1]

        ### Forward pass ###
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # [hidden_nodes, 1] = [2, 1]
        hidden_outputs = self.activation_function(hidden_inputs)      # [hidden_nodes, 1] = [2, 1]

        # Output layer: identity activation f(x) = x
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # [output_nodes, 1]
        final_outputs = final_inputs                                          # [1, 1]

        ### Backward pass ###
        # Output error; the output activation is the identity, so its derivative is 1.
        output_errors = targets - final_outputs  # [output_nodes, 1]

        # Backpropagate the error and compute the sigmoid derivative
        # s'(z) = s(z) * (1 - s(z)) from the saved hidden outputs.
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)  # [hidden_nodes, 1]
        hidden_grads = hidden_outputs * (1 - hidden_outputs)                    # [hidden_nodes, 1]


        # Update the weights with a gradient descent step.
        # Hidden-to-output weights, shape [output_nodes, hidden_nodes]
        self.weights_hidden_to_output += np.dot(output_errors, hidden_outputs.T) * self.lr
        # Input-to-hidden weights, shape [hidden_nodes, input_nodes]
        self.weights_input_to_hidden += np.dot(hidden_errors * hidden_grads, inputs.T) * self.lr
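        # Why the updates use "+=": with squared error E = 0.5 * (targets - final_outputs)**2,
        # dE/dW_hidden_to_output = -output_errors * hidden_outputs.T, so the descent step
        # W -= lr * dE/dW becomes W += lr * output_errors * hidden_outputs.T (and likewise for
        # the input-to-hidden weights, with the sigmoid derivative folded in via hidden_grads).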


    def run(self, inputs_list):
        # Run a forward pass through the network.
        inputs = np.array(inputs_list, ndmin=2).T

        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)

        # Output layer (identity activation)
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
        final_outputs = final_inputs

        return final_outputs
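
To check that the class runs end to end, here is a minimal usage sketch on random data. The feature and target arrays, the seed, and the epoch count are illustrative placeholders, not part of the original post:

import numpy as np

np.random.seed(42)

# Hypothetical data: 100 examples, 56 features, one scalar target each.
features = np.random.rand(100, 56)
targets = np.random.rand(100, 1)

network = NeuralNetwork(input_nodes=56, hidden_nodes=2, output_nodes=1, learning_rate=0.1)

for epoch in range(50):
    for x, y in zip(features, targets):
        network.train(x, y)

# Predict for the first example; the result has shape [output_nodes, 1] = [1, 1].
print(network.run(features[0]))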