Neural Network from Scratch in Python

Introduction:

Do you really think that a neural network is a black box? I believe a neuron inside the human brain may be very complex, but a neuron in a neural network is certainly not that complex.

It does not matter what software you are developing right now: if you are not getting up to speed on machine learning…you lose. We are heading toward an era where one piece of software will create another and perhaps automate itself.

In this article, we are going to discuss how to implement a neural network from scratch in Python. This means we are not going to use deep learning libraries like TensorFlow, PyTorch, Keras, etc.

You may like to watch a video version of this article for a more detailed explanation…

General Terms:

Let us first discuss a few statistical concepts used in this post.

Dot Product of Matrix: The dot product of two matrices is one of the most important operations in deep learning. In mathematics, the dot product is an operation that takes two equal-length sequences of numbers as input and outputs a single number.

Not all pairs of matrices are eligible for multiplication. To carry out the dot product of two matrices, the number of columns of the first matrix must equal the number of rows of the second. Therefore, if we multiply an m×n matrix by an n×p matrix, the result is an m×p matrix; the first dimension represents rows and the second represents columns. The shared dimension, represented here by the letter n, is what makes the product possible.

[Figure: Dot Product of Matrix]
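
As a minimal illustration (not from the original article), the shape rule plays out like this in NumPy:

import numpy as np
A = np.random.rand(3, 2)  # an m×n matrix with m=3, n=2
B = np.random.rand(2, 4)  # an n×p matrix with n=2, p=4
C = np.dot(A, B)  # allowed because the columns of A equal the rows of B
print(C.shape)  # (3, 4), i.e. an m×p matrix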

Sigmoid: A sigmoid function is an activation function. For any given input number n, the sigmoid function maps that number to an output between 0 and 1. When the value of n gets larger, the output gets closer to 1, and when n gets smaller, the output gets closer to 0.

[Figure: Sigmoid Function]
[Figure: Sigmoid Function used in Machine Learning Classification]
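
As a quick, self-contained sketch (the article defines its own sigmoid function later), the saturating behaviour can be checked numerically:

import numpy as np
sigmoid = lambda n: 1 / (1 + np.exp(-n))
print(sigmoid(10))  # ~0.99995, close to 1 for a large n
print(sigmoid(0))  # 0.5, the midpoint
print(sigmoid(-10))  # ~0.00005, close to 0 for a small n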

Sigmoid Derivative: the derivative of the sigmoid function is the sigmoid multiplied by one minus the sigmoid, i.e. sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)).

[Figure: The derivative of the Sigmoid Function]
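
As an optional sanity check (not part of the original article), the formula agrees with a finite-difference approximation of the derivative:

import numpy as np
sigmoid = lambda x: 1 / (1 + np.exp(-x))
x, h = 0.5, 1e-6
analytic = sigmoid(x) * (1 - sigmoid(x))
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(analytic, numeric)  # both ~0.2350, matching to several decimal places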

Implementation:

Import Libraries:

We are going to import NumPy and the pandas library.

import numpy as np
import pandas as pd

Load Data:

We will be using pandas to load the CSV data into a pandas data frame.

df = pd.read_csv('Data.csv')
df.head()
[Figure: Classification Data for Neural Network from Scratch]

To proceed further we need to separate the features and labels.

x = df[['Glucose','BloodPressure']]
y = df['Diabetes']
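
As a quick sanity check (assuming, as the rest of the post does, that the CSV has 1000 rows with Glucose, BloodPressure and Diabetes columns), the shapes can be inspected before training:

print(x.shape, y.shape)  # expected: (1000, 2) and (1000,)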

After that let us define the sigmoid function.

def sigmoid(input):
    output = 1 / (1 + np.exp(-input))
    return output

There is one more function that we are going to use. It is related to sigmoid and called the sigmoid derivative function.

# Define the sigmoid derivative function
def sigmoid_derivative(input):
    return sigmoid(input) * (1.0 - sigmoid(input))

Then we need to define the network training function as below.

def train_network(features, label, weights, bias, learning_rate, epochs):
    for epoch in range(epochs):
        # weighted sum of the inputs plus the bias
        dot_prod = np.dot(features, weights) + bias
        # using sigmoid to get the predictions
        preds = sigmoid(dot_prod)
        # error between the predictions and the labels
        errors = preds - label
        deriva_cost_funct = errors
        # derivative of the sigmoid at the pre-activation values
        deriva_preds = sigmoid_derivative(dot_prod)
        deriva_product = deriva_cost_funct * deriva_preds
        # update the weights
        weights = weights - np.dot(features.T, deriva_product) * learning_rate
        loss = errors.sum()
        print(loss)
        # update the bias, one sample at a time
        for i in deriva_product:
            bias = bias - i * learning_rate
    return weights, bias

After that, let us initialize the required parameters.

np.random.seed(10)
features = x
label = y.values.reshape(1000,1)
weights = np.random.rand(2,1)  # shape (2, 1), so np.dot(features, weights) works with the two input features
bias = np.random.rand(1)
learning_rate = 0.0004
epochs = 100

We are ready to train the network now:

[Figure: Training Neural Network from Scratch in Python]
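
The original post shows this step only as a screenshot; a minimal call, assuming the version of train_network above that returns the updated parameters, looks like this:

weights, bias = train_network(features, label, weights, bias, learning_rate, epochs)
# the printed loss (the sum of the signed errors) should drift towards zero over the epochs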

End Notes:

In this article, we discussed how to implement a neural network model from scratch without using a deep learning library. However, if you compare it with an implementation that uses such a library, it will give nearly the same result.
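
As a rough, hypothetical sketch of what such a comparison could look like (this Keras code is not from the original article), the same single-neuron model can be built with a library and trained on the same data:

import tensorflow as tf
# one dense unit with a sigmoid activation mirrors the scratch network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0004), loss='mse')
model.fit(x, y, epochs=100, verbose=0)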

The code is uploaded to Github here.

Happy Coding !!

Translated from: https://medium.com/swlh/neural-network-from-scratch-in-python-fcd6faef9f35
