Artificial Neural Network (ANN) and Backpropagation (BP) in Practice: Classifying Wheat Varieties (No Third-Party Libraries, Derived by Hand)

Contents

I. Data Source

II. Method

III. Code Implementation

IV. Complete Code


I. Data Source

1. Data source: Kaggle

2. Data format

The first 7 columns are feature parameters used to predict the wheat variety; there are 3 varieties in total (the 8th column is the class label).

The model reaches an accuracy of 96.667% on this task (see the detailed code below).

II. Method

Artificial Neural Network (ANN) & Back Propagation (BP)

Note on the method: the description below is only a brief outline; for a full treatment, please consult the relevant references.

A simple artificial neural network consists of three layers: the input layer, the hidden layer, and the output layer; each layer contains n neurons.

1. Forward propagation

① At the input layer, the input values input_i are fed forward, and each neuron of the next layer receives an activation: activation = input_1 * w_1 + input_2 * w_2 + … + input_n * w_n + bias.

② At the hidden layer, each neuron acts like a rectifier: the activation is passed through an activation function, producing an output y_i. This y_i in turn serves as the input passed to the next layer, and the signal keeps flowing forward.

It is precisely this pass through the activation function that makes the network nonlinear and able to solve relatively complex problems.

③ At the output layer, output_i = y_1 * θ_1 + y_2 * θ_2 + … + y_n * θ_n + bias.

Activation functions:

Commonly used activation functions include the sigmoid, ReLU, and Softplus functions; the code later in this article uses Softplus (a minimal sketch of all three is shown below).

For a full explanation of activation functions, please consult the relevant references.
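
As a quick reference, here is a minimal sketch of the three activation functions mentioned above. It is illustrative only (the function names below are mine, not part of the article's code) and is written in the same dependency-free style as the rest of the article:

from math import exp, log

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + exp(-x))

def relu(x):
    # zero for negative inputs, identity for positive inputs
    return max(0.0, x)

def softplus(x):
    # smooth approximation of ReLU; its derivative is the sigmoid
    return log(1.0 + exp(x))

print(sigmoid(1.0), relu(1.0), softplus(1.0))   # approx. 0.731, 1.0, 1.313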

2. Backpropagation

Gradient descent is applied iteratively to fit the model and find the optimal weights w_n and biases bias_n.

\theta_i = \theta_i - \alpha \times \frac{\partial L(\theta_i)}{\partial \theta_i}

where the loss function is L(\theta) = \frac{1}{2} \sum\limits_{i=1}^{n} (h_\theta(x_i) - y_i)^2

Because the model must be trained on the training data in order to fit the optimal weights and biases, training starts from the error, which is the difference between the network's output and the observed value in the training data; at the same time, the neurons are linked through a chain of composed functions.

This is why the error has to be propagated backwards, starting from the output layer.

Example: fitting w_1

w_1 = w_1 - \alpha \times \frac{\partial L(w)}{\partial w_1}

\frac{\partial L(w_1)}{\partial w_1} = \frac{\partial L(w)}{\partial output} \times \frac{\partial output}{\partial y} \times \frac{\partial y}{\partial activation} \times \frac{\partial activation}{\partial w_1}

Using the Softplus activation function as an example, this expands to:

\frac{\partial L(w)}{\partial w_1} = \sum\limits_{i} \left[ (output_i - observe_i) \times \theta_{1,i} \times \frac{1}{1 + e^{-activation_1}} \right] \times input_1

The partial derivatives of the other weights and biases are computed in the same way… A single worked update step is sketched below.
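
To make the chain rule above concrete, here is a minimal sketch of one gradient-descent step for a single hidden-layer weight in a tiny 1-1-1 network (one input, one Softplus hidden neuron, one linear output neuron). All numbers are made up for illustration and are not taken from the wheat data:

from math import exp, log

x, target = 0.5, 1.0        # one made-up training sample
w1, b1 = 0.4, 0.1           # hidden-layer weight and bias
theta1, b2 = 0.3, 0.2       # output-layer weight and bias
alpha = 0.005               # learning rate

# forward pass
activation = w1 * x + b1                 # hidden pre-activation
y = log(1.0 + exp(activation))           # Softplus output of the hidden neuron
output = theta1 * y + b2                 # linear output neuron

# backward pass, following the chain rule term by term
d_output = output - target                                            # dL/d(output)
d_activation = d_output * theta1 * (1.0 / (1.0 + exp(-activation)))   # Softplus' = sigmoid
grad_w1 = d_activation * x                                            # dL/d(w1)

w1 = w1 - alpha * grad_w1                                             # gradient-descent update
print(grad_w1, w1)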

III. Code Implementation

Everything from reading the data onwards is written by hand, without calling any third-party libraries.

1. Import basic libraries

#1. Import basic libraries
from csv import reader
from math import exp,log
from random import randrange,seed,random
import copy

2. Read the CSV file and convert data types

#2. Read the CSV file and convert data types
def csv_loader(file):
    dataset=list()
    with open(file,'r') as f:
        csv_reader=reader(f)    
        for row in csv_reader:
            if not row:
                continue
            dataset.append(row)
    return dataset

# Convert the feature strings to floats (the header row is skipped)
def str_to_float_converter(dataset):
    dataset=dataset[1:]
    for i in range(len(dataset[0])-1):
        for row in dataset:
            row[i]= float(row[i].strip())

# Convert the class labels to integers (the header row is skipped)
def str_to_int_converter(dataset):
    dataset=dataset[1:]
    class_values= [row[-1] for row in dataset]
    unique_values= set(class_values)
    converter_dict=dict()
    for i,value in enumerate(unique_values):
        converter_dict[value] = i
    for row in dataset:
        row[-1] = converter_dict[row[-1]]
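
A quick sanity check of the two converters on a made-up two-row dataset (header plus data; the values and labels here are purely illustrative):

sample = [['f1', 'f2', 'label'],
          ['1.5', '2.0', '1'],
          ['3.0', '4.5', '2']]
str_to_float_converter(sample)
str_to_int_converter(sample)
print(sample[1:])   # features are now floats, labels are now 0/1 integers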

3. Normalize the data

#3. Min-max normalization of each feature column
def normalization(dataset):
    for i in range(len(dataset[0])-1):
        col_values = [row[i] for row in dataset]
        max_value = max(col_values)
        min_value = min(col_values)
        for row in dataset:
            row[i] = (row[i] - min_value)/float(max_value - min_value)
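
A quick check of the min-max scaling on a tiny made-up dataset (one feature column plus a class label):

toy = [[2.0, 0], [4.0, 1], [6.0, 2]]   # one feature column, last value is the class
normalization(toy)
print(toy)   # the feature column is rescaled to [0, 1]: 0.0, 0.5, 1.0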

4. Split the data for k-fold cross-validation

#4. Split the data into k folds for cross-validation
def k_fold_cross_validation(dataset,n_folds):
    dataset_split=list()
    fold_size= int(len(dataset)/n_folds)
    dataset_copy = list(dataset)
    for i in range(n_folds):
        fold_data = list()
        while len(fold_data) < fold_size:
            index = randrange(len(dataset_copy))
            fold_data.append(dataset_copy.pop(index))
        dataset_split.append(fold_data)
    return dataset_split
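
For example, splitting a made-up ten-row dataset into 5 folds (rows are drawn at random, so the contents of each fold will vary):

toy = [[i, i % 2] for i in range(10)]             # 10 made-up rows, last column is the class
folds = k_fold_cross_validation(toy, 5)
print(len(folds), [len(fold) for fold in folds])  # 5 folds, 2 rows each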

5. Compute accuracy

#5. Compute classification accuracy (percentage of correct predictions)
def calculate_accuracy(actual,predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct +=1
    accuracy = correct/float(len(actual)) *100.0
    return accuracy

6. Evaluate the model

#6. Score the model with k-fold cross-validation
def mode_scores(dataset,algo,n_folds,*args):
    dataset_split = k_fold_cross_validation(dataset,n_folds)
    scores  = list() 
    for fold in dataset_split:
        train = copy.deepcopy(dataset_split)
        train.remove(fold)
        train = sum(train, [])
        test =list()
        test = copy.deepcopy(fold)
        predicted = algo(train, test, *args)
        actual = [row[-1] for row in fold]
        accuracy= calculate_accuracy(actual,predicted)
        scores.append(accuracy)
    return scores
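
One detail in mode_scores worth noting: sum(train, []) simply merges the remaining folds into a single flat training list, as in this small illustrative snippet:

folds = [[['a']], [['b']], [['c']]]   # three folds with one row each
train = list(folds)
train.remove(folds[0])                # hold the first fold out as the test set
print(sum(train, []))                 # [['b'], ['c']]: the other folds merged into one list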

The following is the code for the artificial neural network algorithm.

7. Initialize the neural network

#7. Initialize the neural network (one hidden layer, one output layer; each neuron's last weight is its bias)
def initialize_network(n_inputs,n_hiddens,n_outputs):
    network = list()
    hidden_layer = [{'weight':[random() for i in range(n_inputs+1)]} for i in range(n_hiddens)]
    network.append(hidden_layer)
    output_layer = [{'weight':[random() for i in range(n_hiddens+1)]} for i in range(n_outputs)]
    network.append(output_layer)
    return network
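
For instance, initializing a toy network with 2 inputs, 1 hidden neuron, and 2 outputs produces the structure below (the weights are random, so your numbers will differ):

seed(1)
net = initialize_network(2, 1, 2)
for layer in net:
    print(layer)
# the hidden layer holds 1 neuron with 2 weights + 1 bias;
# the output layer holds 2 neurons, each with 1 weight + 1 bias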

8. Compute the activation

#8. Compute a neuron's activation (weighted sum of the inputs plus the bias)
def activate(weights,inputs):
    activation = weights[-1]   # the last weight is the bias term
    for i in range(len(weights)-1):
        activation +=weights[i] * inputs[i]
    return activation

9. Transfer the neuron's activation

#9. Pass the activation through the transfer (activation) function, here Softplus
def neuron_transfer(activation):
    output = log(1+exp(activation))   # Softplus: ln(1 + e^x)
    return output

10. Forward propagation to obtain the outputs

#10. Forward propagation to obtain the outputs
def forward_propagation(network,row):
    inputs = row 
    for layer in network:
        inputs_new = list()
        for neuron in layer:
            activation = activate(neuron['weight'],inputs)
            if layer != network[-1]:
                # hidden layer: pass the activation through the Softplus transfer
                neuron['output'] = neuron_transfer(activation)
                neuron['input'] = activation
                inputs_new.append(neuron['output'])
            else:
                # output layer: keep the raw (linear) activation as the output
                neuron['output'] = activation
                neuron['input'] = activation
                inputs_new.append(neuron['output'])
        inputs = inputs_new
    return inputs
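
Assuming the functions above have been defined, a single forward pass on a made-up row looks like this (the last element of the row is the class label, which activate never reads):

seed(2)
net = initialize_network(2, 1, 2)      # 2 inputs, 1 hidden neuron, 2 outputs
row = [0.2, 0.7, 0]                    # two features plus a dummy class label
print(forward_propagation(net, row))   # one raw (linear) score per output neuron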

11. Derivative of the transfer function (derivative of a neuron's output with respect to its pre-activation input)

#11. Derivative of the transfer function (the derivative of Softplus is the sigmoid)
def transfer_derivative(input_):
    derivative =  1.0/(1.0 + exp(-input_)) 
    return derivative
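
A quick numerical check, using the made-up value x = 0.3, that the slope of the Softplus transfer really is the sigmoid returned by transfer_derivative:

x, h = 0.3, 1e-6
numeric = (neuron_transfer(x + h) - neuron_transfer(x - h)) / (2 * h)
print(round(numeric, 6), round(transfer_derivative(x), 6))   # both approx. 0.5744, the sigmoid of 0.3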

12. Back-propagate the error

#12. Back-propagate the error through the network
def back_propagation_error(network,expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        if i-1 >= 0:
            layer_pre = network[i-1]
        else:
            layer_pre = None
        errors = list()
        # hidden layer: the error is the weighted sum of the 'delta' terms propagated back from the next layer
        if i != len(network) -1:
            for j in range(len(layer)):
                neuron = layer[j]
                error = 0.0
                for neuron_latter in network[i+1]:
                    error += (neuron_latter['weight'][j] *  neuron_latter['delta'][j])
                neuron['error'] = error
                errors.append(error)
                
        # output layer: the error is simply (output - expected)
        else:
            for j in range(len(layer)):
                neuron = layer[j]
                error = neuron['output'] - expected[j]
                neuron['error'] = error
                errors.append(error)
        # for each neuron of the previous layer, store this neuron's error times the transfer
        # derivative of that previous neuron's pre-activation; these 'delta' values are read
        # back when the previous layer's errors are computed
        for j in range(len(layer)):
            if not layer_pre:
                continue
            else:
                neuron = layer[j]
                neuron['delta'] = list()
                for neuron_pre in layer_pre:
                    error_class= errors[j] * transfer_derivative(neuron_pre['input'])
                    neuron['delta'].append(error_class)

13. Update the weights

#13. Update the weights and biases via gradient descent
def update_weights(network,row,learning_rate):
    for i in range(len(network)):
        inputs = row[:-1]
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i-1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weight'][j] -= learning_rate * neuron['error'] * inputs[j]
            # update the bias (the last weight) once per neuron, not once per input
            neuron['weight'][-1] -= learning_rate * neuron['error']

14. Train the network

#14. Train the network with stochastic gradient descent (one update per row)
def train_network(network,train,learning_rate,n_epochs,n_outputs):
    for epoch in range(n_epochs):
        sum_error = 0.0
        for row in train:
            outputs = forward_propagation(network,row)
            expected = [0 for i in range(n_outputs)]   # one-hot encode the expected class
            expected[row[-1]] = 1
            sum_error += sum([(outputs[i] - expected[i])**2 for i in range(len(expected))])
            back_propagation_error(network,expected)
            update_weights(network,row,learning_rate)
        print('Epoch [%d]: learning rate [%.3f], total error [%.3f]' % (epoch, learning_rate, sum_error))

15. Make predictions

#15. Make a prediction for a single row
def make_prediction(network,row):
    outputs = forward_propagation(network,row)
    prediction = outputs.index(max(outputs))   # the predicted class is the index of the largest output
    return prediction

16. The backpropagation algorithm end to end

#16. Backpropagation: train a network on the training folds, then predict the test fold
def back_propagation(train,test,learning_rate,n_epochs,n_hiddens):
    n_inputs = len(train[0]) -1
    n_outputs = len(set(row[-1] for row in train))
    network = initialize_network(n_inputs,n_hiddens,n_outputs)
    train_network(network,train,learning_rate,n_epochs,n_outputs)
    predictions = list()
    for row in test:
        prediction = make_prediction(network,row)
        predictions.append(prediction)
    return predictions
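
A tiny end-to-end sanity check on made-up, well-separated data (it assumes all the functions above are defined; training progress is printed for every epoch, and the exact outcome depends on the random seed):

seed(3)
toy_train = [[0.1, 0.2, 0], [0.2, 0.1, 0], [0.8, 0.9, 1], [0.9, 0.8, 1]]
toy_test  = [[0.15, 0.15, 0], [0.85, 0.85, 1]]
print(back_propagation(toy_train, toy_test, 0.05, 20, 3))
# two predicted class indices, ideally [0, 1]; increase the number of epochs if they have not converged yet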

17. Run the test

#17. Run the test
file='./download_datas/seeds_dataset.csv'
dataset=csv_loader(file)
str_to_float_converter(dataset)
str_to_int_converter(dataset)
dataset=dataset[1:]   # drop the header row
normalization(dataset)

seed(1)
n_folds=5
learning_rate=0.005
n_epochs=500
n_hiddens=10

algo= back_propagation
scores=mode_scores(dataset,algo,n_folds,learning_rate,n_epochs,n_hiddens)

print('The scores of our model are : %s' % scores)
print('The average score of our model is : %.3f%%' % (sum(scores)/float(len(scores))))

Test results

# Output
The scores of our model are : [95.23809523809523, 95.23809523809523, 100.0, 95.23809523809523, 97.61904761904762]
The average score of our model is : 96.667%

IV. Complete Code

#1. Import basic libraries
from csv import reader
from math import exp,log
from random import randrange,seed,random
import copy

#2. Read the CSV file and convert data types
def csv_loader(file):
    dataset=list()
    with open(file,'r') as f:
        csv_reader=reader(f)    
        for row in csv_reader:
            if not row:
                continue
            dataset.append(row)
    return dataset

# Convert the feature strings to floats (the header row is skipped)
def str_to_float_converter(dataset):
    dataset=dataset[1:]
    for i in range(len(dataset[0])-1):
        for row in dataset:
            row[i]= float(row[i].strip())

# Convert the class labels to integers (the header row is skipped)
def str_to_int_converter(dataset):
    dataset=dataset[1:]
    class_values= [row[-1] for row in dataset]
    unique_values= set(class_values)
    converter_dict=dict()
    for i,value in enumerate(unique_values):
        converter_dict[value] = i
    for row in dataset:
        row[-1] = converter_dict[row[-1]]

#3. Min-max normalization of each feature column
def normalization(dataset):
    for i in range(len(dataset[0])-1):
        col_values = [row[i] for row in dataset]
        max_value = max(col_values)
        min_value = min(col_values)
        for row in dataset:
            row[i] = (row[i] - min_value)/float(max_value - min_value)

#4. Split the data into k folds for cross-validation
def k_fold_cross_validation(dataset,n_folds):
    dataset_split=list()
    fold_size= int(len(dataset)/n_folds)
    dataset_copy = list(dataset)
    for i in range(n_folds):
        fold_data = list()
        while len(fold_data) < fold_size:
            index = randrange(len(dataset_copy))
            fold_data.append(dataset_copy.pop(index))
        dataset_split.append(fold_data)
    return dataset_split

#5. Compute classification accuracy (percentage of correct predictions)
def calculate_accuracy(actual,predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct +=1
    accuracy = correct/float(len(actual)) *100.0
    return accuracy

#6. Score the model with k-fold cross-validation
def mode_scores(dataset,algo,n_folds,*args):
    dataset_split = k_fold_cross_validation(dataset,n_folds)
    scores  = list() 
    for fold in dataset_split:
        train = copy.deepcopy(dataset_split)
        train.remove(fold)
        train = sum(train, [])
        test =list()
        test = copy.deepcopy(fold)
        predicted = algo(train, test, *args)
        actual = [row[-1] for row in fold]
        accuracy= calculate_accuracy(actual,predicted)
        scores.append(accuracy)
    return scores


# The artificial neural network algorithm
#7. Initialize the neural network (one hidden layer, one output layer; each neuron's last weight is its bias)
def initialize_network(n_inputs,n_hiddens,n_outputs):
    network = list()
    hidden_layer = [{'weight':[random() for i in range(n_inputs+1)]} for i in range(n_hiddens)]
    network.append(hidden_layer)
    output_layer = [{'weight':[random() for i in range(n_hiddens+1)]} for i in range(n_outputs)]
    network.append(output_layer)
    return network

#8. Compute a neuron's activation (weighted sum of the inputs plus the bias)
def activate(weights,inputs):
    activation =weights[-1]
    for i in range(len(weights)-1):
        activation +=weights[i] * inputs[i]
    return activation


#9. Pass the activation through the transfer (activation) function, here Softplus
def neuron_transfer(activation):
    output = log(1+exp(activation))
    return output

#10. Forward propagation to obtain the outputs
def forward_propagation(network,row):
    inputs = row 
    for layer in network:
        inputs_new = list()
        for neuron in layer:
            activation = activate(neuron['weight'],inputs)
            if layer != network[-1]:
                neuron['output'] = neuron_transfer(activation)
                neuron['input'] = activation
                inputs_new.append(neuron['output'])
            else:
                neuron['output'] = activation
                neuron['input'] = activation
                inputs_new.append(neuron['output'])
        inputs = inputs_new
    return inputs

#11. Derivative of the transfer function (the derivative of Softplus is the sigmoid)
def transfer_derivative(input_):
    derivative =  1.0/(1.0 + exp(-input_)) 
    return derivative

#12. Back-propagate the error through the network
def back_propagation_error(network,expected):
    for i in reversed(range(len(network))):
        layer = network[i]
        if i-1 >= 0:
            layer_pre = network[i-1]
        else:
            layer_pre = None
        errors = list()
        if i != len(network) -1:
            for j in range(len(layer)):
                neuron = layer[j]
                error = 0.0
                for neuron_latter in network[i+1]:
                    error += (neuron_latter['weight'][j] *  neuron_latter['delta'][j])
                neuron['error'] = error
                errors.append(error)
                
        else:
            for j in range(len(layer)):
                neuron = layer[j]
                error = neuron['output'] - expected[j]
                neuron['error'] = error
                errors.append(error)
        for j in range(len(layer)):
            if not layer_pre:
                continue
            else:
                neuron = layer[j]
                neuron['delta'] = list()
                for neuron_pre in layer_pre:
                    error_class= errors[j] * transfer_derivative(neuron_pre['input'])
                    neuron['delta'].append(error_class)

#13. Update the weights and biases via gradient descent
def update_weights(network,row,learning_rate):
    for i in range(len(network)):
        inputs = row[:-1]
        if i != 0:
            inputs = [neuron['output'] for neuron in network[i-1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weight'][j] -= learning_rate * neuron['error'] * inputs[j]
            # update the bias (the last weight) once per neuron, not once per input
            neuron['weight'][-1] -= learning_rate * neuron['error']

#14. Train the network with stochastic gradient descent (one update per row)
def train_network(network,train,learning_rate,n_epochs,n_outputs):
    for epoch in range(n_epochs):
        sum_error = 0.0
        for row in train:
            outputs = forward_propagation(network,row)
            expected = [0 for i in range(n_outputs)]
            expected[row[-1]] = 1
            sum_error += sum([(outputs[i] - expected[i])**2 for i in range(len(expected))])
            back_propagation_error(network,expected)
            update_weights(network,row,learning_rate)
        print('Epoch [%d]: learning rate [%.3f], total error [%.3f]' % (epoch, learning_rate, sum_error))
    
#15. Make a prediction for a single row
def make_prediction(network,row):
    outputs = forward_propagation(network,row)
    prediction = outputs.index(max(outputs))
    return prediction

#16. Backpropagation: train a network on the training folds, then predict the test fold
def back_propagation(train,test,learning_rate,n_epochs,n_hiddens):
    n_inputs = len(train[0]) -1
    n_outputs = len(set(row[-1] for row in train))
    network = initialize_network(n_inputs,n_hiddens,n_outputs)
    train_network(network,train,learning_rate,n_epochs,n_outputs)
    predictions = list()
    for row in test:
        prediction = make_prediction(network,row)
        predictions.append(prediction)
    return predictions
    

#17. Run the test
file='./download_datas/seeds_dataset.csv'
dataset=csv_loader(file)
str_to_float_converter(dataset)
str_to_int_converter(dataset)
dataset=dataset[1:]   # drop the header row
normalization(dataset)

seed(1)
n_folds=5
learning_rate=0.005
n_epochs=500
n_hiddens=10

algo= back_propagation
scores=mode_scores(dataset,algo,n_folds,learning_rate,n_epochs,n_hiddens)

print('The scores of our model are : %s' % scores)
print('The average score of our model is : %.3f%%' % (sum(scores)/float(len(scores))))