Neural Network with Python (neural-network regression + result visualization in Python)

I recently used a neural network model, the core building block of deep learning, in a mathematical modeling contest, so I packaged the code from my project into the function below, including result visualization and goodness-of-fit output, so that you can use it directly.

(As an aside, here is an introduction to how neural networks work that I wrote earlier: https://zhuanlan.zhihu.com/p/486668654)

Of course, the optimal network structure and parameters differ from dataset to dataset; you can tune the parameters of the function below based on your own fitting results to find the best model. (If you are working on a classification problem instead, just change the output activation function to softmax. The loss function can also be changed; Huber loss and M-regression are representative lines of research on loss functions, and interested readers can look up the literature.)
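To make the classification switch concrete: in Keras you would change the output layer to `Dense(n_classes, activation='softmax')` and compile with `loss='categorical_crossentropy'`. The numpy sketch below (illustration only, not part of the function) shows what those two pieces compute:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(p, y_true):
    # y_true is a one-hot vector; standard categorical cross-entropy
    return -np.sum(y_true * np.log(p))

logits = np.array([2.0, 1.0, 0.1])   # raw outputs of the last Dense layer
probs = softmax(logits)
print(probs, probs.sum())            # class probabilities summing to 1
print(cross_entropy(probs, np.array([1.0, 0.0, 0.0])))
```

The larger a logit, the larger its probability, and the loss is small when the probability mass sits on the true class.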

If I find time later, I will also write a genetic algorithm or particle swarm optimizer (or perhaps another heuristic algorithm) that wraps the function below and automatically searches for the model parameters that predict best.
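Until then, even a plain random search over a parameter grid can serve as a stopgap. A minimal stdlib sketch (the grid values here are hypothetical placeholders; each sampled configuration would be fed to a model-building routine and scored by validation R-squared):

```python
import random

# hypothetical search space; adjust the values to your own data
param_grid = {
    'units':   [250, 500, 1000],
    'dropout': [0.1, 0.15, 0.2, 0.3],
    'epochs':  [100, 250, 500],
}

def sample_params(grid, rng):
    # draw one random configuration from the grid
    return {k: rng.choice(v) for k, v in grid.items()}

rng = random.Random(0)   # seeded so the search is reproducible
candidates = [sample_params(param_grid, rng) for _ in range(5)]
for c in candidates:
    print(c)   # each would be trained and the best validation score kept
```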

from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from keras import regularizers
from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn import preprocessing

def NN_Plot(i, X, Y, model_output=False):
    # i: label for the y-axis of the output plots
    # X: DataFrame of input features
    # Y: DataFrame of the target variable
    # model_output: whether to return the trained model and fitted scaler

    # 1. Scale the features to [0, 1]
    min_max_scaler = preprocessing.MinMaxScaler()
    X_scale = min_max_scaler.fit_transform(X)

    # 2. Split into training and validation sets
    X_train, X_test, Y_train, Y_test = train_test_split(
        X_scale, Y, test_size=0.3, random_state=42)  # fixed seed for reproducibility
    
    # 3. Model architecture
    model = Sequential()
    model.add(Dense(units=1000,             # number of neurons in this layer
                    activation='relu',      # ReLU activation
                    input_shape=(X_train.shape[1],)))  # number of input features

    model.add(Dropout(0.3))  # probability of dropping a unit during training

    model.add(Dense(units=1000,
                    kernel_regularizer=regularizers.l2(0.01),    # L2 penalty on the weights
                    activity_regularizer=regularizers.l1(0.01),  # L1 penalty on the activations
                    activation='relu'))
    model.add(Dropout(0.15))

    model.add(Dense(units=500,
                    kernel_regularizer=regularizers.l2(0.01),
                    activity_regularizer=regularizers.l1(0.01),
                    activation='relu'))
    model.add(Dropout(0.15))

    model.add(Dense(units=500,
                    kernel_regularizer=regularizers.l2(0.01),
                    activity_regularizer=regularizers.l1(0.01),
                    activation='relu'))
    model.add(Dropout(0.2))

    model.add(Dense(units=1,
                    activation='linear',   # a linear output layer is the usual choice for regression
                    kernel_regularizer=regularizers.l2(0.01)))

    model.compile(optimizer='adam',
                  loss='mse',       # mean squared error loss
                  metrics=['mae'])  # accuracy is meaningless for regression; track MAE instead
    
    # 4. Train the model; batch_size and epochs can be tuned
    hist = model.fit(X_train, Y_train,
                     batch_size=32, epochs=250, verbose=2,
                     validation_data=(X_test, Y_test))
    
    # 5. Plot the training and validation loss curves
    plt.plot(hist.history['loss'])
    plt.plot(hist.history['val_loss'])
    plt.title('Model loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['Train', 'Val'], loc='upper right')
    plt.show()
    
    y_pred = model.predict(X_test)
    y_pred_train = model.predict(X_train)

    # 6. Plot predicted vs. actual values on the training and test sets
    plt.figure(figsize=(30,9),dpi = 200)
    plt.subplot(1,2,1)
    ls_x_train = [x for x in range(1, len(y_pred_train.tolist())+1)]
    plt.plot(ls_x_train, y_pred_train.tolist(), label='Train predictions', marker='o')
    plt.plot(ls_x_train, Y_train.iloc[:, 0].tolist(), label='Train actual', linestyle='--', marker='o')
    plt.ylabel(i, fontsize = 15)
    plt.legend(fontsize = 15)
    plt.xticks(fontsize = 12)
    plt.yticks(fontsize = 12)
    
    plt.subplot(1,2,2)
    ls_x = [x for x in range(1, len(y_pred.tolist())+1)]
    plt.plot(ls_x, y_pred.tolist(), label='Validation predictions', marker='o')
    plt.plot(ls_x, Y_test.iloc[:, 0].tolist(), label='Validation actual', linestyle='--', marker='o')
    plt.ylabel(i, fontsize = 15)
    plt.xticks(fontsize = 12)
    plt.yticks(fontsize = 12)
    plt.legend(fontsize = 15)
		
    # Compute R-squared on the training and test sets
    r2_train = R_2(Y_train.iloc[:, 0].tolist(), y_pred_train)
    r2_test = R_2(Y_test.iloc[:, 0].tolist(), y_pred)
    print([r2_train, r2_test, (r2_train + r2_test) / 2])

    # Optionally return the trained model and the fitted scaler
    if model_output:
        return [model, min_max_scaler]

# R-squared (coefficient of determination)
def R_2(y, y_pred):
    y_pred = [float(v) for v in y_pred]   # model.predict returns an (n, 1) array; coerce each row to a scalar
    y_mean = sum(y) / len(y)
    sst = sum((yi - y_mean) ** 2 for yi in y)             # total sum of squares
    sse = sum((p - yi) ** 2 for p, yi in zip(y_pred, y))  # residual sum of squares
    return 1 - sse / sst


NN_model = NN_Plot(i, X, Y, model_output=False)  # i, X, Y are your axis label and data; set model_output=True to get the model back
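A quick hand-checked sanity test of the R-squared helper, with values chosen so the sums of squares are easy to verify (the helper is repeated here so the snippet runs standalone):

```python
# The R_2 helper, repeated so this check is self-contained
def R_2(y, y_pred):
    y_pred = [float(v) for v in y_pred]
    y_mean = sum(y) / len(y)
    sst = sum((yi - y_mean) ** 2 for yi in y)
    sse = sum((p - yi) ** 2 for p, yi in zip(y_pred, y))
    return 1 - sse / sst

# y has mean 2, so SST = (1-2)^2 + (2-2)^2 + (3-2)^2 = 2
# SSE = 0.1^2 + 0.1^2 + 0.2^2 = 0.06, so R^2 = 1 - 0.06/2 ≈ 0.97
print(R_2([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))   # ≈ 0.97
```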


Below is the fit I obtained on an industrial modeling problem with the code above; the goodness of fit reached 95%. You can adjust the parameters or add hidden layers to suit your own data and results, and thereby optimize your own neural network model.
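One practical note when reusing the returned model: `NN_Plot` fits the `MinMaxScaler` on the training features, so any new sample must be transformed with that *same* fitted scaler before calling `model.predict`. A sklearn-only sketch of that reuse (the feature values here are made up for illustration):

```python
from sklearn.preprocessing import MinMaxScaler

train_X = [[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]]  # made-up training features
scaler = MinMaxScaler().fit(train_X)                # learns per-column min and max

new_X = [[5.0, 10.0]]                               # a new sample at prediction time
scaled = scaler.transform(new_X)                    # reuse the SAME fitted scaler
print(scaled)   # column 1 maps to 0.5, column 2 to 0.0
# these scaled features are what model.predict expects,
# since the model was trained on scaled inputs
```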
