TongYuan's MindSpore-Based AI Toolbox Series Officially Released, Substantially Reducing Product Development Costs

With the arrival of the intelligent era, 同元软控 (TongYuan) and Huawei have partnered to build the MWORKS AI toolbox on top of the MindSpore framework, officially released on January 8, 2023. By organically combining the existing simulation-modeling capabilities with AI models, the MindSpore-based MWORKS AI toolbox can substantially reduce product development costs. Going forward, the toolbox can be applied to engineering modeling of complex equipment systems in aviation, aerospace, shipbuilding, energy, and other domains, enabling more advanced and more intelligent digital-twin systems.

01  MWORKS AI Toolbox Architecture

The MWORKS AI toolbox consists of a deep learning toolbox and a machine learning toolbox. The deep learning toolbox supports the design, model construction, training, and application of deep networks such as feed-forward, convolutional, and recurrent neural networks. The machine learning toolbox supports data description, analysis, and modeling, including clustering, principal component analysis, dimensionality reduction, classification, and regression. As shown in the architecture figure, the toolbox is organized into three layers: the backend, the framework core, and the front end.

02  Modules and Functions

This section introduces the main modules and functions of the deep learning toolbox. The deep learning toolbox provides 5 modules and 110 basic functions: 41 for deep learning with images, 19 pretrained networks, 2 network-training components, 33 for deep learning with time series, sequences, and text, and 15 for function approximation and clustering. Some of these functions are listed below, followed by a minimal usage sketch.

Function | Description
trainingOptions | Options for training a deep learning neural network
trainNetwork | Train a deep learning neural network
squeezenet | SqueezeNet convolutional neural network
googlenet | GoogLeNet convolutional neural network
inceptionv3 | Inception-v3 convolutional neural network
convolution1dLayer | 1-D convolution layer
mobilenetv2 | MobileNet-v2 convolutional neural network
resnet18 | ResNet-18 convolutional neural network
resnet50 | ResNet-50 convolutional neural network
resnet101 | ResNet-101 convolutional neural network
xception | Xception convolutional neural network
transposedConv1dLayer | Transposed 1-D convolution layer
batchNormalization1dLayer | 1-D batch normalization layer
batchNormalization3dLayer | 3-D batch normalization layer
shufflenet | Pretrained ShuffleNet convolutional neural network
instanceNormalization1dLayer | 1-D instance normalization layer
instanceNormalization3dLayer | 3-D instance normalization layer
averagePooling1dLayer | 1-D average pooling layer
alexnet | AlexNet convolutional neural network
vgg16 | VGG-16 convolutional neural network
vgg19 | VGG-19 convolutional neural network
convolution2dLayer | 2-D convolution layer
convolution3dLayer | 3-D convolution layer
groupedConvolution2dLayer | 2-D grouped convolution layer
transposedConv2dLayer | Transposed 2-D convolution layer
transposedConv3dLayer | Transposed 3-D convolution layer
fullyConnectedLayer | Fully connected layer
reluLayer | Rectified linear unit (ReLU) layer
leakyReluLayer | Leaky rectified linear unit (ReLU) layer
clippedReluLayer | Clipped rectified linear unit (ReLU) layer
eluLayer | Exponential linear unit (ELU) layer
tanhLayer | Hyperbolic tangent (tanh) layer
swishLayer | Swish layer
batchNormalizationLayer | Batch normalization layer
groupNormalizationLayer | Group normalization layer
instanceNormalizationLayer | Instance normalization layer
layerNormalizationLayer | Layer normalization layer
crossChannelNormalizationLayer | Channel-wise local response normalization layer
dropoutLayer | Dropout layer
crop2dLayer | 2-D crop layer
crop3dLayer | 3-D crop layer
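
As a minimal illustration of how these building blocks fit together, the sketch below assembles a few of the listed layers, configures training, and runs training and prediction. It follows the calling pattern used in the application cases of Section 03 and in the appendix; the data variables (XTrain, YTrain, XTest), the layer sizes, and the training settings are placeholders, and the fully connected input size assumes the convolution preserves the 28x28 spatial size, as in Appendix (1).

using TyDeepLearning

# Illustrative sketch only: assemble layers, configure training, train, and predict.
layers = SequentialCell([
    convolution2dLayer(1, 16, 3),              # 1 input channel -> 16 feature maps, 3x3 kernel
    reluLayer(),
    flattenLayer(),
    fullyConnectedLayer(16 * 28 * 28, 10),     # assumes 28x28 inputs with size-preserving convolution
    softmaxLayer(),
])
options = trainingOptions("CrossEntropyLoss", "Adam", "Accuracy", 64, 10, 0.001; Plots=true)
net = trainNetwork(XTrain, YTrain, layers, options)
YPred = TyDeepLearning.predict(net, XTest)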

03  Application Cases

Based on the TongYuan-MindSpore AI toolbox, we have carried out studies in areas such as images, text, and audio.

3.1 Handwritten Digit Recognition and Tilt-Angle Prediction

The handwritten digit (MNIST) dataset is a large database of handwritten digits, commonly used to train image processing systems and widely used for training and testing in machine learning. The dataset used here contains synthetic images of handwritten digits, the digit shown in each image, and the tilt angle of the digit. It provides 60,000 training images and 10,000 test images; each 28x28-pixel image shows one digit from 0 to 9, for a total of 10 classes.

Figure 1: Handwritten digit dataset

Using the convolution operator (Conv2d), max-pooling operator (MaxPool), and other operators provided by the TongYuan-MindSpore AI toolbox, we build a convolutional neural network, take the images and their digit labels as input and output, and train a network that recognizes handwritten digits (see Appendix (1) for the full implementation based on the TongYuan-MindSpore AI toolbox).
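
A condensed sketch of the classification network and training call, extracted from the full code in Appendix (1):

# Condensed from Appendix (1): a small CNN for 28x28 digit images, 10 output classes.
layers = SequentialCell([
    convolution2dLayer(1, 20, 5),              # 1 input channel -> 20 feature maps, 5x5 kernel
    reluLayer(),
    maxPooling2dLayer(2; Stride=2),            # 2x2 max pooling with stride 2
    flattenLayer(),
    fullyConnectedLayer(20 * 14 * 14, 10),     # flattened feature maps -> 10 digit classes
    softmaxLayer(),
])
options = trainingOptions(
    "CrossEntropyLoss", "Momentum", "Accuracy", 128, 50, 0.001; Plots=true
)
net = trainNetwork(X_train, Y_train, layers, options)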

Figure 2: Training process for handwritten digit recognition

Six images are drawn at random from the test set, recognized with the trained model, and the results are printed to the command-line window:

Figure 3: Handwritten digit recognition results

The convolutional module of this network extracts image features. By removing the Softmax layer and appending a new fully connected layer, we obtain a new model, which is then trained with the digit tilt angle as the prediction target.
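
In the implementation of Appendix (1), the regression variant is built as a network with a single-output fully connected head and trained with an RMSE loss; a condensed sketch:

# Condensed from Appendix (1): regression network with a single output for the tilt angle.
layers = SequentialCell([
    convolution2dLayer(1, 25, 12),
    reluLayer(),
    flattenLayer(),
    fullyConnectedLayer(25 * 28 * 28, 1),      # single regression output (angle)
])
options = trainingOptions("RMSELoss", "Adam", "MSE", 10, 50, 0.0001; Plots=true)
net = trainNetwork(XTrain, YTrain, layers, options)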

Figure 4: Training process for tilt-angle prediction

Images randomly selected from the test set are used for prediction; the orange line shows the predicted angle, which closely matches the actual tilt of the digit.

Figure 5: Tilt-angle prediction results

3.2 Japanese Vowel Classification

This example shows how to classify sequence data with a long short-term memory (LSTM) network (see Appendix (2) for the full implementation based on the TongYuan-MindSpore AI toolbox).

To train a deep neural network to classify sequence data, you can use an LSTM network. An LSTM network takes sequence data as input and makes predictions based on the individual time steps of the sequence.

This example uses the Japanese Vowels dataset. It trains an LSTM network to identify the speaker from time-series data representing two Japanese vowels spoken in succession. The training data contains time series from nine speakers. Each sequence has 12 features and a different length. The dataset contains 270 training observations and 370 test observations.

Load the Japanese Vowels training data and visualize the first time series; each line corresponds to one feature.

Figure 6: Dataset visualization
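
Because the sequences have different lengths, the implementation in Appendix (2) truncates each sequence to a fixed number of time steps, zero-fills missing values, and reshapes the data into a 3-D array of (observations, time steps, features) before training; a condensed sketch:

# Condensed from Appendix (2): truncate/zero-fill the sequences to 29 time steps.
truncation_number = 29
train_data = Array(train_data[:, 1:truncation_number])
for i in range(1, 270 * 12), j in range(1, truncation_number)
    if train_data[i, j] === missing
        train_data[i, j] = 0                   # replace missing steps with zeros
    end
end
train_data = reshape(train_data, (12, 270, truncation_number))
train_data = permutedims(train_data, (2, 3, 1))  # (observations, time steps, features)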

Using the LSTM operator, the fully connected operator, the Softmax activation, and other components provided by the TongYuan-MindSpore AI toolbox, we define the LSTM network architecture, train it on the Japanese Vowels dataset, and plot the loss curve.

# Build the network
layers = SequentialCell([
    bilstmLayer(12, 100; NumLayers=3, Batch_First=true),
    flattenLayer(),
    fullyConnectedLayer(truncation_number * 200, 9),
    softmaxLayer(),
])
options = trainingOptions(
    "CrossEntropyLoss", "Adam", "Accuracy", 27, 200, 0.001; Shuffle=true, Plots=true
)
net = trainNetwork(train_data, train_label, layers, options)
YPred = TyDeepLearning.classify(net, test_data)
accuracy = Accuracy(YPred, test_label)
print(accuracy)

Figure 7: Loss curve

3.3 Human Activity Classification with LSTM

The dataset for this case comes from sequences recorded by body-worn sensors. Each sequence has three features, corresponding to acceleration along three coordinate axes. The dataset contains acceleration data from seven volunteers; the data from six of them forms the training set, and the remaining one is used as the test set.

One coordinate channel of one of the sequences:

Figure 8: Training sequence

We build an LSTM neural network from the network layers provided by the TongYuan-MindSpore AI toolbox, train it, and then evaluate it on the test data. The training process is shown below (see Appendix (3) for the full implementation based on the TongYuan-MindSpore AI toolbox):

Figure 9: LSTM network training process
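
A condensed sketch of the network and training configuration used here, taken from Appendix (3); the feature count, hidden size, class count, and window length follow the appendix:

# Condensed from Appendix (3): single-layer LSTM classifier over sliding windows of length 6.
numFeatures = 3        # three acceleration channels
numHiddenUnits = 200
numClasses = 5
t_window = 6
layers = SequentialCell([
    lstmLayer(numFeatures, numHiddenUnits; NumLayers=1),
    flattenLayer(),
    fullyConnectedLayer(numHiddenUnits * t_window, numClasses),
    softmaxLayer(),
])
options = trainingOptions("CrossEntropyLoss", "Adam", "Accuracy", 512, 200, 0.005; Plots=true)
net = trainNetwork(train_data, train_label, layers, options)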

After training, the trained network is evaluated on the test set. The figure below shows the three coordinate channels of the test data:

Figure 10: Test sequence

Predicting on the data above gives the following result:

Figure 11: Test results

3.4 Voiceprint Recognition

Voiceprint (speaker) recognition is a prominent research area with applications including forensics and biometric authentication. Many speaker recognition systems rely on precomputed features such as i-vectors or MFCCs, which are then fed into a machine learning or deep learning network for classification. Other deep learning speech systems bypass the feature extraction stage and feed the audio signal directly into the network; in such end-to-end systems, the network learns low-level audio signal characteristics directly (see Appendix (4) for the full implementation based on the TongYuan-MindSpore AI toolbox).

Import the dataset, which comes from a Chinese speech corpus. The dataset size can be adjusted via the sampling rate.

train_data, train_label, test_data, test_label = VPR_dataset(1500)

Similarly, using the convolution operator (Conv2d), batch normalization operator (BN), max-pooling operator (MaxPool), and other operators provided by the TongYuan-MindSpore AI toolbox, we define the following network structure, train it on the dataset, and plot the loss curve.

net = SequentialCell([
    convolution2dLayer(1, 80, (1, 251); PaddingMode="valid"),
    batchNormalization2dLayer(80),
    leakyReluLayer(0.2),
    maxPooling2dLayer((1, 3)),
    convolution2dLayer(80, 60, (1, 251); PaddingMode="valid"),
    batchNormalization2dLayer(60),
    leakyReluLayer(0.2),
    maxPooling2dLayer((1, 3)),
    convolution2dLayer(60, 60, (1, 5); PaddingMode="valid"),
    batchNormalization2dLayer(60),
    leakyReluLayer(0.2),
    maxPooling2dLayer((1, 3)),
    flattenLayer(),
    fullyConnectedLayer(60 * 2490, 2048),
    batchNormalization1dLayer(2048),
    leakyReluLayer(0.2),
    fullyConnectedLayer(2048, 1024),
    batchNormalization1dLayer(1024),
    leakyReluLayer(0.2),
    fullyConnectedLayer(1024, 256),
    batchNormalization1dLayer(256),
    leakyReluLayer(0.2),
    fullyConnectedLayer(256, 20),
    softmaxLayer()
])
options = trainingOptions(
    "CrossEntropyLoss", "Adam", "Accuracy", 100, 150, 0.0005; Plots=true
)
net = trainNetwork(train_data, train_label, net, options)

Figure 12: Loss curve

3.5 Remaining Useful Life Prediction for Engines

This example shows how to use deep learning to predict the remaining useful life (RUL) of an engine (see Appendix (5) for the full implementation based on the TongYuan-MindSpore AI toolbox).

This example uses the Turbofan Engine Degradation Simulation dataset. It trains a CNN to predict the remaining useful life of an engine (predictive maintenance), measured in cycles, from time-series data representing the engine's various sensors. The training data contains simulated time series for 100 engines. Each sequence has a different length and corresponds to a full run-to-failure (RTF) instance. The test data contains 100 incomplete sequences, each paired with its remaining useful life at the end of the sequence. The dataset therefore contains 100 training observations and 100 test observations.

Each time series in the Turbofan Engine Degradation Simulation dataset represents one engine. Each engine starts with an unknown degree of initial wear and manufacturing variation. The engine operates normally at the start of each series and develops a fault at some point during the series. In the training set, the fault grows in magnitude until system failure.

The data contains 26 numeric columns. Each row is a snapshot of data taken during one operating cycle, and each column represents a different variable. The columns correspond to the following:

- Column 1 - Unit number
- Column 2 - Cycle time
- Columns 3-5 - Operational settings
- Columns 6-26 - Sensor measurements 1-21

Columns 3-26 are used as feature data; the first 100 rows of each feature are plotted below.

Figure 13: Feature curves

As the figure shows, the features "op_setting_3", "Sensor1", "Sensor5", "Sensor10", "Sensor16", "Sensor18", and "Sensor19" remain constant and can be removed. The remaining 17 features are then normalized as a preprocessing step.
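
In Appendix (5), the constant columns are dropped by name and the remaining 17 features are min-max normalized; a condensed sketch:

# Condensed from Appendix (5): drop constant columns, then min-max normalize the remaining 17 features.
select!(
    train_FD001,
    Not([:Column5, :Column6, :Column10, :Column15, :Column21, :Column23, :Column24]),
)
feats = Array(train_FD001[!, 3:19])
data_max = maximum(feats; dims=1)
data_min = minimum(feats; dims=1)
feats_norm = (feats .- data_min) ./ (data_max .- data_min)   # scale each feature to [0, 1]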

Using the 1-D convolution operator (Conv1d), the ReLU activation, and other operators provided by the TongYuan-MindSpore AI toolbox, we define the following neural network structure.

# Build the network
layers = SequentialCell([
    convolution1dLayer(17, 32, 5),
    reluLayer(),
    convolution1dLayer(32, 64, 7),
    reluLayer(),
    convolution1dLayer(64, 128, 11),
    reluLayer(),
    convolution1dLayer(128, 256, 13),
    reluLayer(),
    convolution1dLayer(256, 512, 15),
    reluLayer(),
    flattenLayer(),
    fullyConnectedLayer(512 * sequence_length, 100),
    reluLayer(),
    dropoutLayer(0.5),
    fullyConnectedLayer(100, 1),
])
options = trainingOptions("RMSELoss", "Adam", "MSE", 512, 200, 0.001; Plots=true)
net = trainNetwork(XTrain, YTrain, layers, options)

Plot the RMSE loss curve.

Figure 14: RMSE loss curve

Both the training set and the test set are fed into the trained network for inference, and the predictions are compared with the true labels.

Figure 15: Predictions versus true labels

Figure 16: Comparison on the test set

04  Summary and Outlook

TongYuan and Huawei have jointly built a series of AI toolboxes that use the high-performance scientific computing language Julia as the user language and Huawei's MindSpore as the underlying framework. The deep learning toolbox, for example, includes a rich function library, application cases, and complete help documentation. With it, users can build and train various kinds of deep learning models in Julia, such as convolutional and recurrent neural networks. The deep learning toolbox has already been validated in applications such as image recognition, speech recognition, and fault prediction, with good results.

The collaboration between TongYuan and Huawei sets a model for the joint development of domestic software. Going forward, we will continue to deepen our cooperation in AI, release a new reinforcement learning toolbox, and promote the application of the AI toolbox series to multi-domain engineering modeling of complex systems in aviation, aerospace, vehicles, energy, and other fields, enabling mechanism-data fusion modeling, exploring intelligent modeling and simulation technologies, and building a new generation of intelligent modeling and simulation software platforms. We also hope that more enterprises and research institutes will join us in building domestic intelligent software and hardware platforms.

About Suzhou Tongyuan Software & Control Technology Co., Ltd.:

Suzhou Tongyuan Software & Control Technology Co., Ltd. ("TongYuan") is a high-tech company specializing in the development of next-generation system-level design and simulation industrial software, engineering services, and systems engineering solutions. Building on more than twenty years of technical accumulation by the team and over ten years of continuous company R&D, TongYuan has developed MWORKS, an internationally advanced and fully independent platform for scientific computing and system modeling and simulation.

Code Appendix

(1) Handwritten digit recognition and tilt-angle prediction

using TyDeepLearning
using TyPlot
using TyImages
using Random  # provides randperm, used below
# Train a convolutional neural network for image classification
XTrain, YTrain = DigitDatasetTrainData()
p = randperm(5000)
index = p[1:20]
figure(1)
for i in eachindex(range(1, 20))
    subplot(4, 5, i)
    imshow(XTrain[index[i], 1, :, :])
end
index1 = p[1:750]
index2 = p[751:end]
X_train = XTrain[index2, :, :, :]
Y_train = YTrain[index2]
X_Test = XTrain[index1, :, :, :]
Y_Test = YTrain[index1]

options = trainingOptions(
    "CrossEntropyLoss", "Momentum", "Accuracy", 128, 50, 0.001; Plots=true
)

layers = SequentialCell([
    convolution2dLayer(1, 20, 5),
    reluLayer(),
    maxPooling2dLayer(2; Stride=2),
    flattenLayer(),
    fullyConnectedLayer(20 * 14 * 14, 10),
    softmaxLayer(),
])
net = trainNetwork(X_train, Y_train, layers, options)
function preclasses(prob, classes)
    ypredclasses = []
    for i in eachindex(range(1, size(prob)[1]))
        maxindex = 0
        maxnum = 0
        for k in eachindex(classes)
            if prob[i, :][k] > maxnum
                maxnum = prob[i, :][k]
                maxindex = k
            end
        end
        ypredclasses = append!(ypredclasses, [unique(classes)[maxindex]])
    end
    return ypredclasses
end
YPred = TyDeepLearning.predict(net, X_Test)
classes = [i - 1 for i in range(1, 10)]
YPred1 = preclasses(YPred, classes)
accuracy = Accuracy(YPred, Y_Test)

figure(3)
p2 = randperm(750)
index = p2[1:9]
for i in eachindex(range(1, 9))
    TyPlot.subplot(3, 3, i)
    TyImages.imshow(X_Test[index[i], 1, :, :])
    title1 = "Prediction Label"
    title2 = string(YPred1[index[i]])
    title(string(title1, ": ", title2))
end
# Tilt-angle prediction for handwritten digits
using TyDeepLearning
using TyImages
using TyPlot

XTrain, YTrain = DigitTrain4DArrayData()
index = randperm(5000)[1:20]
figure(1)
for i in eachindex(range(1, 20))
    subplot(4, 5, i)
    imshow(XTrain[index[i], 1, :, :])
end
layers = SequentialCell([
    convolution2dLayer(1, 25, 12),
    reluLayer(),
    flattenLayer(),
    fullyConnectedLayer(25 * 28 * 28, 1),
])
options = trainingOptions("RMSELoss", "Adam", "MSE", 10, 50, 0.0001; Plots=true)
net = trainNetwork(XTrain, YTrain, layers, options)
XTest, YTest = DigitTest4DArrayData()
YPred = TyDeepLearning.predict(net, XTest)
rmse = sqrt(mse(YTest, YPred))

index = randperm(5000)[1:9]
figure(2)
for i in range(1, 9)
    subplot(3, 3, i)
    hold("on")
    imshow(XTest[index[i], 1, :, :])
    x = [7:21...]
    plot(x, tan((90 + YPred[index[i]]) / 180 * pi) * (x .- 14) .+ 14, "r")
    ax = gca()
    ax.set_ylim(28,0)
    ax.set_xlim(0, 28)
    hold("off")
end

(2) Japanese vowel classification

using TyDeepLearning
using TyPlot
using CSV, DataFrames  # CSV.read and DataFrame are used below
# Load the data
path1 = "data/JapaneseVowels/train.csv"
path2 = "data/JapaneseVowels/trainlabels.csv"
path3 = "data/JapaneseVowels/test.csv"
path4 = "data/JapaneseVowels/testlabels.csv"
train_data = Array(CSV.read(path1, DataFrame; header=false))
train_label = Array(CSV.read(path2, DataFrame; header=false))[:,1]
test_data = Array(CSV.read(path3, DataFrame; header=false))
test_label = Array(CSV.read(path4, DataFrame; header=false))[:,1]

figure(1)
plot(train_data[1:12,1:20]')
title("Training Observation 1")
truncation_number = 29
train_data = Array(train_data[:, 1:truncation_number])
for i in range(1, 270 * 12)
    for j in range(1, truncation_number)
        if train_data[i, j] === missing
            train_data[i, j] = 0
        end
    end
end

train_data = reshape(train_data, (12, 270, truncation_number))
train_data = permutedims(train_data, (2, 3, 1))
train_label = Array(train_label)[:, 1]
test_data = Array(test_data[:, 1:truncation_number])
for i in range(1, 370 * 12)
    for j in range(1, truncation_number)
        if test_data[i, j] === missing
            test_data[i, j] = 0
        end
    end
end
test_data = reshape(test_data, (12, 370, truncation_number))
test_data = permutedims(test_data, (2, 3, 1))
test_label = Array(test_label)[:, 1]

# Build the network
layers = SequentialCell([
    bilstmLayer(12, 100; NumLayers=3, Batch_First=true),
    flattenLayer(),
    fullyConnectedLayer(truncation_number * 200, 9),
    softmaxLayer(),
])
options = trainingOptions(
    "CrossEntropyLoss", "Adam", "Accuracy", 27, 200, 0.001; Shuffle=true, Plots=true
)
net = trainNetwork(train_data, train_label, layers, options)
YPred = TyDeepLearning.classify(net, test_data)
accuracy = Accuracy(YPred, test_label)
print(accuracy)

(3) Human activity classification with LSTM

using TyDeepLearning
using TyPlot
using CSV, DataFrames  # CSV.read and DataFrame are used below
using TyBase           # TyBase.find is used below

train_data_label = CSV.read(
    "data/HumanActivity/HumanActivityTrain.csv", DataFrame; header=false
)
test_data_label = CSV.read(
    "data/HumanActivity/HumanActivityTest.csv", DataFrame; header=false
)
train_data_label = Array(train_data_label)
test_data_label = Array(test_data_label)
# Visualization

figure(2)
line_color = ["#0072BD", "#D95319", "#EDB120", "#7E2F8E", "#77AC30"]
for i in range(1, 5)
    classes = [i - 1 for i in range(1, 5)]
    idx = TyBase.find(train_data_label[1:64480, 4] .== classes[i])
    hold("on")
    plot(idx, train_data_label[idx, 1], line_color[i])
end
hold("off")
xlabel("Time Step")
ylabel("Acceleration")
title("Training Sequence 1, Feature 1")
classes = ["Dancing", "Running", "Sitting", "Standing", "Walking"]
legend(classes)

figure(3)
plot(test_data_label[:, 1:3])
title("Test Data")
xlabel("Time Step")
legend(["Feature 1", "Feature 2", "Feature 3"])

function create_datasets(data, t_window)
    out_seq = reshape(data[1:(1 + t_window - 1), :], (1, t_window, size(data)[2]))
    L = size(data)[1]
    for i in range(2, L - t_window + 1)
        train_seq = data[i:(i + t_window - 1), :]
        train_seq = reshape(train_seq, (1, t_window, size(data)[2]))
        out_seq = cat(out_seq, train_seq; dims=1)
    end
    return out_seq
end
t_window = 6
train_data = train_data_label[:, 1:3]
train_data = create_datasets(train_data, t_window)
train_label = train_data_label[t_window:end, 4]

test_data = test_data_label[:, 1:3]
test_data = create_datasets(test_data, t_window)
test_label = test_data_label[t_window:end, 4]

numFeatures = 3
numHiddenUnits = 200
numClasses = 5
layers = SequentialCell([
    lstmLayer(numFeatures, numHiddenUnits; NumLayers=1),
    flattenLayer(),
    fullyConnectedLayer(numHiddenUnits * t_window, numClasses),
    softmaxLayer(),
])
options = trainingOptions(
    "CrossEntropyLoss", "Adam", "Accuracy", 512, 200, 0.005; Plots=true
)
net = trainNetwork(train_data, train_label, layers, options)
preruslt = TyDeepLearning.classify(net, test_data)
accuracy = Accuracy(preruslt, test_label)
prelabel = Array{Int}(undef, 53883)
for i in 1:53883
    prelabel_item = findmax(preruslt[i, :])[2] - 1
    prelabel[i] = prelabel_item
end
figure(4)
idx = [1:53883...]
hold("on")
plot(idx, prelabel, ".", idx, test_label, "-")
legend("Predicted")
# plot(idx, prelabel, 0.1, "y", idx, test_label, "-")
hold("off")
xlabel("Time Step")
ylabel("Activity")
title("Predicted Activities")
legend(["Predicted", "Test Data"])

(4) Voiceprint recognition

using TyDeepLearning

train_data, train_label, test_data, test_label = VPR_dataset(1500)

net = SequentialCell([
    convolution2dLayer(1, 80, (1, 251); PaddingMode="valid"),
    batchNormalization2dLayer(80),
    leakyReluLayer(0.2),
    maxPooling2dLayer((1, 3)),
    convolution2dLayer(80, 60, (1, 251); PaddingMode="valid"),
    batchNormalization2dLayer(60),
    leakyReluLayer(0.2),
    maxPooling2dLayer((1, 3)),
    convolution2dLayer(60, 60, (1, 5); PaddingMode="valid"),
    batchNormalization2dLayer(60),
    leakyReluLayer(0.2),
    maxPooling2dLayer((1, 3)),
    flattenLayer(),
    fullyConnectedLayer(60 * 2490, 2048),
    batchNormalization1dLayer(2048),
    leakyReluLayer(0.2),
    fullyConnectedLayer(2048, 1024),
    batchNormalization1dLayer(1024),
    leakyReluLayer(0.2),
    fullyConnectedLayer(1024, 256),
    batchNormalization1dLayer(256),
    leakyReluLayer(0.2),
    fullyConnectedLayer(256, 20),
    softmaxLayer()
])
options = trainingOptions(
    "CrossEntropyLoss", "Adam", "Accuracy", 100, 150, 0.0005; Plots=true
)
net = trainNetwork(train_data, train_label, net, options)

test_pred = TyDeepLearning.predict(net, test_data)
train_pred = TyDeepLearning.predict(net, train_data)

train_acc = Accuracy(train_pred, train_label)
test_acc = Accuracy(test_pred, test_label)

(5) Remaining useful life prediction for engines

using TyDeepLearning
using DataFrames
using CSV  # CSV.read is used below
using TyPlot
dir = "data/RUL/"
# Training-set preprocessing
path1 = dir * "train_FD001.csv"
train_FD001 = CSV.read(path1, DataFrame; header=false)

f1 = figure("Feature Data"; figsize=[7, 8])
color_list = ["r", "g", "b", "c", "m", "y", "k"]
legend_list = [
    "op_setting_1",
    "op_setting_2",
    "op_setting_3",
    "Sensor1",
    "Sensor2",
    "Sensor3",
    "Sensor4",
    "Sensor5",
    "Sensor6",
    "Sensor7",
    "Sensor8",
    "Sensor9",
    "Sensor10",
    "Sensor11",
    "Sensor12",
    "Sensor13",
    "Sensor14",
    "Sensor15",
    "Sensor16",
    "Sensor17",
    "Sensor18",
    "Sensor19",
    "Sensor20",
    "Sensor21",
]
for i in range(3, 26)
    subplot(24, 1, i - 2)
    plot(train_FD001[1:100, i], color_list[i % 7 + 1])
    yticklabels([])
    legend([legend_list[i - 2]]; loc="northeast")
end

# Find the maximum cycle count for each unit
train_count = zeros(Int64, (100))
for i in range(1, size(train_FD001)[1])
    for j in range(1, 100)
        if train_FD001[i, 1] == j
            train_count[j] = train_count[j] + 1
        else
            continue
        end
    end
end
RUL = zeros(Int64, (size(train_FD001)[1]))
for i in range(1, size(train_FD001)[1])
    for j in range(1, 100)
        if train_FD001[i, 1] == j
            RUL[i] = train_count[j] - train_FD001[i, 2]
        else
            continue
        end
    end
end
# Remove features that stay constant across all time steps: ["op_setting_3", "Sensor1", "Sensor5", "Sensor10", "Sensor16", "Sensor18", "Sensor19"]
select!(
    train_FD001,
    Not([:Column5, :Column6, :Column10, :Column15, :Column21, :Column23, :Column24]),
)
f1 = figure("Feature Data"; figsize=[6, 6])
color_list = ["r", "g", "b", "c", "m", "y", "k"]
legend_list = [
    "op_setting_1",
    "op_setting_2",
    "Sensor2",
    "Sensor3",
    "Sensor4",
    "Sensor6",
    "Sensor7",
    "Sensor8",
    "Sensor9",
    "Sensor11",
    "Sensor12",
    "Sensor13",
    "Sensor14",
    "Sensor15",
    "Sensor17",
    "Sensor20",
    "Sensor21",
]
for i in range(3, 19)
    subplot(17, 1, i - 2)
    plot(train_FD001[:, i], color_list[i % 7 + 1])
    yticklabels([])
    legend([legend_list[i - 2]]; loc="northeast")
end

# Append a column named RUL
train_FD001 = insertcols!(train_FD001, ncol(train_FD001) + 1, :RUL => RUL)

# Min-max normalization
feats = Array(train_FD001[!, 3:19])
data_max = maximum(feats; dims=1)
data_min = minimum(feats; dims=1)
feats_norm = (feats .- data_min) ./ (data_max .- data_min)
# Sliding window of length 31
sequence_length = 31
function gen_sequence(data_array, data_label, seq_length)
    num_elements = size(data_array)[1]
    label = data_label[seq_length:num_elements]
    data = reshape(data_array[1:seq_length, :], (1, seq_length, 17))
    dict = zip(range(2, num_elements - seq_length + 1), range(seq_length + 1, num_elements))
    for (start, stop) in dict
        data = vcat(data, reshape(data_array[start:stop, :], (1, seq_length, 17)))
    end
    return data, label
end
count_sum = zeros(Int64, (101))
for i in range(1, 100)
    count_sum[i + 1] = train_count[i] + count_sum[i]
end
data_label = RUL[1:192]
data_array = feats_norm[1:192, :]
data, label = gen_sequence(data_array, data_label, sequence_length)
for j in range(2, 100)
    data_array = feats_norm[(count_sum[j] + 1):count_sum[j + 1], :]
    data_label = RUL[(count_sum[j] + 1):count_sum[j + 1]]
    data_id, data_id_label = gen_sequence(data_array, data_label, sequence_length)
    data = cat(data, data_id; dims=1)
    label = cat(label, data_id_label; dims=1)
end
# Clip the response (RUL) at a threshold of 150
RUL_Threshold = 150
for i in range(1, size(label)[1])
    if label[i] > 150
        label[i] = 150
    end
end
XTrain = permutedims(data, (1, 3, 2))
YTrain = label
# Build the network
layers = SequentialCell([
    convolution1dLayer(17, 32, 5),
    reluLayer(),
    convolution1dLayer(32, 64, 7),
    reluLayer(),
    convolution1dLayer(64, 128, 11),
    reluLayer(),
    convolution1dLayer(128, 256, 13),
    reluLayer(),
    convolution1dLayer(256, 512, 15),
    reluLayer(),
    flattenLayer(),
    fullyConnectedLayer(512 * sequence_length, 100),
    reluLayer(),
    dropoutLayer(0.5),
    fullyConnectedLayer(100, 1),
])
options = trainingOptions("RMSELoss", "Adam", "MSE", 512, 200, 0.001; Plots=true)
net = trainNetwork(XTrain, YTrain, layers, options)
# Test-set preprocessing
path2 = dir * "test_FD001.csv"
path3 = dir * "RUL_FD001.csv"
test_FD001 = CSV.read(path2, DataFrame; header=false)
RUL_FD001 = CSV.read(path3, DataFrame; header=false)
select!(
    test_FD001,
    Not([:Column5, :Column6, :Column10, :Column15, :Column21, :Column23, :Column24]),
)

# Find the maximum cycle count for each unit
test_count = zeros(Int64, (100))
for i in range(1, size(test_FD001)[1])
    for j in range(1, 100)
        if test_FD001[i, 1] == j
            test_count[j] = test_count[j] + 1
        else
            continue
        end
    end
end

test_count_sum = zeros(Int64, (101))
for i in range(1, 100)
    test_count_sum[i + 1] = test_count[i] + test_count_sum[i]
end

test_RUL = zeros(Int64, (size(test_FD001)[1]))
for i in range(1, size(test_FD001)[1])
    for j in range(1, 100)
        if test_FD001[i, 1] == j
            test_RUL[i] = test_count[j] - test_FD001[i, 2] + RUL_FD001[j, 1]
        else
            continue
        end
    end
end

# Min-max normalization
data_test = Array(test_FD001[!, 3:19])
data_max = maximum(data_test; dims=1)
data_min = minimum(data_test; dims=1)
test_norm = (data_test .- data_min) ./ (data_max .- data_min)

test_data_array = test_norm[(test_count_sum[1] + 1):test_count_sum[1 + 1], :]
test_data = reshape(
    test_data_array[(test_count[1] - sequence_length + 1):end, :], (1, sequence_length, 17)
)
for j in range(2, 100)
    test_data_array = test_norm[(test_count_sum[j] + 1):test_count_sum[j + 1], :]
    datadata = test_data_array[(test_count[j] - sequence_length + 1):end, :]
    data_reshape = reshape(datadata, (1, sequence_length, 17))
    test_data = cat(test_data, data_reshape; dims=1)
end

# Clip the response (RUL) at a threshold of 150
YTest = Array(RUL_FD001)
RUL_Threshold = 150
for i in range(1, size(YTest)[1])
    if YTest[i] > 150
        YTest[i] = 150
    end
end
XTest = permutedims(test_data, (1, 3, 2))
Y = TyDeepLearning.predict(net, XTest)
error = sqrt(mse(YTest, Y))

hold("on")
plot(Y, "-o")
plot(YTest, "-v")
legend(["Prediction value", "True value"])
hold("off")

sequence_length = 31
function gen_sequence(data_array, data_label, seq_length)
    num_elements = size(data_array)[1]
    label = data_label[seq_length:num_elements]
    data = reshape(data_array[1:seq_length, :], (1, seq_length, 17))
    dict = zip(range(2, num_elements - seq_length + 1), range(seq_length + 1, num_elements))
    for (start, stop) in dict
        data = vcat(data, reshape(data_array[start:stop, :], (1, seq_length, 17)))
    end
    return data, label
end
count_sum = zeros(Int64, (101))
for i in range(1, 100)
    count_sum[i + 1] = train_count[i] + count_sum[i]
end
data_label = RUL[848:1116]
data_array = feats_norm[848:1116, :]
data, label = gen_sequence(data_array, data_label, sequence_length)
# Clip the response (RUL) at a threshold of 150
RUL_Threshold = 150
for i in range(1, size(label)[1])
    if label[i] > 150
        label[i] = 150
    end
end
XTrain1 = permutedims(data, (1, 3, 2))
YTrain1 = label
Y1 = TyDeepLearning.predict(net, XTrain1)
error = sqrt(mse(YTrain1, Y1))
hold("on")
plot(Y1)
plot(YTrain1)
legend(["Prediction value", "True value"])
hold("off")

MindSpore Official Resources

Official QQ group: 871543426

Official website: https://www.mindspore.cn/

Gitee: https://gitee.com/mindspore/mindspore

GitHub: https://github.com/mindspore-ai/mindspore

Forum: https://www.hiascend.com/forum/forum-0106101385921175002-1.html

OpenI community: https://openi.org.cn
