Convolutional Neural Networks (CNN): Breast Cancer Detection

Activity page: CSDN 21-Day Learning Challenge

        This post walks through a deep-learning application for breast cancer detection. It first gives a brief medical background on breast cancer, then introduces the publicly available breast cancer datasets, and finally describes the network implementation, the resulting performance, and its analysis. Convolutional neural networks (CNNs) have already been tried for cancer screening, but a common weakness of CNN models is instability and a strong dependence on the training data. When a model is deployed, it is assumed that the training data and the test data are drawn from the same distribution. This can be a problem in medical imaging, where factors such as camera settings or the age of the chemical stain vary between facilities and hospitals and affect image color. These variations may not be obvious to the human eye, but they can shift the features a CNN relies on and degrade model performance. It is therefore important to develop a robust algorithm that can adapt to differences between domains.


I. Environment Setup

import tensorflow as tf
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0] # If there are multiple GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True) # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0],"GPU")
    
import matplotlib.pyplot as plt
import os,PIL,pathlib
import numpy as np
import pandas as pd
import warnings
from tensorflow import keras

warnings.filterwarnings("ignore")             # Suppress warning messages
plt.rcParams['font.sans-serif'] = ['SimHei']  # Render Chinese labels in plots correctly
plt.rcParams['axes.unicode_minus'] = False    # Render minus signs correctly

II. Importing the Data

1. Load the data

data_dir = pathlib.Path("./26-data")
image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 13403


batch_size = 16
img_height = 50
img_width  = 50
"""
A detailed introduction to image_dataset_from_directory() is available at: https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Found 13403 files belonging to 2 classes. Using 10723 files for training.

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Found 13403 files belonging to 2 classes. Using 2680 files for validation.
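The two subset sizes follow directly from validation_split=0.2: the validation subset gets the integer part of 20% of the files and the remainder goes to training. A quick sanity check against the counts reported above:

```python
total = 13403                    # files found by image_dataset_from_directory
val_count = int(total * 0.2)     # validation_split=0.2 keeps the integer part
train_count = total - val_count  # the rest goes to the "training" subset

print(train_count, val_count)    # → 10723 2680
```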


class_names = train_ds.class_names
print(class_names)

['0', '1']

2. Inspect the data

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

(16, 50, 50, 3)
(16,)

3. Configure the dataset

  • shuffle(): shuffles the data; a detailed introduction is available at https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to overlap preprocessing with training; see my previous two posts for details.
  • cache(): caches the dataset in memory to speed up training.

AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image,label):
    return (image/255.0,label)

train_ds = (
    train_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # preprocessing function applied here
#     .batch(batch_size)           # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # preprocessing function applied here
#     .batch(batch_size)         # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

4. Data visualization

III. Building the Model

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=16,kernel_size=(3,3),padding="same",activation="relu",input_shape=[img_width, img_height, 3]),
    tf.keras.layers.Conv2D(filters=16,kernel_size=(3,3),padding="same",activation="relu"),

    tf.keras.layers.MaxPooling2D((2,2)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Conv2D(filters=16,kernel_size=(3,3),padding="same",activation="relu"),
    tf.keras.layers.MaxPooling2D((2,2)),
    tf.keras.layers.Conv2D(filters=16,kernel_size=(3,3),padding="same",activation="relu"),
    tf.keras.layers.MaxPooling2D((2,2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax")
])
model.summary()

IV. Compiling the Model

model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
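`sparse_categorical_crossentropy` is used here because the dataset yields integer labels (0/1) rather than one-hot vectors; per sample, the loss is simply the negative log of the probability the softmax assigns to the true class. A hand-computed check (the probabilities are illustrative, not from the model):

```python
import numpy as np

probs  = np.array([[0.9, 0.1],    # softmax output for sample 1
                   [0.3, 0.7]])   # softmax output for sample 2
labels = np.array([0, 1])         # integer class labels, as yielded by the dataset

# sparse categorical crossentropy: -log(prob of the true class), averaged
loss = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
print(loss)  # average of -log(0.9) and -log(0.7) ≈ 0.231
```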

V. Training the Model

from tensorflow.keras.callbacks import ModelCheckpoint, Callback, EarlyStopping, ReduceLROnPlateau, LearningRateScheduler

NO_EPOCHS = 100
PATIENCE  = 5
VERBOSE   = 1

# Dynamic learning-rate schedule
annealer = LearningRateScheduler(lambda x: 1e-3 * 0.99 ** (x+NO_EPOCHS))

# Early stopping
earlystopper = EarlyStopping(monitor='loss', patience=PATIENCE, verbose=VERBOSE)

# Save the weights of the best model seen so far
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=VERBOSE,
                               save_best_only=True,
                               save_weights_only=True)

train_model = model.fit(train_ds,
                        validation_data=val_ds,
                        epochs=NO_EPOCHS,
                        verbose=VERBOSE,
                        callbacks=[annealer, earlystopper, checkpointer])
Epoch 1/100
671/671 [==============================] - 37s 52ms/step - loss: 0.5644 - accuracy: 0.7007 - val_loss: 0.5268 - val_accuracy: 0.7228

Epoch 00001: val_accuracy improved from -inf to 0.72276, saving model to best_model.h5
Epoch 2/100
671/671 [==============================] - 8s 12ms/step - loss: 0.4430 - accuracy: 0.8062 - val_loss: 0.4252 - val_accuracy: 0.8317

Epoch 00002: val_accuracy improved from 0.72276 to 0.83172, saving model to best_model.h5
Epoch 3/100
671/671 [==============================] - 7s 11ms/step - loss: 0.4225 - accuracy: 0.8152 - val_loss: 0.4436 - val_accuracy: 0.8194

Epoch 00003: val_accuracy did not improve from 0.83172
Epoch 4/100
671/671 [==============================] - 7s 11ms/step - loss: 0.4000 - accuracy: 0.8242 - val_loss: 0.3964 - val_accuracy: 0.8358

Epoch 00004: val_accuracy improved from 0.83172 to 0.83582, saving model to best_model.h5
Epoch 5/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3826 - accuracy: 0.8289 - val_loss: 0.3652 - val_accuracy: 0.8403

Epoch 00005: val_accuracy improved from 0.83582 to 0.84030, saving model to best_model.h5
Epoch 6/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3751 - accuracy: 0.8320 - val_loss: 0.4237 - val_accuracy: 0.8209

Epoch 00006: val_accuracy did not improve from 0.84030
Epoch 7/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3658 - accuracy: 0.8345 - val_loss: 0.3552 - val_accuracy: 0.8608

Epoch 00007: val_accuracy improved from 0.84030 to 0.86082, saving model to best_model.h5
Epoch 8/100
671/671 [==============================] - 8s 11ms/step - loss: 0.3589 - accuracy: 0.8398 - val_loss: 0.3421 - val_accuracy: 0.8459

Epoch 00008: val_accuracy did not improve from 0.86082
Epoch 9/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3533 - accuracy: 0.8430 - val_loss: 0.3504 - val_accuracy: 0.8444

Epoch 00009: val_accuracy did not improve from 0.86082
Epoch 10/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3489 - accuracy: 0.8454 - val_loss: 0.3508 - val_accuracy: 0.8493

Epoch 00010: val_accuracy did not improve from 0.86082
Epoch 11/100
671/671 [==============================] - 8s 11ms/step - loss: 0.3446 - accuracy: 0.8463 - val_loss: 0.3326 - val_accuracy: 0.8616

Epoch 00011: val_accuracy improved from 0.86082 to 0.86157, saving model to best_model.h5
Epoch 12/100
671/671 [==============================] - 8s 11ms/step - loss: 0.3467 - accuracy: 0.8491 - val_loss: 0.3850 - val_accuracy: 0.8455

Epoch 00012: val_accuracy did not improve from 0.86157
Epoch 13/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3337 - accuracy: 0.8524 - val_loss: 0.3387 - val_accuracy: 0.8623

Epoch 00013: val_accuracy improved from 0.86157 to 0.86231, saving model to best_model.h5
Epoch 14/100
671/671 [==============================] - 8s 11ms/step - loss: 0.3308 - accuracy: 0.8561 - val_loss: 0.3292 - val_accuracy: 0.8713

Epoch 00014: val_accuracy improved from 0.86231 to 0.87127, saving model to best_model.h5
Epoch 15/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3230 - accuracy: 0.8580 - val_loss: 0.3403 - val_accuracy: 0.8500

Epoch 00015: val_accuracy did not improve from 0.87127
Epoch 16/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3234 - accuracy: 0.8574 - val_loss: 0.3169 - val_accuracy: 0.8840

Epoch 00016: val_accuracy improved from 0.87127 to 0.88396, saving model to best_model.h5
Epoch 17/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3168 - accuracy: 0.8651 - val_loss: 0.3056 - val_accuracy: 0.8787

Epoch 00017: val_accuracy did not improve from 0.88396
Epoch 18/100
671/671 [==============================] - 7s 11ms/step - loss: 0.3090 - accuracy: 0.8673 - val_loss: 0.2904 - val_accuracy: 0.8914

Epoch 00018: val_accuracy improved from 0.88396 to 0.89142, saving model to best_model.h5
Epoch 19/100
671/671 [==============================] - 8s 11ms/step - loss: 0.3090 - accuracy: 0.8650 - val_loss: 0.3056 - val_accuracy: 0.8828

Epoch 00019: val_accuracy did not improve from 0.89142
Epoch 20/100
671/671 [==============================] - 8s 11ms/step - loss: 0.3008 - accuracy: 0.8736 - val_loss: 0.3003 - val_accuracy: 0.8813

Epoch 00020: val_accuracy did not improve from 0.89142
Epoch 21/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2987 - accuracy: 0.8715 - val_loss: 0.3085 - val_accuracy: 0.8840

Epoch 00021: val_accuracy did not improve from 0.89142
Epoch 22/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2965 - accuracy: 0.8723 - val_loss: 0.3309 - val_accuracy: 0.8694

Epoch 00022: val_accuracy did not improve from 0.89142
Epoch 23/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2937 - accuracy: 0.8753 - val_loss: 0.3135 - val_accuracy: 0.8619

Epoch 00023: val_accuracy did not improve from 0.89142
Epoch 24/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2872 - accuracy: 0.8781 - val_loss: 0.3174 - val_accuracy: 0.8664

Epoch 00024: val_accuracy did not improve from 0.89142
Epoch 25/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2827 - accuracy: 0.8802 - val_loss: 0.3107 - val_accuracy: 0.8698

Epoch 00025: val_accuracy did not improve from 0.89142
Epoch 26/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2803 - accuracy: 0.8815 - val_loss: 0.2883 - val_accuracy: 0.8858

Epoch 00026: val_accuracy did not improve from 0.89142
Epoch 27/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2802 - accuracy: 0.8811 - val_loss: 0.3010 - val_accuracy: 0.8746

Epoch 00027: val_accuracy did not improve from 0.89142
Epoch 28/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2787 - accuracy: 0.8832 - val_loss: 0.3022 - val_accuracy: 0.8832

Epoch 00028: val_accuracy did not improve from 0.89142
Epoch 29/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2740 - accuracy: 0.8846 - val_loss: 0.2763 - val_accuracy: 0.8851

Epoch 00029: val_accuracy did not improve from 0.89142
Epoch 30/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2771 - accuracy: 0.8815 - val_loss: 0.2766 - val_accuracy: 0.8951

Epoch 00030: val_accuracy improved from 0.89142 to 0.89515, saving model to best_model.h5
Epoch 31/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2739 - accuracy: 0.8851 - val_loss: 0.2764 - val_accuracy: 0.8914

Epoch 00031: val_accuracy did not improve from 0.89515
Epoch 32/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2676 - accuracy: 0.8858 - val_loss: 0.2646 - val_accuracy: 0.8940

Epoch 00032: val_accuracy did not improve from 0.89515
Epoch 33/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2682 - accuracy: 0.8871 - val_loss: 0.2759 - val_accuracy: 0.8922

Epoch 00033: val_accuracy did not improve from 0.89515
Epoch 34/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2639 - accuracy: 0.8901 - val_loss: 0.3046 - val_accuracy: 0.8757

Epoch 00034: val_accuracy did not improve from 0.89515
Epoch 35/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2645 - accuracy: 0.8896 - val_loss: 0.3199 - val_accuracy: 0.8716

Epoch 00035: val_accuracy did not improve from 0.89515
Epoch 36/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2603 - accuracy: 0.8891 - val_loss: 0.3165 - val_accuracy: 0.8679

Epoch 00036: val_accuracy did not improve from 0.89515
Epoch 37/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2638 - accuracy: 0.8891 - val_loss: 0.3043 - val_accuracy: 0.8791

Epoch 00037: val_accuracy did not improve from 0.89515
Epoch 38/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2596 - accuracy: 0.8933 - val_loss: 0.2878 - val_accuracy: 0.8821

Epoch 00038: val_accuracy did not improve from 0.89515
Epoch 39/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2555 - accuracy: 0.8936 - val_loss: 0.2620 - val_accuracy: 0.8914

Epoch 00039: val_accuracy did not improve from 0.89515
Epoch 40/100

671/671 [==============================] - 7s 11ms/step - loss: 0.2586 - accuracy: 0.8912 - val_loss: 0.2927 - val_accuracy: 0.8791

Epoch 00040: val_accuracy did not improve from 0.89515
Epoch 41/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2558 - accuracy: 0.8912 - val_loss: 0.2908 - val_accuracy: 0.8843

Epoch 00041: val_accuracy did not improve from 0.89515
Epoch 42/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2493 - accuracy: 0.8950 - val_loss: 0.2861 - val_accuracy: 0.8933

Epoch 00042: val_accuracy did not improve from 0.89515
Epoch 43/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2503 - accuracy: 0.8933 - val_loss: 0.2855 - val_accuracy: 0.8869

Epoch 00043: val_accuracy did not improve from 0.89515
Epoch 44/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2483 - accuracy: 0.8942 - val_loss: 0.2745 - val_accuracy: 0.8959

Epoch 00044: val_accuracy improved from 0.89515 to 0.89590, saving model to best_model.h5
Epoch 45/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2497 - accuracy: 0.8967 - val_loss: 0.2590 - val_accuracy: 0.8951

Epoch 00045: val_accuracy did not improve from 0.89590
Epoch 46/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2460 - accuracy: 0.8976 - val_loss: 0.2656 - val_accuracy: 0.8944

Epoch 00046: val_accuracy did not improve from 0.89590
Epoch 47/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2440 - accuracy: 0.8993 - val_loss: 0.2612 - val_accuracy: 0.8955

Epoch 00047: val_accuracy did not improve from 0.89590
Epoch 48/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2409 - accuracy: 0.8979 - val_loss: 0.2798 - val_accuracy: 0.8922

Epoch 00048: val_accuracy did not improve from 0.89590
Epoch 49/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2438 - accuracy: 0.8989 - val_loss: 0.2524 - val_accuracy: 0.8963

Epoch 00049: val_accuracy improved from 0.89590 to 0.89627, saving model to best_model.h5
Epoch 50/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2441 - accuracy: 0.8995 - val_loss: 0.2616 - val_accuracy: 0.8966

Epoch 00050: val_accuracy improved from 0.89627 to 0.89664, saving model to best_model.h5
Epoch 51/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2390 - accuracy: 0.9010 - val_loss: 0.2683 - val_accuracy: 0.8940

Epoch 00051: val_accuracy did not improve from 0.89664
Epoch 52/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2390 - accuracy: 0.9004 - val_loss: 0.2462 - val_accuracy: 0.9007

Epoch 00052: val_accuracy improved from 0.89664 to 0.90075, saving model to best_model.h5
Epoch 53/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2386 - accuracy: 0.8999 - val_loss: 0.3076 - val_accuracy: 0.8769

Epoch 00053: val_accuracy did not improve from 0.90075
Epoch 54/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2363 - accuracy: 0.9024 - val_loss: 0.2433 - val_accuracy: 0.9060

Epoch 00054: val_accuracy improved from 0.90075 to 0.90597, saving model to best_model.h5
Epoch 55/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2314 - accuracy: 0.9050 - val_loss: 0.2610 - val_accuracy: 0.8989

Epoch 00055: val_accuracy did not improve from 0.90597
Epoch 56/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2355 - accuracy: 0.9003 - val_loss: 0.2585 - val_accuracy: 0.8974

Epoch 00056: val_accuracy did not improve from 0.90597
Epoch 57/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2370 - accuracy: 0.9023 - val_loss: 0.2430 - val_accuracy: 0.9022

Epoch 00057: val_accuracy did not improve from 0.90597
Epoch 58/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2298 - accuracy: 0.9063 - val_loss: 0.2604 - val_accuracy: 0.8951

Epoch 00058: val_accuracy did not improve from 0.90597
Epoch 59/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2272 - accuracy: 0.9030 - val_loss: 0.2420 - val_accuracy: 0.9037

Epoch 00059: val_accuracy did not improve from 0.90597
Epoch 60/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2254 - accuracy: 0.9072 - val_loss: 0.2538 - val_accuracy: 0.8959

Epoch 00060: val_accuracy did not improve from 0.90597
Epoch 61/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2300 - accuracy: 0.9058 - val_loss: 0.2689 - val_accuracy: 0.8910

Epoch 00061: val_accuracy did not improve from 0.90597
Epoch 62/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2289 - accuracy: 0.9044 - val_loss: 0.2585 - val_accuracy: 0.8963

Epoch 00062: val_accuracy did not improve from 0.90597
Epoch 63/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2265 - accuracy: 0.9046 - val_loss: 0.2879 - val_accuracy: 0.8832

Epoch 00063: val_accuracy did not improve from 0.90597
Epoch 64/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2304 - accuracy: 0.9039 - val_loss: 0.2587 - val_accuracy: 0.8948

Epoch 00064: val_accuracy did not improve from 0.90597
Epoch 65/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2247 - accuracy: 0.9057 - val_loss: 0.2619 - val_accuracy: 0.8970

Epoch 00065: val_accuracy did not improve from 0.90597
Epoch 66/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2251 - accuracy: 0.9076 - val_loss: 0.2847 - val_accuracy: 0.8851

Epoch 00066: val_accuracy did not improve from 0.90597
Epoch 67/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2239 - accuracy: 0.9091 - val_loss: 0.2899 - val_accuracy: 0.8832

Epoch 00067: val_accuracy did not improve from 0.90597
Epoch 68/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2229 - accuracy: 0.9077 - val_loss: 0.2987 - val_accuracy: 0.8772

Epoch 00068: val_accuracy did not improve from 0.90597
Epoch 69/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2244 - accuracy: 0.9064 - val_loss: 0.2493 - val_accuracy: 0.8981

Epoch 00069: val_accuracy did not improve from 0.90597
Epoch 70/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2209 - accuracy: 0.9077 - val_loss: 0.2572 - val_accuracy: 0.9007

Epoch 00070: val_accuracy did not improve from 0.90597
Epoch 71/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2242 - accuracy: 0.9078 - val_loss: 0.2657 - val_accuracy: 0.8970

Epoch 00071: val_accuracy did not improve from 0.90597
Epoch 72/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2220 - accuracy: 0.9071 - val_loss: 0.2421 - val_accuracy: 0.9075

Epoch 00072: val_accuracy improved from 0.90597 to 0.90746, saving model to best_model.h5
Epoch 73/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2178 - accuracy: 0.9093 - val_loss: 0.2348 - val_accuracy: 0.9067

Epoch 00073: val_accuracy did not improve from 0.90746
Epoch 74/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2178 - accuracy: 0.9106 - val_loss: 0.2572 - val_accuracy: 0.9037

Epoch 00074: val_accuracy did not improve from 0.90746
Epoch 75/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2197 - accuracy: 0.9115 - val_loss: 0.2632 - val_accuracy: 0.8955

Epoch 00075: val_accuracy did not improve from 0.90746
Epoch 76/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2154 - accuracy: 0.9105 - val_loss: 0.2601 - val_accuracy: 0.8959

Epoch 00076: val_accuracy did not improve from 0.90746
Epoch 77/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2186 - accuracy: 0.9080 - val_loss: 0.2433 - val_accuracy: 0.9034

Epoch 00077: val_accuracy did not improve from 0.90746
Epoch 78/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2147 - accuracy: 0.9127 - val_loss: 0.2809 - val_accuracy: 0.8862

Epoch 00078: val_accuracy did not improve from 0.90746
Epoch 79/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2118 - accuracy: 0.9122 - val_loss: 0.2361 - val_accuracy: 0.9041

Epoch 00079: val_accuracy did not improve from 0.90746
Epoch 80/100

671/671 [==============================] - 7s 11ms/step - loss: 0.2122 - accuracy: 0.9105 - val_loss: 0.2469 - val_accuracy: 0.8966

Epoch 00080: val_accuracy did not improve from 0.90746
Epoch 81/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2121 - accuracy: 0.9116 - val_loss: 0.2430 - val_accuracy: 0.9026

Epoch 00081: val_accuracy did not improve from 0.90746
Epoch 82/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2135 - accuracy: 0.9132 - val_loss: 0.2306 - val_accuracy: 0.9082

Epoch 00082: val_accuracy improved from 0.90746 to 0.90821, saving model to best_model.h5
Epoch 83/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2106 - accuracy: 0.9117 - val_loss: 0.2476 - val_accuracy: 0.9082

Epoch 00083: val_accuracy did not improve from 0.90821
Epoch 84/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2083 - accuracy: 0.9156 - val_loss: 0.2607 - val_accuracy: 0.8970

Epoch 00084: val_accuracy did not improve from 0.90821
Epoch 85/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2070 - accuracy: 0.9136 - val_loss: 0.2582 - val_accuracy: 0.8974

Epoch 00085: val_accuracy did not improve from 0.90821
Epoch 86/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2118 - accuracy: 0.9120 - val_loss: 0.3005 - val_accuracy: 0.8735

Epoch 00086: val_accuracy did not improve from 0.90821
Epoch 87/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2082 - accuracy: 0.9126 - val_loss: 0.2911 - val_accuracy: 0.8802

Epoch 00087: val_accuracy did not improve from 0.90821
Epoch 88/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2078 - accuracy: 0.9155 - val_loss: 0.2466 - val_accuracy: 0.9034

Epoch 00088: val_accuracy did not improve from 0.90821
Epoch 89/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2114 - accuracy: 0.9132 - val_loss: 0.2587 - val_accuracy: 0.8989

Epoch 00089: val_accuracy did not improve from 0.90821
Epoch 90/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2057 - accuracy: 0.9152 - val_loss: 0.2813 - val_accuracy: 0.8922

Epoch 00090: val_accuracy did not improve from 0.90821
Epoch 91/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2079 - accuracy: 0.9147 - val_loss: 0.2526 - val_accuracy: 0.9015

Epoch 00091: val_accuracy did not improve from 0.90821
Epoch 92/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2102 - accuracy: 0.9135 - val_loss: 0.2576 - val_accuracy: 0.9026

Epoch 00092: val_accuracy did not improve from 0.90821
Epoch 93/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2083 - accuracy: 0.9162 - val_loss: 0.2506 - val_accuracy: 0.8974

Epoch 00093: val_accuracy did not improve from 0.90821
Epoch 94/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2064 - accuracy: 0.9155 - val_loss: 0.2705 - val_accuracy: 0.8944

Epoch 00094: val_accuracy did not improve from 0.90821
Epoch 95/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2025 - accuracy: 0.9183 - val_loss: 0.2589 - val_accuracy: 0.8981

Epoch 00095: val_accuracy did not improve from 0.90821
Epoch 96/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2015 - accuracy: 0.9181 - val_loss: 0.2549 - val_accuracy: 0.8970

Epoch 00096: val_accuracy did not improve from 0.90821
Epoch 97/100
671/671 [==============================] - 8s 11ms/step - loss: 0.2010 - accuracy: 0.9179 - val_loss: 0.2401 - val_accuracy: 0.9011

Epoch 00097: val_accuracy did not improve from 0.90821
Epoch 98/100
671/671 [==============================] - 7s 11ms/step - loss: 0.1981 - accuracy: 0.9199 - val_loss: 0.2531 - val_accuracy: 0.9007

Epoch 00098: val_accuracy did not improve from 0.90821
Epoch 99/100
671/671 [==============================] - 7s 11ms/step - loss: 0.2016 - accuracy: 0.9163 - val_loss: 0.2463 - val_accuracy: 0.8985

Epoch 00099: val_accuracy did not improve from 0.90821
Epoch 100/100
671/671 [==============================] - 7s 11ms/step - loss: 0.1972 - accuracy: 0.9191 - val_loss: 0.2718 - val_accuracy: 0.8884

Epoch 00100: val_accuracy did not improve from 0.90821
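Throughout the run above the LearningRateScheduler kept shrinking the step size. Note that because of the `(x+NO_EPOCHS)` offset in the lambda, the schedule starts already decayed rather than at 1e-3. Evaluating the schedule defined in the training section:

```python
NO_EPOCHS = 100
lr = lambda x: 1e-3 * 0.99 ** (x + NO_EPOCHS)  # the schedule from the training section

print(f"epoch   0: {lr(0):.2e}")   # ≈ 3.66e-04, not 1e-3, because of the offset
print(f"epoch  99: {lr(99):.2e}")  # ≈ 1.35e-04 by the final epoch
```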

VI. Evaluating the Model

1. Accuracy and loss curves

acc = train_model.history['accuracy']
val_acc = train_model.history['val_accuracy']

loss = train_model.history['loss']
val_loss = train_model.history['val_loss']

epochs_range = range(len(acc))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()


2. Confusion matrix

from sklearn.metrics import confusion_matrix
import seaborn as sns
import pandas as pd

# Helper that plots a confusion matrix
def plot_cm(labels, predictions):
    
    # Build the confusion matrix
    conf_numpy = confusion_matrix(labels, predictions)
    # Convert it to a DataFrame
    conf_df = pd.DataFrame(conf_numpy, index=class_names, columns=class_names)
    
    plt.figure(figsize=(8,7))
    
    sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")
    
    plt.title('Confusion matrix', fontsize=15)
    plt.ylabel('True label', fontsize=14)
    plt.xlabel('Predicted label', fontsize=14)

val_pre   = []
val_label = []

for images, labels in val_ds:  # a subset, e.g. val_ds.take(1), is enough for a quick confusion matrix
    for image, label in zip(images, labels):
        # The model expects a batch dimension
        img_array = tf.expand_dims(image, 0)
        # Predict the class of the image
        prediction = model.predict(img_array)

        val_pre.append(class_names[np.argmax(prediction)])
        val_label.append(class_names[label])

plot_cm(val_label, val_pre)
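Calling model.predict once per image, as in the loop above, is slow; Keras can score a whole batch (or the whole dataset) in a single call, after which one np.argmax over the class axis replaces the inner loop. The argmax step, demonstrated on a stand-in probability matrix (the model call itself is omitted here):

```python
import numpy as np

class_names = ['0', '1']

# Stand-in for model.predict(images) on a batch of 4 images
predictions = np.array([[0.8, 0.2],
                        [0.1, 0.9],
                        [0.6, 0.4],
                        [0.3, 0.7]])

# One argmax along the class axis replaces the per-image loop
batch_pre = [class_names[i] for i in np.argmax(predictions, axis=1)]
print(batch_pre)  # → ['0', '1', '0', '1']
```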


3. Classification metrics

from sklearn import metrics

def test_accuracy_report(model):
    print(metrics.classification_report(val_label, val_pre, target_names=class_names)) 
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])
    
test_accuracy_report(model)
                     precision    recall  f1-score   support

breast cancer cells       0.86      0.92      0.89      1339
       normal cells       0.92      0.86      0.88      1341

           accuracy                           0.89      2680
          macro avg       0.89      0.89      0.89      2680
       weighted avg       0.89      0.89      0.89      2680

Loss function: 0.27176907658576965, accuracy: 0.8884328603744507
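For a medical classifier, the per-class recalls above are often restated as sensitivity and specificity. The sketch below reconstructs an approximate confusion matrix from the recalls (0.92, 0.86) and supports (1339, 1341) in the report, taking the cancer class as positive; the exact counts from this run may differ by a few samples:

```python
import numpy as np

# Approximate confusion matrix, reconstructed from recall * support in the report:
# rows = true class, cols = predicted class; index 1 = cancer (positive) class
cm = np.array([[1153,  188],   # normal cells: 1153 correct, 188 flagged as cancer
               [ 107, 1232]])  # cancer cells: 107 missed, 1232 detected
tn, fp = cm[0]
fn, tp = cm[1]

sensitivity = tp / (tp + fn)   # recall on the cancer class
specificity = tn / (tn + fp)   # recall on the normal class
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```

The asymmetry matters clinically: a false negative (a missed cancer) is usually costlier than a false positive, so sensitivity deserves more attention than raw accuracy here.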

> - This post is a learning-log entry for the [🔗365-day deep learning training camp](https://mp.weixin.qq.com/s/k-vYaC8l7uxX51WoypLkTw)
> - Reference article: [🔗100 Deep Learning Examples - CNN for Weather Recognition | Day 5](https://mtyjkh.blog.csdn.net/article/details/117186183)
