TensorFlow 2.0: CIFAR10 Image Classification with ResNet18


Preface

These are my TensorFlow 2.0 study notes, based on the open-source book https://github.com/dragen1860/Deep-Learning-with-TensorFlow-book. Not original work, so please don't pick fights over it.

Notes

  1. Dataset: CIFAR10. The tf module can download it for you; see the code for details.
  2. Model: ResNet18. You can also tweak the parameters yourself to build a deeper network.
  3. For the theory, search on your own.

Some scratch notes

# On custom layers
import tensorflow as tf
from tensorflow.keras import layers

class MyDense(layers.Layer):
    def __init__(self, in_dim, out_dim):
        super(MyDense, self).__init__()
        # self.kernel = self.add_variable('w', [in_dim, out_dim], trainable=True)
        # self.add_variable returns a Python reference to the tensor W; the variable
        # name `name` is maintained internally by TensorFlow.
        # I find add_variable makes the interface messy, though, so I prefer tf.Variable:
        self.kernel = tf.Variable(tf.random.normal([in_dim, out_dim]), trainable=True)

    def call(self, inputs, training=None):
        out = inputs @ self.kernel
        out = tf.nn.relu(out)
        return out

net = MyDense(4, 3)
print(net.variables, '\n', net.trainable_variables)
# a quick look at the layer's structure
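As a quick sanity check, the layer can also be called on a random batch (the batch size here is an arbitrary choice of mine; the layer definition is repeated so the snippet runs standalone):

```python
import tensorflow as tf
from tensorflow.keras import layers

class MyDense(layers.Layer):
    # same custom layer as above, repeated so this snippet is self-contained
    def __init__(self, in_dim, out_dim):
        super(MyDense, self).__init__()
        self.kernel = tf.Variable(tf.random.normal([in_dim, out_dim]), trainable=True)

    def call(self, inputs, training=None):
        # linear map followed by ReLU
        return tf.nn.relu(inputs @ self.kernel)

net = MyDense(4, 3)
out = net(tf.random.normal([2, 4]))  # a batch of 2 samples with 4 features each
```

The (2, 4) input maps to a (2, 3) output, and the layer exposes exactly one trainable variable, the (4, 3) kernel.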

The ResNet18 implementation

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential, layers, losses, optimizers, metrics


# Enable memory growth so TensorFlow does not grab all GPU memory up front
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)

def preprocess(x, y):
    # preprocess the data
    x = 2*tf.cast(x, dtype=tf.float32) / 255.-1
    # normalize the image data to [-1, 1]
    y = tf.squeeze(y, axis=1)
    # drop y's second axis -- look at the dataset and you'll see the labels come back as (N, 1)
    y = tf.cast(y, dtype=tf.int32)
    # cast to int32; tf's automatic dtype conversion is not great, probably to avoid silent errors
    return x, y

Below is ResNet152 model code based on TensorFlow 2.0:

```python
import tensorflow as tf

def conv3x3_block(inputs, filters, strides=1, name=""):
    # 3x3 conv -> BN -> ReLU
    x = tf.keras.layers.Conv2D(filters, kernel_size=3, strides=strides, padding="same",
                               use_bias=False, name=name + "_conv")(inputs)
    x = tf.keras.layers.BatchNormalization(name=name + "_bn")(x)
    return tf.keras.layers.Activation("relu", name=name + "_relu")(x)

def conv1x1_block(inputs, filters, strides=1, name=""):
    # 1x1 conv -> BN -> ReLU, also used to project the shortcut branch
    x = tf.keras.layers.Conv2D(filters, kernel_size=1, strides=strides, padding="same",
                               use_bias=False, name=name + "_conv")(inputs)
    x = tf.keras.layers.BatchNormalization(name=name + "_bn")(x)
    return tf.keras.layers.Activation("relu", name=name + "_relu")(x)

def resnet_identity_block(inputs, filters, strides=1, name=""):
    # basic two-conv block (the kind ResNet18/34 use); the identity shortcut
    # requires strides == 1 and a channel count that already matches `filters`
    x = conv3x3_block(inputs, filters, strides=strides, name=name + "_conv1")
    x = conv3x3_block(x, filters, name=name + "_conv2")
    x = tf.keras.layers.Add(name=name + "_add")([inputs, x])
    return tf.keras.layers.Activation("relu", name=name + "_relu")(x)

def resnet_bottleneck_block(inputs, filters, strides=1, name=""):
    # 1x1 reduce -> 3x3 -> 1x1 expand, with a projected (and possibly strided) shortcut
    x = conv1x1_block(inputs, filters // 4, name=name + "_conv1")
    x = conv3x3_block(x, filters // 4, strides=strides, name=name + "_conv2")
    x = conv1x1_block(x, filters, name=name + "_conv3")
    shortcut = conv1x1_block(inputs, filters, strides=strides, name=name + "_shortcut")
    x = tf.keras.layers.Add(name=name + "_add")([shortcut, x])
    return tf.keras.layers.Activation("relu", name=name + "_relu")(x)

def resnet_block(inputs, filters, blocks, strides=1,
                 block_func=resnet_bottleneck_block, name=""):
    # the first block may downsample; the remaining blocks keep the resolution
    x = block_func(inputs, filters, strides=strides, name=name + "_block1")
    for i in range(2, blocks + 1):
        x = block_func(x, filters, name=name + "_block" + str(i))
    return x

def ResNet152(input_shape=(224, 224, 3), num_classes=1000):
    inputs = tf.keras.layers.Input(shape=input_shape, name="input")
    x = tf.keras.layers.ZeroPadding2D(padding=((3, 3), (3, 3)), name="padding")(inputs)
    x = tf.keras.layers.Conv2D(64, kernel_size=7, strides=2, padding="valid",
                               use_bias=False, name="conv1")(x)
    x = tf.keras.layers.BatchNormalization(name="bn1")(x)
    x = tf.keras.layers.Activation("relu", name="relu1")(x)
    x = tf.keras.layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name="padding1")(x)
    x = tf.keras.layers.MaxPooling2D(pool_size=3, strides=2, padding="valid", name="pool1")(x)
    # stage depths 3, 8, 36, 3 of bottleneck blocks give the 152-layer variant
    x = resnet_block(x, 256, 3, strides=1, name="res2")
    x = resnet_block(x, 512, 8, strides=2, name="res3")
    x = resnet_block(x, 1024, 36, strides=2, name="res4")
    x = resnet_block(x, 2048, 3, strides=2, name="res5")
    x = tf.keras.layers.GlobalAveragePooling2D(name="pool5")(x)
    x = tf.keras.layers.Dense(num_classes, name="fc1000")(x)
    x = tf.keras.layers.Activation("softmax", name="softmax")(x)
    return tf.keras.Model(inputs, x, name="resnet152")
```

This code implements the ResNet152 model, including the basic residual block, the bottleneck block, and the full network. Call `ResNet152()` to create the model; the default input shape is `(224, 224, 3)` and the number of output classes is 1000.
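Since the article's actual subject is ResNet18, here is a sketch of how the 18-layer variant can be put together for CIFAR10 with Keras layer subclassing. The stage depths (2, 2, 2, 2) of basic blocks are the standard ResNet18 configuration; the specific stem and layer choices below are mine, following the general structure of the book's code rather than copying it:

```python
import tensorflow as tf
from tensorflow.keras import Sequential, layers

class BasicBlock(layers.Layer):
    # standard two-conv residual block; a strided 1x1 conv projects the
    # shortcut whenever the block downsamples (and changes channel count)
    def __init__(self, filters, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = layers.Conv2D(filters, 3, strides=stride, padding='same', use_bias=False)
        self.bn1 = layers.BatchNormalization()
        self.conv2 = layers.Conv2D(filters, 3, strides=1, padding='same', use_bias=False)
        self.bn2 = layers.BatchNormalization()
        if stride != 1:
            self.downsample = Sequential([layers.Conv2D(filters, 1, strides=stride)])
        else:
            self.downsample = lambda x: x  # identity shortcut

    def call(self, inputs, training=None):
        out = tf.nn.relu(self.bn1(self.conv1(inputs), training=training))
        out = self.bn2(self.conv2(out), training=training)
        return tf.nn.relu(out + self.downsample(inputs))

def build_resnet18(num_classes=10):
    # stem + four stages of two basic blocks each -> 18 weighted layers
    model = Sequential([layers.Conv2D(64, 3, strides=1, padding='same'),
                        layers.BatchNormalization(),
                        layers.Activation('relu')])
    for filters, stride in [(64, 1), (128, 2), (256, 2), (512, 2)]:
        model.add(BasicBlock(filters, stride))
        model.add(BasicBlock(filters, 1))
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes))
    return model
```

A 3x3 stem with stride 1 (rather than ImageNet's 7x7 conv plus max-pool) keeps the 32x32 CIFAR10 feature maps from shrinking too early; the Dense head outputs raw logits, to be paired with a from_logits=True loss.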
