Training machine learning and deep learning models on a MacBook with the Apple M1 chip: accelerating ResNet-101 on MNIST handwritten digit recognition with torch.device("mps")

Apple's M1 chip is reported to bring roughly a 15x improvement in machine learning performance over previous CPU-only chips, which means you can train deep learning PyTorch models right on an Apple Mac. Amazing!

First, install the Apple Silicon (M1) build of PyTorch.
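A quick sanity check after installing (a minimal sketch; it assumes a recent PyTorch, roughly 1.12 or later, installed for example via pip3 install torch torchvision torchaudio) is to ask PyTorch whether the MPS backend is both built into this wheel and usable on this machine:

```python
# Check that this PyTorch build ships the MPS backend and that
# the current macOS/hardware combination can actually use it.
import torch

print(torch.__version__)
print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())
```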

I then used ChatGPT to generate a ResNet-101 training script. Note that if the network is very lightweight there is no acceleration at all; it can even be slower than plain CPU computation.
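One way to see this for yourself is to time the same training step on "cpu" and on "mps". The sketch below is an illustration only: the tiny two-linear-layer toy model, batch size, and step count are assumptions, and it relies on torch.mps.synchronize() being available in your PyTorch version to flush queued GPU work before stopping the clock.

```python
import time
import torch
import torch.nn as nn

def time_training(device_name, steps=50):
    device = torch.device(device_name)
    # A deliberately tiny model: small networks often gain nothing from MPS.
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(256, 1, 28, 28, device=device)
    y = torch.randint(0, 10, (256,), device=device)

    start = time.time()
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if device.type == "mps":
        torch.mps.synchronize()  # wait for asynchronous GPU work to finish
    return time.time() - start

print("cpu:", time_training("cpu"))
if torch.backends.mps.is_available():
    print("mps:", time_training("mps"))
```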

Note that the device here is no longer "cuda" (CUDA is NVIDIA's deep learning acceleration backend); on Apple Silicon the backend is called "mps", short for Metal Performance Shaders.

```python
# Select the device
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("mps")  # or torch.device("cpu")
```
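In practice it is safer not to hard-code "mps": fall back to the CPU when the backend is missing, and move both the model and each batch with .to(device). A small sketch (the layer size and batch shape are placeholders):

```python
import torch
import torch.nn as nn

# Prefer MPS when it is available, otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = nn.Linear(784, 10).to(device)        # move the model's parameters to the device
batch = torch.randn(32, 784, device=device)  # create the batch directly on the device
print(device, model(batch).shape)
```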

The ResNet-101 training code trains on MNIST handwritten digit recognition. Before that I had also tried a training script with only two linear layers and badly underestimated Apple's torch.device("mps"): that simple two-linear-layer network got no speedup at all and was not even as fast as torch.device("cpu").
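As a rough illustration of such a run (a minimal sketch, not the original script from this post; the data path, batch size, learning rate, and epoch count are assumptions, and it expects a torchvision recent enough to accept the weights= keyword), torchvision's resnet101 can be pointed at single-channel MNIST and trained on the "mps" device like this:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
train_set = datasets.MNIST("./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = models.resnet101(weights=None, num_classes=10)
# MNIST is single-channel, so swap the stock 3-channel stem convolution.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch} done, last batch loss {loss.item():.4f}")
```

Swapping device for torch.device("cpu") in the same loop is an easy way to measure how much this heavier network actually gains from MPS.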
For reference, here is a code example that implements MNIST handwritten digit recognition with a ResNet-style network (this one is written in TensorFlow/Keras rather than the PyTorch MPS setup above):

```python
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, ReLU, Add, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Preprocessing: scale pixel values to [0, 1] and add a channel dimension
x_train = x_train.astype('float32') / 255.0
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.astype('float32') / 255.0
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)

# Define the ResNet-style model
def residual_block(inputs, filters, strides=1):
    shortcut = inputs
    # First convolution
    x = Conv2D(filters, kernel_size=3, strides=strides, padding='same')(inputs)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    # Second convolution
    x = Conv2D(filters, kernel_size=3, strides=1, padding='same')(x)
    x = BatchNormalization()(x)
    # If the spatial size or channel count changes, project the shortcut with a 1x1 convolution
    if strides != 1 or shortcut.shape[-1] != filters:
        shortcut = Conv2D(filters, kernel_size=1, strides=strides, padding='same')(shortcut)
        shortcut = BatchNormalization()(shortcut)
    # Add the residual branch to the shortcut to form the block output
    x = Add()([x, shortcut])
    x = ReLU()(x)
    return x

def ResNet(input_shape=(28, 28, 1), num_classes=10):
    inputs = Input(shape=input_shape)
    # Stem convolution
    x = Conv2D(64, kernel_size=3, strides=1, padding='same')(inputs)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    # Residual stage 1
    x = residual_block(x, filters=64, strides=1)
    x = residual_block(x, filters=64, strides=1)
    x = residual_block(x, filters=64, strides=1)
    # Residual stage 2
    x = residual_block(x, filters=128, strides=2)
    x = residual_block(x, filters=128, strides=1)
    x = residual_block(x, filters=128, strides=1)
    # Residual stage 3
    x = residual_block(x, filters=256, strides=2)
    x = residual_block(x, filters=256, strides=1)
    x = residual_block(x, filters=256, strides=1)
    # Residual stage 4
    x = residual_block(x, filters=512, strides=2)
    x = residual_block(x, filters=512, strides=1)
    x = residual_block(x, filters=512, strides=1)
    # Global average pooling
    x = GlobalAveragePooling2D()(x)
    # Classification head
    x = Dense(num_classes, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=x)
    return model

# Build the ResNet model
model = ResNet(num_classes=10)

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train,
          batch_size=128, epochs=10,
          validation_data=(x_test, y_test))
```