PyTorch Sequential Convolutional Neural Network (padding)


Data preparation

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data/',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)

test_dataset = torchvision.datasets.MNIST(root='../../data/',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
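
Before defining the network, it can help to confirm the tensor shapes the loaders produce. A minimal sanity check (assuming the loaders above):

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([100, 1, 28, 28]): batch, channels, height, width
print(labels.shape)  # torch.Size([100])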


Network structure

class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(  # input is 1*28*28
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            # 1 input channel, 16 output channels (16 convolution kernels).
            # After padding, each feature map is (28+2+2)*(28+2+2) = 32*32;
            # each kernel then yields (32-5+1)*(32-5+1) = 28*28, giving 16*28*28.
            nn.BatchNorm2d(16),  # BatchNorm2d takes the channel count of the previous layer: 16
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))  # 16*14*14
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            # after padding: 18*18; after convolution: 14*14; output: 32*14*14
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))  # 32*7*7
        self.fc = nn.Linear(7 * 7 * 32, num_classes)  # fully connected layer

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
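
The shape annotations in the comments follow the standard output-size formula for convolution and pooling layers: out = floor((in + 2*padding - kernel) / stride) + 1. A small helper (hypothetical, only for verifying the arithmetic above):

def conv_out_size(size, kernel, stride=1, padding=0):
    # Side length of a square feature map after Conv2d or MaxPool2d.
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out_size(28, kernel=5, stride=1, padding=2)  # 28 (layer1 conv)
s = conv_out_size(s, kernel=2, stride=2)              # 14 (layer1 max pool)
s = conv_out_size(s, kernel=5, stride=1, padding=2)   # 14 (layer2 conv)
s = conv_out_size(s, kernel=2, stride=2)              # 7  (layer2 max pool)
print(32 * s * s)  # 1568 = 7 * 7 * 32, the in_features of self.fc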


model = ConvNet(num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
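
Note that nn.CrossEntropyLoss combines LogSoftmax and NLLLoss internally, which is why forward returns raw logits with no softmax. A quick illustration with dummy tensors (not part of the original code):

logits = torch.randn(4, num_classes)           # fake model output
targets = torch.randint(0, num_classes, (4,))  # fake labels
print(criterion(logits, targets))              # scalar loss computed on raw logits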

Training

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)
        print(images.size())  # torch.Size([100, 1, 28, 28]): (batch_size, channels, H, W); prints once per batch
        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))

Testing

# Test the model
model.eval()  # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))
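
If you want to keep the trained weights, a common follow-up step (not shown in the snippet above) is to save a checkpoint; the filename here is arbitrary:

# Save the trained parameters for later reuse.
torch.save(model.state_dict(), 'model.ckpt')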

