First Time Using PyTorch Lightning

This is my first time using Lightning. I had read plenty of explanations of Lightning before, but it was only after reading this article that I really started to understand what Lightning is for: "From PyTorch to PyTorch Lightning: A Brief Introduction" (Tencent Cloud), https://cloud.tencent.com/developer/article/1593703. Today I'll actually try it out:

1. The Data part

The goal of the Data part is to produce the training DataLoader, which breaks down into preparing the dataset and then loading it.

1.1 The setup part: build the dataset, i.e. the Dataset part from plain PyTorch

def setup(self, stage=None):
        # only the training set is needed here, so build it only for the 'fit' stage
        if stage == 'fit' or stage is None:
            self.trainset = video_datasets(self.tr_data_idx_dir, self.tr_data_frame_dir, transform=self.frame_trans)

1.2 The train_dataloader part: load the dataset and return the training DataLoader

def train_dataloader(self):
        return DataLoader(self.trainset, batch_size=self.batch_size, num_workers=self.num_workers, shuffle=True)
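
Putting 1.1 and 1.2 together, the surrounding DataModule might look roughly like the sketch below. Only the class name DInterface (used again in section 3), video_datasets, and the attribute names come from this post; the constructor signature and the defaults are assumptions.

import pytorch_lightning as pl

class DInterface(pl.LightningDataModule):
    """Sketch of the DataModule that the setup()/train_dataloader() snippets belong to."""

    def __init__(self, tr_data_idx_dir, tr_data_frame_dir, frame_trans,
                 batch_size=32, num_workers=4, **kwargs):
        super().__init__()
        # store everything that setup() and train_dataloader() will need later
        self.tr_data_idx_dir = tr_data_idx_dir
        self.tr_data_frame_dir = tr_data_frame_dir
        self.frame_trans = frame_trans
        self.batch_size = batch_size
        self.num_workers = num_workers

    # the setup() method from 1.1 and the train_dataloader() method from 1.2 go here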

2. The model part

2.1 Load the model

The method below obtains the model from another file:

def load_model(self):
        # AutoEncoderCov3DMem is defined in a separate model file; its hyperparameters come from self.kargs
        self.model = AutoEncoderCov3DMem(self.kargs['ImgChnNum'], self.kargs['MemDim'], shrink_thres=self.kargs['ShrinkThres'])
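
The post does not show how load_model is wired into the LightningModule; a minimal sketch of one plausible arrangement (the __init__ signature and the forward delegation are assumptions, only the name MInterface comes from section 3) could be:

import pytorch_lightning as pl

class MInterface(pl.LightningModule):
    def __init__(self, **kargs):
        super().__init__()
        self.kargs = kargs
        self.load_model()        # builds self.model (the method above)
        self.configure_loss()    # plain helper from 2.2 below; Lightning does not call it automatically

    def forward(self, x):
        # delegate to the wrapped model
        return self.model(x)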

The other option is to define the model directly inside the LightningModule:

class LightningMNISTClassifier(pl.LightningModule):
    def __init__(self):
        super(LightningMNISTClassifier, self).__init__()

        # mnist images are (1, 28, 28) (channels, width, height)
        self.layer_1 = torch.nn.Linear(28 * 28, 128)
        self.layer_2 = torch.nn.Linear(128, 256)
        self.layer_3 = torch.nn.Linear(256, 10)

    def forward(self, x):
        batch_size, channels, width, height = x.size()

        # (b, 1, 28, 28) -> (b, 1*28*28)
        x = x.view(batch_size, -1)

        # layer 1
        x = self.layer_1(x)
        x = torch.relu(x)

        # layer 2
        x = self.layer_2(x)
        x = torch.relu(x)

        # layer 3
        x = self.layer_3(x)

        # probability distribution over labels
        x = torch.log_softmax(x, dim=1)

        return x
 

2.2 Define the losses

def configure_loss(self):
        # pixel-wise reconstruction loss
        self.tr_recon_loss_func = nn.MSELoss()
        # entropy loss on the memory attention weights, shipped with the memory-autoencoder model code
        self.tr_entropy_loss_func = EntropyLossEncap()
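
For intuition, the entropy term penalizes the entropy of the memory addressing weights so that each input attends to only a few memory slots. A rough stand-alone equivalent (an illustration only, not the actual EntropyLossEncap implementation) might look like:

import torch

def entropy_loss(att_weights, eps=1e-12):
    # sum of -w * log(w) over the memory dimension, averaged over the batch;
    # minimizing this encourages sparse (low-entropy) addressing weights
    ent = (-att_weights * torch.log(att_weights + eps)).sum(dim=-1)
    return ent.mean()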
        

2.3 Define the optimizer

def configure_optimizers(self):
        # configure_optimizers must return the optimizer(s) it creates
        return torch.optim.Adam(self.parameters(), lr=self.kargs['LR'])
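
configure_optimizers can also hand a learning-rate scheduler back to Lightning together with the optimizer; a sketch (the StepLR settings are placeholders, not values from this post):

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=self.kargs['LR'])
    # optional: return a scheduler as well; Lightning steps it once per epoch by default
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    return [optimizer], [scheduler]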

2.4 Define the training step

def training_step(self, batch, batch_idx):
        img, labels, filename = batch
        out = self(img)
        loss = self.loss_function(out, labels)
        self.log('loss', loss, on_step=True, on_epoch=True, prog_bar=True)
        return loss
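
The snippet above uses a generic self.loss_function on (output, label) pairs, which matches the MNIST classifier rather than the memory autoencoder. For the model and losses from 2.1/2.2 the step would instead combine the reconstruction and entropy terms, roughly as sketched below (the dict keys returned by AutoEncoderCov3DMem and the EntropyLossWeight hyperparameter are assumptions, not taken from the post):

def training_step(self, batch, batch_idx):
    img, labels, filename = batch
    out = self(img)
    # assumed output format: reconstruction plus memory attention weights
    recon, att_weights = out['output'], out['att']
    recon_loss = self.tr_recon_loss_func(recon, img)
    entropy_loss = self.tr_entropy_loss_func(att_weights)
    loss = recon_loss + self.kargs['EntropyLossWeight'] * entropy_loss
    self.log('loss', loss, on_step=True, on_epoch=True, prog_bar=True)
    return loss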

3. Training

Define a Trainer and call fit to run training:

data_module = DInterface(**vars(args))
model = MInterface(**vars(args))
logger = TensorBoardLogger('tblog', name='testModel')
trainer = Trainer.from_argparse_args(args, gpus=3, logger=logger)
trainer.fit(model, data_module)
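
Trainer.from_argparse_args expects args to carry the Trainer's own command-line flags, so the script needs an ArgumentParser set up roughly like this (this relies on the Lightning 1.x argparse helpers that from_argparse_args belongs to):

from argparse import ArgumentParser
from pytorch_lightning import Trainer

parser = ArgumentParser()
# ... add the arguments that DInterface / MInterface expect here ...
parser = Trainer.add_argparse_args(parser)  # registers every Trainer flag (max_epochs, gpus, ...)
args = parser.parse_args()
# args is then passed to DInterface / MInterface and to Trainer.from_argparse_args as above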
