ActiveJob-Performs Usage Guide


ActiveJob::Performs adds the `performs` macro to set up jobs by convention. Project repository: https://gitcode.com/gh_mirrors/ac/active_job-performs

This guide explains how to understand and use the active_job-performs open-source project, which simplifies convention-based ActiveJob job setup through a convenient performs macro. It covers the project from three angles: its directory structure, its startup files, and its configuration files.

1. Project Directory Structure

The active_job-performs repository follows the standard directory layout of a Rubygem and Rails project; its key components include:

  • bin/: executable scripts, such as project setup or test runners.
  • lib/active_job_performs/: the core library code, containing the ActiveJob::Performs module and its supporting logic.
  • test/: unit and integration tests that safeguard code quality.
  • LICENSE.txt: the project license (MIT License).
  • README.md: quick-start and basic usage documentation for the project.
  • active_job_performs.gemspec: the Rubygem specification file, defining the gem's metadata such as dependencies and version information.

The structure is simple and clear: the core functionality lives under lib, which makes it easy for developers to locate the key implementation quickly.

2. Startup Files

This project is not meant to run as a standalone application; it is installed and activated inside a Ruby on Rails application. The essential "startup" step happens once the gem has been added to the Rails application's Gemfile and bundle install has been run. After that, active_job-performs is "started" as follows:

  • When you use the performs macro in your Rails models (or any other class that implements GlobalID), it dynamically extends the object's capabilities without requiring you to create multiple job classes explicitly.

In other words, there is no dedicated startup file; activation happens by introducing the macro and its configuration into your application code, as sketched below.
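As an illustration, a minimal activation sketch might look like the following (the Post model and its publish method are hypothetical; any class that implements GlobalID works):

# Gemfile
gem "active_job-performs"        # then run `bundle install`

# app/models/post.rb
class Post < ApplicationRecord
  extend ActiveJob::Performs     # makes the `performs` macro available on this class

  performs :publish              # wraps the `publish` method in a job, by convention

  def publish
    update!(published_at: Time.current)   # hypothetical business logic
  end
end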

3. Configuration Files

Environment Configuration

In a Rails application, active_job-performs is not configured through a separate configuration file; its configuration folds into the application's existing environment configuration. You can influence it indirectly by adjusting the ActiveJob settings in each environment's configuration file (for example config/environments/development.rb, production.rb, or test.rb), such as setting the queue adapter or customizing queue names.
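For example, here is a sketch of the kind of environment-level ActiveJob settings that indirectly affect jobs set up with performs (the :sidekiq adapter and the queue name prefix are illustrative choices, not requirements of the gem):

# config/environments/production.rb
Rails.application.configure do
  # Pick any ActiveJob backend; :sidekiq is only an example.
  config.active_job.queue_adapter = :sidekiq

  # Optional: prefix applied to all ActiveJob queue names in this environment.
  config.active_job.queue_name_prefix = "myapp_production"
end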

Configuration via the performs Macro

  • When the performs macro is used directly inside a model, configuration happens right at the method declaration. For example, you can specify the job's queue (queue_as) and its error-handling behavior (discard_on, retry_on):
class Post < ActiveRecord::Base
  extend ActiveJob::Performs

  performs :publish, queue_as: :important, discard_on: SomeError do
    # The block is evaluated on the generated job class, so job-level
    # settings such as retry_on (with its wait: option) belong here.
    retry_on TimeoutError, wait: :polynomially_longer
  end

  # ... method definitions ...
end

This style keeps business logic and job configuration closely together, which improves the readability and maintainability of the code.
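By the gem's convention, performs :publish also generates a Post::PublishJob and a publish_later helper on the model, so enqueuing stays close to the domain code. A rough usage sketch (the record lookup is illustrative):

post = Post.find(1)    # any persisted record identifiable via GlobalID
post.publish_later     # enqueues Post::PublishJob on the :important queue
# When the job runs, it calls post.publish with the configured discard/retry behavior.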

In summary, although active_job-performs does not ship a traditional editable configuration file, it leverages Ruby's flexibility and the ActiveJob framework to provide highly customizable job configuration at the application level.

