StyleSwin: Transformer-based GAN for High-resolution Image Generation: Paper and Code Notes

Paper: StyleSwin: Transformer-Based GAN for High-Resolution Image Generation (thecvf.com)

Code: https://github.com/microsoft/StyleSwin

Preface:

The StyleSwin model is a modification of the style-based generator, with Swin Transformer as its backbone. Before reading this paper it helps to read the StyleGAN, StyleGAN2 and Swin Transformer papers first, which makes the proposed model much easier to follow. The released code is likewise adapted from the StyleGAN2 and Swin Transformer codebases.

Content:

The paper builds a pure-Transformer generative adversarial network for high-resolution image generation. The generator is shown in figure (b) of the paper and can be understood as follows:

The whole generator consists of a stack of generator blocks plus tRGB (ToRGB) layers.

Left branch: a latent variable z is passed through a mapping network of 8 fully connected layers to produce the vector w (same dimensionality as z, 512x1). Once the latent code has been mapped to w, it goes through learned affine transforms A (in practice fully connected layers) and is injected into the synthesis network of the right branch to control the style.

Role of the mapping network: it provides a learned path for disentangling the features of the input vector z. Because z is drawn from a uniform or Gaussian distribution, its components are strongly entangled, and the latent code has to be disentangled before it can usefully control individual attributes. If visual features were controlled directly by z, the control would be quite limited, because z has to follow the probability density of the training data. For example, if long-haired people are common in the dataset, more of the input space is mapped onto that feature and other components of z are pulled toward it, making it harder to represent other attributes. The mapping network lets the model produce a vector w that does not have to follow the training-data distribution, which reduces the correlation between features and achieves the disentanglement.
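A minimal sketch of such a mapping network, as a simplified stand-in for the PixelNorm + 8 EqualLinear stack used in the repository (the class and layer choices here are illustrative, not the repository's):

import torch
import torch.nn as nn

class SimpleMappingNetwork(nn.Module):
    """Illustrative z-to-w mapping: normalize z, then pass it through 8 FC layers."""
    def __init__(self, style_dim=512, n_layers=8):
        super().__init__()
        layers = []
        for _ in range(n_layers):
            layers += [nn.Linear(style_dim, style_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # normalize z onto the unit hypersphere, the role PixelNorm plays in the repo
        z = z / torch.sqrt(torch.mean(z ** 2, dim=1, keepdim=True) + 1e-8)
        return self.net(z)

w = SimpleMappingNetwork()(torch.randn(4, 512))  # w keeps the same shape as z: (4, 512)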

Main (right) branch (the generator): the input is a learned constant of size 4x4x512. A sinusoidal (absolute) positional encoding is added and the features enter a generator block. Inside the block the features first pass through adaptive instance normalization (AdaIN) with a residual connection, where the style is injected; the feature map then enters the double attention layer (Double Attn) with a relative position encoding, where W-MSA and SW-MSA are computed; the result goes through a second AdaIN with residual connection and style injection, and finally through an MLP. The output feature map is upsampled, an absolute positional encoding is added, and it enters the next generator block. Each time the constant input and w pass through a generator block, the resulting feature map is fed both to the next generator block and to a tRGB layer, whose RGB output is accumulated onto the running RGB image. Through repeated upsampling the feature map grows to high resolution, and the accumulated RGB image is the generated output (1024x1024).

Using a constant input instead of the traditional noise input has two benefits: it prevents ill-formed inputs from producing broken images, and it helps reduce feature entanglement.

AdaIN: through AdaIN the generator can self-modulate and thereby transfer an arbitrary image style.

The mean and variance of a feature map carry the style information of an image. In the AdaIN layer the feature map first subtracts its own mean and divides by its own standard deviation, removing its own style, and is then multiplied by the new style's standard deviation and shifted by the new style's mean, which performs the style transfer.
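Written as a formula (x is the content feature, y the style, and $\mu$ and $\sigma$ denote per-channel mean and standard deviation):

$$\mathrm{AdaIN}(x, y) = \sigma(y)\,\frac{x - \mu(x)}{\sigma(x)} + \mu(y)$$

In StyleSwin the scale $\sigma(y)$ and shift $\mu(y)$ (gamma and beta) are predicted from the style vector w by a linear layer, as in the AdaptiveInstanceNorm module shown in the code below.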

How W-MSA/SW-MSA are computed in Double Attn: given the input feature map $X^l$ of layer $l$, consecutive Swin Transformer blocks compute the following:
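For reference, this is the consecutive-block formulation of the original Swin Transformer (LN denotes LayerNorm):

$$\hat{z}^{l} = \text{W-MSA}(\mathrm{LN}(z^{l-1})) + z^{l-1}$$
$$z^{l} = \mathrm{MLP}(\mathrm{LN}(\hat{z}^{l})) + \hat{z}^{l}$$
$$\hat{z}^{l+1} = \text{SW-MSA}(\mathrm{LN}(z^{l})) + z^{l}$$
$$z^{l+1} = \mathrm{MLP}(\mathrm{LN}(\hat{z}^{l+1})) + \hat{z}^{l+1}$$

In StyleSwin's Double Attn the two attention types are not applied in two consecutive blocks; instead the channels are split in half and W-MSA and SW-MSA are computed in parallel within one block, with LN replaced by AdaIN, as the StyleSwinTransformerBlock code below shows.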

Code:

Reference: https://zhuanlan.zhihu.com/p/391980528

The code released with the paper is adapted from the StyleGAN2 and Swin Transformer codebases, so it helps to read it together with the StyleGAN2 and Swin Transformer code.

The code is written with the PyTorch framework.

generator.py

ToRGB: the ToRGB layer consists of an optional upsampling layer, a 1x1 convolution that maps the feature map to 3 channels (RGB), and a learnable bias parameter.

class ToRGB(nn.Module):
    def __init__(self, in_channel, upsample=True, resolution=None, blur_kernel=[1, 3, 3, 1]):
        super().__init__()
        self.is_upsample = upsample
        self.resolution = resolution

        if upsample:
            self.upsample = Upsample(blur_kernel)
        # 1x1 conv maps the feature map to 3 channels (RGB) without changing the spatial size
        self.conv = nn.Conv2d(in_channel, 3, kernel_size=1)
        self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))

    def forward(self, input, skip=None):
        out = self.conv(input)
        out = out + self.bias

        if skip is not None:
            if self.is_upsample:
                skip = self.upsample(skip)

            out = out + skip
        return out
    
    def flops(self):
        m = self.conv
        kernel_ops = torch.zeros(m.weight.size()[2:]).numel()  # Kw x Kh
        bias_ops = 1
        # N x Cout x H x W x  (Cin x Kw x Kh + bias)
        flops = 1 * self.resolution * self.resolution * 3 * (m.in_channels // m.groups * kernel_ops + bias_ops)
        if self.is_upsample:
            # there is a conv used in upsample
            w_shape = (1, 1, 4, 4)
            kernel_ops = torch.zeros(w_shape[2:]).numel()  # Kw x Kh
            # N x Cout x H x W x  (Cin x Kw x Kh + bias)
            flops = 1 * 3 * (2 * self.resolution + 3) * (2 *self.resolution + 3) * (3 * kernel_ops)
        return flops

MLP: builds the MLP module that sits at the end of a Swin Transformer block and mixes the features produced by W-MSA/SW-MSA. It consists of an input layer, one hidden layer and an output layer: the input features go through the first fully connected layer, are activated by GELU and passed through Dropout (against overfitting), producing the hidden features; the hidden features then go through the second fully connected layer followed by another Dropout, producing the output features of the MLP.

class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.hidden_features = hidden_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x
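A quick shape check of Mlp (illustrative sizes; with mlp_ratio = 4 the hidden dimension is four times the input dimension), assuming the imports in generator.py:

mlp = Mlp(in_features=96, hidden_features=384)   # 96 -> 384 -> 96
y = mlp(torch.randn(2, 64, 96))                  # nn.Linear acts on the last dim: (2, 64, 96)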

window_partition: the window partition function. Its first input is a feature batch of shape (B, H, W, C), where B is the batch size, H and W are the height and width, and C is the number of channels; note that the channel dimension comes last here, so a tensor in PyTorch's usual (B, C, H, W) layout has to be permuted before calling it. The second input is the window size.

def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size

    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows

The input feature is first reshaped to (B, H // window_size, window_size, W // window_size, window_size, C), six dimensions; window_size appears twice, i.e. the windows are square. permute() then swaps the W // window_size and window_size dimensions, giving (B, H // window_size, W // window_size, window_size, window_size, C), which makes the final reshape straightforward. view() then flattens this to (B*H*W // window_size // window_size, window_size, window_size, C), four dimensions.

view(): this operation effectively turns each input image (H, W, C) into a batch of windows of shape (H*W // window_size // window_size, window_size, window_size, C), where dimension 0 is the number of windows per image; the windows of all images in the batch are then concatenated along dimension 0, giving (B*H*W // window_size // window_size, window_size, window_size, C).
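A quick shape check with illustrative values (assuming the imports in generator.py):

x = torch.randn(2, 8, 8, 96)       # B=2, H=W=8, C=96
windows = window_partition(x, 4)   # (2 * (8//4) * (8//4), 4, 4, 96) = (8, 4, 4, 96)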

window_reverse:

def window_reverse(windows, window_size, H, W):
    """
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): Window size
        H (int): Height of image
        W (int): Width of image

    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x

window_reverse() does the opposite of the function above: it assembles the partitioned windows back into an image batch. The original batch size B is recovered from the number of windows (first line of the function); the remaining steps simply undo window_partition, and the final shape is (B, H, W, C). This function maps the per-window attention results back to a batch with the same layout as the image features.
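window_reverse undoes window_partition exactly, which a round trip confirms (both functions only reshape and permute, so the values match exactly):

x = torch.randn(2, 8, 8, 96)
windows = window_partition(x, 4)             # (8, 4, 4, 96)
restored = window_reverse(windows, 4, 8, 8)  # (2, 8, 8, 96)
assert torch.equal(x, restored)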

WindowAttention:

class WindowAttention(nn.Module):
    r""" Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both of shifted and non-shifted window.

    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional):  If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
    """

Constructor arguments (the input windows have shape (nW*B, window_size*window_size, C)):

dim (int): number of input channels C.

window_size (tuple[int]): height and width of the window.

num_heads (int): number of attention heads; each head computes attention over its own Q, K, V slices, and the heads are concatenated to form the final multi-head attention.

qkv_bias (bool, optional): whether to use a bias term when computing qkv.

qk_scale (float | None, optional): scaling factor for q; head_dim ** -0.5 is used if not set.

attn_drop (float, optional): dropout on the attention weights.

The __init__ details can be skimmed first and revisited while reading the forward pass.

    def __init__(self, dim, window_size, num_heads, qk_scale=None, attn_drop=0.):

        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.head_dim = head_dim
        self.scale = qk_scale or head_dim ** -0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # 2*Wh-1 * 2*Ww-1, nH

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)
        trunc_normal_(self.relative_position_bias_table, std=.02)

        self.attn_drop = nn.Dropout(attn_drop)

        self.softmax = nn.Softmax(dim=-1)

forward(): 

    def forward(self, q, k, v, mask=None):
        """
        Args:
            q: queries with shape of (num_windows*B, N, C)
            k: keys with shape of (num_windows*B, N, C)
            v: values with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = q.shape
        q = q.reshape(B_, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
        k = k.reshape(B_, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
        v = v.reshape(B_, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)

        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))

        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)
        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        
        return x

① Read the input shape, (nW*B, window_size*window_size, C), written below as (B_, N, C).

② q, k and v (each of shape (nW*B, window_size*window_size, C), produced by the block's qkv linear layer) are reshaped and permuted to (B_, self.num_heads, N, C // self.num_heads), so that the following matrix multiplications run per head.

③ Scale q by self.scale.

④ Multiply q with the transposed k to obtain the attention map.

⑤ Compute relative_position_bias and add it to the attention map.

⑥ When computing SW-MSA, mask is not None; the attention map is reshaped to match the mask, the mask is added, and the original shape is restored before the softmax.

⑦ Apply dropout to the attention weights, multiply them with v, and reshape the result back to (B_, N, C).

    def extra_repr(self) -> str:
        return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'

    def flops(self, N):
        # calculate flops for 1 window with token length of N
        flops = 0
        # qkv = self.qkv(x)
        flops += N * self.dim * 3 * self.dim
        # attn = (q @ k.transpose(-2, -1))
        flops += self.num_heads * N * (self.dim // self.num_heads) * N
        #  x = (attn @ v)
        flops += self.num_heads * N * N * (self.dim // self.num_heads)
        # x = self.proj(x)
        flops += N * self.dim * self.dim
        return flops
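A small usage sketch of WindowAttention with the shapes above (illustrative values; in the block the q/k/v windows come from get_window_qkv, and attn_mask2 is passed for the shifted half), assuming the imports in generator.py:

attn = WindowAttention(dim=96, window_size=(4, 4), num_heads=4)
q = k = v = torch.randn(8, 16, 96)   # (num_windows*B, window_size*window_size, C)
out = attn(q, k, v, mask=None)       # W-MSA path -> (8, 16, 96)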

AdaIN: adaptive instance normalization layer.

class AdaptiveInstanceNorm(nn.Module):                  # AdaIN
    def __init__(self, in_channel, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm1d(in_channel)  # instance normalization layer
        self.style = EqualLinear(style_dim, in_channel * 2)  # maps the w vector to the AdaIN coefficients

    def forward(self, input, style):
        # style is the style vector (length 512 by default); self.style turns it into
        # per-channel coefficients, twice the number of input channels
        style = self.style(style).unsqueeze(-1)
        gamma, beta = style.chunk(2, 1)  # split along dim 1 (channels) into scale and shift

        out = self.norm(input)  # instance normalization
        out = gamma * out + beta
        return out

StyleBasicLayer(): one stage of the generator, built from a stack of StyleSwinTransformerBlocks.

class StyleBasicLayer(nn.Module):
    """ A basic StyleSwin layer for one stage.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        out_dim (int): Number of output channels.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        upsample (nn.Module | None, optional): Upsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
        style_dim (int): Dimension of style vector.
    """

Constructor arguments: a large part of them is simply forwarded to StyleSwinTransformerBlock() and is described below in that class.

depth (int): number of blocks stacked in this stage.

upsample (nn.Module | None, optional): optional upsample module applied after the blocks.

use_checkpoint (bool): whether to use gradient checkpointing to save memory.

style_dim (int): dimension of the style vector.

    def __init__(self, dim, input_resolution, depth, num_heads, window_size, out_dim=None,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., upsample=None, 
                 use_checkpoint=False, style_dim=512):

        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.depth = depth
        self.use_checkpoint = use_checkpoint

        # build blocks
        self.blocks = nn.ModuleList([
            StyleSwinTransformerBlock(dim=dim, input_resolution=input_resolution,
                                 num_heads=num_heads, window_size=window_size,
                                 mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                                 drop=drop, attn_drop=attn_drop, style_dim=style_dim)
            for _ in range(depth)])

        if upsample is not None:
            self.upsample = upsample(input_resolution, dim=dim, out_dim=out_dim)
        else:
            self.upsample = None

The constructor defines blocks and upsample. blocks is a ModuleList of StyleSwinTransformerBlock instances repeated depth times; note that the forward below indexes blocks[0] and blocks[1] directly, each receiving its own style latent, so depth is expected to be 2 per stage, and stacking the blocks is what grows the local window attention toward global attention.

upsample, when present, upsamples the feature map at the end of the stage.

    def forward(self, x, latent1, latent2):
        if self.use_checkpoint:
            x = checkpoint.checkpoint(self.blocks[0], x, latent1)
            x = checkpoint.checkpoint(self.blocks[1], x, latent2)
        else:
            x = self.blocks[0](x, latent1)
            x = self.blocks[1](x, latent2)

        if self.upsample is not None:
            x = self.upsample(x)
        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"

    def flops(self):
        flops = 0
        for blk in self.blocks:
            flops += blk.flops()
        if self.upsample is not None:
            flops += self.upsample.flops()
        return flops

StyleSwinTransformerBlock():

class StyleSwinTransformerBlock(nn.Module):  # Swin block
    r""" StyleSwin Transformer Block.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resulotion.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        style_dim (int): Dimension of style vector.
    """

Constructor arguments: most are similar to those of StyleBasicLayer(); see above.

dim (int): number of input channels; note this is the embedding dimension, not the raw image channel count.

input_resolution (tuple[int]): number of tokens (patches) along the height and width.

num_heads (int): number of attention heads.

window_size (int): window size.

shift_size (int): shift distance of the window for SW-MSA.

mlp_ratio (float): ratio of the MLP hidden dimension to the embedding dimension.

qkv_bias (bool, optional): if True, add a learnable bias to Q, K and V.

qk_scale (float | None, optional): scaling factor for q; head_dim ** -0.5 is used if not set.

drop (float, optional): dropout rate, default 0, used against overfitting.

attn_drop (float, optional): attention dropout rate, default 0.

drop_path (float, optional): stochastic depth rate, default 0.0.

act_layer (nn.Module, optional): activation layer, nn.GELU by default.

norm_layer (nn.Module, optional): normalization layer; in this block the LayerNorm of the original Swin is replaced by AdaptiveInstanceNorm (norm1/norm2).

    def __init__(self, dim, input_resolution, num_heads, window_size=7,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
                 act_layer=nn.GELU, style_dim=512):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.mlp_ratio = mlp_ratio
        self.shift_size = self.window_size // 2
        self.style_dim = style_dim
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"

        self.norm1 = AdaptiveInstanceNorm(dim, style_dim)
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.ModuleList([
            WindowAttention(
                dim // 2, window_size=to_2tuple(self.window_size), num_heads=num_heads // 2,
                qk_scale=qk_scale, attn_drop=attn_drop),
            WindowAttention(
                dim // 2, window_size=to_2tuple(self.window_size), num_heads=num_heads // 2,
                qk_scale=qk_scale, attn_drop=attn_drop),
        ])
        
        attn_mask1 = None
        attn_mask2 = None
        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
            h_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            w_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            # nW, window_size, window_size, 1
            mask_windows = window_partition(img_mask, self.window_size)
            mask_windows = mask_windows.view(-1,
                                            self.window_size * self.window_size)
            attn_mask2 = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
            attn_mask2 = attn_mask2.masked_fill(
                attn_mask2 != 0, float(-100.0)).masked_fill(attn_mask2 == 0, float(0.0))
        
        self.register_buffer("attn_mask1", attn_mask1)
        self.register_buffer("attn_mask2", attn_mask2)

        self.norm2 = AdaptiveInstanceNorm(dim, style_dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
    def forward(self, x, style):
        H, W = self.input_resolution  # spatial resolution of the input feature map
        B, L, C = x.shape  # batch size, token count, channels
        assert L == H * W, "input feature has wrong size"  # L must equal H * W

        # Double Attn
        shortcut = x  # identity branch for the residual connection
        x = self.norm1(x.transpose(-1, -2), style).transpose(-1, -2)
        
        qkv = self.qkv(x).reshape(B, -1, 3, C).permute(2, 0, 1, 3).reshape(3 * B, H, W, C)
        qkv_1 = qkv[:, :, :, : C // 2].reshape(3, B, H, W, C // 2)
        if self.shift_size > 0:  # cyclic-shift the second half of the channels for SW-MSA
            qkv_2 = torch.roll(qkv[:, :, :, C // 2:], shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)).reshape(3, B, H, W, C // 2)
        else:
            qkv_2 = qkv[:, :, :, C // 2:].reshape(3, B, H, W, C // 2)
        
        q1_windows, k1_windows, v1_windows = self.get_window_qkv(qkv_1)
        q2_windows, k2_windows, v2_windows = self.get_window_qkv(qkv_2)

        x1 = self.attn[0](q1_windows, k1_windows, v1_windows, self.attn_mask1)
        x2 = self.attn[1](q2_windows, k2_windows, v2_windows, self.attn_mask2)
        
        x1 = window_reverse(x1.view(-1, self.window_size * self.window_size, C // 2), self.window_size, H, W)
        x2 = window_reverse(x2.view(-1, self.window_size * self.window_size, C // 2), self.window_size, H, W)

        if self.shift_size > 0:  # undo the cyclic shift
            x2 = torch.roll(x2, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
        else:
            x2 = x2

        x = torch.cat([x1.reshape(B, H * W, C // 2), x2.reshape(B, H * W, C // 2)], dim=2)
        x = self.proj(x)

        # FFN
        x = shortcut + x
        x = x + self.mlp(self.norm2(x.transpose(-1, -2), style).transpose(-1, -2))

        return x

get_window_qkv(): split out q, k and v and partition each of them into windows.

    def get_window_qkv(self, qkv):
        q, k, v = qkv[0], qkv[1], qkv[2]   # B, H, W, C
        C = q.shape[-1]
        q_windows = window_partition(q, self.window_size).view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C
        k_windows = window_partition(k, self.window_size).view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C
        v_windows = window_partition(v, self.window_size).view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C
        return q_windows, k_windows, v_windows
    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
               f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"

    def flops(self):
        flops = 0
        H, W = self.input_resolution
        # norm1
        flops += 1 * self.style_dim * self.dim * 2
        flops += 2 * (H * W) * self.dim
        # W-MSA/SW-MSA
        nW = H * W / self.window_size / self.window_size
        for attn in self.attn:
            flops += nW * (attn.flops(self.window_size * self.window_size))
        # mlp
        flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
        # norm2
        flops += 1 * self.style_dim * self.dim * 2
        flops += 2 * (H * W) * self.dim
        return flops
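A quick usage sketch of the block above (illustrative sizes; this assumes the repository's generator.py modules such as EqualLinear are importable):

blk = StyleSwinTransformerBlock(dim=96, input_resolution=(8, 8), num_heads=4, window_size=4)
x = torch.randn(2, 64, 96)   # (B, H*W, C) with H = W = 8
w = torch.randn(2, 512)      # style vector of length style_dim
y = blk(x, w)                # -> (2, 64, 96)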

BilinearUpsample():

class BilinearUpsample(nn.Module):
    """ BilinearUpsample Layer.

    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        out_dim (int): Number of output channels.
    """

    def __init__(self, input_resolution, dim, out_dim=None):
        super().__init__()
        assert dim % 2 == 0, f"x dim are not even."
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear')
        self.norm = nn.LayerNorm(dim)
        self.reduction = nn.Linear(dim, out_dim, bias=False)
        self.input_resolution = input_resolution
        self.dim = dim
        self.out_dim = out_dim
        self.alpha = nn.Parameter(torch.zeros(1))
        self.sin_pos_embed = SinusoidalPositionalEmbedding(embedding_dim=out_dim // 2, padding_idx=0, init_size=out_dim // 2)

    def forward(self, x):
        """
        x: B, H*W, C
        """
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"
        assert C == self.dim, "wrong in PatchMerging"

        x = x.view(B, H, W, -1)
        x = x.permute(0, 3, 1, 2).contiguous()   # B,C,H,W
        x = self.upsample(x)
        x = x.permute(0, 2, 3, 1).contiguous().view(B, L*4, C)   # B,H,W,C
        x = self.norm(x)
        x = self.reduction(x)

        # Add SPE    
        x = x.reshape(B, H * 2, W * 2, self.out_dim).permute(0, 3, 1, 2)
        x += self.sin_pos_embed.make_grid2d(H * 2, W * 2, B) * self.alpha
        x = x.permute(0, 2, 3, 1).contiguous().view(B, H * 2 * W * 2, self.out_dim)
        return x

    def extra_repr(self) -> str:
        return f"input_resolution={self.input_resolution}, dim={self.dim}"

    def flops(self):
        H, W = self.input_resolution
        # LN
        flops = 4 * H * W * self.dim
        # proj
        flops += 4 * H * W * self.dim * (self.out_dim)
        # SPE
        flops += 4 * H * W * 2
        # bilinear
        flops += 4 * self.input_resolution[0] * self.input_resolution[1] * self.dim * 5
        return flops

ConstantInput():

class ConstantInput(nn.Module):
    def __init__(self, channel, size=4):
        super().__init__()
        self.input = nn.Parameter(torch.randn(1, channel, size, size))

    def forward(self, input):
        batch = input.shape[0]
        out = self.input.repeat(batch, 1, 1, 1)

        return out  # the learnable, normally-initialized constant repeated over the batch: (batch_size, channel, 4, 4), here channel = 512

The constant input is a learnable parameter initialized from a standard normal distribution (torch.randn); since it has to take part in training, it is wrapped in nn.Parameter.
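The forward only uses the batch size of its argument:

const = ConstantInput(512)    # learnable (1, 512, 4, 4) tensor
latent = torch.randn(4, 512)  # only latent.shape[0] is used here
out = const(latent)           # -> (4, 512, 4, 4), the same parameter repeated per sample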

Generator(): self.size is the size of the generated image and self.style_dim is the dimension of the latent code; both come from the constructor arguments.

class Generator(nn.Module):
    def __init__(
        self,
        size,                   # output image resolution
        style_dim,              # latent code dimension, 512
        n_mlp,                  # number of layers in the z-to-w mapping network
        channel_multiplier=2,
        lr_mlp=0.01,
        enable_full_resolution=8,
        mlp_ratio=4,
        use_checkpoint=False,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0,
        attn_drop_rate=0,
    ):
        super().__init__()
        self.style_dim = style_dim
        self.size = size
        self.mlp_ratio = mlp_ratio

self.style is an MLP made of a PixelNorm layer followed by 8 EqualLinear layers, i.e. the network that maps the noise z to the latent code w.

        layers = [PixelNorm()]
        for _ in range(n_mlp):  # n_mlp=8, style_dim=512, lr_mlp=0.01
            layers.append(
                EqualLinear(
                    style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
                )
            )
        self.style = nn.Sequential(*layers)  # mapping network: PixelNorm + 8 EqualLinear layers, maps z to w

        start = 2
        depths = [2, 2, 2, 2, 2, 2, 2, 2, 2]
        in_channels = [
            512,                         # 4
            512,                         # 8
            512,                         # 16
            512,                         # 32
            256 * channel_multiplier,    # 64  channel_multiplier=2
            128 * channel_multiplier,    # 128
            64 * channel_multiplier,     # 256
            32 * channel_multiplier,     # 512
            16 * channel_multiplier      # 1024
        ]

        end = int(math.log(size, 2))
        num_heads = [max(c // 32, 4) for c in in_channels]
        full_resolution_index = int(math.log(enable_full_resolution, 2))
        window_sizes = [2 ** i if i <= full_resolution_index else 8 for i in range(start, end + 1)]

        self.input = ConstantInput(in_channels[0])  # constant input of the main branch, in_channels[0] = 512
        self.layers = nn.ModuleList()
        self.to_rgbs = nn.ModuleList()
        num_layers = 0

Build one StyleBasicLayer (and its ToRGB) per resolution in a loop:

        for i_layer in range(start, end + 1):
            in_channel = in_channels[i_layer - start]
            layer = StyleBasicLayer(dim=in_channel,
                               input_resolution=(2 ** i_layer,2 ** i_layer),
                               depth=depths[i_layer - start],
                               num_heads=num_heads[i_layer - start],
                               window_size=window_sizes[i_layer - start],
                               out_dim=in_channels[i_layer - start + 1] if (i_layer < end) else None,
                               mlp_ratio=self.mlp_ratio,
                               qkv_bias=qkv_bias, qk_scale=qk_scale,
                               drop=drop_rate, attn_drop=attn_drop_rate,
                               upsample=BilinearUpsample if (i_layer < end) else None,
                               use_checkpoint=use_checkpoint, style_dim=style_dim)
            self.layers.append(layer)

            out_dim = in_channels[i_layer - start + 1] if (i_layer < end) else in_channels[i_layer - start]
            upsample = True if (i_layer < end) else False
            to_rgb = ToRGB(out_dim, upsample=upsample, resolution=(2 ** i_layer))
            self.to_rgbs.append(to_rgb)
            num_layers += 2

        self.n_latent = num_layers
        self.apply(self._init_weights)

_init_weights(): weight initialization for the Linear, LayerNorm and Conv2d layers.

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)
            if m.weight is not None:
                nn.init.constant_(m.weight, 1.0)
        elif isinstance(m, nn.Conv2d):
            nn.init.xavier_normal_(m.weight, gain=.02)
            if hasattr(m, 'bias') and m.bias is not None:
                nn.init.constant_(m.bias, 0)
    def forward(
        self,
        noise,
        return_latents=False,
        inject_index=None,
        truncation=1,
        truncation_latent=None,
    ):
        styles = self.style(noise)
        inject_index = self.n_latent

        if truncation < 1:
            style_t = []
            for style in styles:
                style_t.append(
                    truncation_latent + truncation * (style - truncation_latent)
                )

            styles = torch.cat(style_t, dim=0)

        if styles.ndim < 3:
            latent = styles.unsqueeze(1).repeat(1, inject_index, 1)
        else:
            latent = styles

        x = self.input(latent)
        B, C, H, W = x.shape
        x = x.permute(0, 2, 3, 1).contiguous().view(B, H * W, C)

        count = 0
        skip = None
        for layer, to_rgb in zip(self.layers, self.to_rgbs):
            x = layer(x, latent[:,count,:], latent[:,count+1,:])
            b, n, c = x.shape
            h, w = int(math.sqrt(n)), int(math.sqrt(n))
            skip = to_rgb(x.transpose(-1, -2).reshape(b, c, h, w), skip)
            count = count + 2

        B, L, C = x.shape
        assert L == self.size * self.size
        x = x.reshape(B, self.size, self.size, C).permute(0, 3, 1, 2).contiguous()
        image = skip

        if return_latents:
            return image, latent
        else:
            return image, None

    def flops(self):
        flops = 0
        for _, layer in enumerate(self.layers):
            flops += layer.flops()
        for _, layer in enumerate(self.to_rgbs):
            flops += layer.flops()
        # 8 FC + PixelNorm
        flops += 1 * 10 * self.style_dim * self.style_dim
        return flops
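A hedged end-to-end usage sketch based on the constructor and forward shown above (argument values are illustrative; self.style uses EqualLinear with the fused_lrelu activation, so this needs the repository's custom ops and is normally run in the repo environment, typically on GPU):

g = Generator(size=256, style_dim=512, n_mlp=8)  # 7 stages from 4x4 up to 256x256
z = torch.randn(4, 512)                          # a batch of 4 noise vectors
image, _ = g(z)                                  # image: (4, 3, 256, 256)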

If anything above is misunderstood, discussion and corrections are welcome.
