The LPIPS Class

Contents

1. The L2 Norm

2. LPIPS

2.1. The LPIPS class code

2.1.1. The alexnet model

2.1.2. Computing image similarity with LPIPS

2.2. LPIPS usage example

2.3. LPIPS and torchvision


1. The L2 Norm

        The L2 norm is commonly used as a pixel-level similarity measure between images.

        When comparing two images, the L2 norm directly measures their difference in pixel space: the norm itself is the Euclidean distance, and its square divided by the number of elements is the familiar mean squared error (MSE).

Computing image similarity with the L2 norm

        Suppose we have two example images stored as tensors of shape [1, 3, H, W], with pixel values I_{1}(x,y,z) and I_{2}(x,y,z). The L2 difference between them is:

L2_{difference}=\sqrt{\sum_{z=1}^{3}\sum_{x=1}^{H}\sum_{y=1}^{W}(I_{1}(x,y,z)-I_{2}(x,y,z))^{2}}

Code implementation:

import torch

# Create two example images (assume two image tensors of shape [1, 3, H, W])
img1 = torch.randn(1, 3, 256, 256)  # randomly generated example image 1
img2 = torch.randn(1, 3, 256, 256)  # randomly generated example image 2

# Compute the L2 difference between the two images
L2_difference = torch.norm(img1 - img2, p=2)

print("L2 Difference: ", L2_difference)

Output:

L2 Difference:  tensor(627.1748)
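
        As a sanity check on the MSE relationship above, the squared L2 difference divided by the number of elements equals the mean squared error; a minimal sketch:

import torch
import torch.nn.functional as F

img1 = torch.randn(1, 3, 256, 256)
img2 = torch.randn(1, 3, 256, 256)

# mean of squared differences == squared L2 norm / number of elements
l2 = torch.norm(img1 - img2, p=2)
mse = F.mse_loss(img1, img2)
print(torch.allclose(l2 ** 2 / img1.numel(), mse))  # prints True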

2. LPIPS

        LPIPS (Learned Perceptual Image Patch Similarity) is a method for measuring the perceptual similarity of images. It relies on feature extraction by a deep learning model and compares the features of two images to assess their perceptual difference. In tasks such as image super-resolution, image inpainting, and style transfer, LPIPS can serve as an evaluation metric for model performance.

        LPIPS uses a pretrained convolutional neural network (such as AlexNet, VGG16, or SqueezeNet) to extract image features, then processes these features with a set of learned linear layers to compute a similarity score between the images.

2.1. The LPIPS class code:

import torch
import torch.nn as nn
import torch.nn.init as init
from torch.autograd import Variable
import numpy as np
from . import pretrained_networks as pn
import torch.nn

import lpips


def spatial_average(in_tens, keepdim=True):
    return in_tens.mean([2,3],keepdim=keepdim)

def upsample(in_tens, out_HW=(64,64)): # assumes scale factor is same for H and W
    in_H, in_W = in_tens.shape[2], in_tens.shape[3]
    return nn.Upsample(size=out_HW, mode='bilinear', align_corners=False)(in_tens)

# Learned perceptual metric
class LPIPS(nn.Module):
    def __init__(self, pretrained=True, net='alex', version='0.1', lpips=True, spatial=False, 
        pnet_rand=False, pnet_tune=False, use_dropout=True, model_path=None, eval_mode=True, verbose=True):
        """ Initializes a perceptual loss torch.nn.Module

        Parameters (default listed first)
        ---------------------------------
        lpips : bool
            [True] use linear layers on top of base/trunk network
            [False] means no linear layers; each layer is averaged together
        pretrained : bool
            This flag controls the linear layers, which are only in effect when lpips=True above
            [True] means linear layers are calibrated with human perceptual judgments
            [False] means linear layers are randomly initialized
        pnet_rand : bool
            [False] means trunk loaded with ImageNet classification weights
            [True] means randomly initialized trunk
        net : str
            ['alex','vgg','squeeze'] are the base/trunk networks available
        version : str
            ['v0.1'] is the default and latest
            ['v0.0'] contained a normalization bug; corresponds to old arxiv v1 (https://arxiv.org/abs/1801.03924v1)
        model_path : 'str'
            [None] is default and loads the pretrained weights from paper https://arxiv.org/abs/1801.03924v1

        The following parameters should only be changed if training the network

        eval_mode : bool
            [True] is for test mode (default)
            [False] is for training mode
        pnet_tune
            [False] keep base/trunk frozen (default)
            [True] tune the base/trunk network
        use_dropout : bool
            [True] to use dropout when training linear layers
            [False] for no dropout when training linear layers
        """

        super(LPIPS, self).__init__()
        if(verbose):
            print('Setting up [%s] perceptual loss: trunk [%s], v[%s], spatial [%s]'%
                ('LPIPS' if lpips else 'baseline', net, version, 'on' if spatial else 'off'))

        self.pnet_type = net
        self.pnet_tune = pnet_tune
        self.pnet_rand = pnet_rand
        self.spatial = spatial
        self.lpips = lpips # false means baseline of just averaging all layers
        self.version = version
        self.scaling_layer = ScalingLayer()

        # select the pretrained backbone network
        if(self.pnet_type in ['vgg','vgg16']):
            net_type = pn.vgg16
            self.chns = [64,128,256,512,512]
        elif(self.pnet_type=='alex'):
            net_type = pn.alexnet
            self.chns = [64,192,384,256,256]
        elif(self.pnet_type=='squeeze'):
            net_type = pn.squeezenet
            self.chns = [64,128,256,384,384,512,512]
        self.L = len(self.chns)
        
        # load the backbone model and its weights (note: these are the official torchvision weights)
        self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)

        # build the learned 1x1 conv (linear) layers
        if(lpips):
            self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
            self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
            self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
            self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
            self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
            self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4]
            if(self.pnet_type=='squeeze'): # 7 layers for squeezenet
                self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout)
                self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout)
                self.lins+=[self.lin5,self.lin6]
            self.lins = nn.ModuleList(self.lins)

            if(pretrained):
                if(model_path is None):
                    import inspect
                    import os
                    model_path = os.path.abspath(os.path.join(inspect.getfile(self.__init__), '..', 'weights/v%s/%s.pth'%(version,net)))
                    # e.g. resolves to D:\Anaconda\envs\env_python3.9\lib\site-packages\lpips\weights\v0.1\alex.pth

                if(verbose):
                    print('Loading model from: %s'%model_path)
                # load the LPIPS weights (note: these are the official lpips weights, not torchvision's)
                self.load_state_dict(torch.load(model_path, map_location='cpu'), strict=False)          

        if(eval_mode):
            self.eval()

    def forward(self, in0, in1, retPerLayer=False, normalize=False):
        if normalize: # turn on this flag if input is [0,1] so it can be adjusted to [-1, +1]
            in0 = 2 * in0  - 1
            in1 = 2 * in1  - 1

        # v0.0 - original release had a bug, where input was not scaled
        in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1)
        outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input)
        feats0, feats1, diffs = {}, {}, {}

        for kk in range(self.L):
            feats0[kk], feats1[kk] = lpips.normalize_tensor(outs0[kk]), lpips.normalize_tensor(outs1[kk])
            diffs[kk] = (feats0[kk]-feats1[kk])**2

        if(self.lpips):
            if(self.spatial):
                res = [upsample(self.lins[kk](diffs[kk]), out_HW=in0.shape[2:]) for kk in range(self.L)]
            else:
                res = [spatial_average(self.lins[kk](diffs[kk]), keepdim=True) for kk in range(self.L)]
        else:
            if(self.spatial):
                res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_HW=in0.shape[2:]) for kk in range(self.L)]
            else:
                res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)]

        val = 0
        for l in range(self.L):
            val += res[l]
        
        if(retPerLayer):
            return (val, res)
        else:
            return val


class ScalingLayer(nn.Module):
    def __init__(self):
        super(ScalingLayer, self).__init__()
        self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None])
        self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None])

    def forward(self, inp):
        return (inp - self.shift) / self.scale


class NetLinLayer(nn.Module):
    ''' A single linear layer which does a 1x1 conv '''
    def __init__(self, chn_in, chn_out=1, use_dropout=False):
        super(NetLinLayer, self).__init__()

        layers = [nn.Dropout(),] if(use_dropout) else []
        layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
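
        To make the shapes concrete, here is a small sketch reusing NetLinLayer and spatial_average defined above; it shows how one layer's feature-difference map is reduced to a single score. The 63x63 spatial size is hypothetical, roughly what the first AlexNet slice produces for a 256x256 input:

import torch

# Hypothetical squared feature difference for one backbone layer:
# batch 1, 64 channels (first AlexNet slice), 63x63 spatial grid
diff = torch.randn(1, 64, 63, 63) ** 2

lin = NetLinLayer(64)               # learned 1x1 conv: 64 channels -> 1
weighted = lin(diff)                # shape [1, 1, 63, 63]
score = spatial_average(weighted)   # shape [1, 1, 1, 1], mean over H and W
print(weighted.shape, score.shape)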

2.1.1. The alexnet model

pn.alexnet

        The line above jumps to the alexnet class (defined in pretrained_networks.py).

        The alexnet class code:

from collections import namedtuple

import torch
from torchvision import models as tv


class alexnet(torch.nn.Module):
    def __init__(self, requires_grad=False, pretrained=True):
        super(alexnet, self).__init__()
        alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features
        self.slice1 = torch.nn.Sequential()
        self.slice2 = torch.nn.Sequential()
        self.slice3 = torch.nn.Sequential()
        self.slice4 = torch.nn.Sequential()
        self.slice5 = torch.nn.Sequential()
        self.N_slices = 5
        for x in range(2):
            self.slice1.add_module(str(x), alexnet_pretrained_features[x])
        for x in range(2, 5):
            self.slice2.add_module(str(x), alexnet_pretrained_features[x])
        for x in range(5, 8):
            self.slice3.add_module(str(x), alexnet_pretrained_features[x])
        for x in range(8, 10):
            self.slice4.add_module(str(x), alexnet_pretrained_features[x])
        for x in range(10, 12):
            self.slice5.add_module(str(x), alexnet_pretrained_features[x])
        if not requires_grad:
            for param in self.parameters():
                param.requires_grad = False

    def forward(self, X):
        h = self.slice1(X)
        h_relu1 = h
        h = self.slice2(h)
        h_relu2 = h
        h = self.slice3(h)
        h_relu3 = h
        h = self.slice4(h)
        h_relu4 = h
        h = self.slice5(h)
        h_relu5 = h
        alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5'])
        out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5)

        return out

Note

alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features

This line loads the features part of torchvision's pretrained model (the convolution and pooling layers only; the fully connected layers live in classifier and are not used).

Therefore, the structure and weights loaded at this point are torchvision's ImageNet feature-extractor weights; the LPIPS-specific linear-layer weights are loaded separately in the LPIPS class (see section 2.3).
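
        For reference, printing torchvision's AlexNet features shows which layer indices the five slices cover; the slice output channels match self.chns = [64,192,384,256,256] in the LPIPS class:

import torchvision.models as tv

print(tv.alexnet(pretrained=False).features)
# (0)  Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
# (1)  ReLU                                    <- slice1 ends here: relu1, 64 ch
# (2)  MaxPool2d
# (3)  Conv2d(64, 192, kernel_size=5, padding=2)
# (4)  ReLU                                    <- slice2 ends here: relu2, 192 ch
# (5)  MaxPool2d
# (6)  Conv2d(192, 384, kernel_size=3, padding=1)
# (7)  ReLU                                    <- slice3 ends here: relu3, 384 ch
# (8)  Conv2d(384, 256, kernel_size=3, padding=1)
# (9)  ReLU                                    <- slice4 ends here: relu4, 256 ch
# (10) Conv2d(256, 256, kernel_size=3, padding=1)
# (11) ReLU                                    <- slice5 ends here: relu5, 256 ch
# (12) MaxPool2d (not used by any slice)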

2.1.2. Computing image similarity with LPIPS

        Inside the LPIPS model, the L2 distance is computed in feature space rather than pixel space. LPIPS extracts multi-layer features of each image with a pretrained network (e.g. VGG or AlexNet) and then compares those features. The specific steps are:

        (1) Feature extraction: feed both images through the pretrained network, extract multi-layer features, and unit-normalize them along the channel dimension.

self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input)
feats0, feats1, diffs = {}, {}, {}

for kk in range(self.L):
    feats0[kk], feats1[kk] = lpips.normalize_tensor(outs0[kk]), lpips.normalize_tensor(outs1[kk])
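
        Here lpips.normalize_tensor unit-normalizes each feature vector along the channel dimension; in the lpips source it is essentially the following (eps guards against division by zero):

import torch

def normalize_tensor(in_feat, eps=1e-10):
    # one L2 norm per spatial location, computed over the channel dimension
    norm_factor = torch.sqrt(torch.sum(in_feat ** 2, dim=1, keepdim=True))
    return in_feat / (norm_factor + eps)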

        (2) Feature difference: compute the element-wise squared difference of the corresponding normalized features (summed over channels, this is the squared L2 distance at each spatial location).

    diffs[kk] = (feats0[kk]-feats1[kk])**2

        (3) Feature aggregation: weight each layer's difference map with the learned 1x1 conv layers, average over the spatial dimensions, and sum across layers to obtain the final similarity score.

res = [spatial_average(self.lins[kk](diffs[kk]), keepdim=True) for kk in range(self.L)]

val = 0
for l in range(self.L):
    val += res[l]

return val
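
        Putting the three steps together, this evaluates the distance from the LPIPS paper (https://arxiv.org/abs/1801.03924), where \hat{y}^{l} are the channel-normalized features of layer l and w_{l} are the learned 1x1 conv weights:

d(x,x_{0})=\sum_{l}\frac{1}{H_{l}W_{l}}\sum_{h,w}\left \| w_{l}\odot (\hat{y}_{hw}^{l}-\hat{y}_{0hw}^{l}) \right \|_{2}^{2}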

2.2. LPIPS usage example:

import lpips
import torch


lpips_model = lpips.LPIPS(net='alex').eval()

# Create two example images (assume two image tensors of shape [1, 3, H, W])
img1 = torch.randn(1, 3, 256, 256)  # randomly generated example image 1
img2 = torch.randn(1, 3, 256, 256)  # randomly generated example image 2

# Compute the LPIPS distance between the two images
lpips_distance = lpips_model(img1, img2)

print("LPIPS Distance: ", lpips_distance.item())

Output:

100%|██████████| 233M/233M [00:53<00:00, 4.54MB/s]
Loading model from: D:\Anaconda\envs\env_python3.9\lib\site-packages\lpips\weights\v0.1\alex.pth
LPIPS Distance:  0.19887171685695648
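
        Note that LPIPS expects inputs scaled to [-1, 1]. torch.randn happens to produce roughly zero-centered values, but if your images are in [0, 1] (e.g. straight from torchvision.transforms.ToTensor()), pass normalize=True so the forward pass rescales them:

img1_01 = torch.rand(1, 3, 256, 256)  # values in [0, 1]
img2_01 = torch.rand(1, 3, 256, 256)

# normalize=True maps [0, 1] inputs to [-1, 1] inside forward
lpips_distance = lpips_model(img1_01, img2_01, normalize=True)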

2.3. LPIPS and torchvision

        Tip: the alex.pth weight file comes from the lpips package. It has a different purpose and source than the torchvision pretrained weights, even though both may use the same backbone architecture (such as AlexNet or VGG).

        a. The pretrained models provided by torchvision (e.g. VGG16) are trained on ImageNet for generic image classification.
        b. The lpips weights are trained and optimized for the image-similarity task: the linear layers are calibrated with a loss based on human perceptual judgments, so the model better matches perceived similarity.

import torch
import torch.nn as nn
import torchvision.models as models

# Load the pretrained backbone structure and weights provided by torchvision
# (alexnet here, to match the alex.pth LPIPS weights below)
alexnet_pretrained_features = models.alexnet(pretrained=True).features

# Custom LPIPS-style model class (illustrative sketch)
class LPIPS(nn.Module):
    def __init__(self, model_path=None):
        super(LPIPS, self).__init__()
        self.features = alexnet_pretrained_features
        # If an LPIPS-specific weight file is given, load it on top;
        # strict=False loads only the keys that exist in this module
        if model_path:
            self.load_state_dict(torch.load(model_path, map_location='cpu'), strict=False)

# Path to the weight file shipped with the lpips package
model_path = "D:\\Anaconda\\envs\\env_python3.9\\lib\\site-packages\\lpips\\weights\\v0.1\\alex.pth"
lpips_model = LPIPS(model_path)

# Run in eval mode with the LPIPS-specific weights applied
lpips_model.eval()

This way the model not only reuses the pretrained backbone structure but also loads the weights specifically optimized for the perceptual-similarity task, which is what makes LPIPS effective as a perceptual metric.
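
        To see the split between the two weight sources, you can inspect the state dict of alex.pth: it should contain only the learned linear-layer weights (lin0 through lin4), one 1x1 conv weight per backbone slice, while all backbone weights come from torchvision. A quick check, assuming the model_path above:

import torch

state_dict = torch.load(model_path, map_location='cpu')
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
# Expected keys, one per slice (channel counts follow self.chns):
# lin0.model.1.weight (1, 64, 1, 1)
# lin1.model.1.weight (1, 192, 1, 1)
# lin2.model.1.weight (1, 384, 1, 1)
# lin3.model.1.weight (1, 256, 1, 1)
# lin4.model.1.weight (1, 256, 1, 1)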
