PSNR/SSIM/LPIPS: Three Essential Image Quality Metrics (with Code)


There are three key metrics for image quality assessment: PSNR, SSIM, and LPIPS. This post provides a simple script that computes each of them.

PSNR (Peak Signal-to-Noise Ratio) is a pixel-level measure based on MSE. Roughly speaking, 30 dB and above indicates decent quality, and above 40 dB the differences are hard to spot with the naked eye.
SSIM (Structural Similarity) compares the statistical structure of the two images (local luminance, contrast, and structure) and maps the result into [0, 1]; the closer to 1, the better the quality. See my earlier post 《SSIM》 for the detailed math.
LPIPS (Learned Perceptual Image Patch Similarity) uses a pretrained deep network to quantify perceptual similarity between images. Its values also typically fall in [0, 1], but unlike SSIM, lower is better. The standard formulas are summarized below.
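For reference, the standard definitions, consistent with the implementations used later in this post (MAX is the maximum possible pixel value, 1.0 for normalized images or 255 for 8-bit images; $c_1$, $c_2$ are the usual small stabilizing constants):

$$
\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^2}{\mathrm{MSE}} = 20\log_{10}\frac{\mathrm{MAX}}{\sqrt{\mathrm{MSE}}}
$$

$$
\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}
$$

LPIPS has no closed-form pixel formula: it averages distances between deep features extracted from the two images, weighted per channel by learned weights.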

Common image quality metrics like these are all included in torchmetrics. Just install it:

pip install torchmetrics
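Depending on your torchmetrics version, the LPIPS metric may pull in optional dependencies (a pretrained backbone from torchvision, or the standalone lpips package in older releases). If the import complains about missing packages, installing the image extras usually resolves it:

pip install "torchmetrics[image]"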

Example script:

```python
import torch
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
from torchmetrics.image import StructuralSimilarityIndexMeasure
from torchmetrics.image import PeakSignalNoiseRatio

_ = torch.manual_seed(123)


def psnr_torch(img1, img2):
    # Manual PSNR, assuming pixel values normalized to [0, 1] (so MAX = 1.0).
    mse = ((img1 - img2) ** 2).view(img1.shape[0], -1).mean(1, keepdim=True)
    return 20 * torch.log10(1.0 / torch.sqrt(mse))


def psnr(img1, img2):
    # Without data_range, torchmetrics estimates the range from the inputs,
    # which is why this can differ slightly from psnr_torch; pass
    # data_range=1.0 to match it exactly.
    metric = PeakSignalNoiseRatio()
    return metric(img1, img2)


def ssim(img1, img2):
    # data_range=1.0: the images are normalized to [0, 1].
    metric = StructuralSimilarityIndexMeasure(data_range=1.0)
    return metric(img1, img2)


def lpips(img1, img2):
    # normalize=True rescales [0, 1] inputs to the [-1, 1] range expected by
    # the pretrained VGG backbone.
    metric = LearnedPerceptualImagePatchSimilarity(net_type='vgg', normalize=True)
    return metric(img1, img2)


def _main():
    img1 = torch.rand(1, 3, 100, 100)
    img2 = torch.rand(1, 3, 100, 100)

    print("PSNR (manual): ", psnr_torch(img1, img2))
    print("PSNR (torchmetrics): ", psnr(img1, img2))
    print("SSIM: ", ssim(img1, img2))
    print("LPIPS: ", lpips(img1, img2))


if __name__ == "__main__":
    _main()
```

The script includes two PSNR implementations, and their results are essentially identical. Feel free to reuse the code.
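In practice you usually evaluate reconstructed images against reference images rather than random tensors. Below is a minimal sketch of that workflow using torchmetrics' update()/compute() accumulation API; the directory names and the load_image helper are placeholders of my own, and it assumes both folders contain identically named RGB images at the same resolution.

```python
import torch
from pathlib import Path
from PIL import Image
from torchvision.transforms.functional import to_tensor
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity


def load_image(path):
    # Placeholder loader: RGB image -> float tensor in [0, 1], shape (1, 3, H, W).
    return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)


def evaluate_folder(pred_dir, gt_dir, device="cpu"):
    # Each metric accumulates state across update() calls; compute() returns
    # the aggregate over all image pairs seen so far.
    psnr = PeakSignalNoiseRatio(data_range=1.0).to(device)
    ssim = StructuralSimilarityIndexMeasure(data_range=1.0).to(device)
    lpips = LearnedPerceptualImagePatchSimilarity(net_type="vgg", normalize=True).to(device)

    for pred_path in sorted(Path(pred_dir).glob("*.png")):
        gt_path = Path(gt_dir) / pred_path.name  # assumes matching file names
        pred = load_image(pred_path).to(device)
        gt = load_image(gt_path).to(device)

        psnr.update(pred, gt)
        ssim.update(pred, gt)
        lpips.update(pred, gt)

    return {
        "psnr": psnr.compute().item(),
        "ssim": ssim.compute().item(),
        "lpips": lpips.compute().item(),
    }


if __name__ == "__main__":
    # "results" and "ground_truth" are example directory names.
    print(evaluate_folder("results", "ground_truth"))
```

Since torchmetrics metrics are nn.Module subclasses, moving them to a GPU with .to(device) speeds up LPIPS considerably on larger image sets.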
