#A Beginner-Friendly Guide to Image Quality Metrics (MSE, LPIPS): Theory + Code

  •  Mean Squared Error (MSE)

Given an original image I and a generated image K, both of size m×n, the mean squared error (MSE) is defined as:

MSE(I, K) = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \big[ I(i,j) - K(i,j) \big]^2

#Original image I, generated image K

#PyTorch: call torch.nn.MSELoss() directly
mse_fn = torch.nn.MSELoss()
mse_loss = mse_fn(I, K)

#TensorFlow 1.x
mse_loss = tf.losses.mean_squared_error(I, K)

#TensorFlow 2.x
mse_loss = tf.keras.losses.MSE(I, K)
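As a sanity check, the definition above can also be computed directly with NumPy (a minimal sketch; the image values here are illustrative):

```python
import numpy as np

# Two dummy 2x2 grayscale images (m = n = 2); only one pixel differs
I = np.array([[0.0, 1.0], [2.0, 3.0]])
K = np.array([[0.0, 1.0], [2.0, 5.0]])

# MSE = (1 / (m*n)) * sum over all pixels of (I - K)^2
mse = np.mean((I - K) ** 2)
print(mse)  # (3 - 5)^2 / 4 = 1.0
```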
  • LPIPS

 Learned Perceptual Image Patch Similarity (LPIPS), also known as "perceptual loss", measures the difference between two images. It comes from the paper "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric".

Paper:

https://arxiv.org/pdf/1801.03924.pdf

Code:

pytorch: https://github.com/richzhang/PerceptualSimilarity

tensorflow: https://github.com/alexlee-gk/lpips-tensorflow

To compute similarity, the distance between corresponding channels of the network's outputs is computed layer by layer, and the resulting distances are averaged over the spatial dimensions and all layers. To calibrate the metric, LPIPS feeds the two per-pair distances into a small network trained with a cross-entropy loss on human 2AFC (two-alternative forced choice) judgments.

To compute the distance d_{0} between x and x_{0}: given a base network F, first compute the deep embeddings, unit-normalize the activations along the channel dimension, scale each channel by a vector w, take the L_{2} distance, then average over the spatial dimensions and sum over all layers.

d(x, x_{0}) = \sum_{l} \frac{1}{H_{l} W_{l}} \sum_{h,w} \left\| w_{l} \odot \left( \hat{y}_{hw}^{l} - \hat{y}_{0hw}^{l} \right) \right\|_{2}^{2}

Feature stacks are extracted from layer l and unit-normalized in the channel dimension. The activations are scaled channel-wise by w_{l}, the L_{2} distance is computed, averaged spatially, and summed over layers.
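The per-layer computation can be sketched in NumPy as follows (a minimal illustration: the feature maps, shapes, and uniform w_{l} here are dummy stand-ins, not the trained LPIPS weights):

```python
import numpy as np

def lpips_layer_distance(feat0, feat1, w):
    """One layer's contribution to the LPIPS distance.

    feat0, feat1: feature maps of shape (C, H, W) from the same layer l.
    w: per-channel scaling vector w_l of shape (C,).
    """
    # Unit-normalize each spatial position along the channel dimension
    feat0 = feat0 / (np.linalg.norm(feat0, axis=0, keepdims=True) + 1e-10)
    feat1 = feat1 / (np.linalg.norm(feat1, axis=0, keepdims=True) + 1e-10)
    # Scale channels by w_l, take the squared L2 distance per position,
    # then average over the spatial dimensions H and W
    diff = w[:, None, None] * (feat0 - feat1)
    return np.mean(np.sum(diff ** 2, axis=0))

rng = np.random.default_rng(0)
# Dummy feature-map shapes standing in for two layers of a base network
layers = [(8, 4, 4), (16, 2, 2)]
# Summing the per-layer terms gives the final distance d(x, x0)
d = sum(
    lpips_layer_distance(rng.standard_normal(s), rng.standard_normal(s),
                         np.ones(s[0]))
    for s in layers
)
print(d)
```

Identical feature stacks give a distance of exactly zero, since the normalized, scaled difference vanishes at every spatial position.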

#PyTorch: computing LPIPS

import torch
import lpips
import os

use_gpu = False        # Whether to use GPU
spatial = True         # Return a spatial map of perceptual distance

# Linearly calibrated models (LPIPS)
loss_fn = lpips.LPIPS(net='alex', spatial=spatial)  # Can also set net='squeeze' or 'vgg'
# loss_fn = lpips.LPIPS(net='alex', spatial=spatial, lpips=False)  # Raw (uncalibrated) distances

if use_gpu:
	loss_fn.cuda()

## Collect (generated, real) image pairs from the results directory
root_path = r'D:\Project\results\faces'
img0_path_list = []
img1_path_list = []
for root, _, fnames in sorted(os.walk(root_path, followlinks=True)):
	for fname in fnames:
		path = os.path.join(root, fname)
		if '_generated' in fname:
			img0_path_list.append(path)
		elif '_real' in fname:
			img1_path_list.append(path)

dist_ = []
for i in range(len(img0_path_list)):
	img0 = lpips.im2tensor(lpips.load_image(img0_path_list[i]))  # RGB image scaled to [-1, 1]
	img1 = lpips.im2tensor(lpips.load_image(img1_path_list[i]))
	if use_gpu:
		img0 = img0.cuda()
		img1 = img1.cuda()
	dist = loss_fn.forward(img0, img1)
	dist_.append(dist.mean().item())
print('Average distance: %.3f' % (sum(dist_) / len(img0_path_list)))

Note that the TensorFlow version requires downloading the .pb model files from:

http://rail.eecs.berkeley.edu/models/lpips/

#TensorFlow (1.x style): computing LPIPS

import numpy as np
import tensorflow as tf
import lpips_tf

batch_size = 32
image_shape = (batch_size, 64, 64, 3)
image0 = np.random.random(image_shape)  # stand-in for the real images
image1 = np.random.random(image_shape)  # stand-in for the generated images
image0_ph = tf.placeholder(tf.float32)
image1_ph = tf.placeholder(tf.float32)

distance_t = lpips_tf.lpips(image0_ph, image1_ph, model='net-lin', net='alex')

with tf.Session() as session:
    distance = session.run(distance_t, feed_dict={image0_ph: image0, image1_ph: image1})
