Why does my YCbCr image have bad pixels after saving? A small pitfall to watch out for!
Problem Description
While porting some open-source code from GitHub to my own experimental platform, I kept hitting bad pixels in algorithms that convert an image from RGB to YCbCr, process it, and then convert it back to RGB, as shown below:
Debug
After a long stretch of painful debugging, I found the pattern. Saving the float array directly works fine:
cv2.imwrite(name, sr_image * 255.0)
but converting to uint8 first produces bad pixels:
cv2.imwrite(name, (sr_image * 255.0).astype(np.uint8))
It turns out that when cv2.imwrite receives a floating-point array, it saturates the values to [0, 255] before writing, even if some of them fall slightly outside that range. A bare astype(np.uint8), on the other hand, wraps out-of-range values around modulo 256, so a pixel just above 255 becomes nearly 0: exactly the isolated bad pixels I was seeing. So if you want to save the image another way, for example with PIL's Image.save(), you must perform this clipping step manually before the uint8 conversion!
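The difference is easy to reproduce with a tiny sketch (plain NumPy, no OpenCV; the values are made up to stand in for an overshooting model output):

```python
import numpy as np

# A tiny float "image" scaled to [0, 255]; the first value overshoots
# the valid range slightly, as YCbCr -> RGB conversions often do.
sr = np.array([[256.0, 128.0, 0.0]], dtype=np.float32)

# A bare uint8 cast wraps out-of-range values modulo 256
# (the int64 step just makes the wraparound well-defined):
wrong = sr.astype(np.int64).astype(np.uint8)   # [[  0 128   0]]

# Clipping first saturates instead of wrapping, which is what
# cv2.imwrite effectively does for float input:
right = np.clip(sr, 0, 255).astype(np.uint8)   # [[255 128   0]]

print(wrong[0, 0], right[0, 0])  # 0 255
```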
Below is my fixed code (a reproduction of the SRCNN super-resolution algorithm, adapted from https://github.com/Lornatang/SRCNN-PyTorch):
import torch
import torchvision.transforms as transforms
import numpy as np
from PIL import Image
from ....utils import path_to_rgb
from .utils import *

def inference(model, iml, img, opts):
    # Put the model in evaluation mode.
    model.eval()
    # Read the LR image.
    lr_image = np.array(path_to_rgb(iml)) / 255.0
    # Get the Y channel image data.
    lr_y_image = rgb2ycbcr(lr_image, True)
    # Get the Cb and Cr channels from the LR image.
    lr_ycbcr_image = rgb2ycbcr(lr_image, False)
    [_, lr_cb_image, lr_cr_image] = [lr_ycbcr_image[:, :, i] for i in range(3)]
    # Convert the Y channel image data to Tensor format and add a batch dimension.
    lr_y_tensor = image2tensor(lr_y_image, False, False).unsqueeze_(0)
    # Transfer the tensor to the target device.
    lr_y_tensor = lr_y_tensor.to(device=opts.device)
    # Only reconstruct the Y channel image data.
    with torch.no_grad():
        sr_y_tensor = model(lr_y_tensor).clamp_(0, 1.0)
    # Merge the SR Y channel with the original Cb/Cr channels and save.
    sr_y_image = tensor2image(sr_y_tensor, False, False)
    sr_y_image = sr_y_image.astype(np.float32) / 255.0
    sr_ycbcr_image = np.stack([sr_y_image[:, :, 0], lr_cb_image, lr_cr_image], axis=-1)
    sr_image = ycbcr2rgb(sr_ycbcr_image)
    sr_image = (np.clip(sr_image * 255.0, 0, 255)).astype(np.uint8)  # Important!
    sr_image = Image.fromarray(np.array(sr_image))
    return transforms.ToTensor()(sr_image)
For full details, see: https://github.com/CharlesShan-hub/CVPlayground
The key part is here: before converting to the Image type, clip the overflowing values!
sr_image = (np.clip(sr_image*255.0, 0, 255)).astype(np.uint8)
sr_image = Image.fromarray(np.array(sr_image))
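Why do out-of-range values appear at all, when the network output was already clamped to [0, 1]? Because a legal YCbCr triple does not always map back to a legal RGB triple. Here is a minimal sketch using the full-range BT.601 conversion (an assumption; ycbcr_to_rgb below is a hypothetical stand-in, not the repo's ycbcr2rgb, which may use the video-range variant, but the effect is the same):

```python
import numpy as np

# Full-range BT.601 YCbCr -> RGB conversion (a sketch, not the
# repo's ycbcr2rgb implementation).
def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.array([r, g, b])

# A bright Y combined with a strong Cr chroma is a perfectly valid
# YCbCr value, yet its red component lands well above 255:
rgb = ycbcr_to_rgb(250.0, 128.0, 200.0)
print(rgb[0])  # 250 + 1.402 * 72 = 350.944 -> must be clipped before uint8
```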
The output of the fixed code is shown below: