A Brief Look at the Principles of Laplacian Pyramid Blending [Repost]

Reposted from: https://blog.csdn.net/TracelessLe/article/details/120654696

Preface

Laplacian Pyramid Blending, also known as Multi-band Blending, can be viewed as an improvement over alpha blending that avoids ghosting and visible seams.

Method

An image can be viewed as a superposition of content at different spatial frequencies: it contains many different kinds of features, and its spectrum spans a wide range.

The low- and high-frequency signals in an image are also called its low-frequency and high-frequency components.
The high-frequency components are the places where the image intensity (brightness/gray level) changes sharply, i.e. the edges and contours.
The low-frequency components are the places where the intensity varies smoothly, i.e. large areas of roughly uniform color. The human eye is more sensitive to the high-frequency content of an image.
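As a quick illustration of this decomposition (a minimal sketch, not part of the original post; the input file name and the 21x21 kernel size are arbitrary placeholders), an image can be split into its low- and high-frequency parts with a Gaussian blur:

# Minimal sketch: separate an image into low- and high-frequency components.
import cv2
import numpy as np

img = cv2.imread('input.png').astype(np.float32)   # placeholder input image
low = cv2.GaussianBlur(img, (21, 21), 0)           # low-frequency component: smooth regions
high = img - low                                   # high-frequency component: edges and contours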

Blending is therefore done with a different window size for each frequency band: a large blending window at low frequencies to avoid visible seams, and a small blending window at high frequencies to avoid ghosting. The result is a smooth, ghost-free blend.
The algorithm proceeds as follows:
1. Build a Laplacian pyramid for each of the two images to be blended.
2. Build a Gaussian pyramid for the given blend region (the mask).
3. Blend each level of the two pyramids with an alpha-blending-style formula (given below).
4. Reconstruct the output image from the blended pyramid.
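Writing $LA_l$ and $LB_l$ for level $l$ of the two images' Laplacian pyramids and $GM_l$ for level $l$ of the mask's Gaussian pyramid, the per-level blend in step 3 is the usual alpha blend (this is the standard multi-band blending formula, and it is exactly what the blend_pyramid function implements in the code below):

$$LS_l(i,j) = GM_l(i,j)\,LA_l(i,j) + \bigl(1 - GM_l(i,j)\bigr)\,LB_l(i,j)$$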
In fact, each level of the Laplacian pyramid is the source image at that scale minus a version of it with the high frequencies removed; in other words, it is exactly the high-frequency band at that scale, extracted so that the high-frequency content can be blended and later restored in the output image. See reference [5] for the definitions of the Gaussian and Laplacian pyramids.
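In formulas (the standard Gaussian/Laplacian pyramid construction, stated here for completeness): with $G_0$ the source image and down/up denoting pyrDown/pyrUp,

$$G_{i+1} = \mathrm{down}(G_i), \qquad L_i = G_i - \mathrm{up}(G_{i+1}), \qquad L_{n-1} = G_{n-1}$$

and the image is recovered by running this backwards, $G_i = L_i + \mathrm{up}(G_{i+1})$, which is what the reconstruct function in the code does.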

Example of a Laplacian pyramid: [figure]
Example of a blended Laplacian pyramid: [figure]
Code implementation:

#!/usr/bin/python
__author__ = 'TracelessLe'

import numpy as np
import cv2
import sys
import argparse


def preprocess(img1, img2, overlap_w, flag_half, need_mask=False):
    if img1.shape[0] != img2.shape[0]:
        print("error: the two images must have the same height")
        sys.exit()
    if overlap_w > img1.shape[1] or overlap_w > img2.shape[1]:
        print("error: overlapped area too large")
        sys.exit()

    w1 = img1.shape[1]
    w2 = img2.shape[1]

    if flag_half:
        shape = np.array(img1.shape)
        shape[1] = w1 // 2 + w2 // 2

        subA = np.zeros(shape)
        subA[:, :w1 // 2 + overlap_w // 2] = img1[:, :w1 // 2 + overlap_w // 2]
        subB = np.zeros(shape)
        subB[:, w1 // 2 - overlap_w // 2:] = img2[:,
                                                w2 - (w2 // 2 + overlap_w // 2):]
        if need_mask:
            mask = np.zeros(shape)
            mask[:, :w1 // 2] = 1
            return subA, subB, mask
    else:
        shape = np.array(img1.shape)
        shape[1] = w1 + w2 - overlap_w

        subA = np.zeros(shape)
        subA[:, :w1] = img1
        subB = np.zeros(shape)
        subB[:, w1 - overlap_w:] = img2
        if need_mask:
            mask = np.zeros(shape)
            mask[:, :w1 - overlap_w // 2] = 1
            return subA, subB, mask

    return subA, subB, None


def GaussianPyramid(img, leveln):
    GP = [img]
    for i in range(leveln - 1):
        GP.append(cv2.pyrDown(GP[i]))
    return GP


def LaplacianPyramid(img, leveln):
    LP = []
    for i in range(leveln - 1):
        next_img = cv2.pyrDown(img)
        # upsample back to the current size and subtract to keep only the high-frequency band
        LP.append(img - cv2.pyrUp(next_img, dstsize=img.shape[1::-1]))
        img = next_img
    LP.append(img)
    return LP


def blend_pyramid(LPA, LPB, MP):
    blended = []
    for i, M in enumerate(MP):
        blended.append(LPA[i] * M + LPB[i] * (1.0 - M))
    return blended


def reconstruct(LS):
    img = LS[-1]
    for lev_img in LS[-2::-1]:
        # upsample the coarser result to this level's size before adding the detail band back
        img = cv2.pyrUp(img, dstsize=lev_img.shape[1::-1])
        img += lev_img
    return img


def multi_band_blending(img1, img2, mask, overlap_w, leveln=None, flag_half=False, need_mask=False):
    if overlap_w < 0:
        print("error: overlap_w should be a non-negative integer")
        sys.exit()

    if need_mask:  # no input mask
        subA, subB, mask = preprocess(img1, img2, overlap_w, flag_half, True)
    else:  # have input mask
        subA, subB, _ = preprocess(img1, img2, overlap_w, flag_half)

    max_leveln = int(np.floor(np.log2(min(img1.shape[0], img1.shape[1],
                                          img2.shape[0], img2.shape[1]))))
    if leveln is None:
        leveln = max_leveln
    if leveln < 1 or leveln > max_leveln:
        print ("warning: inappropriate number of leveln")
        leveln = max_leveln

    # Get Gaussian pyramid and Laplacian pyramid
    MP = GaussianPyramid(mask, leveln)
    LPA = LaplacianPyramid(subA, leveln)
    LPB = LaplacianPyramid(subB, leveln)

    # Blend the two Laplacian pyramids level by level using the mask's Gaussian pyramid
    blended = blend_pyramid(LPA, LPB, MP)

    # Reconstruction process
    result = reconstruct(blended)
    result[result > 255] = 255
    result[result < 0] = 0

    return result.astype(np.uint8)


if __name__ == '__main__':
    # construct the argument parse and parse the arguments
    ap = argparse.ArgumentParser(
        description="A Python implementation of multi-band blending")
    ap.add_argument('-f', '--first', required=True,
                    help="path to the first (left) image")
    ap.add_argument('-s', '--second', required=True,
                    help="path to the second (right) image")
    ap.add_argument('-m', '--mask', required=False,
                    help="path to the mask image")
    ap.add_argument('-o', '--overlap', required=True, type=int,
                    help="width of the overlapped area between two images, \
                          even number recommended")
    ap.add_argument('-l', '--leveln', required=False, type=int,
                    help="number of levels of multi-band blending, \
                          calculated from image size if not provided")
    ap.add_argument('-H', '--half', required=False, action='store_true',
                    help="option to blend the left half of the first image \
                          and the right half of the second image")
    args = vars(ap.parse_args())

    flag_half = args['half']
    img1 = cv2.imread(args['first'])
    img2 = cv2.imread(args['second'])
    if args['mask'] is not None:
        # convert the mask to floats in [0, 1] so its Gaussian pyramid keeps smooth transitions
        mask = cv2.imread(args['mask']).astype(np.float64) / 255.0
        need_mask = False
    else:
        mask = None
        need_mask = True
    overlap_w = args['overlap']
    leveln = args['leveln']
    print('args: ', args)
    
    result = multi_band_blending(img1, img2, mask, overlap_w, leveln, flag_half, need_mask)
    cv2.imwrite('result.png', result)
    print("blending result has been saved in 'result.png'")

Input: [figure]
Output: [figure]
Other notes
For more image blending methods, see the material on Poisson blending in the references.
