Atmospheric Turbulence Simulation Algorithm

Algorithmic simulation of atmospheric-turbulence blur (implemented in Python)

**Introduction**:
To study the origins of atmospheric turbulence and its effect on imaging quality, the Soviet scientist Kolmogorov proposed a turbulence model. The model has been supported by many experiments and is widely used in astronomical imaging simulation. The theory holds that turbulence-induced wavefront distortion comes from variations in the refractive index of the atmosphere; it is these random refractive-index variations that produce the random fluctuations in the phase distribution of the light wave. The strength of atmospheric turbulence is usually described in both space and time:
(1) The atmospheric coherence length r0 characterizes the spatial strength of the turbulence. Its physical meaning: within a circle of diameter r0, the variance of the turbulence-induced wavefront distortion is 1 rad².
(2) The Greenwood frequency fG characterizes the temporal strength of the turbulence. Its physical meaning: the variance of the components of the turbulence-induced wavefront distortion above the Greenwood frequency is 1 rad².
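As a rough numerical sketch of how the two quantities relate (the values below are assumed for illustration, not taken from this post): for a single turbulence layer carried by a transverse wind of speed v, the Greenwood frequency is often approximated as fG ≈ 0.43 · v / r0.

```python
# Hedged illustration: approximate single-layer Greenwood frequency.
# r0 and v below are assumed example values, not measurements.
r0 = 0.10  # atmospheric coherence length in meters
v = 10.0   # transverse wind speed in m/s

f_G = 0.43 * v / r0  # common single-layer approximation
print(f"Greenwood frequency: {f_G:.0f} Hz")  # Greenwood frequency: 43 Hz
```

So for r0 of 10 cm and a 10 m/s wind, the turbulence evolves at tens of hertz, which is why adaptive-optics loops must run at hundreds of hertz or faster.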

Two simulation approaches

The first approach: global distortion plus local blur. The algorithm proceeds as follows:

(Figure: flowchart of Algorithm 1)

The code implementation:

import numpy as np
import cv2
from skimage.transform import PiecewiseAffineTransform, warp


def DistortBlur(img, S, sigma_kernel_vector_field, sigma_blur_image, N=15, M_distortion=1000, M_blur=50):
    """
    Return an artificially distorted and blurred image.

    :param img: np.ndarray - input image of shape (h, w, 3)
    :param S: float - distortion strength
    :param sigma_kernel_vector_field: float - std of the Gaussian kernel used to smooth the vector field at each iteration
    :param sigma_blur_image: float - std of the Gaussian kernel used to blur the image
    :param N: int - half size of a patch (must be odd)
    :param M_distortion: int - number of iterations for generating vector-field patches
    :param M_blur: int - number of iterations for blurring random patches
    :return: np.ndarray - the distorted image
    """
    assert (N % 2) == 1, "N must be odd."
    img_height, img_width, img_channel = img.shape
    # cv2.GaussianBlur requires an odd kernel size; force N // 2 up to the nearest odd value
    k = (N // 2) | 1

    # generate the grid of the source image
    src_cols = np.arange(0, img_width)
    src_rows = np.arange(0, img_height)
    src_cols, src_rows = np.meshgrid(src_cols, src_rows)
    src = np.dstack([src_cols.flat, src_rows.flat])[0]

    # generate the vector field by accumulating smoothed random patches
    vector_field = np.zeros(shape=(img_height, img_width, 2), dtype=np.float32)
    for i in range(M_distortion):
        x = np.random.randint(low=0, high=img_width - 2 * N) + N
        y = np.random.randint(low=0, high=img_height - 2 * N) + N
        patch_x = np.random.randn(2 * N, 2 * N)
        patch_y = np.random.randn(2 * N, 2 * N)
        patch_x = S * cv2.GaussianBlur(patch_x, ksize=(k, k), sigmaX=sigma_kernel_vector_field)
        patch_y = S * cv2.GaussianBlur(patch_y, ksize=(k, k), sigmaX=sigma_kernel_vector_field)
        vector_field[y - N:y + N, x - N:x + N, 0] += patch_x
        vector_field[y - N:y + N, x - N:x + N, 1] += patch_y
    # smooth the accumulated field once more to avoid seams between patches
    vector_field[:, :, 0] = cv2.GaussianBlur(vector_field[:, :, 0], ksize=(k, k), sigmaX=sigma_kernel_vector_field)
    vector_field[:, :, 1] = cv2.GaussianBlur(vector_field[:, :, 1], ksize=(k, k), sigmaX=sigma_kernel_vector_field)

    # generate the grid of the output image, displaced by the vector field
    dst_cols = np.arange(0, img_width)
    dst_rows = np.arange(0, img_height)
    dst_cols, dst_rows = np.meshgrid(dst_cols, dst_rows)
    dst_cols, dst_rows = dst_cols.astype('float32'), dst_rows.astype('float32')
    dst_cols += vector_field[:, :, 0]  # channel 0 displaces columns (x)
    dst_rows += vector_field[:, :, 1]  # channel 1 displaces rows (y)
    dst = np.dstack([dst_cols.flat, dst_rows.flat])[0]

    # estimate a piecewise-affine transform on a subsampled grid (step keeps it tractable)
    tform = PiecewiseAffineTransform()
    step = 20
    src, dst = src[step // 2:-1:step], dst[step // 2:-1:step]
    tform.estimate(src, dst)

    # apply the transform
    distorted_image = warp(img, tform, output_shape=(img_height, img_width))

    # blur the image globally
    distorted_image = cv2.GaussianBlur(distorted_image, (N, N), sigma_blur_image)

    # blur the image more strongly in random patches: blur a 2N x 2N patch,
    # then paste only its central N x N region back
    for i in range(M_blur):
        x = np.random.randint(low=0, high=img_width - 2 * N) + N
        y = np.random.randint(low=0, high=img_height - 2 * N) + N
        current_patch = distorted_image[y - N:y + N, x - N:x + N, :]
        current_patch = cv2.GaussianBlur(current_patch, (N, N), sigma_blur_image)
        half = (N - 1) // 2
        distorted_image[y - half:y + half + 1, x - half:x + half + 1, :] = \
            current_patch[half + 1:3 * half + 2, half + 1:3 * half + 2, :]

    # a final light global blur to smooth patch boundaries
    distorted_image = cv2.GaussianBlur(distorted_image, (N, N), sigma_blur_image / 10)

    return distorted_image

Network-based simulation (see the paper yourself for the details; this one is quite useful!)

The reference is "Accelerating Atmospheric Turbulence Simulation via Learned Phase-to-Space Transform".

**If you found this useful, please give it a like!**
