Due to atmospheric turbulence, light randomly refracts in three dimensions (3D), eventually entering a camera at a perturbed angle. Each viewed object point thus has a distorted projection in a two-dimensional (2D) image. Simulating this 3D random refraction for all viewed points, by explicitly simulating 3D random turbulence, is computationally expensive. We derive an efficient way to render 2D image distortions consistent with turbulence. Our approach bypasses 3D numerical calculations altogether. We directly create 2D random, physics-based distortion vector fields, whose correlations are derived in closed form from turbulence theory. The correlations are nontrivial: they depend on the perturbation directions relative to the orientations of all object-point pairs, simultaneously. Hence, we develop a theory characterizing and rendering such a distortion field. The theory reduces to a few simple 2D operations, which render images based on camera and atmospheric properties.
Imaging through refractive media is of interest for both rendering and scene analysis. It is studied in computer vision and graphics, as the importance of participating media is acknowledged. Complex, random refraction is created by turbulent media, often encountered in long-range observations through the atmosphere and in ground-based astronomy. There, random perturbations of the refractive index follow a complicated, fractal, multiscale structure of eddies in the three-dimensional (3D) domain. This structure, in turn, creates highly complex refraction of propagating light in the 5D plenoptic domain (space and direction), leading to random perturbations of all light rays passing through the medium. Finally, the complexly refracted light is projected by a camera, forming a distorted image of the scene.
Now, suppose one seeks to render images that are distorted as if taken through atmospheric turbulence. One motivation for rendering is computer graphics. Another is to form a database on which to test and develop recovery algorithms that correct for turbulence-induced distortion. A database can also train a learning system to recognize objects through turbulence. From the description above, rendering apparently requires a series of computationally complex steps: (i) simulate a huge 3D turbulent random field (kilometer scale) at a 3D resolution relevant to optics; (ii) ray-trace refraction through this 3D medium, from an object point to the camera; (iii) repeat this propagation process for all resolvable points in the field of view. At large scales, such a rendering approach poses a computational burden. It is also unnecessary.
We believe that for some applications there is no need for 3D simulation in order to render turbulence-induced image distortion. There is no need to simulate a fractal random 3D refractive-index field, or to ray-trace through such a field. Basically, image distortion is an operation in just two dimensions (2D). The input is a 2D image free of random distortion, i.e., the view in the absence of turbulence. The output is a 2D distorted image. Rendering thus boils down to creating a 2D distortion operation that is consistent with distortions turbulence can induce.
Random 3D turbulence eventually leads to a randomly distorted 2D projection. The random distorted projection must be drawn from a distribution that is characterized by a covariance function. The covariance function determines how the distortion of any pixel is correlated to that of any other pixel, and what the variance is. The covariance function of turbulence-induced distortion is defined by physics. In other words, the physics of 3D turbulence (a random process), and of 5D refraction in it, dictate the 2D image-distortion covariance function. The probability distribution of distortion has already been derived using the theory of turbulence, for pairs of object points (not full-field images). This pair-wise function has also been verified empirically in field experiments, where correlations between image points were measured. We thus use the pair-wise covariance function of turbulence-induced image distortion to create 2D distortion fields.
Transferring the physics-based pair-wise covariance function to a full distortion field is nontrivial in turbulence. Distortion is a vector-valued spatial field, so the covariance of this field is a matrix-valued function. It depends on relative coordinates that vary for each pair of pixels, including the relative orientation of each pair and the distortion orientation at each pixel. We derive the solution theoretically: a full-field covariance of a 2D distortion field, based on the physics-based, pair-wise, orientation-sensitive covariance. We then give a recipe for rendering random 2D distortion fields that satisfy the physical model. The recipe is composed of several simple 2D image operations.
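One way to see how a prescribed covariance can drive rendering with a few 2D operations is spectral sampling: for a stationary covariance, shaping white noise in the Fourier domain yields a field with the target autocovariance (by the Wiener-Khinchin relation). The sketch below is a simplification of this idea, using a hypothetical isotropic Gaussian covariance rather than the orientation-sensitive, matrix-valued one derived in the paper, and sampling each distortion component independently (ignoring cross-component correlations):

```python
import numpy as np

def sample_stationary_field(cov_image, rng=None):
    """One realization of a zero-mean periodic field whose autocovariance is cov_image."""
    rng = np.random.default_rng(rng)
    # Power spectrum = FFT of the autocovariance (Wiener-Khinchin); clip guards
    # against tiny negative values from discretization.
    psd = np.clip(np.fft.fft2(cov_image).real, 0, None)
    noise = np.fft.fft2(rng.standard_normal(cov_image.shape))
    return np.fft.ifft2(np.sqrt(psd) * noise).real

# Hypothetical isotropic Gaussian covariance on a periodic 64 x 64 grid:
# variance 0.5, correlation length 4 pixels.
n = 64
x = np.minimum(np.arange(n), n - np.arange(n))   # wrapped coordinates
r2 = x[:, None] ** 2 + x[None, :] ** 2
cov = 0.5 * np.exp(-r2 / (2 * 4.0 ** 2))

dx = sample_stationary_field(cov, rng=1)   # x-component of distortion
dy = sample_stationary_field(cov, rng=2)   # y-component of distortion
```

The entire sampling step is two FFTs and a pointwise multiplication per component, illustrating why a full-field covariance, once known, makes rendering cheap.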
A common method for simulating imaging through turbulence is based on light propagation through multiple random 2D phase screens, which approximate a 3D turbulent medium. The phase-shifting layers are generated using the fast Fourier transform (FFT), the Zernike polynomial method, or fractal interpolation. This approach still requires simulating a 3D refractive field and light propagation in 3D.
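For reference, the spectral core of the FFT phase-screen method can be sketched in a few lines: complex white noise is shaped by the square root of a Kolmogorov-type power spectrum, which falls off as k^(-11/3). The absolute normalization (e.g., scaling by the Fried parameter) is omitted here, so the screen is in arbitrary units:

```python
import numpy as np

def kolmogorov_phase_screen(n, rng=None):
    """One n x n phase screen with a Kolmogorov-like k^(-11/3) spectrum (arbitrary units)."""
    rng = np.random.default_rng(rng)
    fx = np.fft.fftfreq(n)
    k = np.hypot(fx[:, None], fx[None, :])          # radial spatial frequency
    k[0, 0] = np.inf                                # suppress the undefined DC term
    psd = k ** (-11.0 / 3.0)                        # Kolmogorov power law
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.fft.ifft2(noise * np.sqrt(psd)).real

screen = kolmogorov_phase_screen(64, rng=0)
```

Note that this generates only one 2D layer; the method still stacks many such screens and propagates light between them, which is what makes it costly compared to a direct 2D distortion model.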
There are rendering techniques that trace rays through a 3D randomly refractive medium, including physically based simulations of atmospheric phenomena and rendering of complicated lighting effects through various refractive objects. These 3D methods often require specialized hardware, such as GPUs, and extensive computation time.
2D image distortion is simpler to implement and does not require extensive computational power. Usually, parametric models are used, based on analysis of real empirical distortions observed under various atmospheric conditions. Other methods use simple Gaussian random functions to generate image distortion fields. The results resemble turbulence distortion. However, these methods do not use a physical model and do not enforce physically consistent spatial correlations.
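Such a non-physical baseline can be sketched as follows: white noise smoothed by a Gaussian kernel yields dx, dy displacement maps, which then warp the image. The kernel width and amplitude below are arbitrary tuning knobs with no physical meaning, which is precisely the limitation noted above:

```python
import numpy as np

def gaussian_random_warp(img, sigma=6.0, amplitude=2.0, rng=None):
    """Warp img by a Gaussian-smoothed random displacement field (non-physical baseline)."""
    rng = np.random.default_rng(rng)
    n, m = img.shape
    yy, xx = np.mgrid[0:n, 0:m]

    def smooth_noise():
        # Gaussian smoothing of white noise, done as an FFT-domain multiplication.
        fy = np.fft.fftfreq(n)[:, None]
        fx = np.fft.fftfreq(m)[None, :]
        g = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))  # Gaussian transfer fn
        w = np.fft.fft2(rng.standard_normal((n, m)))
        field = np.fft.ifft2(w * g).real
        return amplitude * field / field.std()      # rescale to the chosen amplitude

    dx, dy = smooth_noise(), smooth_noise()
    # Nearest-neighbor backward warp with clamped coordinates.
    xs = np.clip(np.rint(xx + dx), 0, m - 1).astype(int)
    ys = np.clip(np.rint(yy + dy), 0, n - 1).astype(int)
    return img[ys, xs]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
warped = gaussian_random_warp(img, rng=3)
```

The output looks plausibly turbulence-like, but sigma and amplitude are fitted by eye rather than dictated by turbulence theory, camera parameters, or atmospheric conditions.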