The essential connection and difference between LoG and DoG

1. Laplacian of Gaussian (LoG)

As the Laplace operator may detect noise (isolated, out-of-range pixels) as well as edges, it may be desirable to smooth the image first by convolution with a Gaussian kernel of width $\sigma$ 

\begin{displaymath}G_{\sigma}(x,y)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)\end{displaymath}

to suppress the noise before using the Laplacian for edge detection: 

\begin{displaymath}\bigtriangleup[G_{\sigma}(x,y) * f(x,y)]=[\bigtriangleup G_{\sigma}(x,y)] * f(x,y)=LoG*f(x,y)\end{displaymath}

The first equal sign is due to the fact that 

\begin{displaymath}\frac{d}{dt}[h(t)*f(t)]=\frac{d}{dt} \int f(\tau) h(t-\tau)d\tau=\int f(\tau) \frac{d}{dt} h(t-\tau)d\tau=f(t)*\frac{d}{dt} h(t)\end{displaymath}

So we can obtain the Laplacian of Gaussian $\bigtriangleup G_{\sigma}(x,y)$ first and then convolve it with the input image. To do so, first consider 

\begin{displaymath}\frac{\partial}{\partial x} G_{\sigma}(x,y)=\frac{\partial}{\partial x}e^{-(x^2+y^2)/2\sigma^2}=-\frac{x}{\sigma^2}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

and 

\begin{displaymath}\frac{\partial^2}{\partial x^2} G_{\sigma}(x,y)=\left(\frac{x^2}{\sigma^4}-\frac{1}{\sigma^2}\right)e^{-(x^2+y^2)/2\sigma^2}=\frac{x^2-\sigma^2}{\sigma^4}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

Note that for simplicity we omitted the normalizing coefficient $1/\sqrt{2\pi \sigma^2}$. Similarly we can get 

\begin{displaymath}\frac{\partial^2}{\partial y^2} G_{\sigma}(x,y)=\frac{y^2-\sigma^2}{\sigma^4}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

Now we have LoG as an operator or convolution kernel defined as 

\begin{displaymath}LoG \stackrel{\triangle}{=}\bigtriangleup G_{\sigma}(x,y)=\frac{\partial^2}{\partial x^2}G_{\sigma}(x,y)+\frac{\partial^2}{\partial y^2}G_{\sigma}(x,y)=\frac{x^2+y^2-2\sigma^2}{\sigma^4}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}
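Because convolution and differentiation commute (the identity above), Gaussian smoothing followed by the Laplacian is equivalent to a single filtering with the LoG kernel. A minimal numerical check of this equivalence, assuming SciPy's ndimage filters and a made-up test image (both are illustrative choices, not part of the original notes):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
f = rng.normal(size=(128, 128))
f[40:90, 40:90] += 5.0          # a bright square provides some real edges

sigma = 2.0
# Laplacian applied to the Gaussian-smoothed image ...
a = ndimage.laplace(ndimage.gaussian_filter(f, sigma))
# ... versus direct filtering with the Laplacian-of-Gaussian kernel.
b = ndimage.gaussian_laplace(f, sigma)

# The two agree up to the discretization error of the discrete Laplacian.
print(np.abs(a - b).max())
```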

The Gaussian $G(x,y)$ and its first and second derivatives $G'(x,y)$ and $\bigtriangleup G(x,y)$ are shown here:

LoG.gif

LoG_plot.gif

This 2-D LoG can be approximated by a 5 by 5 convolution kernel such as 

\begin{displaymath}\left[ \begin{array}{ccccc}0 & 0 & 1 & 0 & 0 \\0 & 1 & 2 & 1 & 0 \\1 & 2 & -16 & 2 & 1 \\0 & 1 & 2 & 1 & 0 \\0 & 0 & 1 & 0 & 0 \end{array} \right]\end{displaymath}

A kernel of any other size can be obtained by approximating the continuous expression of LoG given above. However, make sure that the sum (or average) of all elements of the kernel is zero (similar to the Laplace kernel), so that the convolution result over a homogeneous region is always zero.
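For instance, a kernel of a given odd size can be generated by sampling the closed-form LoG above on a grid and then subtracting the mean so that it sums exactly to zero. The helper below is a sketch under these assumptions (the function name and the subtract-the-mean step are illustrative, not from the notes):

```python
import numpy as np

def log_kernel(size: int, sigma: float) -> np.ndarray:
    """Sample LoG(x,y) = (x^2 + y^2 - 2*sigma^2)/sigma^4 * exp(-(x^2+y^2)/(2*sigma^2))."""
    assert size % 2 == 1, "use an odd size so the kernel has a well-defined center"
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    k = (x**2 + y**2 - 2 * sigma**2) / sigma**4 * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k - k.mean()                 # force the sum of all elements to zero

print(np.round(log_kernel(5, 1.0), 3))
print(log_kernel(5, 1.0).sum())         # ~0 up to floating point error
```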

The edges in the image can be obtained by these steps:

  • Apply the LoG operator to the image
  • Detect zero-crossings in the filtered image
  • Threshold the zero-crossings to keep only the strong ones (large difference between the positive maximum and the negative minimum)
The last step is needed to suppress the weak zero-crossings most likely caused by noise.
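A minimal sketch of these three steps in Python, assuming SciPy is available; the 3x3 neighbourhood and the threshold relative to the response's standard deviation are illustrative choices, not prescribed by the notes:

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0, k=0.1):
    """LoG edge map: filter, find zero-crossings, keep only the strong ones."""
    g = ndimage.gaussian_laplace(image.astype(float), sigma)   # step 1: LoG filtering
    # step 2: a zero-crossing lies where a 3x3 neighbourhood contains both signs
    mx = ndimage.maximum_filter(g, size=3)
    mn = ndimage.minimum_filter(g, size=3)
    crossing = (mx > 0) & (mn < 0)
    # step 3: keep only crossings whose positive-to-negative swing is large enough
    strong = (mx - mn) > k * g.std()
    return crossing & strong
```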

forest_LoG.gif






2. Difference of Gaussian (DoG)


Similar to the Laplacian of Gaussian, the image is first smoothed by convolution with a Gaussian kernel of width $\sigma_1$ 

\begin{displaymath}G_{\sigma_1}(x,y)=\frac{1}{\sqrt{2\pi \sigma_1^2}}\exp\left(-\frac{x^2+y^2}{2\sigma_1^2}\right)\end{displaymath}

to get 

\begin{displaymath}g_1(x,y)=G_{\sigma_1}(x,y)*f(x,y) \end{displaymath}

With a different width $\sigma_2$, a second smoothed image can be obtained: 

\begin{displaymath}g_2(x,y)=G_{\sigma_2}(x,y)*f(x,y) \end{displaymath}

We can show that the difference of these two Gaussian smoothed images, called difference of Gaussian (DoG), can be used to detect edges in the image. 

\begin{displaymath}g_1(x,y)-g_2(x,y)=G_{\sigma_1}*f(x,y)-G_{\sigma_2}*f(x,y)=(G_{\sigma_1}-G_{\sigma_2})*f(x,y)=DoG*f(x,y)\end{displaymath}

The DoG as an operator or convolution kernel is defined as 

\begin{displaymath}DoG \stackrel{\triangle}{=}G_{\sigma_1}-G_{\sigma_2}=\frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sigma_1}e^{-(x^2+y^2)/2\sigma_1^2}-\frac{1}{\sigma_2}e^{-(x^2+y^2)/2\sigma_2^2}\right)\end{displaymath}

Both 1-D and 2-D functions of $G_{\sigma_1}(x,y)$ and $G_{\sigma_2}(x,y)$ and their difference are shown below:

DoG.gif

DoG_plot.gif

As the difference between two differently low-pass filtered images, the DoG is actually a band-pass filter: it removes both the high-frequency components representing noise and some low-frequency components representing the homogeneous areas in the image. The frequency components in the passband are assumed to be associated with the edges in the image.

The discrete convolution kernel for DoG can be obtained by approximating the continuous expression of DoG given above. Again, it is necessary for the sum or average of all elements of the kernel matrix to be zero.
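As a sketch, a discrete DoG kernel can be built by sampling two Gaussians of widths $\sigma_1<\sigma_2$, normalizing each to unit sum, and taking their difference; the helper names and parameter values below are illustrative, not taken from the notes:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()                  # each Gaussian sums to 1

def dog_kernel(size: int, sigma1: float, sigma2: float) -> np.ndarray:
    k = gaussian_kernel(size, sigma1) - gaussian_kernel(size, sigma2)
    return k - k.mean()                 # enforce an exactly zero sum

k = dog_kernel(9, 1.0, 1.6)
print(abs(k.sum()) < 1e-12)             # True: a homogeneous region gives zero
```

Equivalently, as the equation above shows, $DoG*f$ can be computed as the difference of two Gaussian-smoothed copies of the image (e.g. two calls to scipy.ndimage.gaussian_filter with $\sigma_1$ and $\sigma_2$).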

Comparing this plot with the previous one, we see that the DoG curve is very similar to the LoG curve. Also, similar to the case of LoG, the edges in the image can be obtained by these steps:

  • Apply the DoG operator to the image
  • Detect zero-crossings in the filtered image
  • Threshold the zero-crossings to keep only the strong ones (large difference between the positive maximum and the negative minimum)
The last step is needed to suppress the weak zero-crossings most likely caused by noise.

Edge detection by DoG operator:

forest_dog.gif



Lecture notes: http://fourier.eng.hmc.edu/e161/lectures/gradient/gradient.html


Background:

1.1 Gradient 

The gradient (also called the Hamilton operator) is a vector operator for any N-dimensional scalar function $f(x_1,\cdots, x_N)=f({\bf x})$, where ${\bf x}=[x_1,\cdots,x_N]^T$ is an N-D vector variable. For example, when $N=3$, $f({\bf x})$ may represent temperature, concentration, or pressure in the 3-D space. The gradient of this N-D function is a vector composed of $N$ components for the $N$ partial derivatives: 

\begin{displaymath}{\bf g}({\bf x})=\bigtriangledown f({\bf x})=\frac{d}{d{\bf x}}f({\bf x})=\left[\frac{\partial f({\bf x})}{\partial x_1},\cdots,\frac{\partial f({\bf x})}{\partial x_N}\right]^T\end{displaymath}

  • The direction $\angle {\bf g}$ of the gradient vector ${\bf g}$ is the direction in the N-D space along which the function $f({\bf x})$ increases most rapidly.
  • The magnitude $\vert{\bf g}\vert$ of the gradient ${\bf g}$ is that maximal rate of increase.

In image processing we only consider the 2-D case: 

\begin{displaymath}\bigtriangledown\stackrel{\triangle}{=}\frac{\partial}{\partial x}\vec{i}+\frac{\partial}{\partial y} \vec{j}\end{displaymath}

When applied to a 2-D function $f(x,y)$, this operator produces a vector function: 

\begin{displaymath}{\bf g}=\vec{g}(x,y)\stackrel{\triangle}{=}\bigtriangledown f(x,y)=\left(\frac{\partial}{\partial x}\vec{i}+\frac{\partial}{\partial y} \vec{j}\right) f(x,y)=f_x \vec{i}+f_y \vec{j}\end{displaymath}

where $f_x=\partial f/\partial x$ and $f_y=\partial f/\partial y$. The direction and magnitude of ${\bf g}$ are respectively 

\begin{displaymath}\angle {\bf g}=\tan^{-1} (f_y/f_x),\;\;\;\;\;\;\;\vert\vert{\bf g}\vert\vert=\sqrt{f_x^2+f_y^2}\end{displaymath}

Now we show that $f(x,y)$ increases most rapidly along the direction of ${\bf g}=\vec{g}(x,y)$ and the rate of increment is equal to the magnitude of ${\bf g}=\vec{g}(x,y)$.

gradient_direction.gif

Consider the directional derivative of $f(x,y)$ along an arbitrary direction $r$

\begin{displaymath}\frac{d}{dr}f(x,y)=\frac{\partial f}{\partial x}\frac{dx}{dr}+\frac{\partial f}{\partial y}\frac{dy}{dr}=f_x\cos\theta+f_y\sin\theta\end{displaymath}

This directional derivative is a function of $\theta$, the angle between the direction $r$ and the positive $x$ direction. To find the direction along which $df/dr$ is maximized, we let 

\begin{displaymath}\frac{d}{d\theta} \frac{df(x,y)}{dr}=\frac{d}{d\theta} (f_x\cos\theta+f_y\sin\theta)=-f_x\sin\theta +f_y\cos\theta=0 \end{displaymath}

Solving this for $\theta$, we get 

\begin{displaymath}f_x\sin\theta=f_y\cos\theta \end{displaymath}

i.e., 

\begin{displaymath}\theta =\tan^{-1} \left(\frac{f_y}{f_x}\right) \end{displaymath}

which is indeed the direction $\angle {\bf g}$ of ${\bf g}=\vec{g}(x,y)$.

From $\tan\theta=f_y/f_x$, we can also get 

\begin{displaymath}\sin\theta=\frac{f_y}{\sqrt{f_x^2+f_y^2}},\;\;\;\;\cos\theta=\frac{f_x}{\sqrt{f_x^2+f_y^2}}\end{displaymath}

Substituting these into the expression of $df/dr$, we obtain its maximum magnitude, 

\begin{displaymath}\left. \frac{d}{dr}f(x,y) \right\vert _{max}=\frac{f_x^2+f_y^2}{\sqrt{f_x^2+f_y^2}}=\sqrt{f_x^2+f_y^2} \end{displaymath}

which is the magnitude of $\vec{g}(x,y)$.

For discrete digital images, the derivative in gradient operation 

\begin{displaymath}D_x[f(x)]=\frac{d}{dx}f(x)=\lim_{\Delta x \rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x} \end{displaymath}

becomes the difference 

\begin{displaymath}D_n[f[n]]=f[n+1]-f[n],\;\;\;\;\mbox{or}\;\;\;\;\frac{f[n+1]-f[n-1]}{2} \end{displaymath}

Two steps for finding the discrete gradient of a digital image:

  • Find the differences in the two directions: 

    \begin{displaymath}g_m[m,n]=D_m[f[m,n]]=f[m+1,n]-f[m,n] \end{displaymath}


    \begin{displaymath}g_n[m,n]=D_n[f[m,n]]=f[m,n+1]-f[m,n] \end{displaymath}

  • Find the magnitude and direction of the gradient vector: 

    \begin{displaymath}\vert\vert g[m,n]\vert\vert=\sqrt{g^2_m[m,n]+g^2_n[m,n]},\;\;\;\;\mbox{or}\;\;\;\;\vert\vert g[m,n]\vert\vert=\vert\vert g_m\vert\vert+\vert\vert g_n\vert\vert \end{displaymath}


    \begin{displaymath}\angle g[m,n]=\tan^{-1} \left(\frac{g_n[m,n]}{g_m[m,n]}\right) \end{displaymath}
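These two steps can be written directly in NumPy with forward differences; a minimal sketch (the array names follow $g_m$ and $g_n$ above, and padding the last row/column with zeros is just an illustrative boundary choice):

```python
import numpy as np

def discrete_gradient(f):
    f = f.astype(float)
    g_m = np.zeros_like(f)
    g_n = np.zeros_like(f)
    g_m[:-1, :] = f[1:, :] - f[:-1, :]     # g_m[m,n] = f[m+1,n] - f[m,n]
    g_n[:, :-1] = f[:, 1:] - f[:, :-1]     # g_n[m,n] = f[m,n+1] - f[m,n]
    magnitude = np.hypot(g_m, g_n)         # sqrt(g_m^2 + g_n^2)
    direction = np.arctan2(g_n, g_m)       # angle of the gradient vector
    return magnitude, direction
```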

The differences in two directions $g_m$ and $g_n$ can be obtained by convolution with the following kernels:

  • Roberts 

    \begin{displaymath}\left[ \begin{array}{rr} -1 & 1 \\ 0 & 0 \end{array} \right],\;\;\;\;\left[ \begin{array}{rr} -1 & 0 \\ 1 & 0 \end{array} \right]\end{displaymath}

    or 

    \begin{displaymath}\left[ \begin{array}{rr} 0 & 1 \\ -1 & 0 \end{array} \right],\;\;\;\;\left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right]\end{displaymath}

  • Sobel (3x3) 

    \begin{displaymath}\left[ \begin{array}{rrr} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{array} \right],\;\;\;\;\left[ \begin{array}{rrr} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1\end{array} \right]\end{displaymath}

  • Prewitt (3x3) 

    \begin{displaymath}\left[ \begin{array}{rrr} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{array} \right],\;\;\;\;\left[ \begin{array}{rrr} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1\end{array} \right]\end{displaymath}

  • Prewitt (4x4) 

    \begin{displaymath}\left[ \begin{array}{rrrr} -3 & -1 & 1 & 3 \\ -3 & -1 & 1 & 3 \\ -3 & -1 & 1 & 3 \\ -3 & -1 & 1 & 3 \end{array} \right],\;\;\;\;\left[ \begin{array}{rrrr} -3 & -3 & -3 & -3 \\ -1 & -1 & -1 & -1 \\ 1 & 1 & 1 & 1 \\ 3 & 3 & 3 & 3 \end{array} \right]\end{displaymath}

Note that the Sobel and Prewitt operators first average along one direction and then take the difference of these averages along the other direction.
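A sketch of gradient estimation with the 3x3 Sobel pair listed above, using SciPy's 2-D convolution (any convolution routine would do; the boundary mode here is an arbitrary choice):

```python
import numpy as np
from scipy import ndimage

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T                      # difference along the other direction

def sobel_gradient(image):
    f = image.astype(float)
    gx = ndimage.convolve(f, sobel_x, mode="nearest")
    gy = ndimage.convolve(f, sobel_y, mode="nearest")
    return np.hypot(gx, gy), np.arctan2(gy, gx)   # magnitude and direction
```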



1.2 Laplace operator

The Laplace operator is a scalar operator defined as the dot product (inner product) of two gradient vector operators: 

\begin{displaymath}\bigtriangleup = \bigtriangledown^2=\bigtriangledown \cdot \bigtriangledown=\left(\frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_N}\right)\left(\frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_N}\right)^T=\sum_{n=1}^N \frac{\partial^2}{\partial x_n^2}\end{displaymath}

In  $N=2$  dimensional space, we have: 

\begin{displaymath}\bigtriangleup =\bigtriangledown^2=\bigtriangledown\cdot\bigtriangledown=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\end{displaymath}

When applied to a 2-D function  $f(x,y)$ , this operator produces a scalar function: 

\begin{displaymath}\bigtriangleup f(x,y)=\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}\end{displaymath}

In discrete case, the second order differentiation becomes second order difference. In 1-D case, if the first order difference is defined as 

\begin{displaymath}\bigtriangledown f[n]=f'[n]=D_n[f[n]]=f[n+1]-f[n] \end{displaymath}

then the second order difference is 
\begin{eqnarray*}
\bigtriangleup f[n] &=& \bigtriangledown^2 f[n]=f''[n]=D^2_n[f[n]]=f'[n]-f'[n-1] \\
&=& (f[n+1]-f[n])-(f[n]-f[n-1])=f[n+1]-2f[n]+f[n-1]
\end{eqnarray*}

Note that $f''[n]$ is defined so that it is symmetric about the center element $f[n]$. The Laplace operation can be carried out by 1-D convolution with the kernel $[1, -2, 1]$.
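A quick numerical check (illustrative, using numpy.convolve with a made-up sequence) that convolution with $[1, -2, 1]$ reproduces the second order difference $f[n+1]-2f[n]+f[n-1]$ at the interior samples:

```python
import numpy as np

f = np.array([0., 0., 0., 1., 3., 6., 10., 15.])
second_diff = np.convolve(f, [1, -2, 1], mode="same")
print(second_diff)    # interior values equal f[n+1] - 2 f[n] + f[n-1]
```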

In the 2-D case, the Laplace operator is the sum of the second order differences in the two dimensions: 

\begin{eqnarray*}
\bigtriangleup f[m,n] &=& D^2_m[f[m,n]]+D^2_n[f[m,n]] \\
&=& f[m+1,n]-2f[m,n]+f[m-1,n]+f[m,n+1]-2f[m,n]+f[m,n-1] \\
&=& f[m+1,n]+f[m-1,n]+f[m,n+1]+f[m,n-1]-4f[m,n]
\end{eqnarray*}

This operation can be carried out by convolution with the 2-D kernel: 

\begin{displaymath}\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0\end{array} \right] \end{displaymath}

Other Laplace kernels can be used: 

\begin{displaymath}\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1\end{array} \right] \end{displaymath}

We see that these Laplace kernels are actually the same as the high-pass filtering kernels discussed before.
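Applying the first (4-neighbour) kernel by convolution is a one-liner; the sketch below assumes SciPy's ndimage.convolve (scipy.ndimage.laplace applies the same second differences directly):

```python
import numpy as np
from scipy import ndimage

laplace_4 = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian(image):
    # Discrete Laplacian via 2-D convolution with the 4-neighbour kernel above.
    return ndimage.convolve(image.astype(float), laplace_4, mode="nearest")
```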

Gradient operation is an effective detector for sharp edges where the pixel gray levels change over space very rapidly. But when the gray levels change slowly from dark to bright (red in the figure below), the gradient operation produces a very wide edge (green in the figure). In this case it is helpful to use the Laplace operation: the second order derivative of the wide edge (blue in the figure) has a zero-crossing in the middle of the edge. The location of the edge can therefore be obtained by detecting the zero-crossings of the second order difference of the image.

fat_edge.gif

fat_edge_negate.gif

One dimensional example:

fat_edge_detection_1d.gif

In the two-dimensional example, the image is on the left; the two Laplace kernels generate two similar results with zero-crossings, shown on the right:

fat_edge_detection_2d.gif

Edge detection by Laplace operator followed by zero-crossing detection:

forest_laplace.gif

