1. Laplacian of Gaussian (LoG)
As the Laplace operator may detect edges as well as noise (isolated, out-of-range pixels), it may be desirable to smooth the image first by convolution with a Gaussian kernel of width $\sigma$

$$G_{\sigma}(x,y)=\frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)$$

to suppress the noise before using the Laplace operator for edge detection:
$$\Delta[G_{\sigma}(x,y) * f(x,y)] = [\Delta G_{\sigma}(x,y)] * f(x,y) = LoG * f(x,y)$$
The first equality holds because differentiation commutes with convolution:
$$\frac{d}{dt}[h(t)*f(t)] = \frac{d}{dt}\int f(\tau)\,h(t-\tau)\,d\tau = \int f(\tau)\,\frac{d}{dt}h(t-\tau)\,d\tau = f(t)*\frac{d}{dt}h(t)$$
So we can obtain the Laplacian of Gaussian. The first order derivatives of the Gaussian are

$$\frac{\partial}{\partial x} G_{\sigma}(x,y) = \frac{\partial}{\partial x}\, e^{-(x^2+y^2)/2\sigma^2} = -\frac{x}{\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}$$

and the second order derivatives are

$$\frac{\partial^2}{\partial x^2} G_{\sigma}(x,y) = \left(\frac{x^2}{\sigma^4} - \frac{1}{\sigma^2}\right) e^{-(x^2+y^2)/2\sigma^2} = \frac{x^2-\sigma^2}{\sigma^4}\, e^{-(x^2+y^2)/2\sigma^2}$$

and similarly with respect to $y$. Note that for simplicity we omitted the normalizing coefficient $\frac{1}{2\pi\sigma^2}$.

Now we have LoG as an operator or convolution kernel defined as

$$LoG \triangleq \Delta G_{\sigma}(x,y) = \frac{\partial^2}{\partial x^2} G_{\sigma}(x,y) + \frac{\partial^2}{\partial y^2} G_{\sigma}(x,y) = \frac{x^2+y^2-2\sigma^2}{\sigma^4}\, e^{-(x^2+y^2)/2\sigma^2}$$
The Gaussian $G_{\sigma}$ and its first and second derivatives $G'_{\sigma}$ and $G''_{\sigma}$ are shown here:
This 2-D LoG can be approximated by a 5 by 5 convolution kernel such as
$$\left[ \begin{array}{ccccc} 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 2 & 1 & 0 \\ 1 & 2 & -16 & 2 & 1 \\ 0 & 1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{array} \right]$$
Kernels of other sizes can be obtained by approximating (sampling) the continuous expression of the LoG given above. However, make sure that the sum (or average) of all elements of the kernel is zero (similar to the Laplace kernel) so that the convolution result over a homogeneous region is always zero.
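As a concrete illustration, here is a minimal sketch (Python with NumPy assumed; the function name `log_kernel` and the grid extent are our choices, not the lecture's) of sampling the continuous LoG expression above and shifting it to zero sum:

```python
import numpy as np

def log_kernel(size=5, sigma=1.0):
    """Sample the (unnormalized) LoG on a size-by-size grid, then force zero sum."""
    r = size // 2
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    # Continuous LoG: (x^2 + y^2 - 2*sigma^2)/sigma^4 * exp(-(x^2+y^2)/(2*sigma^2))
    k = (x**2 + y**2 - 2 * sigma**2) / sigma**4 * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    # Subtract the mean so a homogeneous region convolves to exactly zero
    return k - k.mean()
```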
The edges in the image can be obtained by these steps (a code sketch follows the list):
- Apply LoG to the image
- Detect zero-crossings in the filtered image
- Threshold the zero-crossings to keep only the strong ones (large difference between the positive maximum and the negative minimum)
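A minimal sketch of these three steps, assuming NumPy and SciPy are available; the zero-crossing test used here (sign change between horizontal or vertical neighbors whose value gap exceeds a threshold) is one common choice, not the only one:

```python
import numpy as np
from scipy.ndimage import convolve

def edges_by_zero_crossing(image, kernel, thresh=0.1):
    """Step 1: filter with the kernel; steps 2-3: mark thresholded zero-crossings."""
    g = convolve(image.astype(float), kernel)
    edges = np.zeros(g.shape, dtype=bool)
    for dy, dx in ((0, 1), (1, 0)):        # right neighbor, then bottom neighbor
        a = g[:g.shape[0] - dy, :g.shape[1] - dx]
        b = g[dy:, dx:]
        # Opposite signs and a large enough positive-to-negative swing
        zc = (a * b < 0) & (np.abs(a - b) > thresh)
        edges[:g.shape[0] - dy, :g.shape[1] - dx] |= zc
    return edges
```

For example, `edges_by_zero_crossing(img, log_kernel())` runs the whole LoG pipeline with the sampled kernel from the previous sketch.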
2. Difference of Gaussian (DoG)
Similar to the Laplacian of Gaussian, the image is first smoothed by convolution with a Gaussian kernel of a certain width $\sigma_1$ to get

$$g_1(x,y) = G_{\sigma_1}(x,y) * f(x,y)$$

With a different width $\sigma_2$, a second smoothed image is obtained:

$$g_2(x,y) = G_{\sigma_2}(x,y) * f(x,y)$$
We can show that the difference of these two Gaussian smoothed images, called the difference of Gaussian (DoG), can be used to detect edges in the image:

$$g_1(x,y) - g_2(x,y) = [G_{\sigma_1}(x,y) - G_{\sigma_2}(x,y)] * f(x,y) = DoG * f(x,y)$$
The DoG as an operator or convolution kernel is defined as

$$DoG \triangleq G_{\sigma_1}(x,y) - G_{\sigma_2}(x,y) = \frac{1}{2\pi}\left(\frac{1}{\sigma_1^2}\, e^{-(x^2+y^2)/2\sigma_1^2} - \frac{1}{\sigma_2^2}\, e^{-(x^2+y^2)/2\sigma_2^2}\right), \qquad \sigma_1 < \sigma_2$$
Both the 1-D and 2-D functions of the DoG are shown here:
As the difference between two differently low-pass filtered images, the DoG is actually a band-pass filter: it removes the high frequency components representing noise as well as some low frequency components representing the homogeneous areas in the image. The frequency components in the passband are assumed to be associated with the edges in the image.
The discrete convolution kernel for the DoG can be obtained by approximating the continuous expression of the DoG given above. Again, the sum (or average) of all elements of the kernel matrix must be zero.
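A minimal sketch of such a sampled DoG kernel (Python/NumPy assumed; the ratio $\sigma_2 = 1.6\,\sigma_1$, often used to make the DoG approximate the LoG, is our choice, not the lecture's):

```python
import numpy as np

def dog_kernel(size=7, sigma1=1.0, sigma2=1.6):
    """Difference of two sampled 2-D Gaussians, shifted so the kernel sums to zero."""
    r = size // 2
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    r2 = x**2 + y**2
    k = (np.exp(-r2 / (2 * sigma1**2)) / sigma1**2
         - np.exp(-r2 / (2 * sigma2**2)) / sigma2**2) / (2 * np.pi)
    return k - k.mean()
```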
Comparing this plot with the previous one, we see that the DoG curve is very similar to the LoG curve. Also, as in the case of the LoG, the edges in the image can be obtained by these steps (the same zero-crossing sketch given above applies, with the DoG kernel in place of the LoG kernel):
- Apply DoG to the image
- Detect zero-crossings in the filtered image
- Threshold the zero-crossings to keep only the strong ones (large difference between the positive maximum and the negative minimum)
Edge detection by the DoG operator:
Lecture notes: http://fourier.eng.hmc.edu/e161/lectures/gradient/gradient.html
Background:
1.1 Gradient
The gradient (also called the Hamilton operator) is a vector operator for any N-dimensional scalar function $f({\bf x})$, where ${\bf x}=[x_1,\cdots,x_N]^T$ is an N-D vector variable. For example, when $N=3$, $f({\bf x})$ may represent temperature, concentration, or pressure in 3-D space. The gradient of this N-D function is a vector composed of $N$ components for the $N$ partial derivatives:
$${\bf g}({\bf x}) = \nabla f({\bf x}) = \frac{d}{d{\bf x}} f({\bf x}) = \left[\frac{\partial f({\bf x})}{\partial x_1},\cdots,\frac{\partial f({\bf x})}{\partial x_N}\right]^T$$
- The direction $\angle{\bf g}$ of the gradient vector ${\bf g}$ is the direction in the N-D space along which the function $f({\bf x})$ increases most rapidly.
- The magnitude $|{\bf g}|$ of the gradient is the rate of that increase.
In image processing we only consider the 2-D field $f(x,y)$. When applied to a 2-D function, the gradient operator produces the vector

$${\bf g}(x,y) = \nabla f(x,y) = \left[\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y}\right]^T = [f_x,\ f_y]^T$$

where $f_x = \partial f/\partial x$ and $f_y = \partial f/\partial y$, with magnitude and direction

$$|{\bf g}| = \sqrt{f_x^2+f_y^2}, \qquad \angle{\bf g} = \tan^{-1}\!\left(\frac{f_y}{f_x}\right)$$
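As a quick worked example (ours, not from the lecture): for $f(x,y) = x^2 + y^2$,

$${\bf g}(x,y) = [2x,\ 2y]^T, \qquad |{\bf g}| = 2\sqrt{x^2+y^2}, \qquad \angle{\bf g} = \tan^{-1}\!\left(\frac{y}{x}\right)$$

so at every point the gradient points radially away from the origin, which is indeed the direction of steepest increase of this bowl-shaped function.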
Now we show that $f(x,y)$ increases most rapidly along the direction of ${\bf g}(x,y)$ and that the rate of increase equals the magnitude of ${\bf g}(x,y)$.

Consider the directional derivative of $f(x,y)$ along an arbitrary direction $r$ that makes an angle $\theta$ with the positive x direction, so that $dx/dr = \cos\theta$ and $dy/dr = \sin\theta$:

$$\frac{df}{dr} = \frac{\partial f}{\partial x}\frac{dx}{dr} + \frac{\partial f}{\partial y}\frac{dy}{dr} = f_x \cos\theta + f_y \sin\theta$$

This directional derivative is a function of $\theta$. To find the angle that maximizes it, we set its derivative with respect to $\theta$ to zero:

$$\frac{d}{d\theta}\left(\frac{df}{dr}\right) = -f_x \sin\theta + f_y \cos\theta = 0$$

Solving this for $\theta$, we get

$$\tan\theta = \frac{f_y}{f_x}$$

i.e.,

$$\theta = \tan^{-1}\!\left(\frac{f_y}{f_x}\right)$$

which is indeed the direction $\angle{\bf g}$ of the gradient.

From $\tan\theta = f_y/f_x$, we can also get

$$\sin\theta = \frac{f_y}{\sqrt{f_x^2+f_y^2}}, \qquad \cos\theta = \frac{f_x}{\sqrt{f_x^2+f_y^2}}$$

Substituting these into the expression of $df/dr$, we get

$$\left(\frac{df}{dr}\right)_{\max} = \frac{f_x^2+f_y^2}{\sqrt{f_x^2+f_y^2}} = \sqrt{f_x^2+f_y^2}$$

which is the magnitude of ${\bf g}$.
For discrete digital images, the derivative in the gradient operation

$$D_x[f(x)] = \frac{d}{dx}f(x) = \lim_{\Delta x \rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$$

becomes the difference

$$D_n[f[n]] = f[n+1]-f[n], \qquad \text{or} \qquad \frac{f[n+1]-f[n-1]}{2}$$
Two steps for finding the discrete gradient of a digital image (see the sketch after this list):
- Find the differences in the two directions:
$$g_m[m,n] = f[m+1,n] - f[m,n], \qquad g_n[m,n] = f[m,n+1] - f[m,n]$$
- Find the magnitude and direction of the gradient vector:
$$|{\bf g}| = \sqrt{g_m^2 + g_n^2}, \qquad \angle{\bf g} = \tan^{-1}\!\left(\frac{g_n}{g_m}\right)$$
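A minimal NumPy sketch of these two steps (the function name and the forward-difference boundary handling are our choices):

```python
import numpy as np

def discrete_gradient(f):
    """Forward differences in both directions, then gradient magnitude and direction."""
    f = f.astype(float)
    gm = np.zeros_like(f)
    gn = np.zeros_like(f)
    gm[:-1, :] = f[1:, :] - f[:-1, :]   # difference along m (rows)
    gn[:, :-1] = f[:, 1:] - f[:, :-1]   # difference along n (columns)
    magnitude = np.hypot(gm, gn)
    direction = np.arctan2(gn, gm)      # angle of the gradient vector
    return magnitude, direction
```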
The differences in the two directions, $g_m$ and $g_n$, can be obtained by convolution with the following kernels (a Sobel example follows the list):
- Roberts
$$\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right] \quad \text{or} \quad \left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right]$$
- Sobel (3x3)
$$\left[ \begin{array}{ccc} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{array} \right], \quad \left[ \begin{array}{ccc} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{array} \right]$$
- Prewitt (3x3)
$$\left[ \begin{array}{ccc} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{array} \right], \quad \left[ \begin{array}{ccc} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{array} \right]$$
- Prewitt (4x4)
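For instance, a sketch of gradient estimation with the Sobel pair via 2-D convolution (SciPy assumed available):

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T   # the same kernel transposed for the other direction

def sobel_gradient(image):
    gx = convolve(image.astype(float), SOBEL_X)
    gy = convolve(image.astype(float), SOBEL_Y)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```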
The Laplace operator is a scalar operator defined as the dot product (inner product) of two gradient vector operators:

$$\Delta = \nabla^2 = \nabla \cdot \nabla = \sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2}$$

In 2-D space this is

$$\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$$

When applied to a 2-D function $f(x,y)$, it produces the scalar

$$\Delta f(x,y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$
In the discrete case, the second order differentiation becomes the second order difference. In the 1-D case, if the first order difference is defined as
$$\nabla f[n] = f'[n] = D_n[f[n]] = f[n+1]-f[n]$$
then the second order difference is

$$f''[n] = D_n^2[f[n]] = f'[n] - f'[n-1] = f[n+1] - 2f[n] + f[n-1]$$
Note that $f''[n]$ can be obtained by convolving $f[n]$ with the kernel $[1, -2, 1]$.
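A one-line numerical check of this fact (NumPy assumed): convolving a signal with $[1, -2, 1]$ reproduces $f[n+1] - 2f[n] + f[n-1]$, which is constant wherever the signal is quadratic:

```python
import numpy as np

f = np.array([0., 0., 1., 4., 9., 16.])          # f[n] = (n-1)^2 for n >= 2
print(np.convolve(f, [1, -2, 1], mode='valid'))  # [1. 2. 2. 2.] -- constant 2 over the quadratic part
```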
In the 2-D case, the Laplace operator is the sum of the two second order differences in both dimensions:

$$\Delta f[m,n] = D_m^2[f[m,n]] + D_n^2[f[m,n]] = f[m+1,n] + f[m-1,n] + f[m,n+1] + f[m,n-1] - 4f[m,n]$$
This operation can be carried out with the 2-D convolution kernel:

$$\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{array} \right]$$
Other Laplace kernels can be used:
$$\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{array} \right]$$
We see that these Laplace kernels are actually the same as the high-pass filtering kernels discussed before.
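A minimal sketch applying either Laplace kernel by 2-D convolution (SciPy assumed):

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACE_4 = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)
LAPLACE_8 = np.array([[1, 1, 1],
                      [1, -8, 1],
                      [1, 1, 1]], dtype=float)

def laplacian(image, kernel=LAPLACE_4):
    """Second order difference image; zero over homogeneous regions."""
    return convolve(image.astype(float), kernel)
```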
The gradient operation is an effective detector for sharp edges where the pixel gray levels change rapidly over space. But when the gray levels change slowly from dark to bright (red in the figure below), the gradient operation produces a very wide edge (green in the figure). In this case it is helpful to consider the Laplace operation: the second order derivative of the wide edge (blue in the figure) has a zero-crossing in the middle of the edge. Therefore the location of the edge can be obtained by detecting the zero-crossings of the second order difference of the image.
One dimensional example:
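A small numerical version of such a 1-D example (our own construction, NumPy assumed): on a slow sigmoid-shaped edge, the first difference responds over the whole transition, while the second difference changes sign exactly at the edge center:

```python
import numpy as np

x = np.arange(-4, 5)
f = 1.0 / (1.0 + np.exp(-x))                    # gradual dark-to-bright edge
print(np.round(np.diff(f), 3))                  # wide hump: gradient smears the edge
print(np.round(np.convolve(f, [1, -2, 1], mode='valid'), 3))
# second difference: positive left of center, negative right, zero-crossing at x = 0
```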
In the two dimensional example, the image is on the left; the two Laplace kernels generate two similar results with zero-crossings, shown on the right:
Edge detection by the Laplace operator followed by zero-crossing detection: