Histogram equalization

This method usually increases the global contrast of many images, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram, which allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.

The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, it can lead to better views of bone structure in X-ray images, and to better detail in photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique and an invertible operator, so in theory, if the histogram equalization function is known, the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal.

In scientific imaging where spatial correlation is more important than signal intensity (such as separating DNA fragments of quantized length), the small signal-to-noise ratio usually hampers visual detection.

Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for scientific images such as thermal, satellite or X-ray images, often the same class of images to which one would apply false color. Histogram equalization can also produce undesirable effects (such as a visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color depth (number of unique shades of gray) of the image. Histogram equalization works best when applied to images with a much higher color depth than the palette size, such as continuous data or 16-bit gray-scale images.

There are two ways to think about and implement histogram equalization: as an image change or as a palette change. The operation can be expressed as P(M(I)), where I is the original image, M is the histogram equalization mapping and P is a palette. If we define a new palette as P' = P(M) and leave the image I unchanged, then histogram equalization is implemented as a palette change. On the other hand, if the palette P remains unchanged and the image is modified to I' = M(I), then the implementation is an image change. In most cases a palette change is better, as it preserves the original data; a sketch of both variants is given below.
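
As a rough illustration, the following Python sketch contrasts the two implementations for an 8-bit greyscale image stored as a NumPy array. The function names (equalization_map, by_image_change, by_palette_change) and the palette representation are assumptions made for this example, not part of any standard API.

import numpy as np

def equalization_map(img, levels=256):
    # M: grey level -> equalized grey level, built from the normalized cdf.
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    return np.round(cdf * (levels - 1)).astype(np.uint8)

def by_image_change(img, palette):
    # I' = M(I): rewrite the pixel data, keep the palette.
    M = equalization_map(img)
    return M[img], palette

def by_palette_change(img, palette):
    # P' = P(M): keep the pixel data, remap the palette entries instead.
    M = equalization_map(img)
    return img, palette[M]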

Generalizations of this method use multiple histograms to emphasize local contrast rather than overall contrast. Examples of such methods include adaptive histogram equalization and contrast limited adaptive histogram equalization (CLAHE).

Histogram equalization also seems to be used in biological neural networks to maximize the output firing rate of the neuron as a function of the input statistics. This has been demonstrated in particular in the fly retina.[1]

Histogram equalization is a specific case of the more general class of histogram remapping methods. These methods seek to adjust the image to make it easier to analyze or to improve its visual quality (e.g., retinex).

Back projection

The back projection (or "back project") of a histogrammed image is the re-application of the modified histogram to the original image, functioning as a look-up table for pixel brightness values.

For each group of pixels taken from the same position in all input single-channel images, the function writes the value of the corresponding histogram bin to the destination image, where the coordinates of the bin are determined by the values of the pixels in this input group. In terms of statistics, the value of each output image pixel characterizes the probability that the corresponding input pixel group belongs to the object whose histogram is used.[2]
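
A minimal sketch of this lookup, assuming NumPy and a single-channel 8-bit image; back_project is an illustrative helper written for this example, not a call from any particular library.

import numpy as np

def back_project(img, hist):
    # Use the histogram as a lookup table: each output pixel receives the
    # value of the histogram bin selected by the corresponding input pixel.
    return hist[img]

# Example: the (normalized) histogram of an object patch, back-projected
# onto the full image, gives a probability-like map of the object:
#   patch_hist = np.bincount(patch.ravel(), minlength=256) / patch.size
#   likelihood = back_project(img, patch_hist)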

Implementation

Consider a discrete grayscale image {x} and let n_i be the number of occurrences of gray level i. The probability of an occurrence of a pixel of level i in the image is

\ p_x(i) = p(x=i) = \frac{n_i}{n},\quad 0 \le i < L

L being the total number of gray levels in the image, n being the total number of pixels in the image, and p_x(i) being, in fact, the image's histogram for pixel value i, normalized to [0,1].

Let us also define the cumulative distribution function corresponding to p_x as

\ cdf_x(i) = \sum_{j=0}^i p_x(j),

which is also the image's accumulated normalized histogram.

We would like to create a transformation of the form y = T(x) to produce a new image {y}, such that its CDF will be linearized across the value range, i.e.

\ cdf_y(i) = iK

for some constant K. The properties of the CDF allow us to perform such a transform (by using the inverse of the cumulative distribution function); it is defined as

\ y = T(x) = cdf_x(x)

Notice that T maps the levels into the range [0,1]. In order to map the values back into their original range, the following simple transformation needs to be applied to the result:

\ y^\prime = y \cdot(\max\{x\} - \min\{x\}) + \min\{x\}
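
The steps above translate almost directly into code. The following Python sketch, assuming a 2-D NumPy array x of integer grey levels, is an illustration of the formulas rather than a reference implementation (the function name equalize is made up for this example):

import numpy as np

def equalize(x, L=256):
    n = x.size
    hist = np.bincount(x.ravel(), minlength=L)   # n_i
    p = hist / n                                 # p_x(i)
    cdf = np.cumsum(p)                           # cdf_x(i)
    y = cdf[x]                                   # y = T(x) = cdf_x(x), in [0, 1]
    # Map the values back into the original range:
    return y * (x.max() - x.min()) + x.min()
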
Histogram equalization of color images

The above describes histogram equalization on a grayscale image. The method can also be used on color images by applying it separately to the Red, Green and Blue components of the RGB color values of the image; however, this may yield dramatic changes in the image's color balance, since the relative distributions of the color channels change as a result of applying the algorithm. If the image is first converted to another color space, the Lab color space or the HSL/HSV color space in particular, then the algorithm can instead be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.[3] There are several histogram equalization methods in 3D space. Trahanias and Venetsanopoulos applied histogram equalization in 3D color space.[4] However, it results in "whitening", where the probability of bright pixels is higher than that of dark ones.[5] Han et al. proposed to use a new cdf defined by the iso-luminance plane, which results in a uniform gray distribution.[6] A sketch of the luminance/value-channel approach is given below.
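
A minimal sketch of the value-channel variant, assuming OpenCV (cv2) is available and that bgr is an 8-bit color image in OpenCV's default BGR channel order; the function name equalize_color is made up for this example:

import cv2

def equalize_color(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_eq = cv2.equalizeHist(v)    # equalize only the value channel
    return cv2.cvtColor(cv2.merge((h, s, v_eq)), cv2.COLOR_HSV2BGR)

# Equalizing R, G and B independently would instead alter the color balance,
# since each channel's distribution is remapped on its own.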

Examples
Small image

The following is the same 8x8 subimage as used in JPEG. The 8-bit greyscale image shown has the following values:

\begin{bmatrix}  52 & 55 & 61 & 66 & 70 & 61 & 64 & 73 \\  63 & 59 & 55 & 90 & 109 & 85 & 69 & 72 \\  62 & 59 & 68 & 113 & 144 & 104 & 66 & 73 \\  63 & 58 & 71 & 122 & 154 & 106 & 70 & 69 \\  67 & 61 & 68 & 104 & 126 & 88 & 68 & 70 \\  79 & 65 & 60 & 70 & 77 & 68 & 58 & 75 \\  85 & 71 & 64 & 59 & 55 & 61 & 65 & 83 \\  87 & 79 & 69 & 68 & 65 & 76 & 78 & 94 \end{bmatrix}

The histogram for this image is shown in the following table. Pixel values that have a zero count are excluded for the sake of brevity.

Value   52   55   58   59   60   61   62   63   64   65   66   67   68   69   70   71   72   73
Count    1    3    2    3    1    4    1    2    2    3    2    1    5    3    4    2    1    2

Value   75   76   77   78   79   83   85   87   88   90   94  104  106  109  113  122  126  144  154
Count    1    1    1    1    2    1    2    1    1    1    1    2    1    1    1    1    1    1    1

The cumulative distribution function (cdf) is shown below. Again, pixel values that do not contribute to an increase in the cdf are excluded for brevity.

Value   52   55   58   59   60   61   62   63   64   65   66   67   68   69   70   71   72   73
cdf      1    4    6    9   10   14   15   17   19   22   24   25   30   33   37   39   40   42

Value   75   76   77   78   79   83   85   87   88   90   94  104  106  109  113  122  126  144  154
cdf     43   44   45   46   48   49   51   52   53   54   55   57   58   59   60   61   62   63   64

This cdf shows that the minimum value in the subimage is 52 and the maximum value is 154. The cdf value of 64 for the pixel value 154 coincides with the number of pixels in the image. The cdf must be normalized to the range [0, 255]. The general histogram equalization formula is:

h(v) =  \mathrm{round}  \left(    \frac {cdf(v) - cdf_{min}} {(M \times N) - cdf_{min}}    \times (L - 1)  \right)

where cdf_min is the minimum non-zero value of the cumulative distribution function (in this case 1), M × N gives the image's number of pixels (64 in the example above, where M is the width and N the height) and L is the number of grey levels used (in most cases, as here, 256). The equalization formula for this particular example is:

h(v) =  \mathrm{round}  \left(    \frac {cdf(v) - 1} {63}    \times 255  \right)

For example, the cdf of 78 is 46. (The value of 78 is used in the bottom row of the 7th column.) The normalized value becomes

h(78) =  \mathrm{round}  \left(    \frac {46 - 1} {63}    \times 255  \right) =  \mathrm{round}  \left(    0.714286    \times 255  \right) = 182

Once this is done, the values of the equalized image are taken directly from the normalized cdf to yield the equalized values:

\begin{bmatrix}    0 &  12 &  53 &  93 & 146 &  53 &  73 & 166 \\   65 &  32 &  12 & 215 & 235 & 202 & 130 & 158 \\   57 &  32 & 117 & 239 & 251 & 227 &  93 & 166 \\   65 &  20 & 154 & 243 & 255 & 231 & 146 & 130 \\   97 &  53 & 117 & 227 & 247 & 210 & 117 & 146 \\  190 &  85 &  36 & 146 & 178 & 117 &  20 & 170 \\  202 & 154 &  73 &  32 &  12 &  53 &  85 & 194 \\  206 & 190 & 130 & 117 &  85 & 174 & 182 & 219  \end{bmatrix}

Notice that the minimum value (52) is now 0 and the maximum value (154) is now 255.
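
The worked example can be checked with a few lines of Python; this assumes NumPy, and the variable names (subimage, cdf_min, h) are chosen only for this illustration:

import numpy as np

subimage = np.array([
    [52, 55, 61,  66,  70,  61, 64, 73],
    [63, 59, 55,  90, 109,  85, 69, 72],
    [62, 59, 68, 113, 144, 104, 66, 73],
    [63, 58, 71, 122, 154, 106, 70, 69],
    [67, 61, 68, 104, 126,  88, 68, 70],
    [79, 65, 60,  70,  77,  68, 58, 75],
    [85, 71, 64,  59,  55,  61, 65, 83],
    [87, 79, 69,  68,  65,  76, 78, 94]])

hist = np.bincount(subimage.ravel(), minlength=256)
cdf = hist.cumsum()                      # cdf(v) for v = 0..255
cdf_min = cdf[cdf > 0].min()             # 1 in this example

# General formula: h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1))
h = np.round((cdf - cdf_min) / (subimage.size - cdf_min) * 255).astype(int)

print(h[78])          # 182, as computed above
print(h[subimage])    # reproduces the equalized 8x8 matrix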
