A short review of image processing techniques used in satellite image processing

Abstract: This article reviews the image processing techniques used in satellite remote sensing. Satellite image processing operations fall into three broad groups: image rectification and restoration, enhancement, and information extraction. Rectification and restoration is the initial processing of raw data to correct geometric distortion, calibrate the data radiometrically, and eliminate noise present in the data. Enhancement procedures are applied to image data in order to display the data more effectively for subsequent visual interpretation; they increase the visual distinction between features in a scene. The objective of information extraction is to replace visual analysis of the image data with quantitative techniques that automate the identification of features in a scene. This involves analysing multispectral image data and applying statistically based decision rules to determine the land cover identity of each pixel in an image. The intent of the classification process is to categorize all pixels in a digital image into one of several land cover classes, or themes; the classified data can then be used to produce thematic maps of the land cover present in an image.

Introduction

Image processing is a key issue in the field of satellite remote sensing. Pictures are the most common and convenient means of conveying or transmitting information; a picture is worth a thousand words. Pictures concisely convey information about the positions, sizes and inter-relationships between objects, and they portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities: about 75% of the information received by humans is in pictorial form.

In the present context, we consider the analysis of pictures that employ an overhead perspective, including radiation not visible to the human eye. Our discussion will thus focus on the analysis of remotely sensed images, which are represented in digital form. When represented as numbers, brightness can be added, subtracted, multiplied, divided and, in general, subjected to statistical manipulations that are not possible if an image is presented only as a photograph. Although digital analysis of remotely sensed data dates from the early days of remote sensing, the launch of the first Landsat earth observation satellite in 1972 began an era of increasing interest in machine processing (Campbell, 1996; Jensen, 1996). Previously, digital remote sensing data could be analyzed only at specialized remote sensing laboratories. The specialized equipment and trained personnel necessary to conduct routine machine analysis of data were not widely available, in part because of the limited availability of digital remote sensing data and a lack of appreciation of their qualities.

DIGITAL IMAGE: A digital remotely sensed image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each of the K bands of imagery. Associated with each pixel is a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the average radiance of a relatively small area within a scene (Fig. 1). A small number indicates low average radiance from the area, while a high number indicates high radiance. The size of this area affects the reproduction of detail within the scene: as pixel size is reduced, more scene detail is preserved in the digital representation.

Figure 1 : Structure of a Digital Image and Multispectral Image.
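The row/column/band structure described above can be sketched with a toy array. This is a minimal illustration assuming NumPy; all DN values below are invented:

```python
import numpy as np

# A toy single-band "image": a 4x4 grid of Digital Numbers (DNs).
# Rows i and columns j index pixels; a higher DN means higher average radiance.
band = np.array([
    [12,  15,  18,  20],
    [14, 120, 130,  22],
    [16, 125, 140,  25],
    [18,  20,  24,  28],
], dtype=np.uint8)

# A multispectral image stacks K such bands along a third axis.
k_bands = np.stack([band, band // 2, band], axis=-1)  # shape (rows, cols, K)

print(band[1, 2])     # DN at row i=1, column j=2
print(k_bands.shape)  # (4, 4, 3)
```

Because the DNs are plain numbers, the statistical manipulations mentioned earlier (addition, subtraction, band ratios) become ordinary array arithmetic.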

COLOR COMPOSITES: When displaying the different bands of a multispectral data set, if images obtained in different bands are displayed in image planes other than their own, the resulting color composite is called a False Color Composite (FCC). High spectral resolution is important when producing color composites. For a true color composite, image data acquired in the red, green and blue spectral regions must be assigned to the red, green and blue bits of the image processor's frame buffer memory. A color infrared composite, the 'standard false color composite', is produced by placing the infrared, red and green bands in the red, green and blue frame buffer memory, respectively (Fig. 2). In such a composite, healthy vegetation shows up in shades of red because vegetation absorbs most of the green and red energy but reflects approximately half of the incident infrared energy. Urban areas reflect roughly equal portions of NIR, R and G, and therefore appear steel grey.
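The band-to-color-gun assignment described above amounts to stacking bands into RGB channels in a chosen order. A minimal sketch assuming NumPy; the bands here are random stand-ins for real sensor data:

```python
import numpy as np

# Hypothetical single-band arrays (DNs 0-255) for green, red, and near-infrared.
rng = np.random.default_rng(0)
green = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
red   = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
nir   = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Standard false color composite:
#   NIR  -> red gun,  red -> green gun,  green -> blue gun.
fcc = np.dstack([nir, red, green])

# A true color composite would instead stack [red, green, blue].
print(fcc.shape)  # (64, 64, 3)
```

Displaying `fcc` with any RGB image viewer reproduces the standard FCC rendering: strong NIR reflectance (healthy vegetation) drives the red channel.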

For reference, the screen color gun assignment for the standard FCC is:

  Red Gun   - Infrared
  Green Gun - Red
  Blue Gun  - Green

IMAGE RECTIFICATION AND REGISTRATION: Geometric distortions manifest themselves as errors in the position of a pixel relative to other pixels in the scene and with respect to its absolute position within some defined map projection. If left uncorrected, these geometric distortions render any data extracted from the image useless. This is particularly so if the information is to be compared to other data sets, whether from another image or from a GIS data set. Distortions occur for many reasons: for instance, changes in platform attitude (roll, pitch and yaw), altitude, earth rotation, earth curvature, panoramic distortion and detector delay. Most of these distortions can be modelled mathematically and are removed before an image is purchased. Changes in attitude, however, can be difficult to account for mathematically, and so a procedure called image rectification is performed. Satellite systems are nevertheless geometrically quite stable, and geometric rectification is a simple procedure based on a mapping transformation relating real ground coordinates, say in easting and northing, to image line and pixel coordinates. Rectification is the process of geometrically correcting an image so that it can be represented on a planar surface and conform to other images or to a map (Fig. 3); that is, it is the process by which the geometry of an image is made planimetric. It is necessary when accurate area, distance and direction measurements must be made from the imagery, and it is achieved by transforming the data from one grid system into another using a geometric transformation. Rectification is not necessary if there is no distortion in the image.
For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar but contain no map coordinate information. These images need only be geo-referenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information; this involves redefining the map coordinate of the upper-left corner of the image and the cell size (the area represented by each pixel). Ground Control Points (GCPs) are specific pixels in the input image for which the output map coordinates are known. By using more points than are necessary to solve the transformation equations, a least-squares solution can be found that minimizes the sum of the squares of the errors. Care should be exercised when selecting ground control points, as their number, quality and distribution affect the result of the rectification. Once the mapping transformation has been determined, a procedure called resampling is employed. Resampling matches the coordinates of image pixels to their real-world coordinates and writes a new image on a pixel-by-pixel basis. Since the grid of pixels in the source image rarely matches the grid of the reference image, the pixels are resampled so that new data file values for the output file can be calculated.
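The least-squares fit over GCPs described above can be sketched as a first-order (affine) mapping from image (line, pixel) coordinates to map (easting, northing) coordinates. This is a minimal sketch assuming NumPy; the GCP coordinates below are invented for illustration:

```python
import numpy as np

# Hypothetical GCPs: (line, pixel) image coordinates and their known
# (easting, northing) map coordinates.
image_lp = np.array([[10, 12], [200, 30], [50, 180], [220, 210], [120, 100]], float)
map_en   = np.array([[500100, 4200050], [500480, 4200010], [500180, 4199700],
                     [500520, 4199650], [500320, 4199850]], float)

# First-order mapping: E = a0 + a1*line + a2*pixel (and likewise for N).
# With more GCPs (5) than unknowns (3), solve in the least-squares sense,
# minimizing the sum of squared errors.
A = np.column_stack([np.ones(len(image_lp)), image_lp])
coeffs_e, *_ = np.linalg.lstsq(A, map_en[:, 0], rcond=None)
coeffs_n, *_ = np.linalg.lstsq(A, map_en[:, 1], rcond=None)

# RMS residual of the fit: an indicator of GCP number, quality and distribution.
rms_e = np.sqrt(np.mean((A @ coeffs_e - map_en[:, 0]) ** 2))
print(round(float(rms_e), 2))
```

Real rectification software typically offers higher-order polynomials as well, but the structure is the same: fit the transformation from GCPs, then resample the source grid into the output grid.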

Figure 3: Image Rectification. (a) and (b) Input and reference images with GCP locations; (c) the grids are fitted together using polynomial equations; (d) the output grid pixel values are assigned using a resampling method (source: modified from the ERDAS Field Guide).

IMAGE ENHANCEMENT TECHNIQUES: Image enhancement techniques improve the quality of an image as perceived by a human. These techniques are useful because many satellite images, when examined on a color display, give inadequate information for image interpretation. There is no conscious effort to improve the fidelity of the image with regard to some ideal form of the image. A wide variety of techniques exists for improving image quality; the contrast stretch, density slicing, edge enhancement and spatial filtering are the most commonly used. Image enhancement is attempted after the image has been corrected for geometric and radiometric distortions, and enhancement methods are applied separately to each band of a multispectral image. Digital techniques have been found to be more satisfactory than photographic techniques for image enhancement because of the precision and wide variety of digital processes.

Contrast generally refers to the difference in luminance or grey-level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image. The contrast ratio has a strong bearing on the resolving power and detectability of an image: the larger this ratio, the easier it is to interpret the image. Satellite images often lack adequate contrast and require contrast improvement.

Contrast Enhancement: Contrast enhancement techniques expand the range of brightness values in an image so that the image can be displayed efficiently in a manner desired by the analyst. The density values in a scene are literally pulled farther apart, that is, expanded over a greater range.
The effect is to increase the visual contrast between two areas of different uniform densities, enabling the analyst to discriminate easily between areas that initially had only a small difference in density.

Linear Contrast Stretch: This is the simplest contrast stretch algorithm. The grey values in the original image and the modified image follow a linear relation. A density number in the low range of the original histogram is assigned to extreme black, and a value at the high end is assigned to extreme white; the remaining pixel values are distributed linearly between these extremes. Features or details that were obscure in the original image become clear in the contrast-stretched image. The linear contrast stretch operation can be represented graphically as shown in Fig. 4. To provide optimal contrast and color variation in color composites, the small range of grey values in each band is stretched to the full brightness range of the output or display unit.

Figure 4: Linear Contrast Stretch (source Lillesand and Kiefer, 1993).
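The linear mapping from the original DN range onto the full display range can be sketched in a few lines. A minimal illustration assuming NumPy; the tiny low-contrast band below is invented:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly map a band's [min, max] DN range onto [out_min, out_max]."""
    lo, hi = int(band.min()), int(band.max())
    if hi == lo:  # flat image: nothing to stretch
        return np.full_like(band, out_min)
    scaled = (band.astype(float) - lo) / (hi - lo)            # 0..1
    return np.rint(scaled * (out_max - out_min) + out_min).astype(np.uint8)

# A low-contrast band whose DNs occupy only the 60-90 slice of 0-255.
band = np.array([[60, 70], [80, 90]], dtype=np.uint8)
print(linear_stretch(band))  # -> [[0 85] [170 255]]
```

The minimum DN (60) maps to extreme black and the maximum (90) to extreme white, with intermediate values spaced linearly between them, exactly the relation plotted in Fig. 4.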

CONCLUSIONS: Digital image processing of satellite data can be grouped primarily into three categories: image rectification and restoration, enhancement, and information extraction. Image rectification is the pre-processing of satellite data for geometric and radiometric corrections. Enhancement is applied to image data in order to display the data effectively for subsequent visual interpretation. Information extraction is based on digital classification and is used for generating digital thematic maps.

