Each generation of sensors shows improved data acquisition and image quality over previous generations. However, some anomalies inherent to certain sensors still exist, and these can be corrected by applying mathematical formulas derived from the distortions (Lillesand and Kiefer, 1987). In addition, the curvature and rotation of the Earth in relation to the sensor platform produce natural distortions in the image data, which can also be corrected.
Generally, there are two types of data correction: radiometric and geometric. Radiometric correction addresses variations in the pixel intensities (DNs) that are not caused by the object or scene being scanned. These variations include:
- differing sensitivities or malfunctioning of the detectors
- topographic effects
- atmospheric effects
Geometric correction addresses errors in the relative positions of pixels. These errors are induced by:
- sensor viewing geometry
- terrain variations
Because of the differences in radiometric and geometric correction between traditional, passively detected visible to infrared imagery and actively acquired radar imagery, the two are discussed separately. See Radar Imagery Enhancement.
Radiometric Correction: Visible to Infrared Imagery
Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.
Some Landsat 1, 2, and 3 data have striping every sixth line, because of improper calibration of some of the 24 detectors that were used by the MSS. The stripes are not constant data values, nor is there a constant error factor or bias. The differing response of the errant detector is a complex function of the data value sensed.
This problem has been largely eliminated in newer sensors. Various algorithms have been advanced in current literature to help correct this problem in older data. Among these algorithms are simple along-line convolution, high-pass filtering, and forward and reverse principal component transformations (Crippen, 1989a).
Data from airborne multispectral or hyperspectral imaging scanners also show a pronounced striping pattern due to varying offsets in the multielement detectors. This effect can be further exacerbated by an unfavorable sun angle. These artifacts can be minimized by correcting each scan line to a scene-derived average (Kruse, 1988).
Use Periodic Noise Removal or Spatial Modeler to implement algorithms to eliminate striping. Use Spatial Modeler editing capabilities to adapt the algorithms to best address the data. Use the IMAGINE Radar Adjust Brightness function to correct some of these problems.
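The scene-derived-average approach can be sketched with NumPy. This is an illustrative sketch, not the Periodic Noise Removal or Spatial Modeler implementation; the function name is hypothetical, and the every-sixth-line detector layout matches the MSS example above.

```python
import numpy as np

def destripe(image, n_detectors=6):
    """Normalize each detector's scan lines to the scene-wide mean and
    standard deviation (a scene-derived average, cf. Kruse, 1988).

    image: 2-D array of DNs for one band; scan lines are rows, and
    row r was acquired by detector r mod n_detectors.
    """
    out = image.astype(float).copy()
    scene_mean = out.mean()
    scene_std = out.std()
    for d in range(n_detectors):
        lines = out[d::n_detectors]          # every n-th scan line
        gain = scene_std / lines.std()       # match spread to the scene
        out[d::n_detectors] = (lines - lines.mean()) * gain + scene_mean
    return out
```

Matching both mean and standard deviation handles the fact that the errant detector's response is not a constant bias; a more faithful correction would model the detector response as a function of the sensed data value.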
Line Dropout

Another common remote sensing device error is line dropout. Line dropout occurs when a detector either completely fails to function, or becomes temporarily saturated during a scan (like the effect of a camera flash on the retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, which is based on the lines above and below it.
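A minimal NumPy sketch of this replacement, assuming isolated bad lines whose immediate neighbors are valid (the function name is hypothetical):

```python
import numpy as np

def repair_dropout(image, bad_rows):
    """Replace each dropped scan line with the mean of the lines
    immediately above and below it. Assumes bad lines are isolated;
    consecutive dropouts would need interpolation across a wider gap.
    """
    out = image.astype(float).copy()
    last_row = out.shape[0] - 1
    for r in bad_rows:
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r < last_row else out[r - 1]
        out[r] = (above + below) / 2.0
    return out
```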
Atmospheric Effects

The effects of the atmosphere upon remotely-sensed data are not considered errors, since they are part of the signal received by the sensing device (Bernstein, 1983). However, it is often important to remove atmospheric effects, especially for scene matching and change detection analysis.
A number of algorithms have been developed to correct for variations in atmospheric transmission. Four categories are mentioned here:
- dark pixel subtraction
- radiance to reflectance conversion
- linear regressions
- atmospheric modeling
Use Spatial Modeler to construct the algorithms for these operations.
Dark Pixel Subtraction
The dark pixel subtraction technique assumes that the pixel of lowest DN in each band should really be zero, and hence its radiometric value (DN) is the result of atmosphere-induced additive errors (Crane, 1971; Chavez et al, 1977). These assumptions are very tenuous and recent work indicates that this method may actually degrade rather than improve the data (Crippen, 1987).
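A minimal sketch of the technique, for illustration only given the caveats above. It assumes the image is a NumPy array ordered (bands, rows, cols); the function name is hypothetical.

```python
import numpy as np

def dark_pixel_subtract(image):
    """Subtract each band's minimum DN, treating that minimum as an
    additive atmospheric (path radiance) offset.

    image: array of shape (bands, rows, cols).
    """
    mins = image.min(axis=(1, 2), keepdims=True)  # one minimum per band
    return image - mins
```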
Radiance to Reflectance Conversion
Radiance to reflectance conversion requires knowledge of the true ground reflectance of at least two targets in the image. These values can come either from on-site reflectance measurements or from a reflectance table for standard materials. The latter approach involves assumptions about the targets in the image.
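With two (or more) known targets, the conversion reduces to fitting a per-band linear relation between DN and reflectance and applying it to every pixel. A sketch for a single band (function name hypothetical):

```python
import numpy as np

def dn_to_reflectance(dn, target_dns, target_reflectances):
    """Fit reflectance = gain * DN + offset from targets of known ground
    reflectance, then apply the fit to the whole band.

    dn: array of DNs for one band.
    target_dns, target_reflectances: image DNs and true reflectances
    for at least two calibration targets in the scene.
    """
    gain, offset = np.polyfit(target_dns, target_reflectances, 1)
    return gain * np.asarray(dn, dtype=float) + offset
```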
Linear Regressions

A number of methods using linear regressions have been tried. These techniques use bispectral plots and assume that the position of any pixel along that plot is strictly a result of illumination. The slope then equals the relative reflectivity of the two spectral bands. At an illumination of zero, the regression plots should pass through the bispectral origin; offsets from the origin represent the additive extraneous components due to atmospheric effects (Crippen, 1987).
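The idea can be sketched as an ordinary least-squares fit of one band against another, with the intercept read off as the additive atmospheric component. This is a simplified sketch (the function name is hypothetical); it assumes the scene satisfies the illumination-only assumption described above.

```python
import numpy as np

def bispectral_offset(band_x, band_y):
    """Regress band_y on band_x across all pixels. The slope estimates
    the relative reflectivity of the two bands; a nonzero intercept is
    interpreted as the additive atmospheric component in band_y
    (Crippen, 1987).
    """
    slope, intercept = np.polyfit(band_x.ravel(), band_y.ravel(), 1)
    return slope, intercept
```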
Atmospheric Modeling

Atmospheric modeling is computationally complex and requires either assumptions or inputs concerning the atmosphere at the time of imaging. The atmospheric model used to define the computations is frequently Lowtran or Modtran (Kneizys et al., 1988). This model requires inputs such as atmospheric profile (for example, pressure, temperature, water vapor, ozone), aerosol type, elevation, solar zenith angle, and sensor viewing angle.
Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets where bandwidths are typically 10 nm or less. These narrow bandwidth corrections can then be combined to simulate the much wider bandwidths of Landsat or SPOT sensors (Richter, 1990).
Geometric Correction

As previously noted, geometric correction is applied to raw sensor data to correct errors of perspective due to the Earth’s curvature and sensor motion. Some of these errors are commonly removed at the sensor’s data processing center. In the past, some data from Landsat MSS 1, 2, and 3 were not corrected before distribution.
Many visible to infrared sensors are not nadir-viewing: they look to the side. For some applications, such as stereo viewing or DEM generation, this is an advantage. For other applications, it is a complicating factor.
In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir. Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes covering very large geographic areas (such as AVHRR), this can be a significant problem.
This and other factors, such as Earth curvature, result in geometric imperfections in the sensor image. Terrain variations have the same distorting effect, but on a smaller (pixel-by-pixel) scale. These factors can be addressed by rectifying the image to a map.
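The simplest form of map rectification fits a first-order polynomial (affine) transform from ground control points (GCPs) relating image coordinates to map coordinates. The sketch below shows only the coordinate fit; a real rectification would also resample the image through the fitted transform, and the function names are illustrative.

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Least-squares affine transform taking image (col, row) coordinates
    to map (easting, northing), fit from GCP pairs. At least three
    non-collinear GCPs are required; more allow error averaging.
    """
    A = np.column_stack([image_xy, np.ones(len(image_xy))])
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)
    return coeffs  # shape (3, 2): rows are the x, y, and constant terms

def apply_affine(coeffs, image_xy):
    """Transform image coordinates to map coordinates."""
    A = np.column_stack([image_xy, np.ones(len(image_xy))])
    return A @ coeffs
```

Higher-order polynomials handle more complex distortions at the cost of needing more GCPs, but none of these account for terrain relief; that is what the DEM-based orthocorrection described below addresses.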
A more rigorous geometric correction utilizes a DEM and sensor position information to correct these distortions. This is orthocorrection.