Remote Sensing

Producer Field Guide

Remote sensing is the acquisition of data about an object or scene by a sensor that is far from the object (Colwell, 1983). Aerial photography, satellite imagery, and radar are all forms of remotely sensed data.

Usually, remotely sensed data refer to data of the Earth collected from sensors on satellites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are remotely sensed. However, you are not limited to remotely sensed data.

This section is a brief introduction to remote sensing. There are many books available for more detailed information, including Colwell, 1983; Swain and Davis, 1978; and Slater, 1980.

Electromagnetic Radiation Spectrum

The sensors on remote sensing platforms usually record electromagnetic radiation. Electromagnetic radiation (EMR) is energy transmitted through space in the form of electric and magnetic waves (Star and Estes, 1990). Remote sensors are made up of detectors that record specific wavelengths of the electromagnetic spectrum. The electromagnetic spectrum is the range of electromagnetic radiation extending from cosmic waves to radio waves (Jensen, 1996).

All types of land cover (rock types, water bodies, and so forth) absorb a portion of the electromagnetic spectrum, giving a distinguishable signature of electromagnetic radiation. Armed with the knowledge of which wavelengths are absorbed by certain features and the intensity of the reflectance, you can analyze a remotely sensed image and make fairly accurate assumptions about the scene. The figure below illustrates the electromagnetic spectrum (Suits, 1983; Star and Estes, 1990).

Electromagnetic Spectrum



The near-infrared and middle-infrared regions of the electromagnetic spectrum are sometimes referred to as the short wave infrared region (SWIR). This is to distinguish this area from the thermal or far infrared region, which is often referred to as the long wave infrared region (LWIR). The SWIR is characterized by reflected radiation whereas the LWIR is characterized by emitted radiation.

Absorption / Reflection Spectra

When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.


Most commercial sensors, with the exception of imaging radar sensors, are passive solar imaging sensors. Passive solar imaging sensors can only receive radiation waves; they cannot transmit radiation. (Imaging radar sensors are active sensors that emit a burst of microwave radiation and receive the backscattered radiation.)

The use of passive solar imaging sensors to characterize or identify a material of interest is based on the principles of spectroscopy. Therefore, to fully utilize a visible and infrared (VIS and IR) multispectral data set and properly apply enhancement algorithms, it is necessary to understand these basic principles. Spectroscopy reveals the:

  • absorption spectra—EMR wavelengths that are absorbed by specific materials of interest
  • reflection spectra—EMR wavelengths that are reflected by specific materials of interest

Absorption Spectra

Absorption is based on the molecular bonds in the (surface) material. Which wavelengths are absorbed depends upon the chemical composition and crystalline structure of the material. For pure compounds, these absorption bands are so specific that the SWIR region is often called an infrared fingerprint.

Atmospheric Absorption

In remote sensing, the sun is the radiation source for passive sensors. However, the sun does not emit the same amount of radiation at all wavelengths. The figure below shows the solar irradiation curve, which is far from linear.

Sun Illumination Spectral Irradiance at the Earth’s Surface


Source: Modified from Chahine et al, 1983

Solar radiation must travel through the Earth’s atmosphere before it reaches the Earth’s surface. As it travels through the atmosphere, radiation is affected by four phenomena (Elachi, 1987):

  • absorption—amount of radiation absorbed by the atmosphere
  • scattering—amount of radiation scattered away from the field of view by the atmosphere
  • scattering source—divergent solar irradiation scattered into the field of view
  • emission source—radiation re-emitted after absorption

Factors Affecting Radiation


Source: Elachi, 1987

Absorption is not a linear phenomenon; it is logarithmic with concentration (Flaschka, 1969). In addition, the concentration of atmospheric gases, especially water vapor, is variable. The other major gases of importance are carbon dioxide (CO2) and ozone (O3), which can vary considerably around urban areas. Thus, the extent of atmospheric absorbance varies with humidity, elevation, proximity to (or downwind of) urban smog, and other factors.
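The logarithmic relationship between absorption and concentration is described by the Beer-Lambert law of spectroscopy. A short sketch (the absorptivity, path length, and concentration values below are illustrative, not real gas data):

```python
def transmittance(absorptivity, path_length, concentration):
    """Beer-Lambert law: fraction of radiation transmitted through
    an absorbing medium. Absorbance A = a * l * c is linear in
    concentration, so transmittance T = 10**(-A) falls off
    logarithmically; doubling the concentration does not halve T."""
    absorbance = absorptivity * path_length * concentration
    return 10 ** (-absorbance)

# Doubling the concentration squares the transmitted fraction
# rather than halving it (illustrative numbers only):
t1 = transmittance(0.5, 1.0, 1.0)   # A = 0.5 -> T ~ 0.316
t2 = transmittance(0.5, 1.0, 2.0)   # A = 1.0 -> T = 0.1
```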

Scattering is commonly modeled as Rayleigh scattering, which accounts for the scattering of short-wavelength energy by gas molecules in the atmosphere, such as ozone (Pratt, 1991). Scattering varies with both wavelength and atmospheric aerosols. Aerosols differ regionally (ocean vs. desert) and daily (for example, Los Angeles smog has different concentrations each day).
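The strong wavelength dependence of Rayleigh scattering (intensity proportional to 1/λ⁴) can be illustrated directly. In this sketch the 550 nm reference wavelength is an arbitrary normalization choice, not a value from this guide:

```python
def rayleigh_relative(wavelength_nm, reference_nm=550.0):
    """Relative Rayleigh scattering intensity, proportional to
    1 / wavelength**4, normalized to a reference wavelength
    (here 550 nm, green light)."""
    return (reference_nm / wavelength_nm) ** 4

# Short (blue) wavelengths scatter far more strongly than
# long (near-infrared) wavelengths:
blue = rayleigh_relative(450.0)   # ~2.23x the 550 nm value
nir = rayleigh_relative(850.0)    # ~0.18x the 550 nm value
```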

Scattering source and emission source may account for only 5% of the variance. These factors are minor, but they must be considered for accurate calculation. After interaction with the target material, the reflected radiation must travel back through the atmosphere and be subjected to these phenomena a second time to arrive at the satellite.

The mathematical models that attempt to quantify the total atmospheric effect on the solar illumination are called radiative transfer equations. Some of the most commonly used are Lowtran (Kneizys et al, 1988) and Modtran (Berk et al, 1989).

See Enhancement for more information on atmospheric modeling.

Reflectance Spectra

After rigorously defining the incident radiation (solar irradiation at target), it is possible to study the interaction of the radiation with the target material. When an electromagnetic wave (solar illumination in this case) strikes a target surface, three interactions are possible (Elachi, 1987):

  • reflection
  • transmission
  • scattering

It is the reflected radiation, generally modeled as bidirectional reflectance (Clark and Roush, 1984), that is measured by the remote sensor.

Remotely sensed data are made up of reflectance values. The resulting reflectance values translate into discrete digital numbers (or values) recorded by the sensing device. These gray scale values fit within a certain bit range (such as 0 to 255, which is 8-bit data) depending on the characteristics of the sensor.
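The quantization of reflectance into discrete digital numbers can be sketched as a simple linear scaling. This is a simplification for illustration only; real sensors apply radiometric gain and offset calibration rather than a plain scaling:

```python
import numpy as np

def reflectance_to_dn(reflectance, bits=8):
    """Quantize reflectance values (0.0 to 1.0) into the discrete
    digital numbers of an n-bit sensor, e.g. 0 to 255 for 8-bit data.
    Simplified sketch: linear scaling with no sensor calibration."""
    levels = 2 ** bits - 1  # 255 for 8-bit data
    dn = np.clip(np.round(np.asarray(reflectance) * levels), 0, levels)
    return dn.astype(np.uint16)

dn = reflectance_to_dn([0.0, 0.5, 1.0])  # -> [0, 128, 255]
```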

Each satellite sensor detector is designed to record a specific portion of the electromagnetic spectrum. For example, Landsat Thematic Mapper (TM) band 1 records the 0.45 to 0.52 µm portion of the spectrum and is designed for water body penetration, making it useful for coastal water mapping. It is also useful for soil and vegetation discrimination, forest type mapping, and cultural feature identification (Lillesand and Kiefer, 1987).

The characteristics of each sensor provide the first level of constraints on how to approach the task of enhancing specific features, such as vegetation or urban areas. Therefore, when choosing an enhancement technique, one should pay close attention to the characteristics of the land cover types within the constraints imposed by the individual sensors.

The use of VIS and IR imagery for target discrimination, whether the target is mineral, vegetation, man-made, or even the atmosphere itself, is based on the reflectance spectrum of the material of interest (see the figure below). Every material has a characteristic spectrum based on the chemical composition of the material. When sunlight (illumination source for VIS and IR imagery) strikes a target, certain wavelengths are absorbed by the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the wavelengths that are not returned to the sensor that provide information about the imaged area.

Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2, and so forth). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult or impossible to use that particular wavelength to study the Earth. For the present Landsat and SPOT sensors, only the water vapor bands are considered strong enough to exclude the use of their spectral absorption region. The figure below shows how Landsat TM bands 5 and 7 were carefully placed to avoid these regions. Absorption by other atmospheric gases was not extensive enough to eliminate the use of the spectral region for present day broad band sensors.

Reflectance Spectra


Source: Modified from Fraser, 1986; Crist et al, 1986; Sabins, 1987

This chart is for comparison only, and is not meant to show actual values. The spectra are offset to better display the lines.

An inspection of the spectra reveals the theoretical basis of some of the indices in the ERDAS IMAGINE Image Interpretation tools. Consider the vegetation index TM4 / TM3. It is readily apparent that for vegetation this value could be very large. For soils, the value could be much smaller, and for clay minerals, the value could be near zero. Conversely, when the clay ratio TM5 / TM7 is considered, the opposite applies.
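The band-ratio logic described above can be sketched in a few lines. The pixel values are illustrative, and the epsilon guard against division by zero is a common practical addition, not part of the index definition:

```python
import numpy as np

def band_ratio(numerator, denominator, eps=1e-6):
    """Simple band ratio such as the vegetation index TM4 / TM3 or
    the clay ratio TM5 / TM7. eps guards against division by zero
    in dark pixels."""
    num = np.asarray(numerator, dtype=float)
    den = np.asarray(denominator, dtype=float)
    return num / (den + eps)

# Illustrative pixel values only: vegetation reflects strongly in
# TM4 (near-infrared) and weakly in TM3 (red), so the ratio is large.
tm3 = np.array([20.0])
tm4 = np.array([120.0])
veg_index = band_ratio(tm4, tm3)  # ~6.0 for this vegetated pixel
```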

Hyperspectral Data

As remote sensing moves toward the use of more and narrower bands (for example, AVIRIS with 224 bands each only 10 nm wide), absorption by specific atmospheric gases must be considered. These multiband sensors are called hyperspectral sensors. As more and more of the incident radiation is absorbed by the atmosphere, the digital number (DN) values of that band get lower, eventually becoming useless—unless one is studying the atmosphere. Someone wanting to measure the atmospheric content of a specific gas could utilize the bands of specific absorption.

Hyperspectral bands are generally measured in nanometers (nm).

The figure above shows the spectral bandwidths of the channels for the Landsat sensors plotted above the absorption spectra of some common natural materials (kaolin clay, silty loam soil, and green vegetation). Note that while the spectra are continuous, the Landsat channels are segmented or discontinuous. We can still use the spectra in interpreting the Landsat data. For example, a Normalized Difference Vegetation Index (NDVI) ratio for the three would be very different and, therefore, could be used to discriminate between the three materials. Similarly, the ratio TM5 / TM7 is commonly used to measure the concentration of clay minerals. Evaluation of the spectra shows why.
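The NDVI mentioned above is the normalized difference (NIR − red) / (NIR + red), which for Landsat TM is (TM4 − TM3) / (TM4 + TM3). A brief sketch with illustrative pixel values (the epsilon guard is a practical addition, not part of the definition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index:
    NDVI = (NIR - red) / (NIR + red), bounded to [-1, 1].
    For Landsat TM: (TM4 - TM3) / (TM4 + TM3)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Green vegetation (high NIR, low red) vs. bare soil vs. water:
values = ndvi([120.0, 60.0, 5.0], [20.0, 50.0, 10.0])
# roughly [0.71, 0.09, -0.33]
```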

The figure below shows detail of the absorption spectra of three clay minerals. Because of the wide bandpass (2080 to 2350 nm) of TM band 7, it is not possible to discern between these three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor has a large number of approximately 10 nm wide bands. With the proper selection of band ratios, mineral identification becomes possible. Using this data set, it would be possible to discriminate between these three clay minerals, again using band ratios. For example, a color composite image prepared from RGB = 2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could produce a color-coded clay mineral image-map.
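A ratio composite like the one described could be assembled as follows. This is a minimal sketch assuming three precomputed ratio images as NumPy arrays; the contrast stretch and stacking logic are illustrative, not the ERDAS IMAGINE implementation:

```python
import numpy as np

def stretch_to_byte(arr):
    """Linearly stretch an array to the 0-255 display range."""
    arr = np.asarray(arr, dtype=float)
    lo, hi = arr.min(), arr.max()
    if hi == lo:
        return np.zeros(arr.shape, dtype=np.uint8)
    return ((arr - lo) / (hi - lo) * 255).astype(np.uint8)

def ratio_composite(r_ratio, g_ratio, b_ratio):
    """Stack three band-ratio images into an RGB cube
    (rows x cols x 3), e.g. R = 2160 nm / 2190 nm,
    G = 2220 nm / 2250 nm, B = 2350 nm / 2488 nm for a
    clay-mineral image-map. Inputs are hypothetical arrays."""
    return np.dstack([stretch_to_byte(r_ratio),
                      stretch_to_byte(g_ratio),
                      stretch_to_byte(b_ratio)])

rgb = ratio_composite(np.random.rand(4, 4),
                      np.random.rand(4, 4),
                      np.random.rand(4, 4))
# rgb.shape == (4, 4, 3), dtype uint8
```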

The commercial airborne multispectral scanners are used in a similar fashion. The Airborne Imaging Spectrometer from the Geophysical & Environmental Research Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral sensors, you must understand the phenomenon involved and have some idea of the target materials being sought.

Laboratory Spectra of Clay Minerals in the Infrared Region


Source: Modified from Sabins, 1987

Spectra are offset vertically for clarity.

The characteristics of Landsat, AVIRIS, and other data types are discussed in Raster and Vector Data Sources. See Enhancement for more information on NDVI ratio.

Imaging Radar Data

Radar remote sensors can be broken into two broad categories: passive and active. The passive sensors record the very low-intensity microwave radiation naturally emitted by the Earth. Because of this low intensity, these images have low spatial resolution (that is, large pixel size).

It is the active sensors, termed imaging radar, that are introducing a new generation of satellite imagery to remote sensing. To produce an image, these satellites emit a directed beam of microwave energy at the target, and then collect the backscattered (reflected) radiation from the target scene. Because they must emit a powerful burst of energy, these satellites require large solar collectors and storage batteries. For this reason, they cannot operate continuously; some satellites are limited to 10 minutes of operation per hour.

The microwave energy emitted by an active radar sensor is coherent and defined by a narrow bandwidth. The following table summarizes the bandwidths used in remote sensing.

Band Designation*       Wavelength, cm     Frequency, GHz (10⁹ cycles · sec⁻¹)

Ka (0.86 cm)            0.8 to 1.1         40.0 to 26.5
K                       1.1 to 1.7         26.5 to 18.0
Ku                      1.7 to 2.4         18.0 to 12.5
X (3.0 cm, 3.2 cm)      2.4 to 3.8         12.5 to 8.0
C                       3.8 to 7.5         8.0 to 4.0
S                       7.5 to 15.0        4.0 to 2.0
L (23.5 cm, 25.0 cm)    15.0 to 30.0       2.0 to 1.0
P                       30.0 to 100.0      1.0 to 0.3

* Wavelengths commonly used in imaging radars are shown in parentheses.
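The wavelength and frequency columns above are related by f = c / λ. A quick sketch of the conversion (assuming the standard value for the speed of light, not a figure from this guide):

```python
C_CM_PER_S = 2.998e10  # speed of light in cm/s

def wavelength_to_ghz(wavelength_cm):
    """Convert a radar wavelength in cm to frequency in GHz:
    f = c / wavelength. A 3.0 cm X-band wavelength gives
    roughly 10 GHz, consistent with the table above."""
    return C_CM_PER_S / wavelength_cm / 1e9

x_band = wavelength_to_ghz(3.0)    # ~10.0 GHz
l_band = wavelength_to_ghz(23.5)   # ~1.28 GHz
```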

A key element of a radar sensor is the antenna. For a given position in space, the resolution of the resultant image is a function of the antenna size. This is termed a real-aperture radar (RAR). At some point, it becomes impossible to make a large enough antenna to create the desired spatial resolution. To get around this problem, processing techniques have been developed which combine the signals received by the sensor as it travels over the target. Thus, the antenna is perceived to be as long as the sensor path during backscatter reception. This is termed a synthetic aperture and the sensor a synthetic aperture radar (SAR).
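The dependence of resolution on antenna size can be sketched with the standard textbook approximations: azimuth resolution λ·R/D for a real aperture (beamwidth λ/D projected to slant range R) and roughly D/2 for an ideal focused SAR, independent of range. The sensor parameters below are illustrative, not those of a specific satellite:

```python
def real_aperture_resolution(wavelength_m, slant_range_m, antenna_m):
    """Azimuth resolution of a real-aperture radar (RAR): the
    beamwidth (wavelength / antenna length) projected to the
    slant range, i.e. wavelength * range / antenna_length."""
    return wavelength_m * slant_range_m / antenna_m

def sar_resolution(antenna_m):
    """Ideal focused-SAR azimuth resolution ~ D / 2, independent
    of range, because the synthetic aperture grows with range."""
    return antenna_m / 2.0

# L-band (0.235 m wavelength) from 800 km with a 10 m antenna:
rar = real_aperture_resolution(0.235, 800e3, 10.0)  # 18,800 m
sar = sar_resolution(10.0)                          # 5 m
```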

The received signal is termed a phase history or echo hologram. It contains a time history of the radar signal over all the targets in the scene, and is itself a low resolution RAR image. In order to produce a high resolution image, this phase history is processed through a hardware and software system called an SAR processor. The SAR processor software requires operator input parameters, such as information about the sensor flight path and the radar sensor's characteristics, to process the raw signal data into an image. These input parameters depend on the desired result or intended application of the output imagery.

One of the most valuable advantages of imaging radar is that it creates images from its own energy source and therefore is not dependent on sunlight. Thus one can record uniform imagery any time of the day or night. In addition, the microwave frequencies at which imaging radars operate are largely unaffected by the atmosphere, permitting image collection through cloud cover or rain storms. However, the backscattered signal can be affected. Radar images collected during heavy rainfall are often seriously attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the atmosphere does cause perturbations in the signal phase, which decreases resolution of output products, such as the SAR image or generated DEMs.