Essentially, classical radiometry relies on geometrical optics to relate the radiance of a source, the geometrical layout of an optical system, and the irradiance at the detector. One considers the propagation of radiation from points on the surface of the source to points on the surface of the detector. In reality, the wave nature of light causes it to diffract at the edges of intervening apertures, mirrors, and lenses; diffraction also limits the sharpness of focusing. Overall, these effects can give rise to losses or gains in the flux reaching the detector.
To infer one quantity from an optical radiometric measurement (such as source radiance, detector response, or aperture area), one must know all other pertinent quantities in the measurement. The measurement equation that relates these quantities should also account for the effects of diffraction, which cause the actual throughput of an optical system to differ from its geometrical throughput. To incorporate this difference in throughput in the final results, one must calculate the diffraction effects, a step typically referred to as including "diffraction corrections."
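The role of a diffraction correction in the measurement equation can be sketched as a multiplicative factor F(λ), defined as the ratio of the actual (diffracted) flux to the geometrically predicted flux. The function names and the numerical values below are illustrative assumptions, not quantities from the text:

```python
# Sketch (hypothetical names and values): a diffraction correction enters
# the measurement equation as a factor F multiplying the geometric flux.

def geometric_flux(radiance, throughput):
    """Geometric flux: Phi_geom = L * G (source radiance times throughput)."""
    return radiance * throughput

def actual_flux(radiance, throughput, diffraction_factor):
    """Flux actually reaching the detector: Phi = F * L * G.
    F < 1 models a net diffraction loss, F > 1 a net gain."""
    return diffraction_factor * geometric_flux(radiance, throughput)

# Example: a 0.3 % diffraction loss (F = 0.997) on a nominal measurement
# with L = 100 W m^-2 sr^-1 and G = 1e-6 m^2 sr.
phi_geom = geometric_flux(radiance=100.0, throughput=1e-6)
phi = actual_flux(radiance=100.0, throughput=1e-6, diffraction_factor=0.997)
```

Inverting this relation (dividing a measured flux by F) is what recovers the geometrically consistent quantity one actually wants to report.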
We have developed a robust suite of computer programs that allow one to specify an optical setup and determine diffraction effects on throughput as a function of wavelength or, in the case of a Planck source, of temperature. We have studied several multi-stage systems, and have conducted the most thorough investigations in those systems that can be "unfolded" to yield a (nearly) symmetric system and/or can be treated in terms of subsystems, each of which consists of a source, a single aperture or lens, and a detector (such subsystems are denoted by the acronym SAD).
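For a single SAD subsystem, a quick indicator of how important diffraction is likely to be is the on-axis Fresnel number of the limiting aperture; when it is large, corrections to the geometric throughput are typically small. The geometry and function below are an illustrative sketch, not the authors' actual programs:

```python
# Sketch (assumed geometry): on-axis Fresnel number for a SAD subsystem,
# i.e. a point on the source at distance d_s before a circular aperture of
# radius r, and a point on the detector at distance d_d beyond it.

def fresnel_number(r, wavelength, d_s, d_d):
    """N = (r^2 / wavelength) * (1/d_s + 1/d_d).
    Large N: diffraction effects on throughput are generally small."""
    return (r ** 2 / wavelength) * (1.0 / d_s + 1.0 / d_d)

# Example: 5 mm aperture radius, 1 um wavelength, source and detector
# each 1 m from the aperture.
N = fresnel_number(r=5e-3, wavelength=1e-6, d_s=1.0, d_d=1.0)
```

For a Planck source, one would integrate such wavelength-dependent corrections over the Planck spectrum at the source temperature, which is how a correction "as a function of temperature" arises.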