INFRARED CLOUD IMAGING SYSTEMS CHARACTERIZATION

by David Walter Riesland

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Optics and Photonics

MONTANA STATE UNIVERSITY
Bozeman, Montana
November 2016

© COPYRIGHT by David Walter Riesland 2016. All Rights Reserved.

DEDICATION

To my friends and family, for your loving support. To my wife, my steadfast companion over oceans and years.

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Joseph Shaw, for your guidance and support, and for providing such valuable opportunities throughout my undergraduate and graduate career. I would also like to thank my graduate committee members: Dr. Wataru Nakagawa, for providing such great advice and feedback, and Dr. Charles Kankelborg, for inspiring me to pursue optics as an undergraduate through the NASA National Student Solar Spectrograph Competition. A special thanks to Paul Nugent for putting up with me in the lab through all the years, late nights, and coffee, and for being such a great sounding board. I also thank the entire Optical Remote Sensing Lab team at Montana State University for all of your support. A big thank you to Dr. Jed Hancock and Dr. James Peterson for allowing me to come take measurements at Space Dynamics Lab, and for such a great summer. I would also like to thank the team at NASA Glenn Research Center — Dr. James Nessel, Michael Zemba, and especially Jacquelynne Houts — for your advice and collaboration.

Funding Acknowledgments

I would like to acknowledge the generous support of the NASA Montana Space Grant Consortium, the NSF Arctic Observing Network Program, NASA Glenn Research Center, NASA EPSCoR, the Montana Research and Economic Development Initiative, Space Dynamics Lab, and NASA JPL for various aspects of this work.

TABLE OF CONTENTS

1. INTRODUCTION
2. ICI METHODOLOGY AND BACKGROUND
   Introduction
   Conclusion
   Acknowledgments
3. INFRARED CLOUD IMAGER HARDWARE OVERVIEW
   Introduction
   Camera
      Microbolometer Focal Plane
      Lenses and Windows
   Support Electronics Upgrades
      Camera Communication Module
      Microprocessor/Analog to Digital Converter
      Enclosure
   Environmental Module Upgrades
      Proposed Cooling System
      NASA GRC Deployment
      Conclusion
4. RADIOMETRIC CALIBRATION BACKGROUND
   Introduction
   Initial Assumptions
   Simple Camera Calibration
   Temperature-Dependent FPA Calibration
   Conclusion
5. LWIR CAMERA RELATIVE SPECTRAL RESPONSE
   Introduction
   Methodology
   Simulated Tau2 Camera RSR
   Simulated Photon Camera RSR
   Deviation from Manufacturer's RSR
   Conclusion
6. TAU2 CHARACTERIZATION
   Introduction
   Low-signal Suppression and Shutter Offset Correction
      External Flat-Field Correction
      External FFC of a False-Temperature Internal Shutter
   Tau2 Calibration Results
   Noise-equivalent Radiance
   Conclusion
7. TAMARISK CHARACTERIZATION
   Introduction
   Calibration Algorithm
   Calibration Results
   Conclusion
8. SYSTEM COMPARISONS
   Introduction
   Photon vs Tau2
   Photon vs Tamarisk
      Image Comparison
      Radiometric Comparison
   Conclusion
9. CLOUD PHASE DETECTION
   Introduction
   Algorithm Development
   Algorithm Comparisons
   Conclusion
10. CONCLUSIONS
REFERENCES CITED
LIST OF FIGURES

1.1 Emission from clouds can be seen with an infrared imager at Montana State University, 23 January 2016, 1:44 am MST.
1.2 Size progression of microbolometer cameras, from left to right: 1999 (Amber Sentinel), 2009 (FLIR Photon 320), 2013 (FLIR Tau2), and 2016 (FLIR Lepton). Photo courtesy of Joseph Shaw.
2.1 Cloud data taken by the Svalbard ICI during a validation period at Bozeman, MT. Total atmospheric-emitted radiance is seen in (a); the modeled atmosphere is subtracted and (b) is the residual cloud radiance, while (c) shows the calculated optical depth and (d) shows the expected cloud attenuation at 550 nm.
2.2 MODTRAN simulations showing radiance contributions and clouds for zenith measurement in Bozeman, MT.
3.1 CAD drawing of GRC ICI system, Spring 2015.
3.2 FLIR Tau2 camera. Photo courtesy of Paul Nugent.
3.3 Microbolometer topology.
3.4 Air mass follows a predicted trend with zenith angle, here simplified as a secant.
3.5 JPL ICI legacy support electronics from 2014. Photo courtesy of Joseph Shaw.
3.6 The GigE communication module, shown here attached to the back of a Tau2 camera, provides more camera control while reducing overall component footprint.
3.7 JPL tube enclosure mounted to cooling box. Paint can be seen peeling from the windowed flange due to a previous deployment. The tube enclosure has been refinished with powder coat for longevity.
3.8 Svalbard and White Sands ICI systems deployed at Montana State University. Photo courtesy of Joseph Shaw.
3.9 Heater prototype tests.
3.10 Cooling option prototype CAD drawing, here shown in blue. Systems were designed for modularity.
3.11 Prototype tube ICI cooling tests.
3.12 Operational field deployment at Glenn Research Center in Cleveland, OH.
4.1 Simple two-point radiometric camera calibration.
5.1 FLIR camera RSRs measured at SDL.
5.2 RSR simulation flowchart.
5.3 Tau2 camera measured and simulated RSRs.
5.4 Tau2 calibration error due to RSR uncertainty.
5.5 Photon camera measured and simulated RSRs.
5.6 Photon calibration error due to RSR uncertainty.
5.7 RSR measurements vs manufacturer-specified spectra.
5.8 Scatter plots of cloud scene radiances calculated with the SDL-measured and manufacturer-specified RSR spectra for (a) Photon and (b) Tau2 cameras.
5.9 Uncertainty in percentage of scene radiance due to generic RSR from manufacturer.
6.1 Manual FFC vs external FFC at different FPA temperatures.
6.2 Proposed external FFC data collection routine.
6.3 Corrected radiance for a Tau2 camera using the MSU standard algorithm over time, in comparison with uncorrected and source radiance. Images were collected every 20 seconds.
6.4 Distribution of error in calibrated radiance relative to the source radiance.
7.1 Tamarisk camera first sky image, with tripod and operator in view. The colorbar shows digital number output of the FPA.
7.2 Tamarisk FPA temperature sensor output has a fairly linear response with external camera temperature.
7.3 Tamarisk digital FPA temperature, digital number response, and scene radiance can be solved with a linear regression.
7.4 Radiance correction standard deviation for Tamarisk camera over FPA.
8.1 Svalbard ICI (G02) and ICI3 reported dB compared over time.
8.2 Statistical analysis of instrument comparisons.
8.3 Photon (left) and Tamarisk (right) ICI systems deployed side by side.
8.4 Radiometric image comparison of cloudy sky.
8.5 Thin clouds can be seen above the noise when doing scene-to-scene subtraction.
8.6 Distribution of scene comparisons during deployment period.
8.7 Scatter plot showing correlation of 0.996.
9.1 The imaginary part of the index of refraction is plotted against wavelength. Arrows show the differences at 1.64 µm and 1.7 µm.
9.2 Observation of clouds simulated in MODTRAN.
9.3 Sunlight scattered from simulated ice clouds (cirrus) has a much different spectral shape than liquid water clouds.
9.4 The Knap (2002) algorithm allows ice clouds to be distinguished from liquid clouds in MODTRAN simulations.
9.5 PWV has little effect on the algorithm for OD > 1.
9.6 The spectral ratios between ice and water clouds become less distinct after Gaussian 150 nm bandwidth filters are used.
9.7 Adding a third integrated radiance channel allows for greater separation of data vs optical depth.
9.8 Optical depth can be replaced by the RMS addition of the radiance from each channel.
9.9 Two-channel algorithm using calibrated radiance values.

ABSTRACT

Infrared cloud imaging (ICI) is a useful tool for characterizing cloud cover in a variety of fields. Clouds play an important role in free-space high-frequency (optical and mm-wave) terrestrial communications. Ground-based infrared imagers are used to provide long-term, high-resolution (spatial and temporal) cloud data without the need for sunlight. This thesis describes the development and characterization of two ICI systems for deployment at remote field sites in support of Earth-to-space mm-wave and optical communication experiments. The hardware upgrades, calibration process, sensitivity analysis, system validation, and algorithm developments are all discussed for these systems.
Relative spectral response sensitivity analysis is discussed in detail, showing as much as 35% uncertainty in calibrated scene radiance when generic manufacturer data are used in place of measured spectral responses. Cloud discrimination algorithms, as well as cloud phase (ice or water) discrimination algorithms, are also discussed.

INTRODUCTION

NASA has a bandwidth problem. Don Cornwell, the director of the Space Communications and Navigation Program at NASA HQ, stated in 2015 that "We are leaving 90% or more of our data on the surface of Mars" [1]. Modern space-borne science instruments generate much more data than can be sent back to Earth through legacy communication systems. This communications bottleneck is due to the reliance on RF technology for deep-space communication. At the time of this writing, almost all NASA missions use RF technology for Earth-to-space and space-to-space communication. However, as space-borne missions become more complex, there is a push within NASA toward higher-frequency communication systems that allow more data to be moved in less time. Several missions are underway at both NASA [1–3] and ESA [4] to move communication bands from RF frequencies to mm-wave and optical frequencies. However, as these communication bands increase in frequency, they become more susceptible to cloud attenuation in terrestrial satellite links. Therefore, it is becoming more important to model and understand cloud conditions and their relation to signal attenuation and distortion at operational and potential high-frequency Earth-to-space ground communication sites [5–7]. A popular method for cloud detection is satellite observation. Many weather satellites are used by forecasters to predict weather conditions across wide areas. However, these satellites have either poor temporal or poor spatial resolution at any given ground site.
Most geostationary weather satellites, such as GOES-15, MTSAT-2, and Meteosat-8, are stationed in equatorial orbit. This geometry makes it difficult or impossible to characterize clouds for high-latitude sites, which are most often of interest for both climate studies and Earth-space communications. Likewise, satellites that scan the surface of the Earth from a polar orbit, such as NOAA-17 and Metop-A, have higher spatial resolution due to their lower altitude, but visit the same site only twice a day and thus have poor temporal resolution. A ground-based system at the observational site can be used to increase both temporal and spatial resolution. Such cloud detection instruments include lidar, visible cameras, and IR cameras. Visible cameras give the most intuitive information, since their responsivity is similar to that of a human observer. However, visible imagers reliably detect clouds only during daytime; at night they either do not work or do not provide results consistent with their daytime performance [8, 9]. Active lidar or mm-wave radar sensing also provides excellent cloud data [10, 11], but with larger and more costly instruments that require scanning to cover a large fraction of the sky. Dual-channel microwave radiometers are frequently deployed at propagation study sites for measuring precipitable water vapor [12], but they do not allow detection of ice clouds and have only weak sensitivity to thin liquid clouds. Furthermore, these other instruments typically measure at only a single point and require complex scanning systems to obtain spatial information. Many of these disadvantages can be overcome through the use of long-wave infrared (LWIR) imaging systems. LWIR cameras measure emitted cloud radiance, which provides a consistent signal during both day and night. Multiple elevation angles can also be detected simultaneously for radiometric analysis with an imaging system.
This allows for both high spatial and high temporal cloud statistics at the site of interest [9, 13].

Figure 1.1: Emission from clouds can be seen with an infrared imager at Montana State University, 23 January 2016, 1:44 am MST.

As shown in Figure 1.1, thermal emission from clouds enables observation with an infrared system. A clear sky has relatively low absorption, while clouds have relatively high absorption at the IR wavelengths used by ICI systems. Invoking Kirchhoff's law of thermal radiation, a perfect blackbody emits as much radiation as it absorbs. Although clouds are in thermal equilibrium with the surrounding air, there is a substantial radiometric contrast between cloud emission and clear-air emission because clouds have high emissivity (due to high absorption) while clear air has much lower emissivity (due to lower absorption). Therefore, clouds emit more thermal radiation than the surrounding clear sky [14]. With a carefully characterized infrared camera, the emission from a cloud leads to its classification in terms of both optical depth and optical attenuation. Infrared imaging with low-cost, uncooled imaging systems was enabled for civilian use when microbolometer technology was declassified in 1992. Many details of the development of microbolometer technology are given in [15, 16]. Uncooled microbolometer focal planes allowed relatively low-cost systems to be used for research and commercial applications [17]. The earliest infrared cloud imager (ICI) system was developed for atmospheric research by Joseph Shaw at NOAA starting in 1997, in collaboration with the Japanese Communications Research Laboratory, or CRL (now the National Institute of Information and Communications Technology, NICT) [18]. The project then moved with Dr. Shaw to Montana State University (MSU) in 2001, and the ICI system was moved to MSU in January 2002.
This system was later deployed to Barrow, AK [9, 13], while other ICI systems were also deployed to Barrow and Poker Flat, AK, for cloud characterization studies in the Arctic. Later systems have supported cloud cover characterization campaigns at JPL's Table Mountain Facility (TMF) [19] and the Goldstone Deep Space Communications Complex (GDSCC), both operated by the NASA Jet Propulsion Laboratory (JPL) in support of free-space optical communication studies. Researchers found through experimentation at sites such as NASA Glenn Research Center's (GRC) Near-Earth Network (NEN) that current atmospheric models do not correctly account for total atmospheric attenuation at low elevation angles (propagation near the horizon) for Ka-band propagation. They believed that thin clouds may be undetected or poorly accounted for in the models, which pointed to the need for a cloud-detection instrument to characterize clouds and their effects on high-frequency communication [6]. Since ICI systems are well suited for this role, a narrow-field-of-view ICI was developed to generate cloud statistics both along the propagation beam and at zenith. The development of this new system revealed some limitations of miniaturized infrared cameras. As infrared imaging technology progressed, focal plane arrays became optimized for detection of higher temperatures at the cost of low-temperature detection. The primary reason is that increasing the resolution of a focal plane array requires ever smaller pixels. This progression can be seen in Figure 1.2, which shows cameras produced from 1999 (left) to 2016 (right). If the f/# of the camera stays the same, a smaller pixel generates a smaller electrical signal because of its smaller throughput for an extended source, resulting in a decreased signal-to-noise ratio.
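The throughput argument above can be made concrete with a small sketch. This is an illustrative calculation only (the pixel pitches below are example values, not measured parameters of these cameras): for an extended source, detected signal scales with pixel throughput A·Ω, where A is the pixel area and Ω ≈ π/(4 f/#²) is the solid angle set by the lens in the paraxial approximation.

```python
import math

def relative_signal(pixel_pitch_um, f_number):
    """Relative extended-source signal, proportional to throughput A * Omega.

    A = pitch^2 (pixel area), Omega ~ pi / (4 f#^2) for a circular pupil
    (paraxial approximation). Units are arbitrary; only ratios matter here.
    """
    area = pixel_pitch_um ** 2               # pixel area, um^2
    omega = math.pi / (4.0 * f_number ** 2)  # collection solid angle, sr
    return area * omega

# Halving the pixel pitch at fixed f/1.1 cuts the extended-source signal 4x,
# which is the SNR penalty described in the text (pitches are hypothetical).
s_large = relative_signal(25.0, 1.1)
s_small = relative_signal(12.5, 1.1)
print(s_large / s_small)  # -> 4.0
```

This is why shrinking pixels without changing the optics trades away sensitivity to faint, cold scenes such as thin clouds.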
Since the infrared signal coming from a cloud is much smaller than the signal coming from something like a jet plume, and many more customers measure jet plumes than clouds, cameras that were useful for cloud imaging were being phased out of production. Optimizing manufacturing for small pixels thus caused significant problems for cloud detection. A specific problem arose in 2013, when the manufacturer of the microbolometer cameras we used changed the firmware so that proprietary noise-reduction algorithms subtracted our low cloud-emission signal from the scene before the data became accessible to the user. These firmware and internal processing algorithm updates were the largest contributor to cloud-detection challenges not seen in legacy ICI systems. This thesis describes two ICI systems that were built for long-term deployment in hostile environments using primarily OEM components. The design, analysis, characterization, and calibration challenges of those systems are addressed, as well as meaningful lessons learned for future systems. The second chapter of this thesis gives more detailed background on ICI methodology. Chapter three gives an overview of the hardware used in the NASA Glenn ICI systems. Chapter four provides a brief background on radiometric calibration and shows calibration algorithms previously developed at Montana State University.

Figure 1.2: Size progression of microbolometer cameras, from left to right: 1999 (Amber Sentinel), 2009 (FLIR Photon 320), 2013 (FLIR Tau2), and 2016 (FLIR Lepton). Photo courtesy of Joseph Shaw.

Chapter five addresses one of the primary sources of uncertainty for an ICI system: the relative spectral response of the camera. Chapter six discusses some of the calibration challenges faced when upgrading the system with a Tau2 camera with updated firmware.
Chapter seven gives a preliminary look at another infrared camera (the Tamarisk by DRS) that could be used in future cloud imaging systems. Chapter eight shows comparisons between the Tamarisk, Tau2, and legacy Photon camera systems. Chapter nine discusses possible radiometer units that could be used in the future to improve cloud optical depth retrievals of ICI systems through retrieval of cloud thermodynamic phase.

ICI METHODOLOGY AND BACKGROUND

Introduction

This chapter discusses the methodology behind infrared cloud imaging and some of the science behind the process. As stated previously, cloud imaging is enabled by observing cloud thermal emission. Since we assume that the radiance from the cloud is the difference between the scene radiance and the estimated atmospheric-emitted radiance, the cloud radiance Lcloud is calculated through Equation (2.1):

Lcloud = Lmeasured − Lclearsky.  (2.1)

Lclearsky is determined either through imaging algorithms or through radiative transfer modeling. One component of the clear-sky radiance estimation involves comparing the measured angular radiance pattern with the theoretically expected variation of radiance with elevation angle. This effect is modeled with a radiative transfer code for water vapor with a uniform horizontal distribution. Since the imager detects multiple elevation angles simultaneously, any deviations from this previously established trend are assumed to be radiance from clouds. Once the clear-sky radiance is estimated and removed, the residual radiance is used to estimate the cloud optical depth (path-integrated cloud extinction), which is defined in Eq. (2.2) in terms of the extinction cross section σ and the particle number density N.
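The per-pixel subtraction of Equation (2.1) can be sketched as follows. This is a minimal illustration, not the ICI production code; the choice to clip negative residuals to zero (where the clear-sky model slightly over-predicts the measured radiance) is an assumption for this sketch.

```python
import numpy as np

def residual_cloud_radiance(measured, clearsky):
    """Per-pixel cloud radiance via Eq. (2.1): L_cloud = L_measured - L_clearsky.

    measured, clearsky: 2-D arrays of calibrated band radiance (W m^-2 sr^-1).
    Negative residuals are clipped to zero so small clear-sky model errors
    are not misread as 'negative clouds'.
    """
    residual = np.asarray(measured, dtype=float) - np.asarray(clearsky, dtype=float)
    return np.clip(residual, 0.0, None)

# Toy example: a uniform clear-sky estimate with one warm "cloudy" pixel.
measured = np.array([[5.0, 5.0],
                     [5.0, 9.0]])
clearsky = np.full((2, 2), 5.2)
print(residual_cloud_radiance(measured, clearsky))
```

Only the bottom-right pixel survives the subtraction, which is exactly the behavior the clear-sky removal step relies on.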
Another component of the clear-sky radiance identification involves estimating the emission of the atmospheric column from the path-integrated water vapor content, called "precipitable water vapor" (PWV), which is often measured with a microwave radiometer [12] or a solar radiometer [20].

τ = ∫₀^∞ σ(z) N(z) dz  (2.2)

An empirical relationship between cloud optical depth and radiance is derived using methods established in [14, 21]. Radiance is what an infrared imaging system measures, and since radiance depends on both emissivity and temperature, knowing the temperature of the cloud (dependent on cloud height) reduces the uncertainty of the emissivity estimate. For this reason, it is helpful to run an ICI system next to an atmospheric lidar. A useful reference wavelength near the middle of the visible spectrum is 550 nm, so optical depth is modeled here at 550 nm to represent the entire visible spectrum. Of particular interest are cirrus clouds: even though they are difficult to detect, they will still attenuate a free-space communication laser beam. Attenuation is expected to remain spectrally flat across the optical communication channels of 532 nm, 860 nm, 1064 nm, and 1550 nm, since ice particles are much larger than these wavelengths [19]. Figure 2.1 shows examples of emitted radiance measured with an ICI system, the residual radiance that remained after the simulated clear-sky emission was subtracted, and the corresponding cloud optical depth and attenuation. Infrared cloud imaging is done primarily in the 8–14 µm atmospheric window. These wavelengths were chosen because of the relatively low absorption, and thus emission, from atmospheric gases and the higher emission from clouds in this spectral region. H2O, CO2, and O3 all contribute to gaseous atmospheric emission in this band; however, the largest contributor is H2O. Fig. 2.2 shows a MODTRAN [22] simulation of atmospheric emission for the 1976 U.S.
standard atmosphere, both with and without additional water vapor.

Figure 2.1: Cloud data taken by the Svalbard ICI during a validation period at Bozeman, MT. Total atmospheric-emitted radiance is seen in (a); the modeled atmosphere is subtracted and (b) is the residual cloud radiance, while (c) shows the calculated optical depth and (d) shows the expected cloud attenuation at 550 nm.

The figure also shows the radiance expected from a low-altitude cumulus cloud at 0.066 km with an optical depth of 2, and from a cirrus cloud with an optical depth of 1 at 8 km altitude, all with the observer at an altitude of 1524 m. Column-integrated water vapor content (precipitable water vapor, PWV) is shown in the figure, scaled by 1.5 and 2 times that of the standard atmosphere. Atmospheric emission increases with water vapor, which illustrates the effect of humidity on radiance. Atmospheric emission is modeled out to an 80° zenith angle (measured from zenith). The model requires a well-characterized atmosphere, and Montana State University has been researching the use of image-processing algorithms to improve cloud detection at low elevation angles.

Figure 2.2: MODTRAN simulations showing radiance contributions and clouds for zenith measurement in Bozeman, MT. Source: Reference [7]

Once the emission from the atmosphere is removed, the residual radiance (cloud emission) needs to be characterized. This requires a careful radiometric calibration of the camera, methods for which have been developed at Montana State University [23–25]. These methods will be covered in greater detail in later chapters.

Conclusion

This chapter discussed some of the methodology behind the ICI data collection mechanisms. First, atmospheric radiance is measured by imaging the sky with a thermal camera. The radiance contribution of clear sky is estimated through an empirical model and subtracted from the scene.
The residual radiance is then assumed to be radiance from clouds. This radiance is converted to optical depth, and optical depth is converted to attenuation for a visible laser. This is all enabled by an atmospheric window in the 8–14 µm spectral region. The process requires both knowledge of the atmosphere and a well-characterized camera.

Acknowledgments

This section is derived from previous work by the author [7].

INFRARED CLOUD IMAGER HARDWARE OVERVIEW

Introduction

The following chapter describes the hardware systems developed for NASA Glenn Research Center in support of RF satellite propagation studies at two separate locations. The original design was constrained to be identical to legacy systems deployed at JPL (which I had built as an undergraduate), with the exception of environmental control. However, as problems were seen in the field at JPL, necessary design improvements were implemented. A CAD model of the newest ICI tube enclosure system is seen in Figure 3.1.

Figure 3.1: CAD drawing of GRC ICI system, Spring 2015

An ICI system consists of three main components: the camera, the support electronics, and the environmental control module. The camera system is housed in a weatherized enclosure designed for long-term deployment at a variety of sites.

Camera

The system used long-wave infrared cameras mounted behind germanium windows, each with an uncooled microbolometer focal plane. The cameras were chosen to provide a narrow field of view with high resolution across the ground-to-satellite atmospheric path. The uncooled microbolometer gave the camera a lower upfront cost while maintaining reasonable cloud detection capability. The system used FLIR Tau2 cameras (Figure 3.2). The camera core had an FPGA interface that performed data preprocessing to reduce noise and focal-plane non-uniformity. Image data were output via the LVDS protocol, and camera-specific data were output over an RS232 interface.

Figure 3.2: FLIR Tau2 camera.
Photo courtesy of Paul Nugent.

Microbolometer Focal Plane

A microbolometer focal plane can be thought of as an array of small resistors that are thermally coupled with the scene. As the resistive elements heat up or cool down because of a change in incident thermal radiation, their resistance changes. If a known voltage is applied across a resistive element, its current can be monitored, and by Ohm's law a change in current is inversely proportional to a change in resistance. This allows for a measurement of thermal radiation.

Figure 3.3: Microbolometer topology

In practice, the incident radiation is absorbed by a cavity that is matched to the optical bandwidth of interest [17]. One side of the cavity is a highly reflective substrate and the other side is a resistive element. The resistive element is mounted on conductive support arms approximately one fourth of the wavelength above the reflective substrate, usually 2.5 µm. Owing to physical manufacturing constraints, this distance can vary slightly from camera to camera. This physical limitation causes some uncertainty in detector responsivity, since the distance determines the depth of the cavity and the cavity determines the wavelength selectivity. Since the wavelength selectivity is particularly important for atmospheric remote sensing, this uncertainty was reduced through a relative spectral response measurement. Relative spectral response will be discussed in more detail in Chapter 5.

Lenses and Windows

Because of the long wavelengths involved in infrared imaging, LWIR cameras use specialty windows that transmit energy in this spectral band. Lenses consist primarily of germanium, and the external window for the camera is made of carbon-coated germanium. Although the window emits radiation that depends on its temperature, a correction for window emission can be applied when using IR windows in a LWIR system [26].
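A hedged sketch of such a window correction, assuming a gray window whose measured radiance is the transmitted scene radiance plus the window's own emission (reflection neglected; all coefficients are illustrative, not the values from Reference [26]):

```python
def correct_window(L_measured, L_window_emitted, t_window=0.95, e_window=0.04):
    """Recover scene radiance behind an IR window.

    Assumed model (reflection neglected):
        L_measured = t_window * L_scene + e_window * L_bb(T_window)
    All radiances in W/(m^2 sr); transmittance and emissivity are
    illustrative placeholders, not measured window properties.
    """
    return (L_measured - e_window * L_window_emitted) / t_window

# Hypothetical numbers: 30 W/(m^2 sr) measured through the window, with the
# window at a blackbody-equivalent band radiance of 120 W/(m^2 sr).
print(correct_window(30.0, 120.0))
```

Because the window emission term scales with window temperature, a practical correction needs a temperature sensor on or near the window, which is one reason the enclosure carried front-plate temperature sensors.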
The lens chosen for the GRC systems had a focal length selected to provide a narrow field of view, giving high spatial resolution in the expected front radiation lobe of the RF antenna. The chosen lens was a 25-mm, f/1.1 lens with a 13° × 10° field of view and a 0.680-mrad instantaneous field of view. Heritage ICI systems had a much larger field of view, which assisted in identifying clear sky. For a stationary observer on the ground looking toward zenith, radiance from a clear sky increases with zenith angle because the atmospheric path length increases with angle. This effect is illustrated in Figure 3.4 for a simple plane-parallel atmosphere. Since a narrow-field-of-view system will observe a narrower range of zenith angles than a wide-angle imager, this effect is minimized and the ability to classify clear sky is also reduced. However, near the horizon the higher angular resolution of a narrow field of view may provide a small advantage over a wider field (such as a fisheye lens field of view) because the airmass increases so rapidly there with zenith angle.

Figure 3.4: Air mass follows a predicted trend with zenith angle, here simplified as a secant.

Support Electronics Upgrades

Support electronics for the ICI system required significant upgrades in order to achieve both the level of communication needed with the camera and the required environmental control of the unit. A legacy support electronics rail is seen in Figure 3.5. Most of the electronics seen here were phased out with instrumentation upgrades.

Figure 3.5: JPL ICI legacy support electronics from 2014. Photo courtesy of Joseph Shaw

Camera Communication Module

The legacy camera communication module consisted of a Pleora® Ethernet control module that was modified by FLIR® to limit control of the LWIR camera to a few specific functions.
However, these functions were not adequate to perform the specialized operations needed for modified shutter control, so the Ethernet control module was replaced with a Pleora® GigE module that interfaced to the FLIR camera through an NWB Sensors® interface board. The GigE module had a smaller footprint than the Ethernet control module and was needed to provide full control over the FLIR camera. Figure 3.6 shows the modified communication module upgrade.

Figure 3.6: The GigE communication module, shown here attached to the back of a Tau2 camera, provides more camera control while reducing overall component footprint.

Microprocessor/Analog to Digital Converter

The legacy system used a BeagleBone® onboard microcontroller for temperature and humidity sensor data acquisition, which then communicated over Ethernet to the instrument computer. From long-term deployment at JPL's Table Mountain Facility, it was learned that power cycling would sometimes corrupt the microcontroller memory, resulting in a loss of environmental sensor data. To prevent microcontroller memory corruption, the microcontroller system was replaced by an ADAM® industrial analog-to-digital converter. Moving to an industrial component increased system reliability and reduced complexity. The Ethernet physical layer remained in place, but the Ethernet communication protocol was replaced by RS485.

Enclosure

Since the camera had a narrow field of view, a window could be used without limiting the view of the camera. This was advantageous for several reasons. Since the instrument would be subject to both blowing sand and corrosive sea-water conditions, a window would protect the camera from the elements. A hatch system could have been used, such as the hatch used in the ICI3 system deployed to Barrow, AK, but it would not provide protection from the blowing desert sand whenever it opened.
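As an illustration of the environmental-sensor readout path, a sketch of converting a raw A/D reading into degrees Celsius (the 16-bit range and temperature span here are hypothetical, not the configuration of the deployed ADAM® module):

```python
def counts_to_celsius(counts, counts_full_scale=65535, t_min=-40.0, t_max=85.0):
    """Linearly map a raw A/D reading onto the sensor's temperature span.

    Hypothetical 16-bit channel spanning -40..85 C. A real deployment would
    use the converter's configured input range and the temperature sensor's
    own calibration curve rather than this simple linear mapping.
    """
    frac = counts / counts_full_scale
    return t_min + frac * (t_max - t_min)

print(counts_to_celsius(0))      # -> -40.0
print(counts_to_celsius(65535))  # -> 85.0
```

Keeping this conversion in the instrument computer, with the industrial converter doing only the digitization, is part of what made the RS485-based replacement simpler and more robust than the microcontroller it replaced.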
The legacy system deployed at JPL had a corrosion-resistant stainless steel tube enclosure with a carbon-coated germanium window. This enclosure was chosen for the GRC systems as well, specifically to provide protection both from the corrosive environment in the high Arctic and the sandy conditions in the desert. The enclosure was treated with a powder-coat finish to eliminate peeling and increase container survivability. Although the enclosure originally seemed ideal for long-term deployment, it had several disadvantages. The enclosure was made from a stainless steel tube, alloy 636. This specific alloy is difficult to machine, and the cylindrical shape was not conducive to modification. The tube was also notoriously difficult to work with in the field. If the system needed to be checked for any reason, the tube would need to be taken inside and dismantled. This became increasingly difficult as additional components were packed into the tube.

Figure 3.7: JPL tube enclosure mounted to cooling box. Paint can be seen peeling from the windowed flange due to a previous deployment. The tube enclosure has been refinished with powder coat for longevity.

Temperature sensors on the front plate prevented quick access to key components, and the optical window would need to be handled every time the system was taken apart. This resulted in long maintenance times, even for relatively simple tasks. The tube also constrained the maximum efficiency of the thermal control system, due to constricted air flow and thermal sink separation. The stainless steel tube seen in Figure 3.8 does not provide enough thermal isolation from the ambient environment to allow for adequate cooling. Due to the small size of the chamber, additional insulation inside the tube would constrict airflow from the cooling module further.

Figure 3.8: Svalbard and White Sands ICI systems deployed at Montana State University. Photo courtesy of Joseph Shaw.
Environmental Module Upgrades

Two instruments were developed: one for deployment in the high Arctic, the other for a desert site. Each of these sites posed significant challenges for environmental control. Svalbard, Norway was the chosen Arctic site, with temperatures ranging from 20 °C to -20 °C. Since Svalbard is an island in the high Arctic, the instrument was expected to experience periodic corrosive salt-water spray as well as periods of low temperature. From multiple Barrow deployments, the effects of salt-water spray on systems are well known. The two-year ICI deployment in Barrow showed major corrosion issues, as well as problems with peeling paint. If not planned for, these problems could have detrimental effects mid-deployment. White Sands, NM was the chosen desert site, with temperatures ranging from 45 °C to -7 °C. Blowing desert sand was expected at the site, as well as periods of high temperature. Due to the drastically different environmental scenarios for each system, modularity was implemented for the environmental control. The core system would consist of the camera and support electronics. Either a heating or a cooling module could then be added based upon need.

Environmental Control

Environmental control of the enclosure originally consisted of an on/off temperature-controlled relay driving a small resistive heater. In order to control both heating and cooling from the same controller, a more advanced controller needed to be used. An ADAM® PID controller replaced the legacy control system, primarily since it could share the same RS485 communication interface as the analog-to-digital converter.

Heating Module

Due to the cold temperatures involved at Svalbard, a high-capacity heater was chosen for this system. The heater was made by Hi-Heat Industries® from flexible silicone. The heating element can be seen in initial testing in Figure 3.9b.
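The control loop such a PID controller runs can be sketched in miniature. The gains, heater power limit, and toy thermal plant below are all illustrative, not the deployed ADAM® configuration:

```python
class PID:
    """Textbook discrete PID: positive output drives heating."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy first-order plant: enclosure in -10 C ambient, heater drives it to 20 C.
pid = PID(kp=8.0, ki=0.1, kd=1.0, setpoint=20.0)
temp = -10.0
for _ in range(500):
    power = max(0.0, min(200.0, pid.update(temp, dt=1.0)))  # clamp to 0..200 W
    temp += 0.01 * power - 0.02 * (temp - (-10.0))          # heating vs. ambient loss
print(round(temp, 1))  # settles near the 20 C setpoint
```

The integral term is what lets the loop hold the setpoint against a steady ambient heat loss, which an on/off relay can only approximate by cycling.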
Initial tests showed the heater was able to compensate for ambient temperature fluctuations down to -18 °C in a snowstorm, even without insulation.

(a) 200 W silicone heater tested with enclosure in -18 °C weather. (b) Front end of enclosure with camera inside insulation and copper heating shield.
Figure 3.9: Heater prototype tests

Cooling Module

Due to the remote deployment site of the instrument, the cooling module needed to require as little support equipment as possible. This ruled out the use of freon, other cooling liquids, or compressed air. Instead, thermoelectric Peltier devices were used to provide cooling and heating using only electrical power. Heat sinks were used on either side of the device to dissipate heat and transfer energy from one side to the other. A Peltier device is rated to give a specific temperature difference between its two plates with the application of power.

Figure 3.10: Cooling option prototype CAD drawing, here shown in blue. Systems were designed for modularity.

(a) Visible image of stainless steel tube with attached box cooling system. Interior temperatures were monitored. (b) LWIR image of cooling system showing relative temperature. Black regions are the coldest in the scene.
Figure 3.11: Prototype tube ICI cooling tests

Because of the cost and difficulty of machining 636 stainless steel, it was decided to use the end of the tube for heat exchange. An air-to-air cooler was purchased and ruggedized for outdoor use. The calculated temperature difference between the outside and inside of the chamber was 20 °C with a 40 °C ambient temperature. Original cooling tests showed that the cooling system was drawing heat out of the tube and was able to overcome the power dissipated by the electronics. In Figure 3.11 the lens at the end of the tube is black, showing that the window is the coldest temperature (brightness temperature) in the scene.
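A chamber-temperature estimate of this kind can be reproduced with a simple steady-state heat balance. All parameter values below are illustrative placeholders, not the actual electronics load, TEC rating, or wall resistance of the deployed system:

```python
def chamber_temperature(t_ambient, p_electronics, q_tec, r_wall):
    """Steady-state interior temperature of the enclosure.

    Assumed balance: heat pumped out by the TEC equals electronics
    dissipation plus conduction leaking in through the wall:
        q_tec = p_electronics + (t_ambient - t_in) / r_wall
    Units: temperatures in C, powers in W, r_wall in C/W.
    """
    return t_ambient - r_wall * (q_tec - p_electronics)

# Hypothetical: 40 C ambient, 15 W of electronics, TEC pumping 55 W,
# wall thermal resistance 0.5 C/W.
print(chamber_temperature(40.0, 15.0, 55.0, 0.5))  # -> 20.0 (a 20 C drop)
```

The balance also makes the insulation problem explicit: a small r_wall (a poorly insulated stainless tube) forces the TEC to pump far more heat for the same interior temperature drop.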
The cooling system and tube enclosure were tested with a solar simulator to simulate thermal loading at an ambient temperature of 40 °C. Without a sun shield, the camera temperature stabilized at 37.2 °C, which meant the TEC was able to dissipate the heat generated in the enclosure by electronics and any thermal loading from the sun. The thermal loading was expected to be reduced by painting the enclosure white and adding a sun shield. Since the power from the electronics was being dissipated by the TEC, the TEC would have been adequate if insulation were added to the exterior of the tube and box assembly. Any future systems should have isolated thermal layers between the camera and the external housing. As long as we were constrained to use the stainless steel enclosure, there existed multiple cooling efficiency losses between the cooling module and the camera. The main efficiency loss was due to restricted airflow between the heat exchanger and the camera. A cooling manifold was designed to optimize airflow across the heat sink, and electrical components were taken out of the tube and inserted into the cooling compartment. Packing the electronics into the cooling compartment provided better airflow to the camera, but made the electronics almost impossible to maintain. Any troubleshooting of the system required many hours of labor just to take the system apart. Although these modifications slightly improved performance, a better solution would have been to move to a different enclosure and abandon the tube system altogether.

Proposed Cooling System

When concerns were raised with NASA GRC engineers about whether the tube system could provide adequate cooling in desert conditions, a legacy enclosure developed by NASA Glenn RF engineers was offered. The legacy enclosure had been deployed at White Sands to provide temperature stability for RF electronics at 50 °C.
The enclosure was an insulated fiberglass 14"×16"×10" box with the back side replaced by a large heat sink. Peltier devices were linked between the heat sink and an interior bottom plate. Components could then be attached to the bottom plate and cooled or heated conductively. The camera could then be conductively coupled to the cooling plate, reducing the efficiency loss introduced when converting from conduction to convection. The White Sands system is currently being retrofitted at Glenn Research Center with this enclosure.

NASA GRC Deployment

The Svalbard ICI tube system was deployed at NASA Glenn Research Center in January 2016 in a heating configuration. The control software was written in MATLAB; its GUI can be seen in Figure 3.12a. The software reports sky radiance, real-time processed attenuation, and cloud classification. Figure 3.12b shows a visible image of the sky near the time seen in the GUI.

(a) Svalbard ICI GUI showing cloud attenuation over time (b) Visible image looking along enclosure during same time period
Figure 3.12: Operational field deployment at Glenn Research Center in Cleveland, OH.

Conclusion

This chapter discussed the hardware improvements for the GRC ICI systems in comparison to the JPL ICI systems deployed a year prior. Most of the improvements to the system were necessitated by problems associated with firmware upgrades to the Tau2 cameras, which will be discussed in Chapter 6. The tube enclosure was not optimal for thermal control. Although the TEC was able to compensate for the thermal dissipation of the electronics, space did not allow for thermal isolation from the ambient environment. Future systems should always be well insulated when thermal control is used. When space and air flow are constricted in an enclosure, thermal control should be accomplished through conduction instead of convection. This will allow for a more efficient transfer of energy from the cooler to the camera system.
If a conductive thermal control system is used, the entire camera should be enclosed to prevent thermal gradients across the FPA. If adequate thermal control is used, thermal drift of the FPA should be minimized. This should simplify the calibration significantly and reduce many of the problems seen when the camera core gets too hot. NASA GRC is in the process of fabricating a box enclosure that should improve the temperature stability of the camera core significantly, while at the same time allowing for easier maintenance with a larger enclosure.

RADIOMETRIC CALIBRATION BACKGROUND

Introduction

The calibration process for microbolometer cameras has been well documented in previous work at the Optical Remote Sensor Laboratory (ORSL) at Montana State University [23–25]. However, a brief review is provided here for context. This chapter covers the building blocks and basic formulas for calibration of thermal imagers.

Initial Assumptions

A perfect blackbody will emit spectral radiance according to the Planck radiation equation,

L_\lambda^{BB} = \frac{2hc^2}{\lambda^5} \, \frac{1}{e^{hc/(\lambda k T)} - 1} ,   (4.1)

where L_\lambda^{BB} is spectral radiance [W/(m² sr) per unit wavelength], c is the speed of light in vacuum (2.998×10⁸ m/s), h is Planck's constant (6.626×10⁻³⁴ J s), k is Boltzmann's constant (1.381×10⁻²³ J/K), and T is the absolute temperature of the blackbody [K]. The radiant flux incident on a detector in a radiometer viewing a blackbody source that overfills the radiometer's field of view can be calculated from

P_d = A_d \, \Omega_{ep,fp} \int_{\lambda_1}^{\lambda_2} L_\lambda^{BB} \, T_\lambda^{l} \, T_\lambda^{a} \, R_\lambda^{f} \, d\lambda ,   (4.2)

where A_d is the active area of the detector, \Omega_{ep,fp} is the projected solid angle of the entrance pupil as seen from the focal plane, T_\lambda^{l} is the spectrally dependent transmittance of the lens, T_\lambda^{a} is the spectrally dependent transmittance of the atmosphere between the radiometer and the blackbody, and R_\lambda^{f} is the relative spectral response of the focal plane.
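Equations (4.1) and (4.2) can be evaluated numerically. A sketch integrating Planck spectral radiance over an idealized 8-14 µm band, with unit lens and atmosphere transmittance and a flat (unit) spectral response, so only the integral itself is exercised:

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light in vacuum [m/s]
K = 1.381e-23   # Boltzmann constant [J/K]

def planck_radiance(lam_m, temp_k):
    """Blackbody spectral radiance L_lambda [W/(m^2 sr m)], Equation (4.1)."""
    return (2.0 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * K * temp_k))

def band_radiance(temp_k, lam1=8e-6, lam2=14e-6, n=4000):
    """Trapezoid-rule integral of L_lambda over [lam1, lam2]: an idealized
    Equation (4.2) with unit transmittances and unit spectral response,
    per unit detector area and solid angle."""
    lam = np.linspace(lam1, lam2, n)
    vals = planck_radiance(lam, temp_k)
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(lam)))

# A 300 K blackbody integrated over 8-14 um gives roughly 55 W/(m^2 sr),
# about a third of its total radiance -- the atmospheric-window band.
print(band_radiance(300.0))
```

In the real calibration the flat response is replaced by the camera's measured relative spectral response, which is exactly why the RSR uncertainty analyzed in Chapter 5 propagates into radiance error.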
The signal from the analog-to-digital (A2D) converter inside the camera is then given by

D\# = G_i P_d + D_o ,   (4.3)

where D\# is the digital number output of the A2D, G_i is the temperature-independent gain, and D_o is the digital number offset.

Simple Camera Calibration

Since Equation (4.3) shows a linear relationship between detected flux and digital number, a linear calibration can be derived. The simplest camera correction uses two blackbody source temperatures, one hot and one cold. A camera correction is then calculated using Equations (4.4) through (4.6), visualized by Figure 4.1:

L_s = g \, D\# + d ,   (4.4)

g = \frac{L_{HOT} - L_{COLD}}{D_{HOT} - D_{COLD}} ,   (4.5)

d = L_{COLD} - g \, D_{COLD} .   (4.6)

Figure 4.1: Simple two-point radiometric camera calibration

Temperature-Dependent FPA Calibration

Since a microbolometer focal plane array (FPA) experiences resistance change caused by both incident scene radiation and changes in its own temperature, it should not be surprising that this detector is extremely sensitive to its own temperature. In the case of cloud imaging, the radiation from the camera body is usually larger than the radiation from the scene. The basic temperature model for a microbolometer FPA is a resistive divider, where the resistance of the detector changes with signal from the blackbody target, and the reference resistor changes resistance with the camera body temperature. A more thorough model of systematic contributors to this temperature dependence can be found in References [17,27]. Dr. Shaw's group at Montana State University has developed methods to compensate for a microbolometer camera's FPA-temperature response, thereby stabilizing the camera so that signal variations can be related only to changes in the scene radiation. In the primary method, the FPA-temperature response is compensated through a regression based on one linear term and three nonlinear terms.
This correction is applied to the camera's digital numbers before the stabilized digital numbers are used in the linear regression shown in Equation (4.4). The correction takes the form

DN_C = \frac{DN - b(\Delta T)}{1 + m(\Delta T)} ,   (4.7)

where

b(\Delta T) = b_1 \Delta T + b_2 \Delta T^2 + b_3 \Delta T^3 - \sigma_1 ,   (4.8)

and where DN_C is the corrected digital number, DN is the uncorrected digital number from the FPA, b(\Delta T) is the temperature-dependent bias correction, m(\Delta T) is the temperature-dependent gain correction, \Delta T is the difference in FPA temperature from 25 °C, b_1, b_2, b_3 are constants derived from the temperature-difference matrix inversion, and \sigma_1 is a constant bias term. Further detail on the derivation of Equations (4.7) and (4.8) may be found in Reference [27].

Conclusion

The Optical Remote Sensor Group at Montana State University has developed algorithms that thoroughly correct for microbolometer temperature drift. Once the greatest contributor of system error, this is now one of the smallest contributors to error if adequate information is collected from the camera system. A more thorough discussion of system error contributions is found in Chapters 5 and 6.

LWIR CAMERA RELATIVE SPECTRAL RESPONSE

Introduction

This chapter discusses how the relative spectral response (RSR) of the camera can cause uncertainty in cloud characterization with infrared cloud imagers. The spectral response of the camera is a product of the detector's spectral response function and the spectral transmittance of the optics and filters. For the purposes of this discussion, it is assumed that the spectral response of the camera does not change with temperature. Uncertainty in the RSR can cause error when calculating integrated scene radiance. In summer 2016 I worked with colleagues at the Space Dynamics Laboratory (SDL) in Logan, Utah, to measure the spectral response of two different LWIR cameras.
SDL has performed spectral response measurements for infrared space systems such as the Wide-field Infrared Survey Explorer (WISE), the Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI), and the Radiation Budget Instrument (RBI) telescopes, and has developed a technique to measure the spectral response of detectors using a Fourier transform spectrometer [28,29]. The measurements taken at SDL can be seen in Figure 5.1. The uncertainty for the Tau2 and Photon camera spectral response measurements was estimated to be ±10%. Using the measured spectral response as the known RSR of a simulated camera, the effect of a shift of ±10% was then analyzed to determine the corresponding calibration error in radiance.

Figure 5.1: FLIR camera RSRs measured at SDL

Methodology

The following is a sensitivity analysis of the error in calculated radiance that is caused by uncertainty in the relative spectral response function, including RSR-induced errors in the calibration coefficients. The calibration coefficients are determined from (in this simple case) a two-point calibration using blackbody targets. The error in the derived coefficients is due to determining integrated radiance for the blackbody targets when using an uncertain relative spectral response. A flowchart of the overall simulation can be seen in Figure 5.2.

Figure 5.2: RSR simulation flowchart

This process assumes that the emissivity of the blackbody is 1 and that atmospheric transmittance is 1, so that the radiance from the blackbody can be characterized perfectly by the Planck formula. The reference spectral responsivities of the simulated cameras are the relative spectral responses of the Tau2 and Photon cameras measured at SDL. The camera is assumed to have no noise and no temperature dependence for this simulation. Relative spectral responses within the ±10% bounds were simulated to give 1000 statistically relevant RSRs.
These RSRs were then used to simulate 1000 different calibrations. Once the simulated calibrations were completed, measured cloud spectral radiance scenes were used to test the error between known scene integrated radiance and calculated scene integrated radiance within the error bounds of the relative spectral response measurement. The RSR-induced calibration errors arise from incorrect calculations of how the camera responds with wavelength. For a simple two-point calibration, one blackbody target is used at two different temperatures. This incident radiation is expected to follow the Planck radiation formula for spectral radiance, already shown in Equation (4.1). The uncertainty in the spectral response causes an error when it is integrated with the expected spectral scene radiance to produce integrated scene radiance. This integrated scene radiance is then used to determine the linear temperature-independent gain of the camera. To determine how this uncertainty in calibration coefficients affects measurements, a simulated camera was used. This perfect camera was known to have the response g and d, which were taken from an actual Tau2 calibration to give realistic digital numbers. G and D were then calculated according to Equations (5.1) and (5.2), and the known digital numbers of the simulated camera were then calculated according to Equation (5.3):

G = \frac{1}{g} ,   (5.1)

D = -\frac{d}{g} ,   (5.2)

D\# = G \int_{\lambda_1}^{\lambda_2} L_s(\lambda) \, R(\lambda) \, d\lambda + D ,   (5.3)

where L_s(\lambda) is the spectral scene radiance and R(\lambda) is the true relative spectral response of the camera. This gave simulated digital numbers in response to known scene spectral radiances. These simulated digital numbers were then used to calculate integrated radiance using erroneous calibrations. The error between the known integrated radiance and the simulated integrated radiance then gave an estimate of the radiance uncertainty arising from the relative spectral response uncertainty.
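The simulation pipeline described above (perturb the RSR, recalibrate, compare retrieved and true integrated radiance) can be sketched as follows. The Gaussian stand-in RSR, the gain and offset, and the uncorrelated per-point perturbation model are all illustrative assumptions, so the resulting error statistics will not reproduce the figures from the actual study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" RSR: a Gaussian stand-in for the SDL-measured curve.
lam = np.linspace(7e-6, 15e-6, 400)
rsr_true = np.exp(-(((lam - 11e-6) / 2e-6) ** 2))

def planck(lam_m, t_k):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2.0 * h * c**2 / lam_m**5) / np.expm1(h * c / (lam_m * k * t_k))

def band_integrate(spectral, rsr):
    """Integrated radiance seen through a given RSR (trapezoid rule)."""
    v = spectral * rsr
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(lam)))

# Simulated camera (Eq. 5.3): digital numbers generated with the TRUE RSR.
G, D0 = 50.0, 1000.0  # illustrative gain/offset, not actual Tau2 values
def digital_number(t_k):
    return G * band_integrate(planck(lam, t_k), rsr_true) + D0

def calibrate(rsr):
    """Two-point calibration (Eqs. 4.4-4.6) using a possibly-wrong RSR."""
    t_cold, t_hot = 280.0, 330.0
    L_cold = band_integrate(planck(lam, t_cold), rsr)
    L_hot = band_integrate(planck(lam, t_hot), rsr)
    D_cold, D_hot = digital_number(t_cold), digital_number(t_hot)
    g = (L_hot - L_cold) / (D_hot - D_cold)
    return g, L_cold - g * D_cold

errors = []
for _ in range(200):
    perturbed = rsr_true * (1.0 + rng.uniform(-0.10, 0.10, lam.size))  # +/-10%
    g, d = calibrate(perturbed)
    L_true = band_integrate(planck(lam, 260.0), rsr_true)  # a cold test scene
    L_calc = g * digital_number(260.0) + d
    errors.append(abs(L_calc - L_true) / L_true)
print(f"median radiance error: {100.0 * np.median(errors):.3f}%")
```

Because the per-point noise here averages out over the band, this sketch understates the error relative to correlated or shifted RSR perturbations; the structure of the perturbation model matters as much as its ±10% amplitude.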
The simulated-error RSRs were used to calculate the integrated radiance on the focal plane array when viewing a perfect blackbody by multiplying the blackbody spectral radiance by the simulated RSR. The spectral radiance was then integrated, and the resulting integrated radiances were used as data points in a two-point calibration to derive g_n and d_n for each calibration. The values g_n and d_n were then used to calculate the integrated radiance L_{sim} for each scene, as shown in Equation (5.4). The simulated scenes were cloud spectra collected at the Atmospheric Radiation Measurement (ARM) program site in Barrow, AK from July 2012 to July 2014.

L_{sim} = g_n \, D\# + d_n .   (5.4)

The radiance difference between the known integrated radiance of the scene, obtained by integrating spectral radiance with the known RSR, and the calculated integrated radiance simulated with the perturbed RSR can then be written as

\Delta L = L_s - L_{sim} .   (5.5)

Simulated Tau2 Camera RSR

The spectral uncertainty was calculated using 1000 perturbed RSRs, with the SDL RSR as a reference. The perturbed RSRs can be seen in Figure 5.3. The radiance uncertainty for these simulated data is shown in Figure 5.4b to be 3.5% of the scene radiance.

Figure 5.3: Tau2 camera measured and simulated RSRs

The simulated scenes were built from 2.4 million spectra of the Arctic atmosphere. The scenes varied from 5 to 45 W/(m² sr) integrated scene radiance. The data shown in Figure 5.4a show that the strongest bias between known and simulated data occurs at the highest scene radiance values, because for a linear correction a small change in gain produces the largest error at large digital numbers. In order to compare uncertainty equally across the span of scenes, error in terms of percent of scene radiance is shown in Figure 5.4b, where the uncertainty for the coldest scene was calculated to be 3.5% for the Tau2, or 0.175 W/(m² sr) uncertainty in a 5 W/(m² sr) scene.
(a) Tau2 known vs calculated integrated scene radiance. (b) Tau2 camera uncertainty in terms of percent scene radiance.
Figure 5.4: Tau2 calibration error due to RSR uncertainty.

Simulated Photon Camera RSR

I also worked with colleagues at SDL to measure the RSR of a FLIR Photon camera. Its spectral response is different from the Tau2's, having a slower falloff at longer wavelengths and less structure within the 10-12 µm region. The reference and simulated RSRs can be seen in Figure 5.5. Similar to the Tau2, the error in terms of radiance is largest for the hottest scenes, as shown in Figure 5.6a. Since the scenes of most interest are clear sky and thin clouds, the uncertainty is presented as a percentage of scene radiance. Despite the Photon RSR having significantly less structure within the 10-12 µm region, the calibrated radiance uncertainty arising from the spectral response uncertainty for a cold scene is 4.7% in a 5 W/(m² sr) scene, seen in Figure 5.6b. This uncertainty is 14% larger than that of the Tau2 for a cold scene. This could be due to the Tau2 having a wider response than the Photon, where having more area under the curve increased the calculated integrated radiance more than for the Photon. This wider response may have helped calibration uncertainty, but may also mean more error in atmospheric retrieval due to the temperature of CO2 for the Tau2.

Figure 5.5: Photon camera measured and simulated RSRs

Deviation from Manufacturer's RSR

Results so far have focused on the error due to the uncertainty within the RSR measurement. To see how well previous systems were calibrated, the error in radiance was calculated between the SDL RSR and the typical RSR from the manufacturer. Figure 5.7 shows the RSR measured at SDL and the RSR provided by the manufacturer as a typical curve for both cameras. The same calibration error analysis was done using the supplied FLIR RSRs for the Photon and Tau2 cameras.
As expected, the error due to the incorrect RSR is larger than when perturbing the measured spectrum within its uncertainty bounds. The difference in integrated radiance can be seen in Figure 5.8. Error for the Photon camera was greater than 25% of scene radiance at 5 W/(m² sr), while error for the Tau2 was 10% of scene radiance at 5 W/(m² sr).

(a) Photon known vs calculated integrated scene radiance (b) Photon camera uncertainty in terms of percent scene radiance.
Figure 5.6: Photon calibration error due to RSR uncertainty

When scene radiances were plotted against each other in Figure 5.8, one can hardly distinguish between the two for the Photon, while a definite bias can be seen in the Tau2 data. Note that the Tau2 gain crosses the 1:1 line in Figure 5.8b. This causes a zero percent error at 10 W/(m² sr) in Figure 5.9b, which quickly grows to a large error on either side of this null point.

Conclusion

Knowing the RSR of the infrared cameras used in cloud imaging allowed for an analysis of how much of the calibration uncertainty arises from this parameter. The Tau2 uncertainty due to RSR for cold scenes was found to be 0.175 W/(m² sr), and the Photon RSR uncertainty was found to be 0.235 W/(m² sr) for cold scenes, assuming a ±10% RSR measurement uncertainty. In other words, a ±10% RSR measurement uncertainty caused a 4.7% calibration uncertainty for the Photon and a 3.5% calibration uncertainty for the Tau2 for cold scenes.

(a) Photon measured vs generic manufacturer RSR (b) Tau2 measured vs generic manufacturer RSR
Figure 5.7: RSR measurements vs manufacturer-specified spectra

Both generic spectral responses from FLIR were found to be outside the uncertainty bounds of the SDL measurement. The differences between the SDL and FLIR RSRs showed a high sensitivity to knowledge of the RSR for very cold scenes. In other words, not measuring the RSR of the Photon camera would give a 33% error for clear-sky scenes, and a 15% error for the Tau2 camera.
This points toward a need to understand the RSR better than the generic production-run value given by the manufacturer if the characterization of thin clouds is to be prioritized.

(a) Photon RSR scene radiance comparisons (b) Tau2 RSR scene radiance comparisons
Figure 5.8: Scatter plots of cloud scene radiances calculated with the SDL-measured and manufacturer-specified RSR spectra for (a) Photon and (b) Tau2 cameras.

(a) Photon RSR calibration error (b) Tau2 RSR calibration uncertainty
Figure 5.9: Uncertainty in percentage of scene radiance due to generic RSR from manufacturer

TAU2 CHARACTERIZATION

Introduction

This chapter is a discussion of the characterization of a FLIR Tau2 camera. Multiple issues were found in the calibration of this camera, even with respect to previous cameras of the same type. This chapter will discuss these issues and show the final calibration and characterization results. In 2012, when the first Tau2 cameras were purchased, the Tau2 was the next-generation microbolometer LWIR camera from FLIR. Since the MSU ICI systems of the mid-2000s had been using FLIR Photon cameras, it was natural to progress to the Tau2. The Tau2 boasted a smaller pixel size with 17 µm pitch pixels, and was a smaller, more lightweight camera. The largest driving factor, however, was that FLIR Photon cameras were no longer being produced. This necessitated changing to a different camera core. Even though the spatial resolution of the Tau2 was higher, it was not necessarily a better camera for cloud imaging. The smaller pixel pitch and larger focal length meant a smaller instantaneous field of view, decreasing radiometric throughput and therefore the signal on the detector. This led to an increase in noise with respect to the signal, or a lower signal-to-noise ratio (SNR). In 2013, two ICI systems with Tau2 cameras were deployed to NASA JPL. A year later, two more Tau2 cameras were purchased for NASA GRC.
Between the two purchases, FLIR upgraded the firmware of the Tau2 cameras. The following are findings from calibrations and deployments completed after the firmware upgrade.

Low-signal Suppression and Shutter Offset Correction

When the GRC system was deployed at MSU during the initial testing period, the camera would get quite hot without the assistance of a cooling system. Once the focal plane temperature reached approximately 34 °C, the camera would report digital values of zero. The firmware of the Tau2.7 (the upgraded firmware version of the Tau2) had optimized the preprocessing algorithms internal to the camera for hot scenes. Since the temperature of the scene being analyzed was significantly below the temperature of the camera, the data were being filtered out with the noise, resulting in blank sky images. The imager also produced noticeable jumps in calculated radiance in response to small changes in certain camera temperatures. After discussion with FLIR engineers, it was discovered that using the Tau2.7 in manual flat-field mode (as we were) would automatically correct the digital number output in response to camera temperature according to factory-set tables. This complicated the calibration significantly.

A flat-field correction (FFC) is a non-uniformity correction that FLIR applies using proprietary software. It fixes pixel-to-pixel deviations when viewing a perfectly flat scene. Uncooled microbolometers will both drift and lose contrast over time, so FFCs need to be performed approximately every five minutes to correct these fluctuations. With the software update, internal correction tables were applied to the FFC algorithm. This linked the shutter temperature to the FFC and could only be disabled by applying an external FFC instead of a manual FFC. Previous systems used a manual FFC, which allowed us to control when the camera's FFC happened.
The early firmware versions did not preprocess gain and offset settings according to internal tables for manual FFCs, allowing us to rely entirely on our custom calibration.

To resolve this problem, an external FFC was used. This meant that either an external shutter was used, or the internal camera shutter was commanded to close so that an FFC could be performed as if the shutter were external to the camera. With an external FFC, the internal tables were not used in the non-uniformity correction algorithm, so FFC behavior was similar to that of previous systems.

External Flat-Field Correction

It was discovered through experimentation that clouds could be seen at high focal plane temperatures by performing an external FFC on the sky. This very cold scene caused the camera to increase its internal offset settings to the point where clouds could be seen above the noise. This effect can be seen in Figure 6.1. The images on the left show what happened when a manual FFC was used. Directly after the image on the left was taken, the camera was switched to an external FFC on the sky. The images on the right were recorded after the external FFC was applied. These images show fixed-pattern noise, but the skilled observer can see cloud patterns above the noise. In this case, the clouds produced a very small signal. The external FFC algorithm effectively subtracts the atmospheric signal (since the algorithm believes the camera is looking at a flat scene), and differences from this flat scene are seen in subsequent images.

Using this method is very similar to the scene-to-scene subtraction used in some of our cloud detection algorithms. This technique may be beneficial in the future for detecting clouds with low optical depth. By doing a flat-field correction on the sky, a scene subtraction is accomplished. Assuming clear sky can be observed in the frame, anything different in the next frame should be radiance from clouds as they move through the field of view.
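The scene-subtraction idea can be illustrated with synthetic frames (all values below are invented for illustration): a static background plus fixed-pattern noise cancels exactly in the frame difference, leaving only what changed between frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic frames: a flat clear-sky background plus fixed-pattern noise, with
# a faint "cloud" appearing only in the second frame.
fixed_pattern = rng.normal(0.0, 0.5, size=(48, 64))
clear_frame = 100.0 + fixed_pattern
cloud_frame = clear_frame.copy()
cloud_frame[10:20, 20:40] += 0.8   # weak cloud signal, comparable to the noise

# Scene-to-scene subtraction cancels the static background and fixed-pattern
# noise, so even a signal near the noise floor stands out in the difference.
difference = cloud_frame - clear_frame
cloud_mask = difference > 0.4
```

In practice the background also drifts between frames, so the residual is not exactly zero, but changes well below the single-frame fixed-pattern noise can still be recovered this way.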
Cloud determination would then need to be accomplished through image-to-image subtraction. An advantage of this technique is that self-emission from the camera body, camera optics, and enclosure window, as well as atmospheric emission, are all removed from the scene, so only changes in signal due to the changing scene are observed.

Figure 6.1: Manual FFC vs external FFC at different FPA temperatures. (a) Manual flat-field correction, 35 °C; (b) external flat-field correction, 34 °C; (c) manual flat-field correction, 54 °C; (d) external flat-field correction, 51 °C.

The main disadvantage of this technique is that the normal calibration is lost when the gain and offset values change to unknown values. However, an in-situ calibration may still be accomplished by taking an image of the shutter and doing a throughput calculation according to the MSU-developed method discussed in Reference [23]. This idea should be pursued in future studies, especially if the systems are meant to be optimized for sub-visual cirrus cloud detection.

Another disadvantage of this technique is that every time an image is taken, an external FFC must be done the frame before. This is a problem because our previous experience showed that the shutter will wear out rapidly if used more often than approximately once every 5 minutes. If the scene does contain clouds, the scene will change over time as the clouds move across it. Such a scene is obviously not flat, so the cloud will show up as a negative radiance as scene-to-scene subtraction is performed. If the cloud moves and is replaced by another cloud of similar optical depth, the scene will not seem to change. A compromise may be to use the FFC from the shutter for the first 2.5 minutes, then flat-field on the sky and take data for 2 minutes more.
When the shutter is dropped at the end of the 2-minute sky external FFC data set, an image of the shutter can be taken before the next FFC is performed to establish a throughput calibration. This proposed collection routine is visualized in Figure 6.2.

Figure 6.2: Proposed external FFC data collection routine

External FFC of a False-Temperature Internal Shutter

Since it is desirable to have an absolute calibration, another method was pursued. The internal gain and offset of the camera were observed to change depending on the scene observed. If the camera performed an external FFC on the sky, which is much colder than the actual shutter, the gain and offset were automatically changed to allow for small cloud signals. Since this change depended on the algorithm knowing the temperature of the scene, changing this temperature input should also change the internal offset and gain. NWB Sensors® had developed a technique of doing an external flat-field correction on the shutter while fixing the shutter temperature to a set value so that the internal gain and offset tables of the camera would not change. I learned of this technique through discussion with NWB engineers [30] and decided to try it for cloud imaging applications. Instead of fixing the shutter temperature near the camera temperature, I told the camera that the shutter temperature was actually much higher than it really was. I first tried telling the camera that the shutter temperature was 80 °C. This moved the offset of the camera up enough to allow the cloud data to rise above the threshold of the noise-compensation algorithm, while keeping the internal offset and gain constant throughout the measurement. Data could then be calibrated through MSU's standard algorithm.

Tau2 Calibration Results

Using MSU's standard algorithm for FPA temperature correction and digital-number-to-radiance calibration, the data were processed from the experiment described next.
A large-area blackbody source was set to six different temperatures while the camera sat inside a thermal chamber set to "soak" the camera at a steady temperature. This was repeated for five camera temperatures. The camera was then put in a ramp environment, where the environmental chamber temperature (and hence the camera temperature) was ramped up and down while the camera viewed the source. The results of this process are shown in Figure 6.3. The blue line shows the blackbody source radiance, which was calculated by integrating the Planck function over the camera's relative spectral response function for the blackbody source temperature. The yellow-orange line shows the camera's output converted to radiance with a standard two-point radiometric calibration, but without any correction for the drifting FPA temperature. The red line shows the radiance values obtained from the two-point calibration after the FPA-temperature variation compensation was applied. The blackbody radiance was calculated, and the compensation mechanisms then corrected for the radiance error. The uncertainty of this correction can be seen in Figure 6.4.

Figure 6.3: Corrected radiance for a Tau2 camera using the MSU standard algorithm over time, in comparison with uncorrected and source radiance. Images were collected every 20 seconds. Source: Reference [7]

Figure 6.4: Distribution of error in calibrated radiance relative to the source radiance. Source: Reference [7]

Noise-equivalent Radiance

Sensitivity of the ICI system depends on the noise-equivalent radiance (NER) at the focal plane array. Previous ICI systems have been able to discriminate sub-visual cirrus clouds above pixel noise. A subvisual cloud can be defined as "any cloud that cannot be detected" by a lidar system or otherwise, and has been roughly approximated as a cloud optical depth of 0.03 [31]. NER is calculated by determining the pixel-to-pixel variation over time.
This is done while the camera views a flat scene, and the images are taken as quickly as possible to minimize any thermal changes within the camera over the image collection period. For this test, the camera was at room temperature and 200 images were taken as quickly as possible (60 Hz). Each pixel was then evaluated over the 200 images. The standard deviation in digital number was calculated for each image set to determine pixel-to-pixel noise. Image sets were taken every 5 minutes for one hour. The pixel noise was then converted to radiance after the calibration was completed, and the mean of this noise in radiance was reported as the NER.

The NER for the Svalbard ICI system at room temperature was calculated to be 0.043 W/(m2 sr) using an extended source. This NER is equivalent to a cirrus cloud of 0.004 optical depth located at 8-km altitude, which is significantly below subvisual cirrus as defined previously. Such a cloud would have a transmission loss of 0.3% at 550 nm. A cloud this thin would most likely be undetectable due to atmospheric transmittance from the ground to 8 km. It should also be noted that the system uncertainty is more than one order of magnitude higher than the NER. This means that although we can see thin clouds above pixel noise, we can only state that their radiance is below the radiometric system uncertainty.

The radiometric system uncertainty for an ICI system can be calculated using the method described in Reference [27]. The uncertainty of the radiometric calibration is the root sum square (RSS) of the uncertainty of the radiometric camera correction (how well we can correct for temperature drift of the FPA), the uncertainty of the RSR, and the uncertainty of the source. This is then RSSed with the uncertainty of the atmospheric model for the deployment site to give the total system uncertainty.
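The RSS combination described above is simple enough to sketch directly. The inputs below are the cold-scene uncertainty terms reported in this chapter and Chapter 5:

```python
import math

def rss(*terms):
    """Root-sum-square combination of independent uncertainty terms."""
    return math.sqrt(sum(t * t for t in terms))

# Cold-scene values in W/(m2 sr): RSR uncertainty and radiometric correction
# uncertainty (the source uncertainty is negligible and omitted here).
tau2_total = rss(0.175, 0.50)    # ~0.53 W/(m2 sr)
photon_total = rss(0.24, 0.29)   # ~0.38 W/(m2 sr)
```

The deployment-site atmospheric-model uncertainty would be RSSed with these totals in the same way to give the full system uncertainty.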
Because of this system uncertainty, cloud statistics are binned into 7 discrete cloud thicknesses [19]. The discussion of NER analysis follows from previous work by the author [7].

Conclusion

The Tau2 camera had significant challenges associated with its characterization for cloud imaging. However, once the internal compensation algorithms were overcome, a standard FPA temperature-compensation algorithm could be applied to the camera. Since the uncertainty of this calibration is the RSS of the RSR uncertainty and the radiometric correction uncertainty (source uncertainty is negligible), the total calibration uncertainty can be calculated. For the Tau2 camera, the RSR uncertainty was 0.175 W/(m2 sr) from Chapter 5 and the radiometric correction uncertainty was calculated to be 0.50 W/(m2 sr) in this chapter, so the total calibration uncertainty was calculated to be 0.53 W/(m2 sr). For the Photon, the radiometric correction uncertainty was 0.29 W/(m2 sr) [27] and the RSR uncertainty was 0.24 W/(m2 sr) from Chapter 5, resulting in a total calibration uncertainty of 0.38 W/(m2 sr) for the ICI3 Photon. This shows that while the Photon calibration was still better than that of the Tau2, the error of the Photon was only 16% better than the error of the Tau2. If the RSR were not measured and the manufacturer RSR were used instead, this would cause an uncertainty of 33% for the Photon and 15% for the Tau2, corresponding to an error in integrated radiance for a cold scene of 1.65 W/(m2 sr) for a Photon camera and 0.75 W/(m2 sr) for the Tau2. This shows that if not properly measured, the RSR uncertainty can dominate the radiometric calibration uncertainty. This is an important result, since before this work the RSR had not been measured and only an assumed RSR was available [27]. The measured RSR for these cameras should be considered camera specific.
Different lens configurations, lens manufacturers, and camera core production runs will cause error in the RSR. Even though the calibration accuracy might be similar for these two systems, the Photon will still have better radiometric throughput and less noise than a Tau2, making it a better camera core for cloud imaging.

TAMARISK CHARACTERIZATION

Introduction

This chapter discusses the preliminary investigation of a DRS® thermal camera for use in cloud imaging. This was an uncooled microbolometer camera with a 17-micron-pitch focal plane array. The goal was to investigate this camera as a potential replacement for the similarly configured Tau2 discussed in Chapter 6. A camera was loaned to Montana State University for evaluation with regard to infrared cloud imaging. The first sky image taken with this camera can be seen in Figure 7.1.

Figure 7.1: Tamarisk camera first sky image, with tripod and operator in view. The colorbar shows the digital number output of the FPA.

The evaluation camera was a DRS® Tamarisk camera core packaged by Sierra Olympic® to include a framegrabber interface. Sierra Olympic® referred to the camera as a Viento. The Viento 640 was a 640×480, 30-Hz camera with an f/1.4 lens. The field of view was 90° × 67° with a 2.45-milliradian iFOV.

Calibration Algorithm

The Tamarisk core does not have a thermal sensor on the FPA, but instead uses the digital number from a masked pixel to determine the calibration tables for thermal compensation. Due to the time constraint of the evaluation period, we were only able to calibrate with soak temperatures (with no temperature ramping). Therefore, the legacy focal-plane temperature correction could not be used in determining the focal plane array temperature response. To ensure the digital number response was linear with camera temperature in this range, output digital numbers were plotted against camera temperatures measured with a thermocouple placed on the camera housing.
Figure 7.2: Tamarisk FPA temperature sensor output has a fairly linear response with external camera temperature

To compensate for the FPA temperature response, a system matrix was developed to determine the response of the camera with respect to scene radiance and focal plane array temperature. The ability to linearize the data in this temperature range was proven by plotting the data on a three-axis scatter plot, as shown in Figure 7.3.

Figure 7.3: Tamarisk digital FPA temperature, digital number response, and scene radiance can be solved with a linear regression

The plane in Figure 7.3 was then solved using a linear regression based upon Equation (7.1), where A is a camera-temperature-independent gain, B is a camera-temperature-dependent gain, C is a mixing term of A and B, and D is an offset term. In order to reduce calibration error, B held both FPA temperature and case temperature terms. Equation (7.1) was then solved through a matrix inversion.

L_bb = [ A  B  C  D ] · [ M1  M2  M3  M4 ]^T    (7.1)

It should be noted that this inversion may be unstable for temperatures and radiances outside the bounds of the data collected during the calibration. For instance, low-radiance corrections may not be accurate due to extrapolation beyond the bounds measured with our laboratory blackbodies. This problem would be minimized by collecting ramp data and applying the temperature-difference algorithms discussed in previous chapters. The time it would take to collect these data, however, was beyond the evaluation time given for the camera.

Calibration Results

The image shown in Figure 7.4 is the standard deviation of the radiance error in each pixel when comparing the calibration results to the blackbody. The high uncertainty in the corners of the image is due to clipping of the window of the blackbody. Since the Tamarisk has a 90° × 67° FOV in the x and y directions, the diagonal FOV of the image is actually 112°.
These large angles are difficult to calibrate with a limited-area blackbody source. The camera must be fairly close to the blackbody, which can both heat the lens and cause reflection of lens emission from the blackbody surface (since the blackbody emissivity is not 1). It has also been shown that the emissivity of a blackbody source will change at large field angles [32]. However, these effects were ignored for this calibration since they are small compared to the uncertainty in the FPA correction.

Figure 7.4: Radiance correction standard deviation for the Tamarisk camera over the FPA

Conclusion

The initial calibration for the Tamarisk camera proved successful, showing a standard deviation of 1.44 W/(m2 sr) with a distribution over the scene shown in Figure 7.4. With more temperature data, this would most likely be reduced by a more accurate FPA correction, but it shows that calibration of the Tamarisk camera is possible and gives reasonable results.

The initial Tamarisk calibration has shown that the camera core performs very similarly to the Tau2. An advantage of the system is that the internal calibration tables and settings are more accessible than on the Tau2; for instance, the Tamarisk core will report which internal tables it is using in its non-uniformity corrections. However, the Tamarisk was not tested at temperatures above 32 °C. As shown in Figure 7.2, when camera temperature compensation is disabled, the temperature sensor only reported temperatures between 13 °C and 32 °C. This may be solvable through additional serial commands, but it was not investigated for this purpose. Another solution might be to enable the temperature compensation and perform separate calibrations for each table. Both the Tau2 and Tamarisk cores have shown that microbolometer arrays are less sensitive to cold scenes (like clear sky) when the FPA temperature is above 32 °C.
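As a concrete illustration of the linear calibration model in Equation (7.1), the coefficient vector [A, B, C, D] can be recovered by least squares from soak-test data. The regressor definitions below (digital number, FPA temperature, their product, and a constant) are an assumed form of the M-terms, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic soak-test data. The regressors and "true" coefficients are invented
# for illustration; real inputs would be blackbody radiances and camera output.
dn = rng.uniform(2000.0, 8000.0, 200)             # digital numbers
t_fpa = rng.uniform(10.0, 35.0, 200)              # FPA temperature, deg C
true_coeffs = np.array([4e-3, -0.12, 1e-5, 8.0])  # [A, B, C, D]
M = np.column_stack([dn, t_fpa, dn * t_fpa, np.ones_like(dn)])
L_bb = M @ true_coeffs + rng.normal(0.0, 0.02, 200)  # radiances + noise

# Solve Equation (7.1) by least squares (the pseudoinverse of the system
# matrix); this is the matrix-inversion step described in the text.
coeffs, *_ = np.linalg.lstsq(M, L_bb, rcond=None)
```

As cautioned above, applying the fitted plane outside the soaked temperature and radiance range is extrapolation and may be unstable.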
A disadvantage of the Tamarisk core is that it has a temperature sensor only on the FPA and not on the case as well. This may mean that even with a more thorough calibration, external sensors will need to be added to perform the same tasks as on the Tau2. However, if future systems are to continue striving for thin cloud detection, external temperature sensors should be used anyway on the case as well as the lens to further characterize self-emission from the camera.

This chapter has shown that calibration of the Tamarisk is possible, and the system seems to perform very similarly to the Tau2. However, the Tamarisk cores are currently restricted to a 17 µm pixel pitch. Since the Tamarisk and Tau2 cores are so close in performance, I would advocate choosing a Tau2 324 over a Tamarisk 640 in future systems. The Tau2 324 has a 25 µm pitch with a 324×256 array, a 63° × 50° FOV, and a 3.333-milliradian iFOV. The larger iFOV will increase throughput, which should mean the Tau2 324 is currently a better camera for cloud detection than either the Tau2 640 or Tamarisk 640 cores.

SYSTEM COMPARISONS

Introduction

This chapter discusses the performance parameters of different ICI systems as the unit under test when compared with a legacy system, the ICI3 cloud imager. The ICI3 system has been validated against visible cloud imaging instruments and other remote sensing equipment at the ARM Research Site in Barrow, AK [27], so it is well understood. This system provides a reliable real-time data set for side-by-side comparisons with newer systems.

Photon vs Tau2

The Tau2 Svalbard system was placed beside the Photon ICI3 system for validation in January 2016. The Svalbard system was to deploy in Cleveland, OH, and the calibration needed to be validated against a known system. This comparison was also reported in previous work by the author [7].

Figure 8.1: Svalbard ICI (G02) and ICI3 reported dB compared over time.
Source: Reference [7]

Figure 8.2: Statistical analysis of instrument comparisons. (a) dB scatter plot; (b) dB difference distribution. Source: Reference [7]

The Tau2 and Photon cameras were deployed with drastically different fields of view. In order to compare the collected data adequately, both fields of view were reduced to 10° in software. Pointing error required a time shift in the data, and a low-pass filter was applied to the data. The ICI3 uses a hatch system that closes whenever it rains, so data taken while the hatch was closed were disregarded. The two imagers were compared over a two-day period, as seen in Figure 8.1. The data show good agreement during the comparison period. Figure 8.2 shows that the dB values from the two instruments had a correlation of 0.978. The standard deviation of the error between the two instruments is 1.13 dB. The Tau2 reported less attenuation during transient periods than the ICI3 using a full-field average. This is most likely due to error in completely matching the fields of view. The distribution in Figure 8.2(b) shows a slight bias at small dB measurements, which corresponds to radiances of relatively low optical depth. This bias may be due to uncertainty in the spectral responsivity of the two cameras.

Photon vs Tamarisk

The Photon and Tamarisk cameras were compared side by side for half of a day in May 2016. The Tamarisk was at Montana State University under evaluation for a short time, so a short calibration was performed, followed by a short comparison period. The purpose of the evaluation was to determine whether the Tamarisk camera could be calibrated and whether it had a low enough NER to observe thin clouds.

Figure 8.3: Photon (left) and Tamarisk (right) ICI systems deployed side by side

In Figure 8.3 the ICI3 enclosure can be seen on the left. An astute observer can see that the sliding hatch is open and the ICI3 was taking data at the time this photograph was taken.
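The comparison mechanics used for these side-by-side tests (time-shifting one series to absorb pointing error, low-pass filtering, then reporting a correlation and a standard deviation of differences) can be sketched on synthetic data. The sky signal, lag, and noise levels below are illustrative stand-ins, not real instrument data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two co-located imagers viewing the same slowly varying sky, with a small
# pointing-induced time offset and independent measurement noise.
sky = np.cumsum(rng.normal(0.0, 0.1, 2000))   # random-walk "sky" signal
lag = 5                                       # assumed pointing time offset
inst_a = sky + rng.normal(0.0, 0.3, sky.size)
inst_b = np.roll(sky, lag) + rng.normal(0.0, 0.3, sky.size)

def lowpass(x, n=25):
    """Simple moving-average low-pass filter."""
    return np.convolve(x, np.ones(n) / n, mode="same")

# Shift one series to compensate the offset, filter both, trim filter edges,
# then compute the comparison statistics.
a = lowpass(inst_a)[50:-50]
b = lowpass(np.roll(inst_b, -lag))[50:-50]
r = float(np.corrcoef(a, b)[0, 1])
std_diff = float(np.std(a - b))
```

With the lag removed and noise filtered, the residual standard deviation reflects instrument noise rather than scene mismatch, which is the sense in which the reported 1.13 dB and 0.71 W/(m2 sr) figures should be read.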
The Tamarisk camera was smaller than the Photon, but a large legacy container (from the first ICI deployment to Barrow) was used to house the electronics. The Tamarisk enclosure can be seen on the right of Figure 8.3. The Tamarisk lens protruded from a PVC enclosure. A defocus was inadvertently put on the camera as a result of the tight fit between the camera lens and the enclosure. While image quality was degraded during this test period, the radiometry remained within acceptable parameters.

Image Comparison

Both the Photon and Tamarisk cameras had 90° fields of view, so the images matched much better than in the previous test with the Photon and Tau2. Side-by-side images of the same cloud can be seen in Figure 8.4. There is a slight orientation error between the two cameras, but the cloud structure in the images clearly shows the same cloud. While Figure 8.4 shows a fairly thick cloud, Figure 8.5 shows a relatively clear sky. Using scene-to-scene subtraction, clouds can be seen traversing the clear sky. The uncertainty of the radiometric correction of the Tamarisk camera was 1.44 W/(m2 sr), and the uncertainty of the RSR is unknown. Even if the uncertainty of the calibration were 1.44 W/(m2 sr), one can see from the scene-to-scene subtracted image in Figure 8.5d that thin clouds below this uncertainty can still be seen. Such clouds would simply be classified as being below the uncertainty of the calibration.

Radiometric Comparison

Scene radiance was compared over time using the averaged scene radiance of the images. The images were masked to remove the effect of buildings in the field of view. The standard deviation of the average scene difference between the two cameras is shown in Figure 8.6 as 0.71 W/(m2 sr), which is impressive considering the accuracy of the radiometric correction for the Tamarisk camera was 1.44 W/(m2 sr).
In other words, the reported standard deviation between the two cameras is half of the uncertainty of the Tamarisk, which means they compare very well. The correlation between the two systems is shown in Figure 8.7 to be 0.996.

Figure 8.4: Radiometric image comparison of cloudy sky. (a) ICI3 Photon; (b) Viento Tamarisk.

Figure 8.5: Thin clouds can be seen above the noise when doing scene-to-scene subtraction. (a) ICI3 Photon; (b) Viento Tamarisk; (c) ICI3 Photon scene subtraction; (d) Viento Tamarisk scene subtraction.

Figure 8.6: Distribution of scene comparisons during the deployment period

Conclusion

The Tau2 and ICI3 systems compared fairly well in terms of optical depth over the course of the comparison period. Since the Tau2 FOV was dramatically smaller than the FOV of the ICI3 system, both systems had their FOV reduced to 10°. The two systems had a correlation of 0.978 for reported dB values. Although the standard deviation of the reported differences between the ICI3 and the Svalbard ICI was 1.13 dB, this may have been due to the different RSR values of the Photon and Tau2 imagers. Pointing errors may also have contributed, since both analysis fields of view were so narrow. The Svalbard ICI system should be calibrated well enough to give the needed cloud statistics for the Svalbard site.

Figure 8.7: Scatter plot showing a correlation of 0.996

The Viento and ICI3 data compared better than the Tau2 and ICI3 in terms of radiance. This may have been due to the large FOV contributing to a better-matching average scene. Since the FOV was so large for both systems, small changes in the scene would not affect the reported radiance as much as they would when comparing two narrow-field-of-view instruments. Even with a 17 µm pitch, the Tamarisk camera had a shorter focal length, contributing to a larger iFOV than the Tau2 system; this produced a more closely matched throughput.
These contributing factors may make the Tamarisk appear to be a better system than the Tau2, even with a worse FPA temperature correction. Another consideration is that the Tamarisk and ICI3 systems were only compared over a few hours, so the data may be biased toward certain cloud conditions. Overall, both the Tau2 and Tamarisk cameras compared well to the ICI3 and should be considered viable options for infrared cloud imaging.

CLOUD PHASE DETECTION

Introduction

A thermal infrared image contains information about cloud optical depth up to a value of approximately 4. Optically thicker clouds simply behave as blackbodies, but optically thinner clouds emit thermal radiation that varies predictably with cloud optical depth [31]. However, recent comparisons of cloud optical depth derived from ICI images and lidar data show that the data tend to accumulate into two clumps. Since the ICI cloud optical depth algorithms were developed for cirrus clouds, it is possible that there is a systematic difference between the results for clouds made of ice and those made of liquid water.

In addition to the question of how cloud phase affects the cloud optical depth retrievals from an ICI system, thermodynamic phase is used in atmospheric physics to determine cloud formation mechanics [33, 34]. Cloud phase also has a large impact on attenuation and beam polarization. For instance, for Ka-band propagation, ice clouds have an almost negligible attenuation coefficient and affect only polarization in this band, while suspended water particles (clouds) have a large attenuation coefficient approximated through Rayleigh scattering [35]. This could lead to an instrument that would further support Ka-band propagation studies at NASA GRC's NEN site. From our perspective as remote sensing instrument designers and users, this presents an opportunity to explore some possible ways of remotely sensing cloud phase with passive imagers.
Therefore, this chapter describes a study undertaken to assess the possibility of using passive short-wave infrared (SWIR) imaging to determine cloud phase. The hope is that this could eventually lead to a new instrument operating alongside a long-wave infrared ICI to obtain enhanced spatial and optical information about clouds.

The satellite remote sensing community has developed methods to estimate cloud phase from SWIR radiance measurements [36–41]. There have also been investigations into using the LWIR for cloud phase determination [42]. These methods are based on differences in the refractive index of liquid water and ice, as indicated in Figure 9.1.

Figure 9.1: The imaginary part of the index of refraction plotted against wavelength. Arrows show the differences at 1.64 µm and 1.7 µm. Source: Data are from [43] for liquid water and [44] for ice.

In order to classify cloud phase, spectral shape can be used to determine whether the cloud consists of ice or water [37]. The imaginary part of the index of refraction is spectrally dependent, as seen in Figure 9.1 [37]. Since attenuation depends on the imaginary part of the index of refraction, two clouds of different phases illuminated by the same source will attenuate the illumination differently. There is a distinct slope difference between the ice and water refractive indices. To simulate this effect for different cloud types, a radiative transfer code was used to simulate clouds according to the observation geometry seen in Figure 9.2. The sun is behind the observer at 180° azimuth and 45° elevation. The illuminated cloud is viewed in front of the observer at 45° elevation.

Figure 9.2: Observation of clouds simulated in MODTRAN

The simulated clouds varied in cloud height and droplet distribution according to six standard cloud types. Extinction was then varied to simulate different concentrations of water or ice in the cloud. Cirrus was simulated as a pure ice cloud.
All other clouds contain only liquid water. The clouds shown use MODTRAN-simulated cloud profiles for cumulus, altostratus, stratus, stratus & stratocumulus, nimbostratus, and cirrus. The optical depth of each cloud type was varied by keeping cloud altitude and thickness the same as the standard cloud and modifying the extinction. The cirrus (ice) clouds show a distinct slope difference from 1500 to 1700 nm, which allows these clouds to be discriminated from all others simulated. The spectral shape difference vs cloud type for the same optical depth can be seen in Figure 9.3. This slope can be characterized by taking the spectral radiance in narrow bandwidths in different regions, integrating it, and taking the ratio between regions for the best classification.

Figure 9.3: Sunlight scattered from simulated ice clouds (cirrus) has a much different spectral shape than that from liquid water clouds

There are small spectral features near 1600 nm in Figure 9.3 that are due to CO2 absorption. The assumption in the following algorithms is that the CO2 content will be constant between measurements of ice clouds and water clouds.

Algorithm Development

Several different cloud-phase calculation algorithms have been tested using simulated MODTRAN data. Most of these algorithms broke down when clouds reached an optical depth of less than 1. The Knap et al. (2002) method seemed to be the most separable algorithm for phase determination, using 150-nm bandwidths at the 1700 and 1640-nm center wavelengths. The ratio of integrated radiances for different cloud types is plotted against optical depth in Figure 9.4 to show the effectiveness of this algorithm with simulated data.

Figure 9.4: The Knap (2002) algorithm allows ice clouds to be distinguished from liquid clouds in MODTRAN simulations
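A band-ratio classifier of the kind just described can be sketched with toy spectra. The ice spectrum below is given an artificial dip near 1640 nm to mimic the slope difference seen in the simulations; the dip parameters are invented, and real inputs would be MODTRAN radiances:

```python
import numpy as np

wl = np.linspace(1400.0, 2000.0, 1201)   # wavelength grid, nm

# Toy scattered-radiance spectra (arbitrary units). Only the qualitative slope
# difference between the ice-like and water-like spectra matters here.
water = np.full_like(wl, 1.0)
ice = 1.0 - 0.4 * np.exp(-(((wl - 1620.0) / 60.0) ** 2))

def band_ratio(spectrum, c1=1700.0, c2=1640.0, fwhm=150.0):
    """Ratio of radiances integrated over Gaussian bands centered at c1, c2."""
    sigma = fwhm / 2.3548                      # FWHM -> standard deviation
    g1 = np.exp(-0.5 * ((wl - c1) / sigma) ** 2)
    g2 = np.exp(-0.5 * ((wl - c2) / sigma) ** 2)
    return float(np.sum(spectrum * g1) / np.sum(spectrum * g2))

r_water = band_ratio(water)  # ~1 for a spectrally flat cloud
r_ice = band_ratio(ice)      # > 1 when the 1640 nm region is suppressed
```

Widening the filter bandwidths blends the two bands together and pushes both ratios toward 1, which is the degradation shown in Figure 9.6.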
Precipitable water vapor (PWV) can cause absorption in the spectral region of interest, so the PWV of the standard 1976 atmosphere was scaled from 50% to 150% in 10% increments to examine the effect of PWV on the algorithm. Figure 9.5 shows the algorithm applied to these different PWV levels. The trend of the ratio does not change with PWV, but some smearing starts to occur at very small optical depths. This occurs because both curves converge to the clear-sky value, so the differences in radiance become very small.

Figure 9.5: PWV has little effect on the algorithm for OD > 1.

Algorithm Comparisons

The algorithm was tested using simulated data and applying Gaussian bandwidths from 10 to 500 nm at the 1640- and 1700-nm center wavelengths. Figure 9.6 shows how the ratios degrade as filter bandwidth is increased. The data in red are cirrus (ice) clouds; all other clouds are water or mixed phase. The different lines of each color denote different optical depths.

Figure 9.6: The spectral ratios between ice and water clouds become less distinct after Gaussian 150-nm bandwidth filters are used.

To improve separability between ice cloud data and water cloud data, three features were used: the (1700 − 1640)/1640 weighted integrated radiance difference, the (1550 − 1640)/1550 weighted integrated radiance difference, and optical depth. The data from this algorithm are visualized in Figure 9.7, which shows improved separability across precipitable water vapor levels and a convergence of all cloud types to clear sky (the circles on the far left of the plot). By adding a third channel, greater separability between ice and water data can be achieved; however, this algorithm requires optical depth to be known.

Figure 9.7: Adding a third integrated radiance channel allows for greater separation of data versus optical depth.
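The three-feature vector described above can be sketched as follows. The band centers, bandwidths, and the two normalized differences are from the text; the helper names and the flat test spectrum are assumptions for illustration:

```python
import numpy as np

def band_integral(wl_nm, rad, center, bw=150.0):
    """Integrated radiance over a rectangular band (rectangle rule)."""
    mask = np.abs(wl_nm - center) <= bw / 2.0
    return np.sum(rad[mask]) * (wl_nm[1] - wl_nm[0])

def phase_features(wl_nm, rad, optical_depth):
    """Feature vector for the three-channel algorithm:
    ((B1700 - B1640)/B1640, (B1550 - B1640)/B1550, optical depth)."""
    b1550 = band_integral(wl_nm, rad, 1550.0)
    b1640 = band_integral(wl_nm, rad, 1640.0)
    b1700 = band_integral(wl_nm, rad, 1700.0)
    return (float((b1700 - b1640) / b1640),
            float((b1550 - b1640) / b1550),
            optical_depth)

# A spectrally flat cloud signal gives zero for both difference features.
wl = np.linspace(1400.0, 1800.0, 401)
flat = np.ones_like(wl)
print(phase_features(wl, flat, 2.0))  # -> (0.0, 0.0, 2.0)
```

An ice cloud's steeper spectral slope drives the first difference feature negative relative to liquid clouds, which is what separates the two populations in Figure 9.7.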
Since cloud optical depth is not known without a supporting lidar or other instrument, another metric was needed for a stand-alone radiometer system. Since three integrated bands are used to determine cloud phase, the integrated value of each band can be treated as a component of a vector, and these vectors can be combined in 3D space as in an RGB image. The magnitude of this vector is √(A² + B² + C²), where A is the integrated radiance at the 1550-nm center wavelength, B is the integrated radiance at the 1640-nm center wavelength, and C is the integrated radiance at the 1700-nm center wavelength. This magnitude is then used instead of optical depth for determining cloud phase, as visualized in Figure 9.8.

Figure 9.8: Optical depth can be replaced by the RMS addition of the radiance from each channel.

If only the magnitude of the 1640-nm channel is known, along with the magnitude of the 1700-nm channel relative to the 1640-nm channel, the following algorithm can be used; it is by far the simplest method explored. Since the radiance from a cirrus cloud stays much lower than the radiance from any other cloud type at the same optical depth, this parameter can be used to separate cloud phase for clouds with an optical depth of 0.1 or more. With simulated data, this approach has higher separability using magnitude instead of cloud optical depth and does not require a support instrument. This two-channel algorithm, visualized in Figure 9.9, could be used when a third channel is not available.

Figure 9.9: Two-channel algorithm using calibrated radiance values.

Conclusion

Using MODTRAN simulations of different cloud types, it has been shown that cloud phase can be determined using the ratios between different wavelength channels. Specifically, the center wavelengths that show the most promise are 1550, 1640, and 1700 nm with 150-nm bandwidths.
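The vector-magnitude substitution for optical depth follows directly from the definitions in this chapter. In the sketch below, the magnitude formula is from the text, while the classifier and its 0.9 threshold are hypothetical illustrations — the actual decision boundary would be chosen from the simulated data:

```python
import math

def channel_magnitude(a1550, b1640, c1700):
    """Magnitude sqrt(A^2 + B^2 + C^2) of the three integrated-radiance
    channels; a stand-in for optical depth in a stand-alone system."""
    return math.sqrt(a1550**2 + b1640**2 + c1700**2)

def classify_phase(b1640, c1700, ratio_threshold=0.9):
    """Hypothetical two-channel classifier: cirrus (ice) shows a lower
    1700/1640 integrated-radiance ratio. The threshold is illustrative."""
    return "ice" if c1700 / b1640 < ratio_threshold else "water"

print(channel_magnitude(3.0, 4.0, 0.0))  # -> 5.0
```

Using the magnitude as the abscissa preserves the "distance from clear sky" ordering of the data without requiring an external optical depth measurement.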
CONCLUSIONS

This thesis described how five major challenges relating to infrared cloud imaging systems were overcome. The first challenge involved modifying an existing enclosure to perform in multiple environments. The second challenge was to overcome the loss of data collection imposed by OEM manufacturer firmware. The third challenge was to measure and show the impact of the relative spectral response function of a LWIR camera. The fourth challenge was to calibrate and compare different LWIR cameras for infrared cloud imaging studies. The fifth and final challenge was to analyze how cloud phase could be determined to update cloud optical depth models derived for ICI systems.

Chapter two discussed some of the methodology behind infrared cloud imaging and some of the physical mechanisms that enable the technology. Chapter three described two ICI systems that integrated OEM components into a research instrument. The constraints of the enclosure used by NASA JPL caused significant problems in terms of routine maintenance, instrument accessibility, and air flow, and the JPL tube system was ultimately abandoned by NASA GRC in favor of a legacy environmental enclosure. Chapter four provided a brief introduction to radiometric calibration and the calibration algorithms previously developed at Montana State University, giving the reader a thorough foundation for later chapters.

The effect of relative spectral response uncertainty had mostly been ignored in previous ICI systems because of the belief that the FPA temperature correction was the largest source of error in the calibration. Measurement of the instrument RSR and analysis of the measurement error produced surprising results: if left unmeasured, calibration error due to uncertainty in manufacturer-published spectra is by far the largest contributor to error in an ICI system (approximately 35% error). The measurement process and uncertainty analysis were discussed in chapter five.
Research experiments are not typically the primary customers of OEM developers, so optimization for other customers can result in an OEM product failing to perform well for research applications. This was the case for the Tau2 camera discussed in chapter six, which had received a firmware upgrade to the preprocessing unit inside the camera core. This upgrade caused the preprocessor to reject the signal from clouds as noise. Through perseverance and collaboration, a method was discovered to not only favorably bias the camera, but possibly enhance the cloud detection capabilities of the Tau2.

Due to the difficulties and intricacies involved in calibrating the Tau2, another microbolometer camera core, the Tamarisk, was evaluated for comparison; it performed much like the Tau2 in the initial evaluation period. Chapter eight compared three different cloud imaging systems: the ICI3 Photon, the Svalbard ICI Tau2, and the Viento Tamarisk. The ICI3 system was used as a benchmark instrument due to its long characterization history and other comparison tests. The Svalbard ICI was deemed ready for deployment, and the Tamarisk camera not only matched ICI3 data very well but was also able to detect thin clouds.

Chapter nine is an analysis for future radiometer units to use alongside ICI systems to improve cloud optical depth retrievals. Three different center wavelengths were chosen as a basis for future instrumentation. Cloud phase will be an important part of cloud classification algorithms, both for optical depth retrieval and for expected attenuation estimates.

Cloud classification is an important part of propagation studies at millimeter and optical wavelengths as these become a larger focus for NASA earth-to-satellite communication links. The work in this thesis comprised hardware development, system characterization, and algorithm development for cloud classification systems.
This work can be used as a baseline for relative spectral response analysis, for analyzing ICI atmospheric model sensitivity, and for future cloud phase radiometer systems. Future ICI systems should investigate the use of filters to reduce the effect of O3 and CO2 emission on integrated atmospheric radiance. Even though MSU has been able to model and correct for FPA temperature fluctuations in microbolometer cameras, future systems should also employ environmental enclosures optimized for temperature stability, because this will allow the temperature-stabilization routines to work better over a smaller range of camera temperatures. This will decrease the time involved in the calibration process and possibly allow for detection of thinner clouds. Environmental enclosures should be large enough to allow access for periodic maintenance and troubleshooting; an optimal configuration would include a large panel that allows access to test points. As systems are miniaturized, care must be taken to ensure that systems remain accessible, to minimize maintenance time.

REFERENCES CITED

[1] D. M. Cornwell, “NASA’s optical communications program for 2015 and beyond,” in SPIE LASE, pp. 93540E–93540E, International Society for Optics and Photonics, 2015.

[2] S. Shambayati, D. Morabito, J. S. Border, F. Davarian, D. Lee, R. Mendoza, M. Britcliffe, and E. Weinreb, “Mars Reconnaissance Orbiter Ka-band (32 GHz) demonstration: Cruise phase operations,” paper presented at AIAA SpaceOps Conference, Am. Inst. of Aeronaut. and Astronaut., 2006.

[3] D. E. Raible and A. G. Hylton, “Integrated RF/optical interplanetary networking preliminary explorations and empirical results,” in 30th AIAA International Communications Satellite Systems Conference, Ottawa, Canada, 2012.

[4] Z. Sodnik, B. Furch, and H. Lutz, “Free-space laser communication activities in Europe: SILEX and beyond,” in LEOS 2006 - 19th Annual Meeting of the IEEE Lasers and Electro-Optics Society, pp.
78–79, IEEE, 2006.

[5] H. E. Green, “Propagation impairment on Ka-band SATCOM links in tropical and equatorial regions,” IEEE Antennas and Propagation Magazine, vol. 46, no. 2, pp. 31–45, 2004.

[6] J. Nessel, J. Morse, and M. Zemba, “Results from two years of Ka-band propagation characterization at Svalbard, Norway,” in The 8th European Conference on Antennas and Propagation (EuCAP 2014), pp. 3511–3515, IEEE, 2014.

[7] D. W. Riesland, P. W. Nugent, J. A. Shaw, M. J. Zemba, and J. Houts, “Infrared cloud imaging in support of Ka-band propagation studies,” in AIAA Proc. International Communications Satellite Systems Conference, (Cleveland, OH), 2016.

[8] C. N. Long, J. M. Sabburg, J. Calbó, and D. Pagès, “Retrieving cloud characteristics from ground-based daytime color all-sky images,” Journal of Atmospheric and Oceanic Technology, vol. 23, no. 5, pp. 633–652, 2006.

[9] B. Thurairajah and J. A. Shaw, “Cloud statistics measured with the infrared cloud imager (ICI),” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2000–2007, 2005.

[10] J. R. Campbell, D. L. Hlavka, E. J. Welton, C. J. Flynn, D. D. Turner, J. D. Spinhirne, V. S. Scott III, and I. Hwang, “Full-time, eye-safe cloud and aerosol lidar observation at atmospheric radiation measurement program sites: Instruments and data processing,” Journal of Atmospheric and Oceanic Technology, vol. 19, no. 4, pp. 431–442, 2002.

[11] E. E. Clothiaux, K. P. Moran, B. E. Martner, T. P. Ackerman, G. G. Mace, T. Uttal, J. H. Mather, K. B. Widener, M. A. Miller, and D. J. Rodriguez, “The atmospheric radiation measurement program cloud radars: operational modes,” Journal of Atmospheric and Oceanic Technology, vol. 16, no. 7, pp. 819–827, 1999.

[12] V. Mattioli, P. Basili, S. Bonafoni, P. Ciotti, and E. Westwater, “Analysis and improvements of cloud models for propagation studies,” Radio Science, vol. 44, no. 2, 2009.

[13] J. A. Shaw, P. W. Nugent, N. J. Pust, B. Thurairajah, and K.
Mizutani, “Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera,” Opt. Express, vol. 13, pp. 5807–5817, Jul 2005.

[14] J. A. Shaw and P. W. Nugent, “Physics principles in radiometric infrared imaging of clouds in the atmosphere,” European Journal of Physics, vol. 34, no. 6, p. S111, 2013.

[15] P. W. Kruse, Uncooled thermal imaging: arrays, systems, and applications. SPIE Press, Bellingham, WA, 2001.

[16] P. W. Kruse and D. Skatrud, Uncooled Infrared Imaging Arrays and Systems. No. 47 in Semiconductors and Semimetals, Academic Press, 1997.

[17] M. Vollmer and K.-P. Möllmann, Infrared thermal imaging: fundamentals, research and applications. John Wiley & Sons, 2010.

[18] Personal communication, Dr. J. A. Shaw.

[19] P. W. Nugent, J. A. Shaw, and S. Piazzolla, “Infrared cloud imager development for atmospheric optical communication characterization, and measurements at the JPL Table Mountain Facility,” InterPlanetary Network Progress Report, vol. 42, no. 192, pp. 1–31, 2013.

[20] B. N. Holben, T. Eck, I. Slutsker, D. Tanre, J. Buis, A. Setzer, E. Vermote, J. A. Reagan, Y. Kaufman, T. Nakajima, et al., “AERONET - A federated instrument network and data archive for aerosol characterization,” Remote Sensing of Environment, vol. 66, no. 1, pp. 1–16, 1998.

[21] J. A. Shaw, P. W. Nugent, N. J. Pust, B. J. Redman, and S. Piazzolla, “Cloud optical depth measured with ground-based uncooled infrared imagers,” in Proc. SPIE, vol. 8523, p. 85231D, 2012.

[22] A. Berk, G. P. Anderson, P. K. Acharya, L. S. Bernstein, L. Muratov, J. Lee, M. Fox, S. M. Adler-Golden, J. H. Chetwynd Jr, M. L. Hoke, et al., “MODTRAN5: 2006 update,” in Defense and Security Symposium, pp. 62331F–62331F, International Society for Optics and Photonics, 2006.

[23] P. W. Nugent, J. A. Shaw, and N. J. Pust, “Radiometric calibration of infrared imagers using an internal shutter as an equivalent external blackbody,” Optical Engineering, vol. 53, no. 12, pp.
123106–123106, 2014.

[24] P. W. Nugent and J. A. Shaw, “Calibration of uncooled LWIR microbolometer imagers to enable long-term field deployment,” in SPIE Defense + Security, pp. 90710V–90710V, International Society for Optics and Photonics, 2014.

[25] P. W. Nugent, J. A. Shaw, and N. J. Pust, “Correcting for focal-plane-array temperature dependence in microbolometer infrared cameras lacking thermal stabilization,” Optical Engineering, vol. 52, no. 6, pp. 061304–061304, 2013.

[26] P. W. Nugent, J. A. Shaw, N. J. Pust, and S. Piazzolla, “Correcting calibrated infrared sky imagery for the effect of an infrared window,” Journal of Atmospheric and Oceanic Technology, vol. 26, no. 11, pp. 2403–2412, 2009.

[27] P. W. Nugent, Deployment of the third-generation infrared cloud imager, a two year study of arctic clouds at Barrow Alaska. PhD thesis, Montana State University, April 2016. Not yet published.

[28] J. Peterson, J. Cardon, P. Sevilla, J. Hancock, C. Englert, C. Brown, K. Marr, and J. Harlander, “MIGHTI spectral calibration,” in Proc. Conference on Characterization and Radiometric Calibration for Remote Sensing, (Logan, UT), 2016.

[29] J. Peterson, H. Latvakoski, G. Cantwell, J. Champagne, and J. Cardon, “200 nm to 100 µm, with extremely low uncertainty requirements: Challenges of the RBI spectral calibration,” in Proc. Conference on Characterization and Radiometric Calibration for Remote Sensing, (Logan, UT), 2016.

[30] Personal communication, P. W. Nugent.

[31] K. Sassen and G. G. Mace, Ground-based remote sensing of cirrus clouds. Oxford University Press, New York, NY, 2002.

[32] P. W. Nugent and J. A. Shaw, “Large-area blackbody emissivity variation with observation angle,” in SPIE Defense, Security, and Sensing, pp. 73000Y–73000Y, International Society for Optics and Photonics, 2009.

[33] P. Pilewskie and S. Twomey, “Cloud phase discrimination by reflectance measurements near 1.6 and 2.2 µm,” Journal of the Atmospheric Sciences, vol. 44, no. 22, pp. 3419–3420, 1987.

[34] P.
Pilewskie and S. Twomey, “Discrimination of ice from water in clouds by optical remote sensing,” Atmospheric Research, vol. 21, no. 2, pp. 113–122, 1987.

[35] “Attenuation due to clouds and fog,” ITU Recommendation ITU-R P.840-6, 09 2013.

[36] B. A. Baum, D. P. Kratz, P. Yang, S. Ou, Y. Hu, P. F. Soulen, and S.-C. Tsay, “Remote sensing of cloud properties using MODIS airborne simulator imagery during SUCCESS: 1. Data and models,” Journal of Geophysical Research: Atmospheres, vol. 105, no. D9, pp. 11767–11780, 2000.

[37] W. H. Knap, P. Stammes, and R. B. Koelemeijer, “Cloud thermodynamic-phase determination from near-infrared spectra of reflected sunlight,” Journal of the Atmospheric Sciences, vol. 59, no. 1, pp. 83–96, 2002.

[38] S. L. Nasiri and B. H. Kahn, “Limitations of bispectral infrared cloud phase determination and potential for improvement,” Journal of Applied Meteorology and Climatology, vol. 47, no. 11, pp. 2895–2910, 2008.

[39] S. Platnick, M. D. King, S. A. Ackerman, W. P. Menzel, B. A. Baum, J. C. Riédi, and R. A. Frey, “The MODIS cloud products: Algorithms and examples from Terra,” IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 2, pp. 459–473, 2003.

[40] P. Chylek, S. Robinson, M. Dubey, M. King, Q. Fu, and W. Clodius, “Comparison of near-infrared and thermal infrared cloud phase detections,” Journal of Geophysical Research: Atmospheres, vol. 111, no. D20, 2006.

[41] A. Kokhanovsky, O. Jourdan, and J. Burrows, “The cloud phase discrimination from a satellite,” IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 103–106, 2006.

[42] D. D. Turner, S. Ackerman, B. Baum, H. E. Revercomb, and P. Yang, “Cloud phase determination using ground-based AERI observations at SHEBA,” Journal of Applied Meteorology, vol. 42, no. 6, pp. 701–715, 2003.

[43] K. F. Palmer and D. Williams, “Optical properties of water in the near infrared,” J. Opt. Soc. Am., vol. 64, pp. 1107–1110, Aug 1974.

[44] S. G.
Warren, “Optical constants of ice from the ultraviolet to the microwave,” Appl. Opt., vol. 23, pp. 1206–1225, Apr 1984.