VALIDATION OF A COUPLED WEATHER AND SNOWPACK MODEL ACROSS WESTERN MONTANA AND ITS APPLICATION AS A TOOL FOR AVALANCHE FORECASTING

by

Kyle Webb Van Peursem

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Earth Sciences

MONTANA STATE UNIVERSITY
Bozeman, Montana

November 2016

©COPYRIGHT by Kyle Webb Van Peursem 2016
All Rights Reserved

ACKNOWLEDGEMENTS

I would like to first thank my advisor, Jordy Hendrikx, for all of his support, mentorship, and encouragement through my entire graduate career at Montana State University. From helping me decide to make the big leap to go back to school after leaving the Air Force, to pointing me in the right direction with my research, to always passing along job opportunities and encouraging me to apply, thank you for everything you have done for me and the other graduate students. I would also like to thank my other committee members, Karl Birkeland and Dan Miller, for their guidance, support, and willingness to take on my research amidst their own workloads. I owe a huge debt of gratitude to Chris Gibson of the National Weather Service for his willingness to partner with me in my research and to spend countless hours setting up, maintaining, and troubleshooting the WRF model. Without your hard work, this project would never have been a success. Additional thanks go out to Pete Maleski, Scott Bohr, and the rest of the Bridger Bowl Ski Patrol for assisting with the study plot and weather station installation and for keeping a watchful eye out for poachers. Additionally, my lower back and I are very thankful to all the field assistants who helped me dig 35 full-depth pits over the two seasons, including Leah Sturges, Diana Saly, Chris Bilbrey, Alex Milton, and various members of the Bridger Bowl Ski Patrol.
Lastly and most importantly, I would like to dedicate this thesis to my wonderful wife, who, with unconditional love, support, and understanding, followed me from one end of the globe to the other so I could pursue my passion. I couldn't have done this without you. I love you!

TABLE OF CONTENTS

1. INTRODUCTION
   1.1 Background
   1.2 Snow Cover Modeling
       1.2.1 Crocus
       1.2.2 SNOWPACK
   1.3 Previous Work with SNOWPACK and Crocus
   1.4 Limitations of Observed Weather Data
   1.5 Coupling NWP and Snow Cover Models
   1.6 Previous Work Coupling NWPs with SNOWPACK and Crocus
   1.7 Research Questions
2. METHODS
   2.1 Study Area
       2.1.1 2014/15 Winter Season
       2.1.2 2015/16 Winter Season
   2.2 Field Data
   2.3 Meteorological Data
       2.3.1 Observed Meteorological Data
       2.3.2 Forecasted Meteorological Data
   2.4 SNOWPACK Model Setup
   2.5 Data Analysis
       2.5.1 Comparison of Modeled and Observed Snow Profiles
       2.5.2 Statistical Measures of Model Performance
           2.5.2.1 Numerical
           2.5.2.2 Categorical
3. RESULTS
   3.1 Winter 2014/15
       3.1.1 WRF Model Performance
           3.1.1.1 Temperature
           3.1.1.2 Snow Surface Temperature
           3.1.1.3 Precipitation
           3.1.1.4 Relative Humidity
           3.1.1.5 Wind Speed and Gust
           3.1.1.6 Incoming Shortwave Radiation (ISWR)
           3.1.1.7 Outgoing Shortwave Radiation (OSWR)
           3.1.1.8 Incoming Longwave Radiation (ILWR)
       3.1.2 SNOWPACK Simulation Results
           3.1.2.1 Verification of Snow Pit Data
           3.1.2.2 Verification of Other Observed Data
               3.1.2.2.1 Snow Depth
               3.1.2.2.2 SWE
               3.1.2.2.3 50 cm Snowpack Temperature
               3.1.2.2.4 24-Hour and Total Snowfall
               3.1.2.2.5 New Snow Density
           3.1.2.3 Overall Accuracy
   3.2 Winter 2015/16
       3.2.1 WRF Model Performance
       3.2.2 SNOWPACK Simulation Results
   3.3 2015/16 Case Studies
       3.3.1 Bridger Bowl
       3.3.2 NW Montana
4. DISCUSSION
   4.1 2014/15 WRF Model Performance
   4.2 2014/15 SNOWPACK Performance
       4.2.1 Bias Corrections
   4.3 2015/16 WRF Model Performance
   4.4 2015/16 SNOWPACK Performance
   4.5 2015/16 Case Studies
   4.6 Usefulness and Application of a Coupled SNOWPACK-WRF Model in an Operational Setting
5. CONCLUSIONS
   5.1 Key Findings
   5.2 Limitations and Future Work
REFERENCES CITED
APPENDICES
   APPENDIX A: SNOWPACK Initialization Set-ups
   APPENDIX B: Snowpit Profiles from the 2014/15 Season
   APPENDIX C: Snowpit Profiles from the 2015/16 Season

LIST OF TABLES

2.1 Observed Meteorological Input Parameters for the 2014/15 SNOWPACK-Obs Simulation
2.2 WRF Meteorological Input Parameters for the 2014/15 SNOWPACK-WRF Simulation
2.3 WRF Temperature Corrections for the 2015/16 Season
2.4 Statistical Validation of Modeled Snow Density for a Hypothetical Snow Profile
2.5 Grain Type Agreement Matrix
2.6 Validation of Grain Type Using the Exact Match Method for a Hypothetical Snow Profile
2.7 Validation of Grain Type Using the Agreement Score Method for a Hypothetical Snow Profile
3.1 Bridger Bowl Weather Summary for the 2014/15 Season
3.2 2014/15 WRF Temperature Verification Results
3.3 2014/15 WRF Snow Surface Temperature Verification Results
3.4 2014/15 WRF Total Precipitation Verification Results
3.5 2014/15 WRF 24-Hour Precipitation Verification Results
3.6 2014/15 WRF Relative Humidity Verification Results
3.7 2014/15 WRF Wind Speed and Gust Verification Results
3.8 2014/15 WRF Categorical Wind Speed Verification Results
3.9 2014/15 WRF Categorical Wind Gust Verification Results
3.10 2014/15 WRF Incoming Shortwave Radiation Verification Results
3.11 2014/15 WRF Outgoing Shortwave Radiation Verification Results
3.12 2014/15 WRF Incoming Longwave Radiation Verification Results
3.13 2014/15 SNOWPACK Verification Results for Density, Temperature, Grain Size, and Hand Hardness
3.14 2014/15 SNOWPACK Grain Type Verification Results Using the Exact Match Method
3.15 2014/15 SNOWPACK Grain Type Verification Results Using the Agreement Score Method
3.16 2014/15 SNOWPACK Near Surface Temperature Verification Results
3.17 2014/15 SNOWPACK Grain Type Verification Contingency Tables
3.18 2014/15 SNOWPACK Snow Depth Verification Results
3.19 2014/15 SNOWPACK SWE Verification Results
3.20 2014/15 SNOWPACK 50 cm Temperature Verification Results
3.21 2014/15 SNOWPACK 24-Hour Snowfall Verification Results
3.22 2014/15 SNOWPACK Total Snowfall Verification Results
3.23 2014/15 SNOWPACK New Snow Density Verification Results
3.24 2014/15 SNOWPACK Overall Accuracy
3.25 2015/16 Temperature, Precipitation, SWE, and Snowfall Departures from Average for Five Locations Across Western Montana
3.26 2015/16 WRF Total Precipitation Verification Results
3.27 2015/16 WRF 24-Hour Precipitation Verification Results
3.28 2015/16 WRF Temperature Verification Results
3.29 2015/16 WRF Bias Corrections
3.30 2015/16 SNOWPACK Snow Depth Verification Results
3.31 2015/16 SNOWPACK SWE Verification Results
4.1 2014/15 Snow Surface Temperature Verification Results for the WRF, SNOWPACK, and SNOWPACK Filtered Models
4.2 Comparison of Validation Results Between Lundy et al. (2001) and the 2014/15 Study
4.3 Comparison of Snow Depth Verification Results for the 2014/15 SNOWPACK-WRF Model Using Different Bias Corrections
4.4 Overall Accuracy Comparison for the 2014/15 SNOWPACK-WRF Model Using Different Bias Corrections

LIST OF FIGURES

1.1 Graph of U.S. Avalanche Fatalities
1.2 Graph of U.S. Annual Death Rate due to Lightning, Tornadoes, Floods, and Hurricanes
1.3 Three Classes of Data Used in Avalanche Forecasting
1.4 Avalanche Forecasting Data Flow
1.5 Example Output from SNOWPACK Using the SnopViz Tool
1.6 Terrain Resolution Difference for Four Numerical Weather Models
1.7 Precipitation Resolution Difference for Four Numerical Weather Models
2.1 Map Showing Location of Bridger Bowl Ski Area
2.2 Aerial Photo of Bridger Bowl Ski Area with Locations of the Two Study Plots Labeled
2.3 Photo of the Alpine Study Plot
2.4 Photo of the Ramp Study Plot
2.5 Map Showing Five Study Sites Used During the 2015/16 Season
2.6 Map of Avalanche Forecast Advisory Areas for Western Montana
2.7 Photo of the MSU Weather Station
2.8 2014/15 WRF Model Domains
2.9 Map Comparing Location of 2014/15 WRF Model Grid Point and Alpine Study Plot
2.10 2015/16 WRF Model Domains
2.11 Example of Normalization Method Used to Compare Modeled and Observed Snow Profiles
2.12 Example Density Comparison of a Modeled and Observed Snow Profile
2.13 Example Grain Type Comparison of a Modeled and Observed Snow Profile
3.1 Comparison of Average and Observed Annual Snowfall and Precipitation for the 2014/15 Season at Bridger Bowl Ski Area
3.2 2014/15 Scatter Plot of Observed vs. WRF Temperatures
3.3 2014/15 Scatter Plot of Observed vs. WRF Snow Surface Temperatures
3.4 2014/15 Observed vs. WRF Total Precipitation
3.5 2014/15 SNOWPACK Snow Grain Simulations
3.6 2014/15 Scatter Plot of Observed vs. SNOWPACK Modeled Density
3.7 2014/15 Boxplot of Observed vs. SNOWPACK Modeled Density
3.8 2014/15 Scatter Plot of Observed vs. SNOWPACK Modeled Temperature
3.9 2014/15 Boxplot of Observed vs. SNOWPACK Modeled Temperature
3.10 2014/15 Scatter Plot of Observed vs. SNOWPACK Modeled Grain Size
3.11 2014/15 Boxplot of Observed vs. SNOWPACK Modeled Grain Size
3.12 2014/15 Boxplot of Observed vs. SNOWPACK Modeled Hand Hardness
3.13 2014/15 Observed vs. SNOWPACK Modeled Snow Depth
3.14 2014/15 Observed vs. SNOWPACK Modeled SWE
3.15 2014/15 Observed vs. SNOWPACK Modeled 50 cm Temperature
3.16 2014/15 Observed vs. SNOWPACK Modeled Total Snowfall
3.17 Average vs. Observed Values of Precipitation and SWE for Four SNOTEL Stations in Western Montana for the 2015/16 Season
3.18 2015/16 Seasonal Weather Timeline for Bridger Bowl and Noisy Basin
3.19 SNOWPACK Simulation Date Range for Each Location for the 2015/16 Season
3.20 2015/16 Observed vs. WRF Total Precipitation
3.21 2015/16 SNOWPACK Simulations
3.22 2015/16 Observed vs. SNOWPACK Modeled Snow Depth
3.23 2015/16 Observed vs. SNOWPACK Modeled SWE
3.24 Photo of Saddle Peak
3.25 Comparison of Observed vs. Modeled Snow Profiles on November 25, 2015 for the Ramp Study Plot
3.26 Comparison of Observed vs. Modeled Snow Profiles on December 13, 2015 for Saddle Peak
3.27 Photo of Three Avalanches on Saddle Peak on December 20, 2015
3.28 Photo of Crown Line of December 20, 2015 Saddle Peak Avalanche
3.29 Modeled Snow Profile for Saddle Peak on December 20, 2015
3.30 Comparison of Observed vs. Modeled Snow Profiles on January 4, 2016 for the Ramp Study Plot
3.31 Photo of January 14, 2016 Saddle Peak Avalanche
3.32 Photo of Crown Line and Track of the January 14, 2016 Saddle Peak Avalanche
3.33 Modeled Snow Profile for Saddle Peak on January 14, 2016
3.34 Photo of the Early April 2016 Slab Avalanche in the Northern Bridger Mountains
3.35 Map of Bridger Range Showing the Location of the Early April Avalanche
3.36 April 3, 2016 SNOWPACK Modeled Snow Profile and Liquid Water Content for the Ramp Study Plot
3.37 Comparison of Observed vs. Modeled Snow Profiles on April 8, 2016 for the Ramp Study Plot
3.38 Photo of December 9, 2015 Red Meadow Avalanche in NW Montana
3.39 Modeled Snow Profile for Noisy Basin on December 9, 2015
3.40 Comparison of Observed vs. Modeled Snow Profiles on December 14, 2015 for Big Mountain Summit
3.41 Photo of January 14, 2016 Avalanche in Noisy Basin
3.42 Modeled Snow Profile for Noisy Basin on January 14, 2016
4.1 Scatter Plot of Observed vs. SNOWPACK Modeled Snow Surface Temperatures During the 2014/15 Season
4.2 Comparison of SNOWPACK-Obs and SNOWPACK-WRF Modeled Snow Grain Type for the First Part of the 2014/15 Season
4.3 March 16, 2015 Observed Snow Profile at the Alpine Study Plot
4.4 Comparison of SNOWPACK-Obs and SNOWPACK-WRF Modeled Snow Grain Type for the Later Part of the 2014/15 Season
4.5 Snow Depth Comparison for the SNOWPACK-WRF Model Using Four Different Bias Corrections During the 2014/15 Season
4.6 Map Comparing the Location of the Stuart Mountain WRF Model Grid Point and the Stuart Mountain SNOTEL Station
4.7 Map Comparing the Location of the Noisy Basin WRF Model Grid Point and the Noisy Basin SNOTEL Station
4.8 Saddle Peak SNOWPACK Grain Type Simulation During the 2015/16 Season Showing Three Surface Hoar Layers

ABSTRACT

Predicting avalanche danger depends on knowledge of the existing snowpack structure and the current and forecasted weather conditions. In remote and data-sparse areas this information can be difficult, if not impossible, to obtain, increasing the uncertainty and challenge of avalanche forecasting.
In this study, we coupled the Weather Research and Forecasting (WRF) model with the snow cover model SNOWPACK to simulate the evolution of the snow structure for several mountainous locations throughout western Montana, USA during the 2014-2015 and 2015-2016 winter seasons. We then compared the model output to manual snow profiles and snow and avalanche observations to assess the quantitative and qualitative accuracy of several snowpack parameters (grain form, grain size, density, stratigraphy, etc.) during significant avalanche episodes. At our study sites, the WRF model tended to over-forecast precipitation and wind, which impacted the accuracy of the simulated snow depths and SWE throughout most of the study period. Despite this, the SNOWPACK-WRF model chain managed to approximate the snowpack stratigraphy observed throughout the two seasons, including early season faceted snow, the formation of various melt-freeze crusts, the spring transition to an isothermal snowpack, and the general snowpack structure during several significant avalanche events. Interestingly, the SNOWPACK-WRF simulation was statistically comparable in accuracy to a SNOWPACK simulation driven with locally observed weather data. Overall, the model chain showed potential as a useful tool for avalanche forecasting, but advances in numerical weather and avalanche models will be necessary for widespread acceptance and use in the snow and avalanche industry.

1. INTRODUCTION

1.1 Background

Avalanches are a significant worldwide natural hazard that threatens transportation corridors, causes property damage, and leads to injury and death. Though individual avalanche incidents do not have the same impact as other natural hazards in terms of direct cost and potential for loss of life (e.g., earthquakes, volcanic hazards, tropical storms), avalanches occur much more frequently and can have significant impacts in areas where this phenomenon is common (McClung and Schaerer, 2006).
Between 1950 and 2016, 1,044 people were killed in avalanches in the United States (Colorado Avalanche Information Center, 2016). Between 1991 and 2003, avalanche deaths in the U.S. increased dramatically, with the 5-year running average rising from 12 deaths per year in 1990 to 30 deaths per year by 2003 (Fig. 1.1). This uptick in avalanche fatalities can be attributed to the rapid rise in popularity of backcountry recreation in conjunction with the increased performance of backcountry ski and snowmobile equipment. One recent trend noted in Fig. 1.1 is the plateau of avalanche fatalities after 2003, with the 5-year running average staying at or below 30 deaths per year. This is in direct contrast to the rapid increase in backcountry use over the last 10 years. It has been estimated that there are nearly 500,000 backcountry skiers in the U.S. (Dostie, 2010), with tens of thousands more participating in other forms of backcountry recreation (e.g., splitboarding, snowmobiling, snowshoeing). The plateau in annual deaths can likely be attributed to progress in avalanche forecasting, communication, awareness, and education, and to advances in snow and avalanche science across the U.S.

Figure 1.1. Avalanche fatalities in the United States by season from 1951 through 2015 (Source: CAIC, 2016)

When we consider the death rate due to weather-related hazards, we observe a similar downward trend. Since 1940, the death rate due to lightning, tornadoes, floods, and hurricanes has fallen from five people per million to about one person per million (Fig. 1.2). To put this change into perspective, a death rate of five per million would have resulted in approximately 1,600 deaths in 2015, compared to the actual number of 271, an 83% decrease. This decrease can be attributed to many factors, including increased warning time due to improved communications and advances in weather forecasting and technology.
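The arithmetic behind this comparison can be checked with a short calculation (a minimal sketch; the 2015 U.S. population of roughly 321 million comes from the Census Bureau estimate cited with Fig. 1.2, and the variable names are our own):

```python
# Back-of-the-envelope check of the weather-fatality comparison above.
# Assumption: 2015 U.S. population of ~321.4 million (Census Bureau estimate).
US_POP_MILLIONS_2015 = 321.4
RATE_1940_PER_MILLION = 5.0   # approximate 1940 death rate
ACTUAL_DEATHS_2015 = 271      # observed weather-related deaths in 2015

# Deaths expected in 2015 had the 1940 rate persisted
expected_deaths = RATE_1940_PER_MILLION * US_POP_MILLIONS_2015  # ~1607

# Observed 2015 rate and the relative decrease from the expected count
rate_2015 = ACTUAL_DEATHS_2015 / US_POP_MILLIONS_2015           # ~0.84 per million
decrease_pct = 100.0 * (1.0 - ACTUAL_DEATHS_2015 / expected_deaths)

print(f"{expected_deaths:.0f} expected vs. {ACTUAL_DEATHS_2015} actual "
      f"({decrease_pct:.0f}% decrease)")
```

This reproduces the roughly 1,600 expected deaths and 83% decrease quoted in the text.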
It is feasible that these same trends and correlations can be extrapolated to avalanche fatalities, with advances in technology and forecasting leading to decreasing death rates in the future, even as backcountry use grows exponentially.

Figure 1.2. Annual death rate due to lightning, tornadoes, floods and hurricanes in the U.S. from 1940 to 2015. Data courtesy of the U.S. National Weather Service (http://www.nws.noaa.gov/om/hazstats/resources/weather_fatalities.pdf) and the U.S. Census Bureau (http://www.census.gov/population/estimates/nation/popclockest.txt).

As the number of people exposed to avalanche danger increases each year, accurately forecasting the avalanche threat and clearly disseminating key information to the public will be vital. Across the U.S., 22 regional avalanche forecast centers, six Department of Transportation avalanche forecasting programs, and numerous ski patrol/snow safety teams and private backcountry/heli-skiing guiding companies work throughout the winter season to predict, warn of, and mitigate avalanche danger in their respective areas of responsibility to protect the public or their customers. To do this, avalanche forecasters gather and utilize an assortment of data to make informed decisions about the current and future stability of the snowpack. Avalanche danger depends on a wide variety of factors, including past, present, and future meteorological conditions; the underlying snow structure and stratigraphy; and terrain (e.g., slope angle, aspect, and elevation) (LaChapelle, 1980). These data can be broken down into three classes, as described by McClung (2002) in Figure 1.3 below.

Figure 1.3.
Diagram of different classes of data used to make an avalanche forecast. (Source: LaChapelle, 1980)

Data in the lowest class (Class I) provide the most direct evidence of avalanche potential, while Class III data provide the least direct evidence. The direction of the diagram shows the typical path an avalanche forecaster takes when creating a forecast. First, forecasters must understand how past, present, and future weather conditions will influence the formation and evolution of the snowpack. Next, they must apply their understanding of the snowpack to determine its stability, using direct observations of stability and previously observed avalanches. Finally, they can make a determination of the avalanche potential for their area of interest.

Avalanche forecasts are also made at different spatial scales, which can be broken into three sizes: the synoptic scale (e.g., a region or mountain range), the mesoscale (e.g., a specific part of a mountain range, such as a ski resort or highway avalanche area), and the microscale (e.g., a specific slope, avalanche path, or terrain feature). As the spatial scale decreases (i.e., from the synoptic scale to the microscale), the difficulty of the forecast increases due to the complex spatial variability of the snowpack and terrain features; the forecast problem goes from general in nature to much more specific. Figure 1.4 below shows the typical path an avalanche forecaster will take depending on the spatial scale they are forecasting for. Forecasters operating at the microscale (e.g., ski guides) must rely more upon Class I data to make informed decisions, as these data provide direct evidence of avalanche potential. Regional forecasters, on the other hand, rely more upon weather data and forecasts, combined with direct observations gathered from the field, to determine regional-scale avalanche forecasts.

Figure 1.4. Typical avalanche forecast data flow according to the forecast scale.
(Source: McClung, 2002)

Data from the three classes are gathered in different ways. Past and present meteorological data are usually gathered from automatic weather stations and can typically be accessed remotely and in near real time at an interval of an hour or less. Forecasted weather data are also accessed remotely, in the form of a general area or point weather forecast, or directly from numerical weather models. Conversely, knowledge of the snowpack structure is derived from manual snow profile data and stability tests gathered in the field. These data are much more difficult and dangerous to collect, as they may require a human observer to travel through avalanche terrain in poor weather conditions. Manual observations are usually low in spatial and temporal resolution, meaning the highly variable nature of the snowpack is often not captured (Schweizer et al., 2006). Many locations where snowpack data are needed can be inaccessible during the winter months, like high-alpine wind-loaded starting zones above busy transportation corridors (e.g., Snoqualmie Pass in Washington, Seward Highway in Alaska, and Little Cottonwood Canyon in Utah) or remote slopes accessed by heli-ski guide operators. Additionally, stability tests and snow grain classification involve a certain level of human interpretation and subjectivity, which may decrease the reliability of the data.

1.2 Snow Cover Modeling

In order to augment information obtained from manual snowpack observations, numerical snow cover models have been developed which simulate the evolution of seasonal mountain snow cover. These physically based models use meteorological inputs to drive calculations of processes in the snowpack and provide a simulated, or predicted, snowpack as output. Effectively, these models use Class III data to simulate Class I and II data to predict avalanche danger.
Numerous snow cover models have been developed in recent years for various purposes and applications, including use in global or regional climate models, forecasting water runoff, understanding snow processes, and operational avalanche forecasting (Armstrong and Brun, 2008). The two most advanced snow cover models used in avalanche forecasting are the French model Crocus (Brun et al., 1989 and 1992) and the Swiss snow cover model SNOWPACK (Bartelt and Lehning, 2002; Lehning et al., 2002a, b).

1.2.1 Crocus

Crocus is a one-dimensional numerical snow cover model developed in France by the National Center for Meteorological Research (Brun et al., 1989). The model is driven via meteorological forcing and is able to simulate the evolution of, and the mass transfer between, the snowpack and the atmosphere (e.g. radiative balance, turbulent heat fluxes, and moisture fluxes) and the snowpack and the ground (ground heat fluxes). The model simulates the time evolution of the morphological properties of snow grains and snow metamorphism to predict the formation of individual layers within the snowpack. These layers are described through the variables of thickness, temperature, density, liquid water content, and grain type, which is described semi-quantitatively using dendricity, sphericity, grain size, and the physical history of the snow layer. Crocus is typically coupled with the MEPRA model in the French Alps, which is an expert system for avalanche forecasting. MEPRA calculates snow mechanical properties from Crocus snowpack simulations (e.g. shear strength and rammsonde resistance) and assesses the natural mechanical stability of the snowpack in order to predict the natural and human-triggered avalanche risk (Durand et al., 1999). Crocus can simulate snow cover for many different slopes, elevations, and aspects representative of typical French massifs covering an area of about 500 km².
The results are projected onto virtual pyramids in 300 m elevation bands covering multiple aspects. Crocus is currently used operationally for avalanche forecasting in the French Alps by Météo-France.

1.2.2 SNOWPACK

SNOWPACK is a one-dimensional snow cover model developed by the Swiss Federal Institute for Snow and Avalanche Research (SLF) in Davos, Switzerland. SNOWPACK uses Lagrangian finite element methods to numerically solve the partial differential equations which govern mass, energy, and momentum conservation in order to model the heat and mass transfer, stresses, and strain within the snowpack (Bartelt and Lehning, 2002). As with Crocus, SNOWPACK uses meteorological input data to drive the numerical model, which can simulate snow cover for hours, days, weeks, months, or even years. At a minimum, SNOWPACK requires the following meteorological input parameters: air temperature, relative humidity, wind speed, incoming or reflected shortwave radiation, incoming longwave radiation or snow surface temperature, and either measured snow height or new snow water equivalent. Optional meteorological inputs include wind direction (if simulating wind loading/erosion on different slopes), maximum wind speed, ground temperature, and snow temperature at various depths (Lehning et al., 2002b). Once SNOWPACK has completed a simulation run, a variety of output data is available. Among other things, SNOWPACK calculates snow grain type, grain size, density, liquid water content, temperature profile, temperature gradient, hand hardness, and even different stability parameters. Additionally, SNOWPACK determines the 24-hour and 3-day new snowfall and the new snow density, as well as the total snow water equivalent (SWE) of the snowpack. An important aspect of SNOWPACK is its ability to simulate the formation of various weak layers like surface hoar, facets, depth hoar, and melt-freeze crusts, which are vital for avalanche forecasting.
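The minimum input list above corresponds closely to the SMET file format that SNOWPACK commonly reads through its MeteoIO preprocessing library. The following is a minimal sketch of such a file; the station identifier, coordinates, and data values are placeholders for illustration, not measurements from this study:

```text
SMET 1.1 ASCII
[HEADER]
station_id = ALPINE
latitude = 45.82
longitude = -110.92
altitude = 2255
nodata = -999
fields = timestamp TA RH VW ISWR ILWR HS
[DATA]
2014-10-30T01:00:00 270.15 0.85 3.2 0.0 285.0 0.05
```

Here TA is air temperature (in K), RH relative humidity (as a fraction), VW wind speed (m/s), ISWR and ILWR incoming short- and longwave radiation (W/m²), and HS snow height (m), following MeteoIO's convention of MKSA units.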
The output can be viewed via a Java application called the SNOWPACK Graphical User Interface (SNGUI), which visually displays the evolution of different snowpack parameters over the entire time period. Recently, a web-based interactive snow profile visualization tool, called Snowpack Visualizer (SnopViz), was developed by the SLF and allows users to view SNOWPACK simulations and meteorological time series. Visualization options include a mouse roll-over view of the point-in-time profile (Fig. 1.5) and simulated pit profiles from SNOWPACK output.

Figure 1.5. Example of graphical output from the SnopViz tool. On the left is the time evolution of grain type (color coded) and snow height for the entire winter season. On the right, the point-in-time profile with grain type and hand hardness is shown, which corresponds to the location (date) of the vertical black line in the seasonal profile. The temperature profile (red line) and several stability indices (black arrow) are displayed on the right image. Mousing over a specific layer displays the grain type, grain size, bottom and top heights of the layer, as well as the temperature at that specific point in the snowpack (top left).

To complement the one-dimensional aspect of SNOWPACK, the SLF has developed a three-dimensional model called Alpine3D. This is a spatially distributed model that analyzes and predicts snow-dominated surface processes in mountainous topography. Alpine3D extrapolates surface weather data onto a virtual grid and uses SNOWPACK to predict the snow cover evolution for each grid point (Lehning et al., 2006). Though the model is used primarily for snow hydrology and climate change applications, it does have potential for use in regional avalanche prediction.

1.3 Previous Work with SNOWPACK and Crocus

Both SNOWPACK and Crocus have been used operationally in avalanche forecasting programs since the 1990s and have shown promising results for further use and application.
As stated previously, Crocus has primarily been used for operational avalanche forecasting in France, but a few studies have utilized the model in different mountain climates. Mingo and McClung (1998) ran Crocus in two separate climatic regions in the southern Canadian Rocky Mountains. They used in-situ weather data from remote weather stations to drive the model over the course of two winters. They found good performance in the model's ability to simulate snow depth, temperature, density, and crust formation. They noted that when snow temperature gradients fell between 5 and 9 °C/m the model produced faulty simulations of near-surface facets, though grain type was predicted well when the temperature gradient was outside of that range. Additionally, they found good predictions of surface hoar, with a 70% accuracy rate over the entire study. One of the first operational uses of SNOWPACK was during an intense avalanche cycle in Switzerland in February 1999. During the month-long event, over 3000 avalanches were observed, with nearly 1000 of these causing damage to infrastructure and killing 17 people. Some avalanches were climax, 100-year events, with crown lines exceeding 5 m (Bartelt and Lehning, 2002). The SLF had recently implemented a new avalanche warning system with SNOWPACK as an integral part of the system. SNOWPACK provided forecasters with a 'nowcast' of the predicted snowpack settlement and stratigraphy across the Swiss Alps, driven with data from a network of 50 high alpine weather stations. The simulations augmented manual field observations and provided supplementary information for forecasters where it otherwise would have been impossible or too dangerous to collect the data. Due in part to the implementation of SNOWPACK into operational use, forecasters were able to provide advanced warning of extremely dangerous avalanche conditions and mitigate the impact of the event (Lehning et al., 1999).
SNOWPACK has also been evaluated in numerous studies for various applications across the world. Lundy et al. (2001) performed a statistical validation study of SNOWPACK in southwest Montana using input from an AWS. They assessed the accuracy of SNOWPACK's predictions of temperature, density, grain size, and grain type against weekly field data from pits dug in the vicinity of the AWS. They found a relatively strong statistical correlation between observed and predicted temperature (r = 0.90), a moderate correlation for density (r = 0.85), and poor correlations for grain size (r = 0.30) and grain shape (Cramer's V = 0.38). From their results they concluded that SNOWPACK had the potential to be a useful tool for avalanche forecasters but had some limitations that future versions of the model could improve upon, including surface hoar development and wet snow metamorphism. To date, this has been the only study in the U.S. where SNOWPACK was used and evaluated for the purpose of avalanche prediction. Hirashima et al. (2010) used SNOWPACK to predict full-depth wet-slab avalanche activity in the temperate, wet climate of Nagaoka, Japan. They noted that water transport in the original SNOWPACK model was oversimplified and did not adequately estimate free water percolation through the snowpack, which is important for predicting the release of full-depth avalanches. They developed a modified water transport model, incorporated it into SNOWPACK, and compared the results from the modified version to the original. During a mid-winter snowmelt period, the modified version accurately predicted an increase of liquid water content in the snowpack as well as a water-saturated layer near the bottom of the snowpack, while the unmodified version did not show a significant change in the liquid water content of the snowpack. Additionally, the modified version showed a corresponding decrease in the snowpack stability at the same layer, which was coincident with a nearby observed full-depth avalanche.
Their work showed the applicability of using SNOWPACK for wet-slab avalanche prediction, but also the need for a more sophisticated water transport model. Schweizer et al. (2006) and Schirmer et al. (2010) evaluated SNOWPACK's stability indices and predictions at local and regional scales against field stability tests and regional avalanche forecast advisories in Switzerland. They found relatively positive results, with predicted stability ratings being 75-85% accurate when compared to field data. There were issues with the model properly identifying observed critical weak layers leading to instabilities within the snowpack but, when the weak layer was identified, its weakness and potential for failure were, for the most part, accurately simulated. Lastly, Schmucki et al. (2013) evaluated SNOWPACK simulations of modeled snow depth and snow water equivalent (SWE) at three different locations across Switzerland. They used meteorological input from AWSs to drive the SNOWPACK model and compared the modeled snow depth and SWE to observed values. Results showed SNOWPACK successfully predicted seasonal snow depth and SWE in high alpine locations, though they noted the importance of calibrating precipitation data in order to achieve the most accurate results.

1.4 Limitations of Observed Weather Data

In each of the studies discussed above, in-situ weather data from remote weather stations was used as input to drive a snow cover model. Since the accuracy of a snow cover model is highly dependent on the quality of the meteorological input data driving it, having the most accurate and precise weather data is paramount to adequately simulate seasonal snow cover and avalanche potential. In an ideal situation, a vast network of remote high alpine weather stations, connected to an external power source, with the ability to measure all necessary (and optional) meteorological inputs, would be available. In a few cases, this type of infrastructure is available (e.g.
Lehning et al., 1999), but in most avalanche forecasting operations across the globe, this is not feasible. Most centers forecast for large regions that are mostly remote with little observed data. For example, the Canadian Avalanche Center (CAC) forecasts for an area totaling about 345,000 km² with only 250 AWS available, meaning each AWS covers an average area of about 1,380 km². This pales in comparison to countries with a much denser network of AWS, like Switzerland, where each AWS covers around 100 km² (Bellaire et al., 2011). Though no exact figures are available, the AWS density in high alpine regions in the U.S. is likely similar to that in Canada, with forecast centers covering large areas of remote backcountry terrain with few high alpine weather stations available. In even more remote places, like Alaska's Chugach range, which covers an area of 50,448 km², only a few AWS are available, with even less data coming from high alpine avalanche starting zones. The cost of an AWS can also be a prohibitive factor. An AWS that has the necessary instrumentation to meet the minimum required inputs for a snow cover model can cost between $25K and $50K USD. This limits the number of stations available, especially in large remote regions. Additionally, AWS can have significant maintenance challenges, especially during inclement weather, where strong winds can break anemometers, rime can cover instrumentation, and a lack of direct power supply with reliance on solar power can prevent ventilation and heating of important sensors.

1.5 Coupling NWP and Snow Cover Models

An alternative to using observed weather data to drive a snow cover model is to instead use forecasted weather data from an NWP model. An NWP model is a mathematical model of the atmosphere, which uses current weather conditions and complex equations to predict the future state of the atmosphere.
NWP models are broken up into 3D grids with varying horizontal and vertical spacing (depending on the spatial resolution of the model) and can cover the entire globe (global models) or a smaller portion of the earth (regional models). Global models typically have the coarsest resolution, with horizontal grid spacing around 0.5° (e.g. GFS), while mesoscale (local) models typically have the finest resolution, with some of the most recent models having horizontal grid spacing of less than 2 km and a time step as short as 10 minutes. High-resolution NWP models are typically more accurate in mountainous areas, where smaller grid spacing can better resolve major mesoscale topographic features (Fig. 1.6) and their corresponding atmospheric circulations (Fig. 1.7) (Mass et al., 2002). Recently, Schirmer and Jamieson (2015) found increased forecast accuracy of wintertime precipitation over complex terrain using a 2.5 km NWP model versus a 15 km NWP model.

Figure 1.6. Example of differing terrain resolution for 4 different NWP models over the NW US. Top left and right images are two global models with 36 km and 27 km resolution (respectively), the bottom left image is a regional model with 12 km resolution, and the bottom right image is a mesoscale model with 3 km resolution. Individual mountain ranges are not resolved in the global models, while the mesoscale model clearly delineates individual terrain features.

Figure 1.7. Precipitation forecast for the same area as in Figure 1.6. Areas of precipitation are more coarsely resolved in the two global models, while the mesoscale model is able to resolve individual precipitation bands, which tend to form over mountainous terrain.

Using an NWP model to force a snow cover model is advantageous over using observed weather data due to the continuous spatial and temporal coverage of NWP models.
NWP models are capable of providing all required input parameters for a snow cover model without having to purchase, install, and maintain a network of AWS. Data from NWP models can be pulled for as many locations as there are grid points in the model domain, meaning there could be thousands of 'virtual weather stations' that could drive an equal number of snow cover simulation points. Additionally, gridded data output from NWP models is typically available several times a day and can be set up to have time steps as short as 10 minutes.

1.6 Previous Work Coupling NWPs with SNOWPACK and Crocus

The Crocus-MEPRA model chain has traditionally been coupled in France with the numerical weather analysis model SAFRAN to create the SAFRAN-Crocus-MEPRA (SCM) model chain. SAFRAN is a meteorological analysis system which computes the relevant meteorological variables that affect snowpack evolution. The system analyzes weather data from AWS, synoptic weather stations, upper-air soundings, and regional NWPs from around the area of interest. The data is then downscaled and extrapolated to different elevation bands and aspects at the scale of a typical French massif (~500 km²) to estimate the meteorological parameters necessary to drive Crocus (Durand et al., 1993). The SCM model chain was validated by Durand et al. (1999) in the French Alps and the Pyrenees mountains. They found good agreement between modeled and observed snow depths and snowpack stratigraphy at most sites, but noticed glaring discrepancies due to inaccurate forecasts of rain/snow transitions and cloud cover, both of which have a narrow window of accuracy. Evaluating the stability predictions of the model chain, they found an agreement score of 74% between the forecasted stability rating and the occurrence of observed avalanches on the Vanoise massif.
Overall, the study pointed out the usefulness of the model chain to provide forecasters a reliable source of information on the snowpack structure at a regional scale to supplement field observations, but noted serious limitations when applying the results to smaller spatial scales. Though most work with SNOWPACK is typically done with in-situ weather data as input, there have been a few studies that evaluated the results of coupling NWP models with SNOWPACK. Bellaire et al. (2011) coupled the 15 km resolution version of the Canadian weather forecasting model GEM15 (Global Environmental Multiscale model) with SNOWPACK for 5 seasons in the Columbia Mountains of western Canada. They found the GEM15 model tended to over-forecast precipitation amounts, leading to erroneous 24-hour new snow totals and simulated snow depths. Bias correcting the forecasted precipitation led to more accurate snow depths and 24-hr new snow totals, but they found the model still struggled with low-density snow events (density < 5%), where simulated new snow was too dense, leading to an under-prediction of new snow amounts, especially for large storms. A quantitative analysis of the snowpack stratigraphy and critical weak layer formation was not performed in the Bellaire et al. (2011) study. Bellaire and Jamieson (2013a) also coupled SNOWPACK with the GEM15 model but focused on the formation of critical weak layers (e.g. surface hoar and melt-freeze crusts) during 7 winters in the Columbia Mountains. They found it necessary to bias correct some of the forecasted weather parameters in order to produce reasonably accurate simulations of the snowpack. SNOWPACK had trouble simulating rain crusts, as the default threshold for rain is set at +1.2 °C, which is too warm for transitional snow climates and doesn't allow for the simulation of freezing rain.
Decreasing the rain threshold to -0.5 °C increased the probability of detection for rain crusts from 35% to 65%, but likely increased the false alarm ratio at the same time, though this metric was not provided in the paper. They also found good prediction of surface hoar events, where between 89% and 100% of surface hoar layers were simulated. Horton and Jamieson (2016) coupled SNOWPACK with a 2.5 km resolution version of the GEM model to evaluate the ability of the coupled model system to predict surface hoar formation at a regional scale across western Canada. The model chain accurately predicted the increase in observed critical surface hoar layers in the more continental climate of the Columbia Mountains versus the more maritime climate of the Coast Mountains. They did find difficulty in accurately predicting surface hoar size and coverage for specific locations. They concluded that the small-scale processes present in complex terrain greatly impacted the ability of the model chain to accurately simulate the site-specific formation of surface hoar, and that the model chain is currently best used in a more regional scale forecasting approach. Lastly, though not published in any peer-reviewed journals, some work using NWPs and SNOWPACK has been tested in the U.S. The Colorado Avalanche Information Center (CAIC) used output from an NWP model that was run in-house to drive the SNOWPACK model for numerous points across the state of Colorado. During this experiment, a SNOWPACK simulation was produced using raw output data from each grid point in the NWP model domain for several seasons. Results from this experiment were not realistic and provided forecasters with little to no useful information, and the experiment was discontinued after two seasons (J. Snook, personal communication, August 2015).
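Detection statistics like those cited in these studies can be computed from a simple forecast-versus-observation contingency table. The following is a minimal sketch; the example counts are hypothetical and not taken from any of the papers discussed:

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection (POD) and false alarm ratio (FAR)
    from contingency counts: hits = events both forecast and observed,
    misses = observed but not forecast, false_alarms = forecast but
    not observed."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical example: 13 of 20 observed rain crusts were simulated,
# and 5 simulated crusts were never observed.
pod, far = pod_far(hits=13, misses=7, false_alarms=5)
```

Lowering a detection threshold (such as the rain threshold above) typically raises both quantities together, which is why reporting POD without FAR gives an incomplete picture of model skill.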
1.7 Research Questions

This thesis continues to build on previous work in assessing the feasibility and effectiveness of coupling an NWP model with a snow cover model, specifically SNOWPACK. SNOWPACK was chosen over Crocus because of its availability, ease of use, and its highly sophisticated numerical procedures, including the Lagrangian Gauss-Seidel finite element method, to simulate different snowpack and avalanche formation processes. To date, only one published study has focused on using SNOWPACK in the U.S. (Lundy et al., 2001) and none have used a coupled NWP-SNOWPACK model chain. This research will analyze the efficacy of using an American-developed high-resolution NWP model to drive SNOWPACK in a U.S. intermountain snow climate in order to predict avalanche danger. This study will have a two-pronged approach, where I will first quantitatively evaluate the accuracy of a coupled NWP-SNOWPACK model for a specific location over the course of a winter season. Additionally, I will assess the difference in performance of SNOWPACK using forecasted versus locally observed weather data to determine how much, if any, accuracy is lost by using NWP model data as input. Secondly, I perform a qualitative assessment of the ability of a coupled NWP-SNOWPACK model to predict snowpack stratigraphy and avalanche potential at a regional scale, focusing more on significant events and case studies. Fundamentally, my research boils down to two main questions:

1. How accurately does SNOWPACK simulate observed snow structure when it is driven with forecasted weather data, and how does the model performance change when using forecasted weather data instead of locally observed weather data?

2. How well can a coupled NWP-SNOWPACK model help forecast avalanche potential, and how useful would this system be for operational avalanche forecasting in the future?

2.
METHODS

I conducted this study over the course of two Northern Hemisphere winter seasons from 2014-2016 in several areas across western Montana. During the 2014/15 winter season, I ran the SNOWPACK model for a single location with meteorological data from an NWP model as well as a co-located weather station. This first season acted as my learning phase as I figured out the best way to generate a point forecast from an NWP model and learned what output parameters were most useful to drive SNOWPACK. The focus was centered primarily on determining the quantitative accuracy of a SNOWPACK simulation validated against field observations for as many output parameters as possible, as well as evaluating the difference between using forecasted and locally observed weather data. For the 2015/16 winter season, I applied what I learned from the previous season and expanded the study to 6 sites located in various mountain ranges across western Montana. Only NWP forecasted weather data was used to drive SNOWPACK, and a simulation was produced for each site for the entire winter. The second season focused on utilizing SNOWPACK for avalanche prediction at a regional scale and evaluating the accuracy of the predicted stratigraphy and stability during significant avalanche episodes. This study was broken down into a quantitative model assessment and an event-based qualitative assessment.

2.1 Study Area

2.1.1 2014/15 Winter Season

The study area for the 2014/15 season was at the Bridger Bowl ski area in southwest Montana. The ski area is located in the Bridger Range, about 19 km northeast of the city of Bozeman. The Bridger Range is a prominent north-south oriented mountain range about 40 km long and 10 km wide (Fig. 2.1). The range tops out at 2946 m, with Bridger Bowl lying near the center of the range at an elevation of 1868 m at the base and 2687 m at the top.

Figure 2.1.
Location of Bridger Bowl ski area in southwest Montana.

Bridger Bowl is classified as an intermountain snow and avalanche climate, with climatic and snowpack characteristics ranging year to year from coastal (higher precipitation and snow densities, warmer temperatures, and less faceted crystal growth) to continental (lower precipitation and snow densities, colder temperatures, and more faceted crystal growth) (Mock and Birkeland, 2000). The average annual snowfall is around 650 cm, with average annual precipitation around 120 cm. The heaviest snowfall typically occurs during periods of strong northwesterly flow at the backside of a longwave trough with an embedded shortwave trough simultaneously moving through. Moisture is able to move through the low elevation gaps to the northwest, while the surrounding mountain ranges to the west and southwest tend to shield moisture from reaching the Bridgers (Birkeland and Mock, 1996). A study plot was constructed at the ski area near the edge of a meadow at the top of the Alpine lift, adjacent to the Alpine weather station, situated at an elevation of 2255 m (Fig. 2.2).

Figure 2.2. Aerial photo, looking west, of the Bridger Bowl ski area showing the location of the study plots used during the 2014/15 (red circle) and the 2015/16 (blue diamond) seasons.

A 17x30 m portion of the meadow was roped off before the start of the season to ensure the snowpack remained undisturbed for meteorological and snowpack measurements. The study plot was on a 5°, east-facing (100°) slope surrounded on the south and west sides by tall pine trees (Fig. 2.3). The location was chosen due to its accessibility and its proximity to an already established weather station. Additionally, ski patrollers could routinely access the area to perform maintenance if needed.

Figure 2.3. Photo, looking southeast, of the Alpine study plot used in the 2014/15 winter season.
The two weather stations as well as the location of the weekly snow pits can also be seen.

2.1.2 2015/16 Winter Season

Though the focus of the 2015/16 season was more regional in nature, a study plot was still constructed in the Bridger Mountains in order to gather snow pit data from an undisturbed location. In order to avoid the effects of shading and wind sheltering from nearby vegetation that impacted the Alpine study plot during the previous season, the 2015/16 study plot was moved to a more exposed location. The chosen location was a large open meadow outside of the ski area boundary, about 475 m west of the Alpine study plot, just below the popular backcountry skiing area named 'The Ramp' (Fig. 2.2). The meadow was situated at an elevation of 2392 m (137 m higher than the Alpine study plot) on a 12°, southeast-facing (120°) slope. A 15x15 m section in the middle of the meadow was roped off and used for manual snow profiles throughout the season (Fig. 2.4).

Figure 2.4. Photo, looking west, showing the location of the Ramp study plot (center) used for the 2015/16 winter season.

Other locations used in the 2015/16 season included Saddle Peak in the Bridger Mountains just south of Bridger Bowl (2793 m), Lolo Pass in the Bitterroot Range (1597 m), Stuart Mountain in the Rattlesnake Range (2255 m), Noisy Basin in the Swan Range (1841 m), and Big Mountain in the Whitefish Range (2053 m) (Fig. 2.5). These locations were chosen due to their proximity to pre-existing weather and SNOTEL stations as well as their centrality to popular backcountry skiing areas.

Figure 2.5. Map showing locations of the study sites used during the 2015/16 season. Nearby cities are included for reference.

2.2 Field Data

During the 2014/15 season, snowpits were excavated at the Alpine study plot on a weekly basis between November 30th and April 3rd.
Observations were gathered at 10 cm intervals, unless the total snow depth was 1 m or less, in which case observations were gathered every 5 cm. Data included density, hand hardness, temperature, grain size, grain type, and total snow depth, and was measured according to Greene et al. (2010). Additionally, a series of stability tests, typically an Extended Column Test and a Compression Test (Greene et al., 2010), were performed each week in order to assess the stability of the snowpack. Density was measured using the Wasatch Touring density kit, which uses a 100 cm³ stainless-steel tube and a balance-weighing device to measure the total SWE of the snow in the tube. Measurements were made by placing the center of the tube halfway between the top and bottom of each layer and inserting the tube horizontally into the snowpack until the tube was entirely filled, without compressing the snow inside. Hand hardness measurements and stability tests were performed according to industry standards outlined in Greene et al. (2010), snowpack temperatures were measured using a digital thermometer, and snow grain type and size were estimated using an 8x30 monocular hand lens and a snow crystal card. A primary and, if necessary, secondary grain type, classified according to the ICSI system (Colbeck et al., 1990), was identified, and the average grain size in each layer was recorded. In addition to snowpit data, measurements of total SWE were made on a weekly basis at the Alpine study plot. Measurements were made using a U.S. Federal snow sampler kit and followed guidelines according to the USDA Snow Survey Sampling Guide (USDA, 1959). For the 2014/15 season, a total of 19 snowpits were dug, which provided 283 individual layers, and a total of 21 SWE measurements for the entire snowpack were gathered. Field data from the 2015/16 season included manual snow profiles from the Ramp study plot and Saddle Peak.
The profiles were dug on an as-needed basis and were focused around the presence/formation of persistent weak layers and/or significant avalanche episodes. The snow profiles typically concentrated on the snowpack stratigraphy and included hand hardness, grain type/size, and stability, and occasionally included temperature measurements, if deemed important. A total of 12 snowpits were dug at the Ramp study plot and three at Saddle Peak. Additionally, public and professional snow and avalanche observations from three regional avalanche forecasting centers across western Montana were gathered and used to validate SNOWPACK simulations. The centers included the Gallatin National Forest Avalanche Center (GNFAC) located in Bozeman, MT, whose forecast area includes Bridger Bowl and Saddle Peak; the West Central Montana Avalanche Center (WCMAC) located in Missoula, MT, whose forecast area includes Stuart Peak and Lolo Pass; and the Flathead Avalanche Center (FAC) located in Hungry Horse, MT, whose forecast area includes Noisy Basin and Big Mountain (Fig. 2.6).

Figure 2.6. Map of avalanche forecast advisory areas for western Montana.

2.3 Meteorological Data

2.3.1 Observed Meteorological Data

During the 2014/15 winter season, a weather station with a Campbell Scientific CR1000 data logger was installed at the edge of the Alpine study plot, about 12 m north of Bridger Bowl's Alpine weather station (Fig. 2.3). Instrumentation included a CS215 temperature and relative humidity sensor, an SR50A sonic ranging sensor that measured total snow depth, a 109SS temperature probe, and a Kipp and Zonen CNR4 net radiometer that measured incoming and outgoing short- and longwave radiation (Fig. 2.7). The weather station was powered continuously via a solar panel but was not heated or ventilated. Data was automatically collected at 60 min time intervals between October 30, 2014 and June 2, 2015.
Additionally, hourly wind speed, wind gust, and liquid precipitation (rain or melted snow), measured via a heated tipping bucket rain gauge, were collected from the Alpine weather station, operated and maintained by Bridger Bowl Ski Area, for the same time period. Figure 2.7. Photo of the MSU weather station constructed at the Alpine study plot during the 2014/15 season showing the (a) net radiometer, (b) temperature and relative humidity sensor, (c) temperature probe, (d) CR1000 data logger, (e) solar panel, and (f) snow depth sensor. A temperature probe was initially used to record the ground temperature next to the MSU weather station but was moved on December 29, 2014 to measure the snow temperature at 50 cm above the ground. This data was used as an additional validation point to compare the 50 cm temperature predicted by SNOWPACK to that measured by the probe. There were likely issues with precipitation under-catch at the Alpine weather station due to the presence of a thick vegetation canopy nearby (Fig. 2.3) and the lack of a wind shield around the precipitation gauge, as recommended in the WMO Solid Precipitation Intercomparison Report (Goodison et al., 1998). To evaluate the likelihood of wind-induced under-catch, accumulated precipitation amounts between October 30, 2014 and June 2, 2015 were compared to the Brackett Creek SNOTEL station, which lies just 8 km north of the Alpine station at a similar aspect (SE) and elevation (only 24 m lower). During this time period, 766.32 mm of precipitation was recorded at Alpine versus 873.76 mm at Brackett Creek, a difference of 107.44 mm, or about 12.3%. Though this difference may be due in part to the spatial variability of precipitation often present in complex, mountainous terrain, wind-induced under-catch likely played a role in the reduced precipitation amounts at the Alpine station. Therefore, the precipitation data was adjusted for wind-induced under-catch before being used as input into SNOWPACK.
A precipitation correction scheme developed by Hamon (1973) and used by Schmucki et al. (2014), in which the solid precipitation measurement is adjusted as a function of air temperature and wind speed, is given as:

Pcor = Pmeas × exp(b(TA) × VW)    (1)

where Pcor is the corrected precipitation in mm, Pmeas is the measured raw precipitation from the rain gauge in mm, VW is the wind speed in m/s, and b(TA) is a temperature-dependent coefficient given as 0.0294 for 1.2 > TA > 0 °C, 0.0527 for 0 > TA > -5 °C, and 0.0889 for -5 °C > TA. The correction was applied at each time-step and only to solid precipitation. After applying the correction, the total precipitation from October 30th to June 2nd at the Alpine weather station increased by 50.29 mm, from 766.32 mm to 816.61 mm, or 6.6%. The corrected precipitation also agreed better with the measured precipitation at Brackett Creek for the same time period, decreasing the difference between the two stations from 107.44 mm to 57.15 mm, a reduction of about 47%. After reviewing the data for errors and correcting the precipitation data, the observed meteorological dataset, which was sampled at an hourly time-step, was formatted to match SNOWPACK requirements and used as input to drive a SNOWPACK simulation for the 2014/15 winter season, as listed in Table 2.1 below. Table 2.1. Observed meteorological parameters used as input into the SNOWPACK model for the 2014/15 winter season. Additional observed meteorological data included the Bridger Bowl end-of-day weather observations. These were recorded manually each day by ski patrol at the Alpine weather station and were gathered for both seasons. The observations were taken around 1600 hours and included 24 hr new snow, 24 hr snow water equivalent, 24 hr rain, and 24 hr new snow density. This data was used as an additional dataset to validate the coupled NWP-SNOWPACK model, but was not used as input to drive SNOWPACK.
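The under-catch correction in equation (1) translates directly into code. The sketch below is a minimal illustration; the function name, the handling of the temperature bin edges, and the pass-through for rain above SNOWPACK's +1.2 °C snow/rain threshold are my own assumptions:

```python
import math

def hamon_undercatch_correction(p_meas_mm, air_temp_c, wind_ms):
    """Correct gauge-measured solid precipitation for wind-induced
    under-catch after Hamon (1973): Pcor = Pmeas * exp(b(TA) * VW)."""
    if air_temp_c >= 1.2:
        return p_meas_mm  # assumed rain; the correction applies to solid precip only
    if air_temp_c > 0.0:
        b = 0.0294        # 1.2 > TA > 0 deg C
    elif air_temp_c > -5.0:
        b = 0.0527        # 0 > TA > -5 deg C
    else:
        b = 0.0889        # TA < -5 deg C
    return p_meas_mm * math.exp(b * wind_ms)
```

Applied at every hourly time-step of the Alpine gauge record, a scheme of this form produces the 6.6% seasonal increase reported above.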
Observed Meteorological Parameter | Sensor
Air Temperature (°C) | CS215 (MSU weather station)
Relative Humidity (%) | CS215 (MSU weather station)
Snow Depth (cm) | SR50A (MSU weather station)
Incoming Shortwave Radiation (W/m2) | CNR4 (MSU weather station)
Outgoing (reflected) Shortwave Radiation (W/m2) | CNR4 (MSU weather station)
Incoming Longwave Radiation (W/m2) | CNR4 (MSU weather station)
Outgoing (emitted) Longwave Radiation (W/m2) | CNR4 (MSU weather station)
Ground Temperature, Oct 30 – Dec 29 (°C) | 109SS (MSU weather station)
Snow Temperature @ 50 cm, after Dec 29 (°C) | 109SS (MSU weather station)
Wind Speed and Gust (m/s) | 03002/03101 (Alpine weather station)
Precipitation (mm) | TE525WS (Alpine weather station)

2.3.2 Forecasted Meteorological Data The emphasis of this study was on evaluating the effectiveness of coupling numerical weather prediction models (NWPs) with the SNOWPACK model. To do this, the Weather Research and Forecasting (WRF) model, specifically the Advanced Research WRF (WRF-ARW), version 3.4.1, was used as the NWP of choice for this study. The WRF-ARW is a mesoscale NWP and atmospheric simulation system designed for both research and operational forecasting (Skamarock et al., 2008). The WRF-ARW was developed in the 1990s primarily by the National Center for Atmospheric Research (NCAR) but is maintained and supported as a community model with a large worldwide user group. The WRF model is customizable, and users can manually delineate local model domains, allowing the model to run for specific areas of interest at higher resolutions. For this study, the National Weather Service Forecast Office in Missoula, Montana set up and ran the WRF model for both seasons. To predict and simulate surface weather conditions in complex and mountainous terrain, as needed for this study, a high-resolution WRF domain was set up over the study area, and point forecast data was downloaded and used to drive SNOWPACK simulations.
For both seasons, a nesting procedure was used to initialize the highest resolution WRF model domain, from which forecasted surface meteorological conditions for the study plot were extracted and used as input into SNOWPACK. Nesting is a method where a smaller, finer-resolution model domain is embedded inside a larger, coarser model domain in order to minimize the amount of computing power needed to run the model (Gill and Pyle, 2012). For the 2014/15 winter, nested modeling began with the Global Forecast System (GFS) model. The GFS initially had a horizontal resolution of 0.5° (~27 km at mid-latitudes) but was upgraded in January 2015 to 13 km horizontal resolution. However, the 0.5° resolution GFS was used throughout the 2014/15 winter for consistency. Grids from each 0000 UTC operational run of the GFS provided initial and lateral boundary conditions to a 1200 km x 1200 km, 18 km resolution WRF-ARW grid (~4,445 grid cells) centered over Missoula, MT. A 700 km x 700 km inner WRF-ARW nest of 6 km resolution (10,000 grid cells), which covered western Montana, was driven with one-way nesting from the 18 km resolution grid. Upon completion of the WRF-ARW forecast cycle each day, the output grids from the 6 km resolution WRF-ARW runs were used for lateral boundary conditions and initialization of a 100 km x 100 km, 2 km resolution WRF-ARW grid (2,500 grid cells) centered over the Bridger mountain range in southern Montana (Fig. 2.8). Figure 2.8. Map showing the WRF model domains used during the 2014/15 season A grid point that was most representative of the study plot in terms of location and elevation was chosen from the 2 km resolution domain. The nearest grid point was centered 1.4 km west of the Alpine study plot, on the west-facing side of the ridgeline of the Bridger Range, opposite that of the study plot (Fig. 2.9). Figure 2.9.
Map showing the location of the 2014/15 WRF grid point (red circle with X) and the Alpine study plot (blue circle) The grid point elevation (2086 m a.s.l.), which is the average elevation of the chosen grid cell according to the WRF terrain model, was lower than that of the study plot (2255 m a.s.l.). Therefore, an atmospheric lapse rate correction was applied to the forecasted temperature. Minder et al. (2010) found that atmospheric lapse rates in complex terrain deviated from the commonly assumed value of 6.5 °C km-1 and actually fell between 3 °C km-1 and 5.5 °C km-1. Hendrikx et al. (2012) also used a lower atmospheric lapse rate of 5 °C km-1 to extrapolate gridded temperature values in complex terrain. Therefore, I applied a 5 °C km-1 lapse rate correction to the forecasted temperature, which decreased the air temperature at each time step by 0.85 °C. Unless otherwise noted, all other forecasted meteorological parameters remained uncorrected. From the grid point, a 36-hr point forecast at a 1-hr time-step was produced each day, valid beginning at 0000 UTC. The first 24 hours of each forecast were then used to create the meteorological input file to drive the SNOWPACK model (Table 2.2). For the 2015/16 winter season, the upgraded 13 km resolution GFS model was used to initialize a 1200 km x 1200 km, 9 km resolution WRF-ARW grid (~17,778 grid cells) centered over Missoula, MT and extending across the entire Northwest US. An inner 600 km x 600 km, 3 km resolution WRF-ARW grid (40,000 grid cells), centered over Missoula, MT and extending from NE Washington to NW Wyoming, was driven with one-way coupling by the outer 9 km WRF-ARW grid. A 72-hr point forecast, valid beginning 1200 UTC, was produced from the 3 km WRF-ARW grid for several locations across western Montana (Fig. 2.10).
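The lapse-rate adjustment described above amounts to a one-line calculation. The sketch below (function and argument names are my own) reproduces the -0.85 °C correction for the 2014/15 Alpine grid point:

```python
def lapse_rate_correction(t_forecast_c, grid_elev_m, site_elev_m,
                          lapse_c_per_km=5.0):
    """Shift a forecast temperature from the WRF grid-cell elevation to the
    actual site elevation using a fixed lapse rate (cooling with height)."""
    dz_km = (site_elev_m - grid_elev_m) / 1000.0
    return t_forecast_c - lapse_c_per_km * dz_km

# 2014/15 Alpine case: grid point at 2086 m, study plot at 2255 m
# -> correction of -(5.0 * 0.169) = -0.845, i.e. about -0.85 deg C
correction = lapse_rate_correction(0.0, 2086, 2255)
```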
As in the 2014/15 season, the forecasted temperature for each location was corrected using a 5 °C km-1 lapse rate, and the first 24 hours of each forecast were used to create the meteorological input file to drive the SNOWPACK model (Table 2.2). Temperature corrections for each location are listed in Table 2.3. Figure 2.10. Map showing the WRF model domains and study sites used during the 2015/16 season.

Table 2.2. WRF meteorological parameters used as input into the SNOWPACK model for the 2014/15 and 2015/16 winter seasons.

2 m Air Temperature (°C)
Relative Humidity (%)
10 m Wind Speed and Gust (m/s)
Incoming Shortwave Radiation (W/m2)
Outgoing (reflected) Shortwave Radiation (W/m2)
Incoming Longwave Radiation (W/m2)
Outgoing (emitted) Longwave Radiation (W/m2)
Precipitation (mm)

Table 2.3. Temperature corrections for each WRF grid point location used during the 2015/16 season.

2.4 SNOWPACK Model Setup SNOWPACK can be driven with various combinations of input parameters and initialization set-ups. Importantly, SNOWPACK has numerous options to parameterize different atmosphere-snowpack processes, including surface heat and turbulent fluxes, albedo, incoming solar radiation, outgoing longwave radiation, snow surface temperatures, and new snow density. This allows simulations to run even when certain input parameters are not available. Since most remote weather stations have limited radiation data, SNOWPACK can estimate the missing parameters based on the provided data. For example, if incoming shortwave radiation is not available, or is unreliable due to a lack of instrument heating, SNOWPACK can take the measured reflected shortwave radiation and use an empirical estimate of the snow surface albedo to back-calculate the incoming shortwave radiation. The opposite can be done if only incoming shortwave is available.
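The shortwave back-calculation just described reduces to dividing (or multiplying) by the albedo estimate. The sketch below is illustrative only; the fixed albedo value is an assumption, whereas SNOWPACK estimates the albedo empirically from snow properties:

```python
def incoming_sw_from_reflected(sw_reflected_wm2, albedo):
    """Back-calculate incoming shortwave from the measured reflected
    shortwave: SW_in = SW_reflected / albedo, valid for 0 < albedo <= 1."""
    return sw_reflected_wm2 / albedo

def reflected_sw_from_incoming(sw_incoming_wm2, albedo):
    """The opposite direction: SW_reflected = albedo * SW_in."""
    return albedo * sw_incoming_wm2

# e.g. 400 W/m2 reflected over snow with an assumed albedo of 0.8
# implies 500 W/m2 incoming
```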
Additionally, SNOWPACK can parameterize the incoming longwave radiation and/or snow surface temperature using an estimated atmospheric emissivity and the Stefan-Boltzmann law (Lehning et al., 2002b).

Location | WRF Grid Point Elevation (m) | Actual Elevation (m) | Difference (m) | Temperature Correction (°C)
Big Mountain Summit | 1792 | 2053 | -261 | -1.3
Noisy Basin | 1368 | 1841 | -473 | -2.3
Stuart Mountain | 1958 | 2256 | -298 | -1.5
Lolo Pass | 1733 | 1597 | +136 | +0.7
Bridger Bowl (Ramp Study Plot) | 2068 | 2392 | -324 | -1.6

The following describes the set-up and initialization for all SNOWPACK simulations used in this thesis, unless otherwise noted. For the purpose of this thesis, I used SNOWPACK version 3.2, released in August 2014. SNOWPACK was initialized at the beginning of the season with no pre-existing snow cover on the ground. Though some study sites may have had snow cover before initialization, the amounts were minimal and not enough to impact the simulation. All input parameters listed in Table 2.1 were initially used to drive the SNOWPACK simulation with observed weather data, though different combinations of input parameters were later tested and compared to evaluate their impact on the simulation. For the SNOWPACK-WRF simulations, all input parameters listed in Table 2.2 were used to drive the model and were initially left uncorrected for bias, with the exception of the temperature data, as discussed in the previous section. Different combinations of input parameters and bias corrections were tested later in the study to evaluate their impact on the simulations. Atmospheric stability was parameterized using the Monin-Obukhov bulk formulation with an aerodynamic roughness length of 2x10-4 m, a value common for high alpine sites where mechanical turbulence created by the topography leads to near-neutral stratification.
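To illustrate what the roughness length controls, the neutral limit of the Monin-Obukhov bulk formulation can be sketched as below. This is a simplification: SNOWPACK's full scheme includes stability corrections, and the air density and measurement height used here are my own assumptions:

```python
import math

VON_KARMAN = 0.4
RHO_AIR = 1.1       # kg/m^3, rough high-elevation value (assumption)
CP_AIR = 1005.0     # specific heat of air, J/(kg K)

def neutral_sensible_heat_flux(wind_ms, t_air_c, t_surf_c,
                               z_ref_m=2.0, z0_m=2e-4):
    """Bulk sensible heat flux (W/m^2) in near-neutral stratification,
    positive when the air is warmer than the snow surface, using the
    z0 = 2e-4 m aerodynamic roughness length adopted above."""
    ch = VON_KARMAN**2 / math.log(z_ref_m / z0_m)**2  # neutral exchange coeff.
    return RHO_AIR * CP_AIR * ch * wind_ms * (t_air_c - t_surf_c)
```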
The measured snow surface temperature, which is derived in SNOWPACK directly from the measured outgoing longwave radiation, is used as a Dirichlet boundary condition as long as the snow surface temperature is below -1 °C. Once above that threshold, SNOWPACK assumes a Neumann boundary condition, and the net longwave radiation is calculated from the snow surface temperature using an estimated atmospheric emissivity and the Stefan-Boltzmann constant (Lehning et al., 2002b). Other default SNOWPACK parameterizations used in this study include a fixed new snow grain size of 0.3 mm, a snow/rain temperature threshold of +1.2 °C, a minimum buried surface hoar size of 2 mm with a density of 125 kg/m3, a minimum unburied surface hoar size of 0.5 mm with a density of 100 kg/m3, and a maximum wind velocity allowed for surface hoar formation of 3.5 m/s. All other parameterizations can be found in the SNOWPACK initialization files in Appendix A. 2.5 Data Analysis The main goal of this thesis is to evaluate the accuracy and usefulness of a coupled SNOWPACK-WRF model for application in avalanche forecasting. One way to do this is through quantitative verification of the model output against observed field data. This approach provides an objective assessment of model performance and allows for a direct comparison to results from other simulations. This section details the methods and statistical tools used to assess the performance of the WRF and SNOWPACK models. 2.5.1 Comparison of Modeled and Observed Snow Profiles Two issues arise when attempting to compare a modeled and an observed snow profile: differences in layering and differences in snow depth. SNOWPACK profiles tend to have much higher resolution than human-observed profiles, making it difficult to match and compare specific layers. Additionally, there are usually differences in total snow depth between the two profiles, making analysis troublesome.
For example, during the 2014/15 season, there was significant spatial variation in snow depth at the Alpine study plot, as snow depths tended to be higher in the area where snowpits were dug than around the weather station (see Fig. 2.3). Since the SNOWPACK-Obs simulation used measured snow depth as input, there were discrepancies in snow depth between the modeled and observed profiles. This was also the case for the SNOWPACK-WRF simulations, as snow depth was derived from forecasted meteorological parameters, which led to significant differences between the modeled and observed snow depths. In order to match and compare layers, a normalization method was employed, similar to that recommended by Lehning et al. (2001). First, the modeled profiles with the same date and time as the observed profiles were extracted. Next, the layer heights for both the modeled and observed profiles were adjusted so that they were expressed as a percentage of the total snow depth. Then, with the observed profile kept as the master, each layer in the observed profile was matched to the nearest corresponding layer in the modeled profile (Fig. 2.11). This method compares layers that are located in the same relative position in the snowpack and was used to validate forecasted values of temperature, density, grain size, grain shape, and hand hardness. Figure 2.11. Example showing the normalization method used to compare modeled and observed snowpacks. 2.5.2 Statistical Measures of Model Performance 2.5.2.1 Numerical. Model verification is typically accomplished with statistical tools that evaluate a model's ability to match an observational dataset. These tools usually fall into one of two categories: scale-dependent and scale-independent. Scale-dependent methods are those where error measurements are on the same scale as the reference data and therefore cannot be compared to other datasets with different scales (Hyndman and Athanasopoulos, 2013).
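The normalization and nearest-layer matching of section 2.5.1 can be sketched as follows. The function names and the choice of layer reference depths are my own conventions:

```python
def to_relative_depths(layer_heights_cm):
    """Express layer heights as a percentage of total snow depth,
    taking the last entry as the snow surface."""
    total = layer_heights_cm[-1]
    return [100.0 * h / total for h in layer_heights_cm]

def match_layers(obs_rel, mod_rel):
    """For each observed layer (the 'master' profile), return the index of
    the modeled layer at the nearest relative depth."""
    return [min(range(len(mod_rel)), key=lambda j: abs(mod_rel[j] - d))
            for d in obs_rel]

# hypothetical profiles at 10/40/75/100 % (observed) vs 11/39/78/100 %
# (modeled): match_layers pairs them in order
```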
The mean bias (B), mean absolute error (MAE), and root mean square error (RMSE) are common scale-dependent measures used for model verification. The mean bias is defined as:

B = (1/t) Σ (xi,mod − xi,obs)    (2)

where the sum runs over i = 1 to t, xi,mod and xi,obs are a set of predicted (i.e., modeled) and observed data points at the same point in time for a measured parameter, and t is the number of paired data points in the dataset. The mean bias shows the difference between the modeled and observed data and the tendency of the model to either underestimate or overestimate the parameter being predicted. The mean absolute error is defined as:

MAE = (1/t) Σ |xi,mod − xi,obs|    (3)

where all variables are the same as in equation 2 and the absolute value of the error is averaged over the dataset. The MAE shows how close a prediction is to an observation but does not show whether the model is under- or over-forecasting the variable of interest. To achieve a complete picture of model performance, MAE should always be used in conjunction with the mean bias. The root mean square error is defined as:

RMSE = sqrt[ (1/t) Σ (xi,mod − xi,obs)² ]    (4)

and shows the total magnitude of the model error. Unlike the MAE, the RMSE tends to amplify large errors while diminishing small ones. Willmott and Matsuura (2005) note that the RMSE is a function of three characteristics of a set of errors and can therefore provide highly variable results, with no consistent functional relationship between RMSE and average error. They conclude that RMSE is an inappropriate indicator of average error and advocate the use of MAE as the most natural measure of average model performance. Scale-independent methods are not based on the same scale as the source data and therefore can be compared across different parameters and datasets. R2, mean absolute percentage error (MAPE), accuracy ratio (AR), and relative frequency (RF) are common scale-independent methods used for model verification.
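Equations 2 through 4 translate directly into code. The sketch below (function names are my own) reproduces the bias and MAE of the hypothetical density example tabulated later in Table 2.4:

```python
import math

def bias(mod, obs):
    """Mean bias (eq. 2): positive means the model overestimates."""
    return sum(m - o for m, o in zip(mod, obs)) / len(obs)

def mae(mod, obs):
    """Mean absolute error (eq. 3)."""
    return sum(abs(m - o) for m, o in zip(mod, obs)) / len(obs)

def rmse(mod, obs):
    """Root mean square error (eq. 4): weights large errors more heavily."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(mod, obs)) / len(obs))

# matched layer densities (kg/m^3) from the hypothetical example of Table 2.4:
# bias([100, 210, 290, 280], [50, 150, 250, 300]) -> +32.5
# mae([100, 210, 290, 280], [50, 150, 250, 300])  -> 42.5
```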
R2 is defined as:

r² = [ Σ (xi,mod − x̄mod)(xi,obs − x̄obs) / ( sqrt(Σ (xi,mod − x̄mod)²) sqrt(Σ (xi,obs − x̄obs)²) ) ]²    (5)

where x̄mod and x̄obs are the means of the modeled and observed data. This statistical tool is a measure of correlation between two variables, in this case the predicted and observed data. R2 is bounded between 0 and 1, where 1 indicates perfect agreement between the observed and predicted data and 0 indicates one of a variety of complete disagreements (Willmott, 1981). The MAPE is defined as:

MAPE = (100%/t) Σ |(xi,mod − xi,obs) / xi,obs|    (6)

and is a measure of model prediction accuracy, expressed as a percentage. The results can easily be compared with other datasets, but the measure also has major drawbacks. MAPE cannot be used if an observed data point is equal to zero, which is common when dealing with meteorological and snowpack parameters. MAPE can also be highly unstable, especially when a model tends to over-predict. In this situation, MAPE theoretically has no upper bound, which can lead to artificially large errors, especially for data with small values. At the same time, models that tend to under-predict will have a MAPE bounded by no more than 100%. Because of this, MAPE tends to favor models with under-prediction errors, which can skew results. In this thesis, I propose another option to measure model error by using the accuracy ratio, which is defined as:

AR = (100%/t) Σ (xi,mod / xi,obs), if xi,mod ≤ xi,obs    (7)

and

AR = (100%/t) Σ (xi,obs / xi,mod), if xi,mod > xi,obs    (8)

where the condition is evaluated for each data point before averaging. Additionally, in equations 7 and 8, if either the denominator or numerator is zero while the other is greater than zero, the AR for that point is set to zero (i.e., an accuracy of 0%). For example, when comparing the accuracy of a snow depth prediction with a forecasted value of 50 cm and an observed value of 10 cm, according to equations 6 and 8, the MAPE would be 400% while the AR would be 20%.
Then if the snowpack melted out a few days later, and say the forecasted snow depth was 30 cm, the MAPE would be undefined but the AR would be 0%. The AR thus provides an intuitive assessment of model accuracy without the issues stated above for MAPE. One issue with any percentage-based method is dealing with parameters on a scale like Fahrenheit or Celsius, since values can be either negative or positive. Additionally, if a temperature forecast were converted to Kelvin, the accuracy would increase unrealistically due to the scale change (Hyndman and Athanasopoulos, 2013). To deal with this problem, the relative frequency (RF) method can be used to verify certain snowpack and meteorological parameters. The RF method is a measure of the number of occurrences in which a forecast falls within a certain bounded error (NWS, n.d.). It is defined as:

RF = t_acc / t    (9)

where t_acc equals the number of forecasts that have an error within the pre-defined threshold and t equals the total number of forecasts. The National Weather Service uses this approach to determine their forecast accuracy. For example, a temperature forecast is typically classified as accurate if the prediction falls within ±1.5 °C of the observed value (NWS personal communication, 2015). Thus, if 85 out of 100 temperature forecasts were within ±1.5 °C, the RF would be 0.85, or 85%. This method enables the user to tailor the error threshold to what is deemed operationally significant. In this thesis, I will use B, MAE, and either AR or RF, or both, to quantify SNOWPACK's predictive performance on numerical layer data as well as the accuracy of the WRF model. To demonstrate the application of these statistical methods, a hypothetical observed and modeled snow profile with density measurements is provided in Figure 2.12. Here, the four observed layers are compared against the matching layers in the modeled profile, and four statistical measures are calculated in Table 2.4.
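The accuracy ratio (equations 7 and 8, including the zero-handling rule) and the relative frequency (equation 9) can be sketched as below; treating a pair where both values are zero as a perfect score is my own assumption:

```python
def accuracy_ratio(mod, obs):
    """Average of min/max ratios per pair, in percent (eqs. 7-8).
    A pair with exactly one zero value scores 0%; equal values score 100%."""
    scores = []
    for m, o in zip(mod, obs):
        if m == o:
            scores.append(100.0)  # includes the all-zero case (assumption)
        elif min(m, o) == 0:
            scores.append(0.0)
        else:
            scores.append(100.0 * min(m, o) / max(m, o))
    return sum(scores) / len(scores)

def relative_frequency(mod, obs, threshold):
    """Fraction of forecasts whose absolute error is within +/- threshold (eq. 9)."""
    hits = sum(1 for m, o in zip(mod, obs) if abs(m - o) <= threshold)
    return hits / len(obs)

# accuracy_ratio([50], [10]) -> 20.0, matching the snow depth example above
```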
Figure 2.12. Hypothetical observed and modeled snow profiles showing measured and modeled values of density (observed layer densities of 50, 150, 250, and 300 kg/m3 versus matched modeled densities of 100, 210, 290, and 280 kg/m3).

Table 2.4. Calculation of four statistical measures for snow density from the hypothetical snow profiles provided in Figure 2.12.

Layer (relative depth) | B (kg/m3) | MAE (kg/m3) | AR | RF (±50 kg/m3)
1 (100%) | +50 | 50 | 50% | yes
2 (75%) | +60 | 60 | 71% | no
3 (40%) | +40 | 40 | 86% | yes
4 (10%) | -20 | 20 | 93% | yes
Average | +32.5 | 42.5 | 75% | 75%

2.5.2.2 Categorical. The aforementioned statistical methods are applicable to numerical parameters such as temperature and density, but not to categorical parameters such as grain type. In this thesis, I will employ several statistical measures to quantify model performance for grain type prediction. SNOWPACK assigns each layer within a predicted snow profile a primary (F1) and secondary (F2) grain type. If only one grain type is present, the secondary is categorized the same as the primary. SNOWPACK uses eight grain type classifications, per Colbeck et al. (1990), which are based on various combinations of calculated dendricity and sphericity (Lehning et al., 2002a). These grain types include precipitation particles, decomposing precipitation particles, rounded grains, faceted crystals, depth hoar, melt forms (including melt-freeze crusts), surface hoar, and mixed forms (rounding faceted particles). After performing the normalization method to match modeled and observed layers, predicted and observed grain types were compared. The first comparison method used was an exact match method, where the percentage of modeled layers that had the same grain type as in the corresponding observed layer was calculated.
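The exact match calculation is a simple hit rate over the matched layers; a minimal sketch (grain types given as SNOWPACK's numeric codes, function name my own):

```python
def exact_match_pct(obs_types, mod_types):
    """Percent of matched layers whose grain type codes are identical."""
    hits = sum(1 for o, m in zip(obs_types, mod_types) if o == m)
    return 100.0 * hits / len(obs_types)

# e.g. a four-layer profile where three layers match scores 75%,
# as in the F1 column of the hypothetical example in Table 2.6
```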
This provided a simple hit/no-hit analysis of grain type prediction and was performed separately for primary and secondary grain types. An issue with this method is that it only counts a prediction as successful if the grain types match exactly, ignoring any agreement between grain types that may be related (e.g., precipitation particles and decomposing precipitation particles, or faceted crystals and depth hoar), and it only compares primary and secondary grain types separately. One solution is to use a grain type agreement score method, as proposed by Lehning et al. (2001). This method utilizes a grain type agreement matrix, developed by the SLF, which provides an agreement score, bounded between 0 and 1, for all possible combinations of the eight grain types used in SNOWPACK (Table 2.5).

Table 2.5. Grain type agreement matrix for all grain type combinations, as proposed by Lehning et al. (2001). Rows and columns are labeled with the SLF numerical codes used in SNOWPACK (1 = precipitation particles, 2 = decomposing precipitation particles, 3 = rounded grains, 4 = faceted crystals, 5 = depth hoar, 9 = mixed forms; 6 and 7 denote surface hoar and melt forms). The entries contain the agreement score for each grain type combination.

     1    2    3    4    5    6    7    9
1  1.0  0.8  0.5  0.2  0    0    0    0.2
2  0.8  1.0  0.8  0.4  0    0    0    0.4
3  0.5  0.8  1.0  0.4  0.1  0    0    0.5
4  0.2  0.4  0.4  1.0  0.8  0    0    0.8
5  0    0    0.1  0.8  1.0  0    0    0.7
6  0    0    0    0    0    1.0  0    0
7  0    0    0    0    0    0    1.0  0
9  0.2  0    0.5  0.8  0.7  0    0    1.0

The total agreement score for each layer is calculated by first determining the agreement d for all combinations of grain types F, via Table 2.5:

d11 = d(F1obs, F1mod) = agreement between the observed and modeled primary grain types
d22 = d(F2obs, F2mod) = agreement between the observed and modeled secondary grain types
d12 = d(F1obs, F2mod) = agreement between the observed primary and modeled secondary grain types
d21 = d(F2obs, F1mod) = agreement between the observed secondary and modeled primary grain types

Next, the average score for the primary and secondary straight matches is calculated:

dstraight = (d11 + d22) / 2    (10)

Next, the average score for the cross matches is calculated:

dcross = max(0, (d12 + d21)/2 − 0.1)    (11)

where 0.1 is subtracted from the score to account for the primary-secondary mismatch. Lastly, the final agreement score for the layer is determined by taking the maximum of dstraight and dcross:

d = max(dstraight, dcross)    (12)

The agreement score for each layer is then averaged over all layers to obtain the total grain type agreement score for the profile. This method is advantageous as it accounts for the relationships between related grain forms as well as between the primary and secondary grain forms. In this thesis, I will report both the exact match and agreement score methods to obtain a more robust measure of grain type prediction. A hypothetical modeled and observed profile is provided in Figure 2.13 to demonstrate the exact match and agreement score methods, with results displayed in Tables 2.6 and 2.7.

Figure 2.13. Hypothetical observed and modeled snow profiles showing measured and modeled primary (F1) and secondary (F2) grain types.

Table 2.6. Results using the exact match method for determining modeled grain type accuracy, per the hypothetical snow profiles in Figure 2.13.

Layer (relative depth) | F1 match | F2 match
1 (100%) | yes | no
2 (75%) | yes | no
3 (40%) | yes | no
4 (10%) | no | yes
Overall | 75% | 25%

The following agreement score calculation is performed in full for the top layer (100%) from Figure 2.13:

d11 = d(F1obs, F1mod) = d(+, +) = 1
d22 = d(F2obs, F2mod) = d(+, /) = 0.8
d12 = d(F1obs, F2mod) = d(+, /) = 0.8
d21 = d(F2obs, F1mod) = d(+, +) = 1
dstraight = (1 + 0.8)/2 = 0.9
dcross = max(0, (0.8 + 1)/2 − 0.1) = 0.8
d = max(0.9, 0.8) = 0.9

Final results are displayed for all four layers in Table 2.7 below. The agreement score method provides a higher overall score than the exact match method, as the relationships between primary and secondary grain types, as well as between related grain types, are taken into account. For example, the exact match method did not recognize the similarities between facets, rounding facets, and depth hoar, while the agreement score method provides some measure of agreement between those grain types.

Table 2.7. Results using the agreement score method for determining modeled grain type accuracy from the hypothetical snow profiles in Figure 2.13.

Layer (relative depth) | Agreement Score
1 (100%) | 0.9
2 (75%) | 0.9
3 (40%) | 0.9
4 (10%) | 0.85
Overall | 0.89

3. RESULTS

3.1 Winter 2014/15 Below-average snowfall, above-average precipitation, and above-average temperatures characterized the 2014/15 winter (November – April) at Bridger Bowl (Table 3.1 and Figs. 3.1a and 3.1b). While the season started off active, with a vigorous wintertime storm pattern producing above-average snowfall and precipitation through the beginning of January, later months were much warmer than normal with below-average snowfall. In November and December, several significant cold spells led to the formation of buried weak layers in the snowpack, increasing the avalanche danger. The synoptic-scale weather pattern shifted around mid-January as a persistent ridge of high pressure settled in over the western U.S. This led to much warmer than normal conditions through the end of March, as temperatures from January through March were 3.3 °C above the 30-year average at Bridger Bowl. It also led to well below average snowfall during this period, even though total precipitation (rain and SWE) was slightly above normal. Snowpack conditions stabilized throughout the period, with the snowpack becoming isothermal by mid-March.
April saw increased storminess, with several systems bringing significant snowfall through the month, though this is not entirely reflected in the data since most of the storms occurred after Bridger Bowl closed for the season on April 4th, 2015.

Table 3.1. Bridger Bowl weather summary for the 2014/15 winter, showing observed, average, and percent-of-average values of snow, precipitation, and temperature. Data was gathered from the end-of-day measurements at the Alpine weather station between November 1, 2014 and April 4, 2015.

Month | Snow obs/avg (cm), % of avg | Precip obs/avg (mm), % of avg | Temp obs/avg (°C), bias
November (avg since 1984) | 145.4 / 111.9, 130% | 187.7 / 97.5, 192% | -7.2 / -3.4, bias -3.8
December (avg since 1968) | 143.5 / 144.7, 99% | 101.3 / 95.3, 106% | -4.4 / -6.4, bias +2.0
January (avg since 1969) | 113.0 / 142.3, 79% | 138.4 / 103.6, 134% | -2.2 / -6.5, bias +4.3
February (avg since 1968) | 90.2 / 121.2, 74% | 117.6 / 85.3, 138% | -3.3 / -5.5, bias +2.2
March (avg since 1968) | 50.2 / 143.5, 35% | 69.3 / 111.8, 62% | 0.6 / -2.8, bias +3.4
April thru 4th (avg since 2003) | 33.0 / 51.7, 64% | 29.2 / 52.8, 55% | -1.7 / -0.4, bias -1.3
2014/15 Winter (avg since 2003) | 575.3 / 715.3, 80% | 643.5 / 546.3, 118% | -3.0 / -4.2, bias +1.2

Figure 3.1. Average and observed total snowfall (a) and precipitation (b) at Bridger Bowl during the 2014/15 winter.

3.1.1 2014/15 WRF Model Performance Before assessing the performance of the SNOWPACK-WRF model, it is important to understand the strengths and weaknesses of the WRF model and the accuracy of the meteorological inputs into SNOWPACK.
To do this, forecasted values were validated against measured values from nearby weather stations. For the 2014/15 season, forecasted values of temperature, relative humidity, incoming and reflected shortwave radiation, incoming longwave radiation, and snow surface temperature were compared to values measured at the Alpine study plot by the MSU weather station. Forecasted wind speed and gusts were compared to wind recorded at the Alpine weather station. Forecasted precipitation was compared to values measured automatically at the Alpine weather station and the nearby Brackett Creek SNOTEL station, as well as the 24-hour values from the Bridger Bowl end of day observations. The following sections present the results of this analysis for each variable.

3.1.1.1 Temperature. Hourly 2 m temperature forecasts from the WRF output were compared to those measured at the Alpine study plot by the MSU weather station between 31-Oct-2014 and 2-Jun-2015. Table 3.2 shows the results of the uncorrected and bias corrected WRF temperature forecast verification. As mentioned in section 2.3.2, a 5 °C/km lapse rate, or -0.85 °C, correction was applied at each time step for the bias corrected temperature to account for the elevation difference between the WRF grid point and the Alpine study plot. Figure 3.2 visually shows the relationship between the observed and bias corrected forecasted temperatures. Statistical measures included the mean bias (B), mean absolute error (MAE), and the relative frequency (RF) of forecasts within +/- 1.5 °C of the observed value, and are presented in Table 3.2.

Table 3.2. 2014/15 WRF uncorrected and bias corrected temperature verification results.

                 B          MAE       RF (+/- 1.5 °C)
Uncorrected      +0.02 °C   1.81 °C   50.1%
Bias Corrected   -0.83 °C   1.98 °C   44.3%

Figure 3.2.
Scatter plot of observed vs. bias corrected forecasted temperatures for the 2014/15 winter. The red line indicates the 1:1 line, with all points below the line representing a colder than observed forecast and points above representing a warmer than observed forecast. The R2 value for this dataset was 0.90, showing strong correlation between observed and forecasted temperatures.

The results show the WRF model provided a relatively accurate forecast throughout the winter, with an uncorrected average error of less than 2 °C, an overall neutral bias, and a forecasted temperature within 1.5 °C of the actual value over half the time (Table 3.2). Applying the -0.85 °C correction slightly decreased the forecast accuracy, with the MAE increasing by 0.17 °C and the RF decreasing by 5.8%. Additionally, the bias corrected temperature forecast included a slight cold bias (Table 3.2). Figure 3.2 does show the WRF's tendency to struggle with cold temperatures, especially for observed temperatures below -20 °C. In fact, the average uncorrected forecast error more than doubles, from 1.81 °C to 4.17 °C, when the observed temperature drops below -20 °C, with a warm bias in nearly 100% of those cases. Applying the temperature correction slightly improves the forecast in this range, but only by the -0.85 °C correction amount.

3.1.1.2 Snow Surface Temperature. The skin temperature parameter from the WRF model output was used as the snow surface temperature for SNOWPACK input. This parameter is derived directly from the forecasted outgoing longwave radiation values in WRF using the Stefan-Boltzmann law:

Tskin (°C) = (OLWR/σ)^(1/4) - 273.15     (13)

where OLWR is the forecasted outgoing longwave radiation in W/m2 and σ is the Stefan-Boltzmann constant, which equals 5.6703x10-8 W/m2 K4.
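Equation 13 can be expressed as a short function. This is a minimal sketch (the function names are mine); the 0 °C clamp mirrors the rule, described next, that SNOWPACK applies automatically when snow is on the ground:

```python
SIGMA = 5.6703e-8  # Stefan-Boltzmann constant (W/m^2 K^4)

def skin_temp_c(olwr):
    """Equation 13: surface temperature (deg C) from outgoing longwave radiation (W/m^2)."""
    return (olwr / SIGMA) ** 0.25 - 273.15

def snow_surface_temp_c(olwr, snow_depth_cm):
    """Clamp the surface temperature to 0 deg C whenever snow is present."""
    t = skin_temp_c(olwr)
    return min(t, 0.0) if snow_depth_cm > 0 else t
```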
Forecasted snow surface temperatures were validated against observed snow surface temperatures, which were converted to Celsius from the outgoing longwave radiation values measured at the MSU weather station using equation 13. In order to validate the snow surface temperature rather than the ground surface temperature, forecasted and observed values were compared only when both the observed and simulated snow depths were greater than 0 cm. Additionally, as performed automatically in SNOWPACK, if either the observed or forecasted snow surface temperature was greater than 0 °C while the snow depth was greater than 0 cm, the temperature was set equal to 0 °C.

Results show a tendency for the WRF model to over-predict snow surface temperatures, with a 2 °C warm bias (Table 3.3). Snow surface temperatures were more difficult to accurately predict than air temperatures, with a MAE about 0.5 °C higher, though a similar percentage of forecasts fell within the 1.5 °C error threshold. Figure 3.3 shows a significant warm bias during cold observed snow surface temperatures, especially when observed values fell below -20 °C. In fact, the average error nearly quadrupled, from 2.5 °C to 9.7 °C, when observed values dropped below -20 °C.

Figure 3.3. Scatter plot of observed vs. forecasted snow surface temperatures for the 2014/15 winter. The red line indicates the 1:1 line, with all points below the line representing a colder than observed forecast and points above representing a warmer than observed forecast. The R2 value for this dataset was 0.76, showing moderate correlation between observed and forecasted snow surface temperatures.

Table 3.3.
Forecasted snow surface temperature verification results for the WRF model.

      vs. all observed values                vs. observed values < -20 °C
      B         MAE      RF (+/- 1.5 °C)     B         MAE      RF (+/- 1.5 °C)
      +2.0 °C   2.5 °C   49.9%               +9.7 °C   9.7 °C   1.7%

3.1.1.3 Precipitation. Forecasted precipitation values were compared to several different observed datasets, including automatic and manually measured values from the Alpine study plot as well as automatic measurements from the nearby Brackett Creek SNOTEL station. This was done to evaluate differences in observed values stemming from errors common with precipitation measurements (e.g. wind-induced precipitation under-catch). The forecasted and observed precipitation sums, as well as the 24-hour precipitation sums (24HNW), were compared to provide an assessment of both the total precipitation during the season and the timing of precipitation events.

Table 3.4 and Figure 3.4 show the forecasted seasonal precipitation sum verification. Values shown are total accumulated precipitation between 31-October 1800 MST and 2-June 1800 MST, with the exception of the Alpine manual observations, which represent total precipitation between 1-November and 4-April. The values shown for the Alpine auto station represent the precipitation after being corrected for wind-induced under-catch, as mentioned in section 2.3.1.

Table 3.4. WRF precipitation forecast verification for the 2014/15 winter, compared against three different weather stations.

                       Total Precip   B           AR
Through April 4th
  Alpine Manual        643.6 mm       //          //
  WRF                  954.3 mm       +310.7 mm   65%
Through June 2nd
  Alpine Auto          799.1 mm       //          //
  WRF                  1347.5 mm      +548.4 mm   59%
  Brackett Creek       873.8 mm       //          //
  WRF                  1347.5 mm      +473.7 mm   65%

Figure 3.4. Comparison of forecasted and observed accumulated precipitation during the 2014/15 season.
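The 24-hour summation used for the timing verification described next can be sketched as follows. This is a sketch only; the function name, the hour-beginning timestamp convention, and the bookkeeping are my assumptions:

```python
from datetime import datetime, timedelta

def sums_1600_to_1600(hourly):
    """Bin hourly precipitation (mm) into 24-hour windows valid 1600-1600 MST.

    `hourly` is an iterable of (timestamp, mm) pairs, where each timestamp
    marks the start of the hour.  Each hour is assigned to the window that
    ends at the following 16:00, matching the manual observation time.
    """
    windows = {}
    for t, mm in hourly:
        end = t.replace(hour=16, minute=0, second=0, microsecond=0)
        if t.hour >= 16:  # 16:00 onward falls in the window ending tomorrow
            end += timedelta(days=1)
        windows[end] = windows.get(end, 0.0) + mm
    return windows
```

Shifting the `hour=16` anchor by +/- 6 hours reproduces the roaming-window comparison mentioned below.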
Figure 3.4 shows that precipitation was relatively consistent between the three weather stations, though the Alpine Auto station trailed behind the other two, possibly indicating additional issues with solid precipitation under-catch. Regardless, the results show a significant over-prediction of precipitation by the WRF model throughout the entire season.

Besides the total precipitation amount, another important aspect of a precipitation forecast is timing. Accumulated totals may hide timing issues, as precipitation sums may match observed values over the course of a season without matching the timing of the individual events. To validate the precipitation timing of the WRF model, 24-hour forecasted and observed precipitation sums were calculated throughout the season, valid 1600-1600 MST each day to match the timing of the Alpine manual station measurements. A 24-hour window was used because shorter summation periods would put an emphasis on rather insignificant timing differences between the model and observations (e.g. comparing hourly precipitation totals), but it was long enough to capture the episodic pattern of typical wintertime precipitation events in the intermountain west. The B, MAE, and AR were used as statistical measures to validate the 24HNW.

Table 3.5. WRF 24HNW verification for the 2014/15 winter, compared against three different weather stations.

Results in Table 3.5 show a fairly poor performance of the WRF 24HNW forecast, with an accuracy of around 50% for all days and around 45% for only days with observed precipitation. Additionally, I compared the forecast using a 24-hour roaming window, plus or minus 6 hours from the original 1600-1600 time frame (e.g.
1000-1000 and 2200-2200), but found even worse WRF performance. Digging deeper into the analysis, the WRF model did forecast individual precipitation events in close proximity to their actual occurrence, but tended to over-forecast their amounts. For example, precipitation was forecasted on 91% of the days when precipitation was observed, while precipitation was also forecasted on 43% of the days when none was observed. This shows the tendency of the WRF model to forecast precipitation during periods when conditions may be marginally conducive for precipitation, but precipitation does not actually occur.

Table 3.5 (24HNW verification):

                                    B          MAE       AR (all days)   AR (days with observed precip)
Alpine Manual (Through April 4th)   +2.12 mm   4.47 mm   52%             46%
Alpine Auto                         +2.79 mm   4.15 mm   50%             44%
Brackett Creek                      +2.57 mm   3.75 mm   49%             47%

3.1.1.3 Relative Humidity. Forecasted relative humidity was compared to observed values measured at the MSU weather station between October 31st and June 2nd. Validation measures included the B, MAE, AR, and the RF with an error threshold of +/- 10%, as seen in Table 3.6.

Table 3.6. 2014/15 WRF relative humidity verification results.

Overall, the WRF model provided a reasonably accurate forecast of relative humidity throughout the winter, with a slight dry bias and an average error of 11%.

3.1.1.4 Wind Speed and Gust. Wind velocity is an important factor in avalanche forecasting that can rapidly change the snowpack through loading and erosion. Wind velocity can vary greatly over small spatial scales in complex terrain due to the effects of friction, sheltering, and acceleration. The SNOWPACK model is also fairly sensitive to surface wind speed inputs, as the formation of surface hoar, turbulent fluxes, and new snow density all rely heavily on the provided wind speed. To evaluate the accuracy of the WRF forecasted winds, hourly wind speed and gusts were compared against wind data measured at the Alpine weather station throughout the 2014/15 season.
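The bias, MAE, and relative-frequency statistics used throughout these comparisons can be computed as below. This is a sketch (function name is mine); the fractional tolerance matches the +/- 50% RF threshold used for wind, and the AR, defined in the methods chapter, is omitted here:

```python
def verify(forecast, observed, rel_tol=0.5):
    """Return mean bias (B), mean absolute error (MAE), and the relative
    frequency (RF) of forecasts within +/- rel_tol (fractional) of observed."""
    n = len(observed)
    errors = [f - o for f, o in zip(forecast, observed)]
    bias = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rf = sum(1 for e, o in zip(errors, observed) if abs(e) <= rel_tol * abs(o)) / n
    return bias, mae, rf
```

For variables verified with a fixed threshold instead (e.g. +/- 1.5 °C for temperature), the `rel_tol * abs(o)` term would be replaced by a constant.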
Table 3.6 (relative humidity verification):

B        MAE     RF (+/- 10%)   AR
-2.15%   11.2%   57%            85%

Verification results are shown in Table 3.7 and include the B, MAE, AR, and RF with an error threshold of +/- 50% of the observed value. Results show the WRF model heavily over-forecasted winds, with a positive bias and an average error around 4 m/s. Though the average error was higher for wind gusts, this was likely due to the higher magnitude values, which would inflate the MAE. The RF and AR both show a proportionate increase in wind gust accuracy over wind speed, though still relatively poor overall performance.

Table 3.7. Verification results for WRF forecasted wind speed and gusts during the 2014/15 season.

In Tables 3.8 and 3.9, verification results were broken down categorically into different speeds in order to evaluate which conditions the WRF model had the most difficulty forecasting. Overall, observed wind speeds were generally very light at the Alpine study plot, with wind speeds under 2 m/s and gusts under 5 m/s observed a majority of the time, while the forecasted wind speeds and gusts were typically greater than 5 m/s.

Table 3.8. Categorical WRF wind speed verification results. The first two columns display the percentage of observations and forecasts within each category. The results show the error/accuracy of the WRF model when the measured wind was within each respective category.

Table 3.9. Categorical WRF wind gust verification results.
Table 3.7:

             B          MAE       RF (+/- 50%)   AR
Wind Speed   +3.5 m/s   3.6 m/s   13%            32%
Wind Gust    +3.5 m/s   4.6 m/s   33%            48%

Table 3.8:

                 % of Obs   % of WRF   B          MAE       RF (+/- 50%)   AR
< 2 m/s          84%        12%        +3.8 m/s   3.8 m/s   7%             27%
> 2 to < 4 m/s   13%        27%        +2.3 m/s   2.6 m/s   38%            57%
> 4 to < 6 m/s   2%         29%        +1.3 m/s   1.6 m/s   78%            77%
> 6 m/s          0.2%       31%        +1.0 m/s   2.3 m/s   78%            74%

Table 3.9:

                   % of Obs   % of WRF   B          MAE       RF (+/- 50%)   AR
< 5 m/s            74%        31%        +4.6 m/s   4.9 m/s   15%            43%
> 5 to < 10 m/s    22%        41%        +1.0 m/s   3.6 m/s   23%            64%
> 10 to < 15 m/s   3%         21%        -4.0 m/s   4.5 m/s   41%            62%
> 15 m/s           0.3%       6%         -7.2 m/s   7.2 m/s   67%            56%

3.1.1.5 Incoming Shortwave Radiation (ISWR). ISWR, or solar radiation, is the most important energy source for snow cover in most situations (Armstrong and Brun, 2008). ISWR also impacts the evolution of the seasonal snow cover and snowpack stability during certain times of the year. It is a major player in the surface energy budget and can be a significant source of heat to the snowpack. WRF calculates ISWR using a daylight-mean cosine solar zenith angle over the time interval, accounting for clear-air scattering, water vapor absorption, and cloud albedo and absorption (Skamarock et al., 2008). Hourly forecasted ISWR values were compared against measured values from the MSU weather station over the course of the 2014/15 winter season.

Challenges arose with the observed data because ISWR was measured with an upward facing sensor on an unheated, unventilated pyranometer (see Figure 2.6). This allowed snow to accumulate on top of the sensor, limiting the radiation received and decreasing the amount of ISWR that was measured. The Bridger Bowl Ski Patrol periodically cleared snow off the sensor, though not frequently enough to ensure the sensor was clear 100% of the time. In order to provide the most accurate comparison, the ISWR data was manually filtered to remove observations where it was believed the sensor was covered by snow.
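The screening rules described in the next paragraph can be sketched as simple predicates. The names and per-hour bookkeeping are mine; the thesis applied these criteria manually rather than in code:

```python
def likely_snow_covered(depth_change_cm, iswr, oswr):
    """Sensor probably buried: snow depth is increasing while more shortwave
    is measured leaving the snow surface (OSWR) than arriving (ISWR)."""
    return depth_change_cm > 0 and oswr > iswr

def likely_cleared(iswr, oswr, patrol_on_duty):
    """Sensor probably cleared: ISWR jumps back above OSWR, especially
    during hours when the ski patrol was on duty."""
    return iswr > oswr and patrol_on_duty
```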
To do this, snow depth data at the site was used in conjunction with values of outgoing (reflected) shortwave radiation (OSWR) and ISWR to determine the likelihood of the sensor being covered. For example, if the snow depth was increasing and OSWR was greater than ISWR, it was determined that snow was likely covering the sensor, and those observations were subsequently removed. Then, if the ISWR values dramatically increased and became larger than the OSWR values, especially during the hours when ski patrol was on duty, it was determined that the sensor had been cleared, and those observations were kept. In total, about 25% of the observations were removed due to the likelihood of snow covering the pyranometer. Only observed ISWR values that were determined to be snow free during the daytime hours were used to validate the WRF ISWR predictions, which provided 1828 hours of data.

Validation results can be seen in Table 3.10 and include the B, MAE, AR, and the RF with an error threshold of +/- 50% of the observed value. Results show, even after removing the snow-covered observations, the tendency for the WRF to significantly over-predict ISWR at the study plot.

Table 3.10. Verification results for WRF forecasted ISWR during the 2014/15 season. Forecasted values were only compared against observations during the daytime when it was determined the sensor was not covered by snow.

3.1.1.6 Outgoing Shortwave Radiation (OSWR). Outgoing, or reflected, shortwave radiation is an important piece of the surface radiation budget and determines how much of the ISWR is absorbed into the snowpack. WRF derives OSWR directly from the albedo estimation within the model. Albedo is parameterized based on the percent of snow cover in the WRF Land-Surface Model (LSM) and takes into account snow density, temperature, and age of the snow (Skamarock et al., 2008). Forecasted values were compared to observed values from the MSU weather station only during daytime hours.
Since OSWR was measured using the downward facing sensor on the pyranometer, the data was not impacted by snow covering the sensor.

Table 3.10:

B             MAE          RF (+/- 50%)   AR
+172.4 W/m2   199.9 W/m2   19%            46%

Results in Table 3.11 include the B, MAE, AR, and the RF with an error threshold of +/- 50% of the observed value. The WRF tended to under-forecast OSWR throughout the season, which meant less solar radiation was reflected and more was absorbed into the snowpack.

Table 3.11. Verification results for WRF forecasted OSWR during the 2014/15 season. Forecasted values were only compared against observations during daytime hours.

3.1.1.7 Incoming Longwave Radiation (ILWR). ILWR is another source of heat to the snowpack, which is emitted from the atmosphere and the surrounding terrain (Armstrong and Brun, 2008). WRF estimates ILWR based on atmospheric water vapor, ozone, CO2, and cloud cover and optical depth (Skamarock et al., 2008). As with ISWR, ILWR was measured with an upward facing sensor on the pyranometer and was impacted by snow covering the sensor. The observations identified as compromised by snow cover were also left out of the ILWR forecast validation. For comparative purposes, forecasted and observed values of ILWR were converted from W/m2 to °C using equation 13 from section 3.1.1.2. Leaving the units in W/m2 would have artificially decreased the forecast error due to the larger scale magnitude and the relatively small range of ILWR values. Results are listed in Table 3.12 below and include the B, MAE, and RF with an error threshold of +/- 1.5 °C of the observed value. Throughout the winter, the WRF tended to under-forecast the amount of ILWR at the study plot, with an average error just below 10 °C and a nearly equal negative bias.
Table 3.11:

B            MAE         RF (+/- 50%)   AR
-41.0 W/m2   64.2 W/m2   51%            56%

Comparing these results to the forecasted air and snow surface temperatures using the same RF error threshold reveals decreased performance for forecasted ILWR, with the RF decreasing from about 50% to 12%.

Table 3.12. Verification results for WRF forecasted ILWR during the 2014/15 season.

B         MAE      RF (+/- 1.5 °C)
-8.8 °C   9.5 °C   12%

In summary, the WRF model provided fairly accurate predictions of air temperature and relative humidity while over-predicting precipitation, winds, and snow surface temperature and under-predicting OSWR and ILWR. Some of these forecast discrepancies were likely due to the sheltered location of the study plot, which had lower observed winds and solar radiation. Additionally, the location of the chosen model grid point may have led to higher precipitation amounts over the winter. A more thorough explanation of the forecast errors is discussed in chapter 4.

3.1.2 2014/15 SNOWPACK Simulation Results

Figures 3.5a and b show the modeled snow grain type over the entire season for both the SNOWPACK-Obs and SNOWPACK-WRF simulations. The evolution of the observed snowpack is apparent in both simulations, with the formation of early season weak layers, several significant snow storms in late December and early January, melt-freeze crusts associated with a mid-winter warm up in late January and early February, near-surface facet development in late February, the transition to an isothermal snowpack in mid to late March, and several late season snow storms in April. Substantial differences can be seen between the two simulations, as new snow in the SNOWPACK-WRF model tended to decompose and transition to rounded grains much quicker, and the isothermal transition occurred much more rapidly in mid-March, with liquid water percolating through the entire snowpack in less than a day.
Additionally, the mid-season peak snow depths were higher and the spring melt-off occurred earlier in the SNOWPACK-WRF simulation.

a. SNOWPACK-Obs
b. SNOWPACK-WRF

Figure 3.5. Snow grain simulation for SNOWPACK using locally observed (a) and WRF (b) weather input data for the 2014/15 winter at the Alpine study plot.

3.1.2.1 Verification of Snow Pit Data. Measured and simulated snow pit parameters were compared for both SNOWPACK simulations using the methods discussed in section 2.5.1. A total of 303 layers were sampled from the 19 snow pits dug throughout the season, which included 283 density measurements, 273 temperature measurements, 303 grain size measurements, 786 primary and secondary grain type determinations, and 302 hand hardness measurements, totaling 2733 data points. Table 3.13 summarizes the statistical analysis results for the numerical parameters density, temperature, grain size, and hand hardness. Tables 3.14 and 3.15 show the results for two different grain type analysis methods. Because these parameters were subject to observer error, caution must be taken when interpreting the quantitative results.

Table 3.13. Statistical results comparing measured and simulated numerical snowpack parameters.

Parameter                       Model     Bias    MAE    AR    RF
Density (kg/m3) (n = 283)       SNP-Obs   +9.8    43.3   85%   68% (+/- 50 kg/m3)
                                SNP-WRF   +39.2   55.7   83%   52%
Temperature (°C) (n = 273)      SNP-Obs   +0.5    0.7    //    77% (+/- 1 °C)
                                SNP-WRF   +0.7    0.9    //    76%
Grain Size (mm) (n = 303)       SNP-Obs   -0.32   0.49   62%   63% (+/- 0.5 mm)
                                SNP-WRF   -0.33   0.47   64%   65%
Hand Hardness Index (n = 302)   SNP-Obs   -0.7    0.8    //    47% (+/- 1/2 index)
                                SNP-WRF   -0.4    0.7    //    52%

Table 3.14. Exact match method results for grain type comparison.

Grain Type (Exact Match)   Primary   Secondary
SNP-Obs                    53%       50%
SNP-WRF                    57%       56%

Table 3.15. Agreement score method results for grain type comparison.

Results in Tables 3.13-3.15 show surprisingly similar trends between the two simulations, even though different input data was used.
Table 3.15:

Grain Type (Agreement Score)   Overall Agreement Score
SNP-Obs                        0.71
SNP-WRF                        0.71

Both simulations tended to overestimate density, though the SNOWPACK-Obs simulation provided an overall more accurate prediction. Figures 3.6 and 3.7 both show relatively good agreement between measured and modeled snowpack density for both simulations, though both had several instances where low-density snow was predicted and high-density snow was observed, most likely due to wind drifted snow observed at the surface which was not simulated by SNOWPACK.

Figure 3.6. Scatter plot of measured vs. SNOWPACK-WRF (left) and SNOWPACK-Obs (right) densities.

Figure 3.7. Side-by-side boxplot of measured vs. SNOWPACK-Obs and SNOWPACK-WRF densities. In these boxplots, the center box represents the interquartile range (IQR), with the bottom and top of the box representing the first and third quartiles and the center line representing the median. The bottom and top whisker endlines represent the maximum and minimum values.

For snowpack temperature, both models had around a 0.5 °C warm bias, but were very similar in terms of accuracy, with 77% and 76% of the predictions, respectively, falling within 1 °C of observed. Figures 3.8 and 3.9 show relatively good agreement between observed and modeled temperatures, though SNOWPACK-WRF did show larger deviations during periods of observed cold near-surface temperatures.

Figure 3.8. Scatter plot of measured vs. SNOWPACK-WRF (left) and SNOWPACK-Obs (right) temperatures.
Figure 3.9. Side-by-side boxplot of measured vs. SNOWPACK-Obs and SNOWPACK-WRF temperatures. In these boxplots, the center box represents the interquartile range (IQR), with the bottom and top of the box representing the first and third quartiles and the center line representing the median. The bottom and top whisker endlines represent the maximum and minimum values excluding outliers. The circles represent outliers more than 1.5 times the IQR below the lower quartile.

Since temperatures fluctuate more in the top part of the snowpack than in the middle and bottom, near-surface temperatures (upper 10% of the snowpack) were additionally validated, as shown in Table 3.16. Both models showed an increased warm bias and overall error, as well as a decreased RF accuracy, showing a tendency of the models to struggle more with predicting near-surface temperature fluctuations. The SNOWPACK-WRF simulation also performed comparatively worse when only evaluating near-surface temperatures, with the RF difference between the two models increasing from 1% in Table 3.13 to 10% in Table 3.16. This is likely due to the fact that the SNOWPACK-Obs simulation was forced with measured snow surface temperatures, so near-surface temperatures measured in the snow pits likely correlated closely with those measured by the nearby MSU weather station at the same time, increasing the accuracy.

Table 3.16. Statistical results comparing measured and simulated near-surface temperatures (top 10% of snowpack).
Parameter: Temperature (°C) (top 10% of snowpack)

          Bias   MAE   RF (+/- 1 °C)
SNP-Obs   +1.1   1.5   54%
SNP-WRF   +2.1   2.4   44%

Both models under-predicted snowpack grain size, with a nearly identical bias and MAE, though SNOWPACK-WRF slightly outperformed SNOWPACK-Obs throughout the season. Both models also had trouble simulating the largest grain sizes associated with depth hoar near the bottom of the snowpack, with the average error for both models around -1.35 mm when the observed grain size was greater than 2 mm. This is most likely due to the simulated thickness of the depth hoar layer: only a 10-20 cm thick layer was modeled through most of the season, while 50 cm of depth hoar was typically observed, especially after mid-January. SNOWPACK was therefore simulating smaller rounded grains where larger depth hoar was actually observed. Additional difficulty arises from the fact that SNOWPACK assigns a default size of 0.3 mm to new snow particles, which is typically much smaller than what is observed in the field. This led to a decrease of the mean grain size, adding to the negative bias seen in the simulations. Figure 3.10 shows little correlation between observed and predicted grain size, and Figure 3.11 shows the negative bias present in both models.

Figure 3.10. Scatter plot of measured vs. SNOWPACK-WRF (left) and SNOWPACK-Obs (right) grain size.

Figure 3.11. Side-by-side boxplot of measured vs. SNOWPACK-Obs and SNOWPACK-WRF grain size. In these boxplots, the center box represents the interquartile range (IQR), with the bottom and top of the box representing the first and third quartiles and the center line representing the median. The bottom and top whisker endlines represent the maximum and minimum values excluding outliers.
The circles represent outliers more than 1.5 times the IQR above the upper quartile.

Results for hand hardness showed both models tended to have a lower index (i.e., softer) when compared to field observations, though SNOWPACK-WRF did show some improvement over SNOWPACK-Obs (Fig. 3.12). Discrepancies likely arose from numerous observed ice layers, especially in late winter and spring, as well as harder surface snow due to wind deposition that was not simulated.

Figure 3.12. Side-by-side boxplot of measured vs. SNOWPACK-Obs and SNOWPACK-WRF hand hardness index. In these boxplots, the center box represents the interquartile range (IQR), with the bottom and top of the box representing the first and third quartiles and the center line representing the median. The bottom and top whisker endlines represent the maximum and minimum values excluding outliers. The circle represents an outlier more than 1.5 times the IQR below the lower quartile.

One of the most useful tools for avalanche forecasters is SNOWPACK's ability to simulate the formation and evolution of snow grain forms, allowing forecasters to track important weak layers throughout the season. Simulated grain type showed only moderate agreement using the exact match method, with a little over 50% of the simulated grain types matching the corresponding observations (Table 3.14). Using the agreement score method provided a better result, with no difference between the two models (Table 3.15). These comparisons only give an average score over the whole season, but don't provide any details as to where errors occurred or how well certain grain types were simulated.
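The probability of detection (POD) reported with the contingency tables that follow comes directly from the table counts. A minimal sketch (the dictionary layout and function name are mine):

```python
def pod(table, group):
    """Probability of detection for one grain-type group.

    `table[modeled][observed]` holds layer counts.  POD is the number of
    correct predictions of the group divided by the number of times the
    group was observed; it is undefined (None) when the group was never
    observed, matching the "//" entries in Table 3.17.
    """
    observed_total = sum(row[group] for row in table.values())
    return table[group][group] / observed_total if observed_total else None
```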
A grain type contingency table for each model (Table 3.17) was produced to provide additional details for the grain type simulations. Here, grain types were grouped into related types in order to focus on the ability of the model to simulate grains formed under similar conditions, as well as those that have an analogous impact on the stability of the snowpack. The groups consisted of precipitation particles (PP) and decomposing precipitation particles (DF) in one group, and facets (FC), depth hoar (DH), and rounding depth hoar (FCxr) in a second group. Rounded grains (RG), surface hoar (SH), and melt forms (MF), including melt-freeze crusts, were left separate.

Table 3.17. Primary and secondary grain type contingency tables for the (a) SNOWPACK-Obs model and the (b) SNOWPACK-WRF model. The columns contain the number of observed grain types in each group and the rows contain the number of modeled grain types in each group. POD is the probability of detection for each grain type group. It is calculated as the total number of correct predictions of a grain type group (shaded) divided by the total number of times the grain type group was observed.

a. SNOWPACK-Obs

Primary Grain Type (modeled rows vs. observed columns)

             PP/DF   RG     FC/DH/FCxr   SH   MF
PP/DF        36      12     6            0    7
RG           0       85     6            0    19
FC/DH/FCxr   5       52     65           0    21
SH           0       0      0            0    0
MF           2       7      0            0    70
POD          0.84    0.54   0.84         //   0.60

Secondary Grain Type (modeled rows vs. observed columns)

             PP/DF   RG     FC/DH/FCxr   SH   MF
PP/DF        36      6      2            3    5
RG           1       66     19           0    20
FC/DH/FCxr   6       65     58           0    27
SH           0       0      0            0    0
MF           0       6      1            0    72
POD          0.84    0.46   0.73         0    0.58

Results from Table 3.17 show how accurately each grain type was predicted. The SNOWPACK-WRF model struggled with predicting snow grains associated with weak layers (the FC/DH/FCxr group), with a significant decrease in accuracy when compared with SNOWPACK-Obs. In summary, both models tended to overestimate density and temperature while underestimating grain size and hand hardness.
The SNOWPACK-Obs model outperformed the SNOWPACK-WRF model in density and temperature while actually performing equally or slightly worse in grain size, hand hardness, and grain shape.

b. SNOWPACK-WRF

Primary Grain Type (modeled rows vs. observed columns)

             PP/DF   RG     FC/DH/FCxr   SH   MF
PP/DF        27      3      1            0    2
RG           9       107    19           0    17
FC/DH/FCxr   5       31     30           0    7
SH           0       0      0            0    0
MF           2       15     27           0    91
POD          0.63    0.69   0.39         //   0.78

Secondary Grain Type (modeled rows vs. observed columns)

             PP/DF   RG     FC/DH/FCxr   SH   MF
PP/DF        17      2      2            1    2
RG           14      96     27           2    20
FC/DH/FCxr   11      27     31           0    6
SH           0       0      0            0    0
MF           1       18     20           0    96
POD          0.40    0.67   0.39         0    0.77

Overall, forcing SNOWPACK with WRF data instead of in-situ data did not lead to any discernible differences in performance when validating against the observed pit data.

3.1.2.2 Verification of Other Observed Data. Besides manual snow profile data, observed snowpack data from the Alpine study plot also included snow depth from the MSU, Alpine, and Brackett Creek SNOTEL weather stations (5137 hours each), manual SWE measurements from the study plot (19), automatic SWE measurements from the Brackett Creek SNOTEL station (5137 hours), 50 cm temperature from the MSU weather station (3543 hours), and the 24-hour new snow and new snow density from the Bridger Bowl ski patrol end of day measurements (156 days), totaling 24422 unique data points for validation against the two SNOWPACK models.

3.1.2.2.1 Snow Depth. Figure 3.13 compares measured and modeled snow depth between the MSU weather station and the SNOWPACK-WRF simulation. Snow depth measured at the Alpine weather station and the Brackett Creek SNOTEL station was also included for comparative purposes. The modeled snow depth matched the observed snow depths fairly closely through the end of 2014. Between January and March, the modeled snow depth began to deviate from the observations and was between 50-75 cm deeper than the two weather stations (circled area in Figure 3.13).
The modeled snowpack melted off much more rapidly once the spring melt began in mid-March and remained below the observed snow depths through the rest of the spring, with the snow melting out about two weeks earlier than at the three weather stations. Also of interest is the difference in snow depth between the co-located MSU and Alpine weather stations, which shows the high spatial variability of snow depth within the small study plot. The Alpine weather station was located closer to the trees, where it received less direct sunlight than the MSU weather station, slowing snowmelt. Additionally, wind-drifted snow would accumulate along the tree line, especially during strong NE wind events, further increasing the snow depth discrepancy between the two weather stations.

Figure 3.13. Comparison of observed and SNOWPACK-WRF modeled snow depth during the 2014/15 season, with the area between the two dashed lines showing the period of greatest model deviation.

Statistical results are listed in Table 3.18 and show significant overall error compared to all three weather stations through the winter. The model had a positive bias compared to the MSU and Brackett Creek stations but actually had a slight negative bias compared to the Alpine station. This is mostly due to the higher observed depths and slower melting at that station later in the season.

Table 3.18. 2014/15 SNOWPACK-WRF snow depth verification results compared against the snow depth measured at three nearby weather stations.

3.1.2.2.2 SWE. Modeled SWE for both SNOWPACK simulations was compared against the 19 manual observations made at the Alpine study plot, as seen in Figure 3.14. Additionally, the Brackett Creek SWE, as measured on a snow pillow, was included for comparative purposes.
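The bias (B), mean absolute error (MAE), and accuracy statistics quoted in Table 3.18 and throughout this chapter can be sketched in a few lines of Python. The AR form below (mean per-point relative accuracy, floored at zero) is an assumption for illustration only; the exact AR definition used in this thesis is the one given in the methods chapter.

```python
def bias(forecast, observed):
    """Mean error: positive values indicate over-prediction."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(observed)

def mae(forecast, observed):
    """Mean absolute error."""
    return sum(abs(f - o) for f, o in zip(forecast, observed)) / len(observed)

def accuracy_ratio(forecast, observed):
    """Illustrative AR: mean per-point relative accuracy, floored at zero.
    This form is an assumption, not the thesis's exact definition."""
    scores = [max(0.0, 1.0 - abs(f - o) / o)
              for f, o in zip(forecast, observed) if o > 0]
    return sum(scores) / len(scores)

# Hypothetical snow depths (cm): model vs. three observations
obs = [100.0, 120.0, 110.0]
mod = [115.0, 150.0, 120.0]
print(round(bias(mod, obs), 1))            # 18.3 (positive bias)
print(round(accuracy_ratio(mod, obs), 2))  # 0.84
```

Because bias is a signed mean, compensating over- and under-predictions can cancel, which is why MAE is reported alongside it throughout these tables.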
As with snow depth, SWE in the SNOWPACK-WRF model was in generally good agreement with the observations through the end of 2014, but began to deviate significantly into January with much higher SWE values (area between the dashed lines in Figure 3.14). The spring melt was not as abrupt as for snow depth, but the modeled SWE still melted out earlier than Brackett Creek and the SNOWPACK-Obs simulation. SNOWPACK-Obs showed very good agreement with the measured data points and followed the Brackett Creek SWE fairly closely as well.

Table 3.18 (snow depth, cm):

SNP-WRF vs.       Bias    MAE    AR
MSU              +15.1   31.8   69%
Alpine            -1.4   30.1   72%
Brackett Creek    +8.3   17.9   81%

Figure 3.14. Comparison of observed and modeled SWE during the 2014/15 season, with the area between the two dashed lines showing the period of greatest model deviation.

Statistical results are listed in Table 3.19 and show the poor prediction of SWE by the SNOWPACK-WRF model when compared to the measured data, with a large positive bias and MAE. Performance also differed greatly compared to the SNOWPACK-Obs model, with much higher errors and lower overall accuracy. SWE from the SNOWPACK-WRF model did correlate better with the Brackett Creek SWE than with the SWE measured at the Alpine study plot, with a much smaller bias and average error and a higher overall accuracy.

Table 3.19. 2014/15 SWE verification results compared against the Alpine study plot manual measurements and the Brackett Creek SNOTEL station.

3.1.2.2.3 50 cm Temperature. Snowpack temperatures at 50 cm above the ground were measured via the MSU weather station between Dec 29th and May 26th. 50 cm temperatures for both models were validated against the observed dataset through May 13th, which was the last date that 50 cm temperatures were available in the SNOWPACK-WRF model.
Results in Figure 3.15 and Table 3.20 show both models had a warm bias that was more pronounced at the beginning of the time period. As the snow depth increased through the season, modeled temperatures began to match the observed temperatures better, though the SNOWPACK-WRF model did not simulate the sinusoidal fluctuations seen between late January and mid-March, which were simulated quite well by the SNOWPACK-Obs model. Both models did simulate the timing of the isothermal transition (when the temperature increased to, and stayed at, 0 °C due to the presence of melt water percolating through the snowpack) within a couple of days of the observed date.

Table 3.19 (SWE, mm):

            vs Alpine Manual SWE        vs Brackett Creek SWE
             Bias     MAE     AR         Bias    MAE    AR
SNP-Obs     +17.5    24.0    93%         -9.0   55.1   81%
SNP-WRF    +175.1   175.1    66%        +59.7   93.7   74%

Figure 3.15. Comparison of observed and modeled 50 cm snowpack temperature during the 2014/15 season.

Table 3.20. 2014/15 50 cm snowpack temperature verification results compared against the MSU weather station for both SNOWPACK model runs.

50 cm Temperature (°C):

            Bias    MAE    RF (+/- 1 °C)
SNP-Obs    +0.32   0.36       93%
SNP-WRF    +0.48   0.52       79%

3.1.2.2.4 24-Hour and Total Snowfall. Daily snowfall totals (24HN) were measured by ski patrol each day from November 1st through April 4th at 1600 MST using a 24-hour snow board located next to the Alpine weather station. 24HN values were also calculated in SNOWPACK using either precipitation sums or changes in snow depth. New snow amounts were calculated in the SNOWPACK-WRF model using the provided precipitation data and a parameterized new snow density, which calculates the density, and hence the new snow amount, using different meteorological factors including temperature and wind speed.
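The density parameterization just described can be illustrated with a toy version. The functional form and coefficients below are invented for illustration; SNOWPACK's actual regression uses more predictors and different constants. Only the qualitative behavior, density increasing with air temperature and wind speed, and the standard SWE-to-depth conversion are intended to match the text.

```python
def new_snow_density(air_temp_c, wind_speed_ms):
    """Toy new-snow density (kg/m^3): warmer air and stronger wind both
    raise density, mirroring the direct relationships noted in the text.
    Coefficients are invented for illustration only."""
    density = 90.0 + 4.0 * (air_temp_c + 15.0) + 6.0 * wind_speed_ms
    return max(30.0, min(250.0, density))  # clamp to a plausible range

def swe_to_new_snow(swe_mm, density_kg_m3):
    """Convert new SWE (mm of water, i.e. kg/m^2) to new snow depth (cm)."""
    return swe_mm * 100.0 / density_kg_m3

# 10 mm of SWE falling at -10 degC with a 2 m/s wind:
rho = new_snow_density(-10.0, 2.0)           # 122.0 kg/m^3
print(round(swe_to_new_snow(10.0, rho), 1))  # 8.2 cm of new snow
```

Note the direct relationship the text describes: any over-forecast of temperature or wind inflates the density and therefore deflates the modeled new snow depth for the same SWE.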
24HN was calculated in the SNOWPACK-Obs model using changes in the provided snow depth and a parameterized new snow settlement regime (Lehning and Bartelt, 2002). Measured and modeled 24HN amounts were compared using the same procedure as for 24HNW, discussed in section 3.1.1.3. Results are provided below in Table 3.21.

Table 3.21. 2014/15 24HN verification results for both SNOWPACK model runs.

24HN (through April 4th, cm):

            Bias    MAE    AR (all days)    AR (only days with observed snow)
SNP-Obs    +0.41   1.36        68%                       63%
SNP-WRF    +0.63   3.30        58%                       48%

Both models tended to over-predict 24HN, though SNOWPACK-Obs provided a more accurate prediction than SNOWPACK-WRF. When looking only at days with observed snowfall, SNOWPACK-WRF's performance decreased even more, with the AR dropping by 10%. SNOWPACK-Obs stayed fairly accurate, dropping only 5% on days with observed snow. This shows the strength of determining new snow from snow depth and the settlement regime, while highlighting the weakness of determining new snow using only precipitation data.

Next, 24HN values were summed over the entire season to determine total accumulated snowfall, and modeled and observed values were compared. Though this measure is less important to avalanche forecasters than 24HN, seasonal sums may still be useful to other user groups like ski resort managers, climatologists, and hydrologists. Results in Figure 3.16 and Table 3.22 show fairly good agreement between the measured and modeled snow totals, especially through most of January. By the end of the season, modeled snow totals differed by only 65 to 106 cm from the observed totals, with the overall accuracy varying between 90% and 84%. Surprisingly, after running the simulations into early June, total snow accumulations were nearly equal between the two simulations.

Figure 3.16. Comparison of modeled and observed accumulated snowfall during the 2014/15 season.

Table 3.22.
Comparison of modeled and observed accumulated snowfall during the 2014/15 season.

Total Snowfall (through April 4th):

            Total (cm)   Bias (cm)    AR
Measured       575.3         //       //
SNP-Obs        640.3       +65.0     90%
SNP-WRF        681.2      +105.9     84%

3.1.2.2.5 New Snow Density. As mentioned in the previous section, SNOWPACK calculates new snow density via an empirical solution that takes into account, among other factors, temperature and wind speed, which both have a direct relationship to density. Because of timing differences between observed and modeled precipitation, new snow density was averaged over each month during the season, allowing the data to be compared more easily. Results in Table 3.23 show the tendency for SNOWPACK-Obs to under-predict new snow density and SNOWPACK-WRF to over-predict it, though by similar amounts and with nearly equal overall accuracies.

Table 3.23. Comparison of modeled and observed monthly averaged new snow density during the 2014/15 season.

New Snow Density (kg/m3; bias in kg/m3):

                      Measured  SNP-Obs  SNP-WRF   SNOWPACK-Obs        SNOWPACK-WRF
                                                   Bias    Accuracy    Bias    Accuracy
November                 129       87      150     -42       68%       +21       86%
December                  71       80      135      -9       88%       +65       52%
January                  123       87      131     -36       71%        +8       94%
February                 130       83      131     -47       64%        +1       99%
March                    138      109      146     -29       79%        +8       94%
April (through 4th)       89       96      173      +8       92%       +84       51%
2014/15 Avg              113       90      144     -23       80%       +31       78%

3.1.2.3 Overall Accuracy. The overall accuracy for both simulations was calculated by averaging the ARs (or RFs where AR was not calculated) for each snowpack parameter (Table 3.24). Results show only an 8% accuracy decrease when using WRF forecasted weather data versus locally observed weather data to drive SNOWPACK. When looking specifically at pit data, there is little discernible difference in accuracy between the two simulations.
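The overall scores in Table 3.24 are straightforward unweighted means of the eleven per-parameter values, which can be verified directly:

```python
# Per-parameter accuracies from Table 3.24, expressed as fractions
snowpack_obs = {"Density": 0.85, "Temperature": 0.77, "Grain Size": 0.63,
                "Hand Hardness": 0.47, "Grain Type": 0.71, "HS": 0.98,
                "SWE": 0.93, "50 cm Temperature": 0.93, "24HN": 0.68,
                "Total Snowfall": 0.90, "New Snow Density": 0.80}
snowpack_wrf = {"Density": 0.83, "Temperature": 0.76, "Grain Size": 0.65,
                "Hand Hardness": 0.52, "Grain Type": 0.71, "HS": 0.69,
                "SWE": 0.66, "50 cm Temperature": 0.79, "24HN": 0.58,
                "Total Snowfall": 0.84, "New Snow Density": 0.78}

def overall_accuracy(scores):
    """Unweighted mean of the per-parameter ARs/RFs, as in Table 3.24."""
    return sum(scores.values()) / len(scores)

print(round(overall_accuracy(snowpack_obs), 2))  # 0.79
print(round(overall_accuracy(snowpack_wrf), 2))  # 0.71
```

The unweighted mean treats a pit parameter like hand hardness the same as a season-long series like HS; a weighted mean would be an alternative design choice, but Table 3.24 uses the simple average.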
Most of the inaccuracies in the SNOWPACK-WRF model come from HS and SWE, which stem from the over-forecasted precipitation.

Table 3.24. Overall accuracy comparison of the SNOWPACK-Obs and SNOWPACK-WRF models during the 2014/15 season.

SNOWPACK Accuracy Comparison:

                                SNOWPACK-Obs   SNOWPACK-WRF
Density                              85%            83%
Temperature                          77%            76%
Grain Size                           63%            65%
Hand Hardness                        47%            52%
Grain Type (Agreement Method)        71%            71%
HS (vs MSU)                          98%            69%
SWE (vs Alpine Manual)               93%            66%
50 cm Temperature                    93%            79%
24HN                                 68%            58%
Total Snowfall                       90%            84%
New Snow Density                     80%            78%
Overall                              79%            71%

In summary, though some forecasted meteorological parameters were routinely inaccurate throughout the season, there was only a small decrease in the overall accuracy of the SNOWPACK-WRF model.

3.2 Winter 2015/16

The following section provides results of the NWP-forced SNOWPACK simulations for five locations across western Montana. Analysis was event based and focused primarily on the model's overall ability to characterize the snow cover and avalanche danger at a regional scale, with emphasis on specific events and case studies.

The 2015/16 winter across western Montana was characterized by above average temperatures, near normal precipitation, and slightly below average snowfall (Table 3.25 and Figure 3.17). The dominant feature that impacted the weather over the season was an El Niño pattern, which typically brings warmer and drier weather to the NW U.S. The warm temperature anomaly reflected the El Niño pattern, but precipitation was greater than what was expected at the beginning of the season, per NOAA predictions (Climate Prediction Center, 2015).

Table 3.25. Winter 2015/16 temperature, precipitation, SWE, and snowfall departures from average for 5 sites across western Montana used during the 2015/16 study. Brackett Creek and the Alpine weather station are both representative of the Bridger Mountains.
Not listed is the Big Mountain weather station atop Whitefish Mountain in NW Montana, which was installed only two years prior, meaning no climate data were available for that site.

1-Nov 2015 to 31-May 2016:

                          Temperature               Precipitation    SWE (Average
                          (Departure from Average)  (% of Average)   Daily % of Median)
Noisy Basin SNOTEL           +2.04                     93.6%            89.6%
Stuart Mountain SNOTEL       +1.64                     97.1%            85.9%
Lolo Pass SNOTEL             +1.93                     97.2%            70.6%
Brackett Creek SNOTEL        +1.27                     95.4%            95.9%

1-Nov 2015 to 31-Mar 2016:

                          Temperature               Precipitation    Snowfall
                          (Departure from Average)  (% of Average)   (% of Average)
Alpine Weather Station       +1.68                    105.1%            80.4%

Figure 3.17. Observed versus average values for precipitation and SWE during the 2016 water year for the four SNOTEL stations listed in Table 3.25. (Source: NRCS, 2016)

The season started off below normal in terms of precipitation for most locations as a persistent ridge of high pressure formed over the western U.S. during mid to late November, bringing cold and dry conditions across most of NW Montana. This led to the formation of widespread facets and depth hoar across most of the region and laid the foundation for dangerous avalanche conditions through mid-winter. In December, the ridge broke down and a wintertime pattern brought a steady stream of storms to the region. A powerful system with significant moisture brought 4-8 cm of SWE to the mountains in NW Montana, with nearly 5 cm falling as rain at elevations up to 2100 m, leading to widespread natural avalanche activity (Fig. 3.18b, #4). In SW Montana, several storms brought significant snowfall to the Bridger Mountains throughout December and early January, leading to numerous natural and human triggered avalanches failing on the November weak layer (Fig. 3.18a, #1). In mid-January, the largest storm of the season in NW Montana dropped 13.7 cm of snow water equivalent and 114 cm of snow over a 5-day period, causing widespread natural avalanche activity (Fig. 3.18b, #5).
A steady stream of storms continued into February over NW Montana while drier and warmer conditions prevailed in SW Montana. March brought below average precipitation and above average temperatures to the region, with stabilizing snowpack conditions. Early April brought warm temperatures and a quick transition to a springtime snowpack throughout the region, with some wet slab avalanche activity observed in SW Montana (Fig. 3.18a, #3). Several late season storm systems brought significant snow to the mountains, including 65 cm of snow to the Bridger Mountains in mid-May and over 90 cm of snow to the Whitefish Mountains in late May.

Figure 3.18. Seasonal weather timelines for (a) Bridger Bowl (2255 m) and (b) Noisy Basin (1841 m) showing 24HNW, daily average temperature, and notable avalanche cycles during the 2015/16 winter.

3.2.1 WRF Model Performance

During the 2015/16 season, the WRF model was run for five locations across western Montana, corresponding to locations with pre-existing weather stations in or near avalanche terrain. Though this year's study did not focus on the quantitative accuracy of the WRF and SNOWPACK models, several quantitative measures of accuracy were calculated to provide some indication of model performance. For forecasted weather data, total precipitation, 24HNW, and temperature were validated against available observed data.
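Validating 24HNW amounts to collapsing the hourly forecast precipitation into 24-hour totals and scoring them against the observed daily values, both over all days and over only the days when precipitation was actually observed. A minimal sketch, with hypothetical function names:

```python
def daily_totals(hourly_mm, hours_per_day=24):
    """Collapse an hourly precipitation series into daily (24HNW) totals."""
    return [sum(hourly_mm[i:i + hours_per_day])
            for i in range(0, len(hourly_mm), hours_per_day)]

def mae_24hnw(forecast_daily, observed_daily, only_obs_precip=False):
    """MAE over all days, or only over days with observed precipitation."""
    pairs = list(zip(forecast_daily, observed_daily))
    if only_obs_precip:
        pairs = [(f, o) for f, o in pairs if o > 0]
    return sum(abs(f - o) for f, o in pairs) / len(pairs)

# Two hypothetical days: steady light forecast precipitation, then a dry day
hourly = [0.5] * 24 + [0.0] * 24
print(daily_totals(hourly))                          # [12.0, 0.0]
print(mae_24hnw(daily_totals(hourly), [10.0, 0.0]))  # 1.0
```

Restricting the score to days with observed precipitation removes the many easy, correctly dry days, which is why the conditional ARs in the timing comparisons run lower than the all-days values.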
These parameters were validated over the same time period as the corresponding SNOWPACK simulation for each location, which ran through the day the snowpack melted out (Fig. 3.19).

Figure 3.19. Date range over which WRF meteorological validation was performed for each location (Big Mountain Summit: 2 Nov - 20 Jun; Noisy Basin: 2 Nov - 11 Jun; Stuart Mountain: 2 Nov - 11 Jun; Lolo Pass: 2 Nov - 6 May; Brackett Creek: 30 Oct - 1 Jun; Bridger Bowl - Alpine: 29 Oct - 7 Jun). The end date represents the day the measured snowpack completely melted out.

Accumulated precipitation was over-forecasted at all sites, though the predictions for Noisy Basin, Lolo Pass, and Bridger Bowl were all fairly accurate, with ARs ranging between 83-96% over the whole season (Table 3.26, Figure 3.20). Precipitation was heavily over-forecasted at Stuart Mountain and, to a lesser extent, Big Mountain Summit. Even though observed precipitation at Big Mountain Summit was corrected for wind-induced under-catch (eq. 1), additional under-catch issues may have been present due to the use of an unshielded heated tipping bucket rain gauge, similar to the one used at the Alpine weather station. Evaluating precipitation timing with the 24HNW results (Table 3.27), all locations had a positive bias and similar MAEs and ARs. Additionally, the ARs were similar to those from the previous year's WRF run at Bridger Bowl (see Table 3.5). Even the sites with accurate accumulated precipitation forecasts showed decreased 24HNW performance, revealing systemic issues associated with the timing of forecasted precipitation events. Forecasted temperatures were bias corrected according to Table 3.29 before being validated against observed temperatures (Table 3.28).
With the exception of Big Mountain Summit, all locations showed a cold bias as well as decreased accuracy compared to the 2014/15 WRF temperature verification (see Table 3.2).

Table 3.26. WRF precipitation forecast verification for the 2015/16 winter for five locations across western Montana. (Note: Big Mountain Summit measured precipitation was adjusted for wind-induced under-catch using eq. 1 in section 2.3.1 and was only available through June 13th. Also, data from the Brackett Creek and Bridger Bowl stations were both used to verify the same WRF model grid point at Bridger Bowl.)

                                          Total Precip        B          AR
Big Mountain Summit (2 Nov - 13 Jun)
  Measured                                  949.9 mm         //          //
  WRF                                      1252.3 mm      +302.4 mm     76%
Noisy Basin (2 Nov - 11 Jun)
  Measured                                 1353.8 mm         //          //
  WRF                                      1452.6 mm       +98.8 mm     93%
Stuart Mountain (2 Nov - 11 Jun)
  Measured                                  858.5 mm         //          //
  WRF                                      1390.1 mm      +531.6 mm     62%
Lolo Pass (2 Nov - 6 May)
  Measured                                  792.5 mm         //          //
  WRF                                       825.0 mm       +32.6 mm     96%
Brackett Creek (30 Oct - 1 Jun)
  Measured                                  759.5 mm         //          //
  WRF                                       912.7 mm      +153.2 mm     83%
Bridger Bowl - Alpine (1 Dec - 31 Mar)
  Measured                                  467.4 mm         //          //
  WRF                                       548.1 mm       +80.7 mm     85%

Figure 3.20. Comparison of forecasted and observed accumulated precipitation during the 2015/16 season for (a) Big Mountain Summit, (b) Noisy Basin, (c) Stuart Mountain, (d) Lolo Pass, and (e) Bridger Bowl. (Note: Date ranges and precipitation scales are different for each location.)
Table 3.27. WRF 24HNW verification for the 2015/16 winter for five locations across western Montana. (Note: Big Mountain Summit measured precipitation was adjusted for wind-induced under-catch using eq. 1 in section 2.3.1 and was only available through June 13th. Also, data from the Brackett Creek and Bridger Bowl stations were both used to verify the same WRF model grid point at Bridger Bowl.)

Table 3.28. WRF temperature verification for the 2015/16 winter for five locations across western Montana.
WRF temperatures were first bias corrected for elevation, as listed in Table 3.29.

Table 3.27 (24HNW):

                                           B          MAE       AR (all days)   AR (only days with observed precip)
Big Mountain Summit (2 Nov - 13 Jun)     +1.40 mm   3.80 mm         50%                 43%
Noisy Basin (2 Nov - 11 Jun)             +0.46 mm   3.95 mm         56%                 49%
Stuart Mountain (2 Nov - 11 Jun)         +2.39 mm   3.90 mm         49%                 48%
Lolo Pass (2 Nov - 6 May)                +0.18 mm   3.21 mm         51%                 48%
Brackett Creek (30 Oct - 1 Jun)          +0.71 mm   3.00 mm         57%                 49%
Bridger Bowl - Alpine (1 Dec - 31 Mar)   +0.67 mm   3.90 mm         47%                 42%

Table 3.28 (temperature):

                                           B          MAE       RF (+/- 1.5 °C)
Big Mountain Summit (2 Nov - 20 Jun)     -1.46 °C   1.21 °C         48%
Noisy Basin (2 Nov - 11 Jun)             -2.57 °C   3.07 °C         23%
Stuart Mountain (2 Nov - 11 Jun)         -3.27 °C   3.46 °C         21%
Lolo Pass (2 Nov - 6 May)                -1.79 °C   3.10 °C         29%
Bridger Bowl - Alpine (29 Oct - 7 Jun)   -1.70 °C   2.27 °C         41%

3.2.2 SNOWPACK Simulation Results

As performed in 2014/15, WRF data for all five locations was used as input to drive the SNOWPACK model to simulate the seasonal snow cover for the 2015/16 season. Because qualitative comparisons against snow and avalanche observations were the main focus this season, routine snow pit data were not collected, so in-depth quantitative comparisons were not possible. However, quantitative comparisons of snow depth and, where available, SWE were made using data from nearby SNOTEL and weather stations for each of the five WRF grid points.

As seen in the previous season, due to horizontal resolution limitations the WRF model tends to over-predict wind speeds in mountainous terrain, especially for the sheltered locations typically associated with SNOTEL sites. In addition to the aforementioned temperature bias corrections, wind speed corrections for each location were implemented prior to running SNOWPACK to ensure a more realistic simulation. Corrections were determined using available observed data at the site or, if no data were available, were estimated using results from previous simulations as well as recommendations from the NWS.
Observed wind data was available for Big Mountain Summit, Lolo Pass, and Bridger Bowl (Alpine weather station). Bias corrections used for the five SNOWPACK simulations are listed in Table 3.29.

Table 3.29. Bias corrections used for the 2015/16 SNOWPACK simulations. The wind speed multiplier indicates the number each hourly wind speed was multiplied by before running through SNOWPACK.

2015/16 WRF Bias Corrections:

                                           Temperature Offset (°C)   Wind Speed Multiplier
Big Mountain Summit                                -1.3                      0.7
Noisy Basin                                        -2.3                      0.3
Stuart Mountain                                    -1.5                      0.3
Lolo Pass                                          +0.7                      0.1
Bridger Bowl (Brackett Creek and Alpine)           -0.9                      0.2

SNOWPACK-WRF modeled snow grain type for each location is presented in Figure 3.21. The main features discussed in the seasonal weather summary were generally modeled in each simulation. Each simulation showed the formation of weak layers during the November cold spell, with facets and depth hoar (blue) forming and subsequently being buried by new snow (green). The early December rain event in NW Montana can be seen in the formation of melt forms/crusts (red) in the Big Mountain Summit and Noisy Basin simulations. Additionally, the mid-January storms across all of western Montana are seen in each simulation, with large amounts of new snow accumulating on top of previously formed near-surface facets. As expected, the transition to a springtime snowpack occurs earlier at the lower elevation site of Lolo Pass, while the transition occurs later in April at the other locations. Potent late season storms were also simulated relatively accurately at Big Mountain, Noisy Basin, Stuart Mountain, and Bridger Bowl, represented as sharp increases in snow depth in late April and May.

Figure 3.21. SNOWPACK-WRF simulated snow grain type for each location during the 2015/16 winter season. Note different snow depth scales for each location.

Comparisons of simulated vs observed snow depths and SWE are seen in Tables 3.30 and 3.31 as well as Figures 3.22 and 3.23.
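Applying the Table 3.29 corrections is a simple per-hour transform of the WRF series before they are written to the SNOWPACK input files. A sketch, with hypothetical function and variable names:

```python
# (temperature offset in degC, wind speed multiplier) from Table 3.29
CORRECTIONS = {
    "Big Mountain Summit": (-1.3, 0.7),
    "Noisy Basin":         (-2.3, 0.3),
    "Stuart Mountain":     (-1.5, 0.3),
    "Lolo Pass":           (+0.7, 0.1),
    "Bridger Bowl":        (-0.9, 0.2),
}

def correct_forcing(site, temps_c, winds_ms):
    """Additive temperature offset and multiplicative wind correction,
    applied hour by hour to the raw WRF output before SNOWPACK ingests it."""
    offset, factor = CORRECTIONS[site]
    return ([t + offset for t in temps_c],
            [w * factor for w in winds_ms])

# Example: Lolo Pass warms by 0.7 degC and keeps 10% of the WRF wind speed
temps, winds = correct_forcing("Lolo Pass", [0.0, -5.0], [10.0, 2.0])
```

The temperature correction is additive (an elevation/lapse-rate adjustment), while the wind correction is multiplicative, reflecting that WRF's terrain-smoothing error scales with wind speed rather than adding a constant offset.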
Snow depth and SWE simulations were generally as accurate as, or more accurate than, the previous season's simulation at Bridger Bowl. Results show increasing accuracies for locations with more accurate precipitation forecasts, though some contrary results can be seen. For example, Lolo Pass had an accumulated precipitation accuracy of 96% but a snow depth accuracy of only 68%, with a generally much shallower early season snowpack and a slower melting spring snowpack. As expected, the heavily over-forecasted precipitation at Stuart Mountain led to higher than observed snow depths and SWE throughout the entire season. Overall, the simulations provided a good representation of the general snowpack characteristics across the region, with seemingly improved results compared to the previous season for most locations, though the addition of wind speed corrections likely contributed to the improvements, as discussed in section 4.4.

Table 3.30. SNOWPACK-WRF snow depth verification for the 2015/16 winter for 5 locations across western Montana. Data from the Brackett Creek and Bridger Bowl Alpine stations were both used to verify the same SNOWPACK-WRF simulation at Bridger Bowl.

Snow Depth (cm):

SNOWPACK-WRF vs.                           Bias       MAE       AR
Big Mountain Summit (2 Nov - 20 Jun)     -24.7 cm   26.1 cm    76%
Noisy Basin (2 Nov - 11 Jun)             -20.6 cm   25.2 cm    80%
Stuart Mountain (2 Nov - 11 Jun)         +50.5 cm   52.5 cm    72%
Lolo Pass (2 Nov - 6 May)                 -6.4 cm   25.7 cm    68%
Brackett Creek (29 Oct - 1 Jun)           +2.0 cm   10.8 cm    86%
Bridger Bowl - Alpine (29 Oct - 7 Jun)   -18.4 cm   22.0 cm    73%

Figure 3.22. Comparison of modeled and observed snow depth during the 2015/16 season for (a) Big Mountain Summit, (b) Noisy Basin, (c) Stuart Mountain, (d) Lolo Pass, (e) Bridger Bowl - Alpine, and (f) Brackett Creek.
(Note: Data from the Bridger Bowl Alpine and Brackett Creek stations were both used to verify the same SNOWPACK-WRF simulation at Bridger Bowl.)

Table 3.31. SNOWPACK-WRF SWE verification for the 2015/16 winter for four locations across western Montana.

Figure 3.23. Comparison of modeled and observed SWE during the 2015/16 season for (a) Noisy Basin, (b) Stuart Mountain, (c) Lolo Pass, and (d) Brackett Creek.
SWE (mm):

SNOWPACK-WRF vs.                     Bias        MAE        AR
Noisy Basin (2 Nov - 11 Jun)        -55.6 mm    67.4 mm     84%
Stuart Mountain (2 Nov - 11 Jun)   +239.0 mm   244.6 mm     66%
Lolo Pass (2 Nov - 6 May)           +26.3 mm    46.6 mm     86%
Brackett Creek (29 Oct - 1 Jun)     +14.6 mm    35.3 mm     85%

3.3 2015/16 Case Studies

SNOWPACK-WRF simulations were analyzed during several significant avalanche events during the 2015/16 season, including three events near Bridger Bowl and two in the Whitefish and Swan mountains of NW Montana. Though I also ran simulations for west-central Montana (Lolo Pass and Stuart Mountain), no significant events occurred in those areas during the season. The simulated snowpack stratigraphy, including grain type/size and hand hardness, and the simulated stability were analyzed against snow and avalanche observations as well as the avalanche advisories issued by the respective avalanche center during each event.

3.3.1 Bridger Bowl

The first two case studies evaluate a series of natural and human triggered avalanches which occurred in mid/late December 2015 (Fig. 3.18a, #1) and mid-January 2016 (Fig. 3.18a, #2) on Saddle Peak (2793 m), a popular backcountry skiing area just outside the Bridger Bowl ski area boundary (Fig. 3.24). These events were simulated using SNOWPACK, forced with a combination of forecasted and observed weather data.
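A hybrid forcing record of this kind can be assembled by pulling the precipitation and radiation terms from the WRF output and the remaining meteorological fields from the station record. The field names below are illustrative only, not SNOWPACK's actual input keys:

```python
def merge_forcing(wrf_hour, station_hour):
    """Build one hourly forcing record from two sources: WRF supplies
    precipitation and radiation, the station supplies temperature, wind,
    and humidity. All keys are hypothetical."""
    record = {k: wrf_hour[k] for k in ("precip", "iswr", "oswr", "ilwr")}
    record.update({k: station_hour[k] for k in ("ta", "vw", "dw", "rh")})
    return record

wrf_hour = {"precip": 1.2, "iswr": 250.0, "oswr": 200.0, "ilwr": 280.0}
stn_hour = {"ta": -5.4, "vw": 3.1, "dw": 270, "rh": 0.82}
row = merge_forcing(wrf_hour, stn_hour)  # one merged hourly record
```

Mixing sources this way keeps the locally well-measured fields (temperature, wind) while substituting forecast values only where the station cannot provide them (precipitation and radiation).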
Forecasted weather data included hourly precipitation, ISWR, OSWR, and ILWR from the WRF model grid point at Bridger Bowl. Observed weather data were gathered from the nearby Schlasman's weather station (Fig. 3.24) and included hourly temperature, wind speed and direction, and relative humidity. Snow surface temperatures were not provided and were calculated internally by SNOWPACK's surface energy balance.

Figure 3.24. Photo, looking west, showing the location of Saddle Peak in relation to the Bridger Bowl ski area.

Leading up to the mid-late December avalanche cycle, a prolonged early season cold and dry spell from mid-November through early December turned an already shallow snowpack into weak layers consisting of facets and depth hoar. These weak layers were prevalent in several observed snow pits in late November and mid-December, both at the Ramp study plot, just a few km north of Saddle Peak, and on Saddle Peak itself, and were simulated relatively accurately by SNOWPACK (Figures 3.25 and 3.26).

Figure 3.25. Comparison of the observed and simulated snow profile at the Ramp study plot for November 25th showing the presence of early season facets in the bottom half of the snowpack. The Bridger Bowl SNOWPACK-WRF simulation also modeled the presence of these weak layers, though some differences can be seen in terms of grain size and hand hardness.

Figure 3.26. Comparison of an observed and simulated snow profile for Saddle Peak on December 13th showing the presence of facets and depth hoar in the bottom half of the snowpack. The Saddle Peak SNOWPACK simulation also modeled the presence of these weak layers, though some differences can be seen in terms of size and hand hardness.
(Observed profile courtesy of the GNFAC)

In mid-December, the persistent ridge of high pressure broke down, leading to an active storm pattern that lasted through the end of the month and deposited 137 cm of snow and 82 mm of SWE at Bridger Bowl between December 14th and the 25th. The avalanche danger spiked during this time and was rated HIGH for the Bridger Mountains between December 17th - 22nd, with an avalanche warning issued for the 19th and 20th. Numerous natural avalanches were observed during this time on or near Saddle Peak, failing on the weak facets and depth hoar observed near the bottom of the snowpack (Figures 3.27 and 3.28).

Figure 3.27. Photo of 3 natural avalanches on or near Saddle Peak that failed on buried facets and depth hoar around December 20th. (Photo: GNFAC, 2015)

Figure 3.28. Close-up photo of the crown line from the Football Field avalanche shown in Figure 3.27. (Photo: GNFAC, 2015)

During this event, the WRF model accurately predicted the amount of SWE and snow that was observed, with 95 mm of SWE and 134 cm of snow forecasted, compared to 82 mm of observed SWE and 137 cm of observed snow. The Saddle Peak SNOWPACK model simulated the event rather realistically, showing over 50 cm of new snow sitting on top of 1.5-2.0 mm facets and depth hoar near the bottom of the snowpack (Fig. 3.29).

Figure 3.29. Dec 20th snow profile from the Saddle Peak SNOWPACK simulation. The late December storm and avalanche cycle is circled in red. The model captured both the mid-December storm snow and the late November facets and depth hoar, which together led to an active natural avalanche cycle around the same time. A thin layer of small surface hoar (0.8 mm) was also modeled below the new snow, though the actual presence of this layer was doubtful, as surface hoar was not observed in the area during this time.
The estimated stability on the 20th, as determined in SNOWPACK via the Skier Stability Index (SK38) (Jamieson and Johnston, 1998), was rated as “Fair” (from Good, Fair, and Poor). An improved stability classification method for SNOWPACK profiles, which determines the stability of a simulated profile using the original SK38 value, 24-hr new snow, and/or 3-day new snow totals (Bellaire and Jamieson, 2013b), classified the stability of the same simulated profile as CONSIDERABLE. This was one avalanche rating below the forecasted avalanche danger of HIGH on the same day. After the end of December, another ridge of high pressure built over the area, bringing cold and dry conditions to most of western Montana. This led to the formation of near-surface facets and turned the snow from the mid/late December storms into mostly weak faceted snow. Near-surface facets were observed at the Ramp study plot in early January and were simulated in the Bridger Bowl SNOWPACK-WRF model (Fig. 3.30).
Figure 3.30. Comparison of an observed and simulated snow profile at the Ramp study plot on January 4th showing the presence of near-surface facets and the general faceting of the previous week’s snowfall, which was also modeled in the Bridger Bowl SNOWPACK-WRF simulation. A steep temperature gradient was also seen in the upper portion of the snowpack (red lines), reflecting the conditions leading to near-surface facet development (gradient magnitudes greater than 10 °C/m), though the simulated near-surface temperature gradient was -25 °C/m, somewhat weaker than the -38 °C/m that was measured in the snow pit.
The ridge of high pressure broke down around mid-January, allowing another round of storms to track through western Montana in the latter half of the month. The avalanche danger did not increase as much as in late December, mostly due to lower amounts of new snow spread over a longer period of time, but nevertheless, several natural and human-triggered avalanches were observed during this time.
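The Bellaire and Jamieson (2013b) reclassification described above combines the raw SK38 rating with recent new-snow totals. A minimal Python sketch of this kind of decision rule is given below; the threshold values, function name, and class mapping are illustrative assumptions for this sketch, not the published method:

```python
# Hypothetical sketch of a Bellaire and Jamieson (2013b)-style stability
# reclassification: the raw SK38 rating is adjusted using recent new-snow
# totals. All thresholds below are illustrative, not the published values.

def classify_stability(sk38: float, hn24: float, hn72: float) -> str:
    """Map a simulated profile to a danger-style class.

    sk38 -- skier stability index of the weakest layer
    hn24 -- 24-hour new snow total (cm)
    hn72 -- 3-day new snow total (cm)
    """
    # Base class from the skier stability index alone
    if sk38 < 0.95:
        rating = "CONSIDERABLE"
    elif sk38 < 1.3:
        rating = "MODERATE"
    else:
        rating = "LOW"

    # Heavy recent loading bumps the rating up one step
    if hn24 > 30 or hn72 > 60:
        steps = ["LOW", "MODERATE", "CONSIDERABLE", "HIGH"]
        rating = steps[min(steps.index(rating) + 1, len(steps) - 1)]
    return rating
```

With these assumed thresholds, a marginal SK38 value combined with heavy 24-hour loading is promoted one class, which mirrors how the method upgraded the December 20th profile from “Fair” to CONSIDERABLE.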
After the exit of the ridge, the first significant snowfall occurred around the 13th and 14th, depositing 10-18 cm at Bridger Bowl, though this was accompanied by some wind loading on exposed leeward slopes. On the morning of the 14th, an avalanche on a slope adjacent to Saddle Peak was remotely triggered by two skiers who were just on the other side of the southern ski area boundary. This event happened to be captured by a time-lapse camera set up by a Montana State University Snow Science graduate student (Saly et al., 2016) (Fig. 3.31). The avalanche failed on facets in the upper part of the snowpack, which had been buried by new snow and accompanied by additional wind loading (Fig. 3.32). The avalanche then stepped down to the early season depth hoar found near the bottom of the snowpack.
Figure 3.31. Photo of the January 14th Saddle Peak avalanche in motion just after release, captured by a time-lapse camera. (Photo: Diana Saly, 2016)
Figure 3.32. Photo of the crown line and track, which failed on near-surface facets and depth hoar near the ground. (Photo: GNFAC, 2016)
During the event, the WRF model predicted 16 cm of new snow, well within the observed range of 10-18 cm. The Saddle Peak SNOWPACK model provided a fairly realistic simulation of the snowpack structure observed during the event, with 16 cm of new snow sitting on top of the early January facets, which overlaid 1.5-2.0 mm facets and depth hoar near the bottom of the snowpack (Fig. 3.33). The modeled profile did not simulate any wind slabs at the surface, as the hardness of the new snow was only rated as fist hard. Additionally, a thin layer of 3.6 mm surface hoar was simulated below the new snow but was not present in the observed snowpack.
Figure 3.33. Jan 14th snow profile from the Saddle Peak SNOWPACK simulation, which shows the grain type, grain size, and the total snow depth.
The general snowpack structure present during the Jan 14th avalanche was simulated fairly accurately, with new snow sitting atop the early January facets and large-grained facets and depth hoar near the bottom of the snowpack. A thin layer of 3.6 mm surface hoar was also modeled below the new snow, though the actual presence of this layer was doubtful as surface hoar was not observed in the area during this time. The estimated stability for the simulated profile on the 14th was rated as “Good”, though the Bellaire and Jamieson (2013b) improved stability classification method rated the stability as MODERATE. The avalanche advisory for the day rated the avalanche danger as CONSIDERABLE for wind-loaded slopes and MODERATE for all others, which reflected the instability observed in the wind-loaded starting zone on Saddle Peak that day. The third case study focuses on the spring snowpack transition in early April in the Bridger Mountains and a subsequent large slab avalanche that occurred during this time period (Fig. 3.18a #3). Around the beginning of April, a ridge of high pressure was centered over the western U.S., which brought warm springtime temperatures to the mountains of southwest Montana. This was the first prolonged period of above-freezing temperatures for alpine areas during the 2015/16 winter. During this time a large slab avalanche released in the northern Bridger Mountains on the shoulder of Hardscrabble Peak (Fig. 3.34). The avalanche was between 0.3 and 0.9 m deep, 300 m wide, and ran 300 vertical meters on an east-facing slope at an elevation of around 2700 m. The trigger was unknown, but snowmobile tracks were observed near the starting zone.
Figure 3.34. Photo, looking southwest, of a large slab avalanche that released on the shoulder of Hardscrabble Peak in the northern Bridger Range, MT. (Photo: GNFAC, 2016)
Though this was not recorded as a wet-slab avalanche, it is likely that a prolonged period of above-freezing temperatures and strong springtime solar insolation led to the avalanche release, which failed on buried facets and depth hoar. Additionally, inspection of the debris suggests that at least some of the snow involved was wet. No co-located weather stations were available near the starting zone, but weather stations at Bridger Bowl (10-13 km south, Figure 3.35) did record significantly warm temperatures during this time. At the Alpine weather station (2255 m), temperatures were above freezing for 60 straight hours between April 2nd and 4th, with daytime temperatures reaching up to 11 °C each day. At the Schlasman’s (2523 m) and Ridge (2590 m) weather stations, temperatures never dropped below freezing between the 3rd and 4th, and temperatures climbed to 10 °C at both locations each day. It is likely that these temperatures were reasonably representative of the conditions at the starting zone around the time the avalanche released.
Figure 3.35. Map of the Bridger Range, looking northwest, showing the location of the early April avalanche in relation to the three weather stations at Bridger Bowl.
During this warm spell, the WRF model predicted above-freezing temperatures for most of the time period (for an elevation of 2392 m) but was slightly cooler, with forecasted max temps only around 6-7 °C and a few hours of below-freezing temperatures at night. The Bridger Bowl SNOWPACK-WRF model, though not completely representative of the snowpack conditions likely present in the starting zone, did simulate melt-water beginning to percolate through the upper part of the snowpack during the afternoon of the 2nd, with melt-water percolating down to about 65 cm below the snow surface by the 5th (Fig. 3.36).
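Summary statistics such as the 60 straight above-freezing hours quoted above can be derived directly from an hourly station temperature series. A minimal sketch; the function name and inputs are hypothetical:

```python
# Minimal sketch: find the longest run of consecutive above-freezing hours
# in an hourly air temperature series, the statistic used above to
# characterize the early April warm spell (e.g. 60 straight hours > 0 degC).

def longest_thaw_hours(temps_c):
    """Return the longest consecutive run of hourly temperatures above 0 degC."""
    best = run = 0
    for t in temps_c:
        # extend the current above-freezing run, or reset it on a freeze
        run = run + 1 if t > 0.0 else 0
        best = max(best, run)
    return best
```

For example, `longest_thaw_hours([-1.0, 2.0, 3.0, 4.0, -0.5, 1.0])` returns 3, the length of the mid-series thaw.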
The simulation provides an estimation of the snowpack structure in the starting zone, with warm temperatures and strong solar radiation likely causing melt-water to percolate down through the snowpack, weakening an old layer of buried facets or depth hoar. The simulated stability was rated as “Poor” by SNOWPACK during this time, and the weakest point in the snow profile, as determined by SNOWPACK, was identified as a thin layer of 1.75 mm depth hoar located about 1 m below the snow surface. This is consistent with the approximate depth of the avalanche that released around April 3rd.
Figure 3.36. Simulated snow profile (top) and liquid water content (bottom) at Bridger Bowl during the approximate time of the Hardscrabble Peak avalanche (black line). The weakest point in the profile, as determined by SNOWPACK, is circled in the top image, which corresponds to a thin layer of 1.75 mm depth hoar.
Though the simulation did not show melt-water percolating down to the depth hoar layer during the approximate time of avalanche release, weak layers closer to the snow surface may have been present in the starting zone, which could have been activated by melt-water in the upper part of the snowpack, leading to the avalanche release. Later in the week, temperatures warmed up again, reaching nearly 15 °C at the Alpine weather station on the 8th. A snow pit was dug at this time at the Ramp study plot and showed the snowpack had gone completely isothermal, as melt-water had percolated down to the ground. Numerous wet-loose avalanches were seen in the Bridger Mountains that day, but no large wet-slab avalanches were observed. The Bridger Bowl SNOWPACK-WRF model simulated the timing of this event relatively accurately, with wet snow present down to about 20 cm above the ground, indicative of melt-water percolation, and with snowpack temperatures completely isothermal as well (Fig. 3.37).
Numerous crusts/ice layers were observed in the snow pit but were not simulated in the model.
Figure 3.37. Comparison of an observed and simulated snow profile at the Ramp study plot on April 8th showing an isothermal snowpack with wet snow observed, and mostly simulated, down to the ground. Numerous crusts were also observed in the snowpack but were not present in the simulation, having likely been melted away by melt-water in the model.
3.3.2 NW Montana
Two case studies were analyzed during the 2015/16 season in NW Montana, which focused on two significant avalanche cycles in the Whitefish and Swan Mountains in December and January. The first event consisted of a potent storm system in early December that brought above-freezing temperatures and heavy precipitation to much of NW Montana, with the rain line rising to 2100 m, leading to a significant rain-on-snow event at upper elevations (Fig. 3.18b #4). This led to a subsequent wet avalanche cycle and the formation of a thick rain crust that persisted through the rest of the winter. Between December 7th and the 9th, 31.5 mm of precipitation was measured at the Big Mountain Summit weather station (2053 m) in the southern Whitefish Range, with an estimated 18.8 mm falling as rain. The Noisy Basin SNOTEL station (1841 m) in the Swan Range recorded 61 mm of precipitation with an estimated 12.7 mm of rain. Rain saturated the upper part of the snowpack throughout most of the region, leading to dangerous avalanche conditions with numerous wet-slab avalanches observed (Fig. 3.38), necessitating the issuance of an avalanche warning by the Flathead Avalanche Center for locations above 1800 m.
Figure 3.38. Photo of a natural avalanche in the Whitefish Range in NW Montana that occurred during the rain-on-snow event between Dec 8th and 9th.
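The rain totals quoted above are estimates, since the gauges record only total precipitation; a common approach is to partition each hour's precipitation into rain and snow using air temperature. A hedged sketch, assuming an illustrative 0-1 °C mixed-phase ramp rather than the thresholds actually used by the avalanche centers:

```python
# Hedged sketch of estimating total rain from hourly precipitation and air
# temperature when a gauge records only total precipitation. The 0-1 degC
# transition band is an assumption for illustration, not an operational value.

def rain_mm(precip_mm, temp_c, t_snow=0.0, t_rain=1.0):
    """Estimate total rain (mm) from hourly precip (mm) and air temp (degC)."""
    total = 0.0
    for p, t in zip(precip_mm, temp_c):
        if t <= t_snow:
            frac = 0.0          # all snow
        elif t >= t_rain:
            frac = 1.0          # all rain
        else:
            # linear ramp across the assumed mixed-phase band
            frac = (t - t_snow) / (t_rain - t_snow)
        total += p * frac
    return total
```

With temperatures hovering near the freezing mark, as during this storm, small temperature errors shift precipitation between the rain and snow fractions, which is consistent with the phase errors noted in the WRF forecast below.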
Other large avalanches were observed in the area, with some running over 500 vertical meters. (Photo: FAC, 2015)
Temperatures dropped below freezing after the storm departed later in the day on the 9th, forming a thick rain crust, shown in field observations a few days later as a 10 cm thick crust buried by 50 cm of subsequent snow (Fig. 3.39). This crust persisted through most of the season, acting as a failure point for avalanche activity later in the winter. The WRF model provided a somewhat accurate forecast for this event, with 8.2 mm of rain forecasted for Noisy Basin (compared to an estimated 12.7 mm observed) falling between the night of the 8th and morning of the 9th. Only a trace of rain was forecasted at Big Mountain, though copious amounts of precipitation were forecasted during this storm, as forecasted temperatures hovered at or just above the freezing mark, slightly too cold for the simulation of rain in the model. The total precipitation during this event (rain and SWE) was over-forecasted at both locations, with 70 mm of precipitation forecasted for Big Mountain (31.5 mm observed) and 76 mm for Noisy Basin (61 mm observed). Nevertheless, the SNOWPACK-WRF model simulated the snowpack structure fairly well during this event, reflecting the rain-saturated snow near the surface on the 9th (Fig. 3.39) and a thick rain crust buried by new snow sitting atop early season facets near the ground a few days later (Fig. 3.40).
Figure 3.39. Simulated snow profile from the Noisy Basin SNOWPACK-WRF model for Dec 9th, showing the wet snow at the surface from the Dec 9th rain event. Additionally, the bottom half of the snowpack consists of weak faceted snow that was also observed throughout the region.
Figure 3.40. Comparison of an observed and simulated snow profile on December 14th, just after the rain-on-snow event in NW Montana.
The simulated profile is from the Big Mountain Summit SNOWPACK-WRF model and the observed snow pit is located in the Flathead Range, southeast of the simulated point. The model reflects the general snowpack set-up seen around the region at the time, with the Dec 9th rain crust buried below 35-50 cm of new snow and a layer of facets near the ground. (Photo: FAC, 2015)
The second event in NW Montana consisted of a significant mid-January storm system that led to an active avalanche cycle in parts of the Swan and Whitefish Ranges (Fig. 3.18b, #5). After an active December, a persistent area of high pressure built over western Montana, bringing cold and dry conditions through the first two weeks of January, similar to the conditions experienced in the Bridger Mountains at the same time. This led to the development of weak faceted crystals in the upper portion of the snowpack in NW Montana, laying the foundation for future avalanche activity. In mid-January, the ridge of high pressure broke down and a strong storm system with abundant moisture moved into NW Montana. Between January 13th and 15th, the Noisy Basin SNOTEL station recorded 137 mm of precipitation and 114 cm of snow. An avalanche warning was issued for the Swan Range as numerous natural avalanches were observed, failing on the new snow/facet interface as well as on a thin freezing rain crust that formed just prior to the storm (Fig. 3.41).
Figure 3.41. Photo of a large avalanche that released on an east-facing slope at 1950 m in Noisy Basin in the Swan Range of NW Montana. The crown line was over 1 m high and failed on facets and a freezing rain crust below the new storm snow. (Photo: FAC, 2016)
The Noisy Basin SNOWPACK-WRF model predicted this event, though some features were not simulated. The WRF model forecasted the timing of the storm system accurately, but total precipitation was only about half of what was observed (68 mm forecasted vs. 137 mm observed).
The SNOWPACK-WRF model also under-forecasted the total snowfall between the 13th and 14th (76 cm forecasted vs. 114 cm observed), though with less error than the precipitation forecast. The model simulated the underlying snowpack structure observed during the subsequent avalanche cycle, with 60 cm of new snow overlaying the weak faceted snow from early January (Fig. 3.42). Field data revealed the presence of a thin freezing rain crust that formed on top of the January facets a few days prior to the major storm, though this was not reflected in the model. The estimated stability for the simulated profile on the 14th was rated “Fair”, though the Bellaire and Jamieson (2013b) improved stability classification method increased the rating to HIGH. The Flathead Avalanche Center rated the avalanche danger as HIGH that day and issued an avalanche warning for the region.
Figure 3.42. Noisy Basin SNOWPACK-WRF simulation during the Jan 14th storm and avalanche cycle in NW Montana. The model accurately predicted the general snowpack structure during the event, though total snowfall was underestimated and a freezing rain crust that was observed at the new snow/facet interface was not modeled.
In summary, the 2015/16 WRF model provided more accurate precipitation forecasts at most locations than the precipitation forecasts at Bridger Bowl during the 2014/15 season. This led to more realistic simulations of snow depth, SWE, and the seasonal evolution of the snowpack microstructure when coupled with the SNOWPACK model. Over-forecasted winds were a continual problem at all sites, so bias corrections were applied to achieve more reasonable results. An analysis of the individual case studies shows the simulations adequately predicted the general snowpack structure during the major events but were missing a few key features that were present in the actual snowpack.
In the next chapter, I will discuss these results and synthesize the data presented.
4. DISCUSSION
4.1. 2014/15 WRF Model Performance
The coupled SNOWPACK-WRF model reasonably simulated the snowpack during the first season but was limited by the quality of the forecasted weather data input. Specifically, the WRF model tended to significantly over-forecast precipitation, winds, and ISWR throughout the season, though forecasted air temperature and relative humidity, and to a lesser degree snow surface temperature and OSWR, were fairly accurate. Most of the WRF forecast errors stemmed from issues associated with the lack of horizontal resolution needed to accurately predict meteorological variables in complex terrain. Ideally, the resolution of a NWP model grid would be fine enough that the resolved scales of motion are smaller than those of the dominant topographic features present in the model grid, which would be on the order of tens to hundreds of meters in mountainous terrain (Chow et al., 2012). Due to computational limitations, this type of resolution is not yet available, with the finest operational NWP model grids only having resolutions of around 1 km. In this study, the WRF horizontal resolution was only 2 km during the 2014/15 season and 3 km during the 2015/16 season, much larger than the dominant topographic features present in the area covered by the WRF grids. This especially impacted the wind, radiation, and precipitation forecasts during the two seasons of this study. During the 2014/15 season, measured wind speeds were relatively light compared to other locations at Bridger Bowl. This was due to the sheltered nature of the study site in conjunction with the placement of the wind sensor next to a large stand of trees (see Fig. 2.3).
Results showed the propensity of the WRF model to over-forecast winds at the Alpine study plot, especially during periods of light observed winds, which constituted a majority of the observations (Tables 3.8 and 3.9). The accuracy of the model did increase during periods of higher observed winds, though a slight decrease can be seen during the highest wind gusts, where the winds were usually under-forecasted. Typically, the strongest winds at the Alpine weather station are seen from the northeast during periods of upper-level northerly flow, where air is funneled through upstream terrain gaps, which can lead to wind gusts of up to 25 m/s. This type of mesoscale effect is difficult to simulate, even with the highest-resolution NWP models, and is likely a reason why a decrease in WRF performance can be seen during those events. Overall, the WRF terrain resolution was not high enough to simulate the frictional effects experienced at the study plot that led to the low observed wind speeds, leading to consistently over-forecasted winds. Forecasted winds tended to correlate better with observed wind speeds at exposed, higher elevation locations on the mountain, where frictional effects were not as prevalent. In addition to forecasted wind speeds, forecasted ISWR was also routinely over-predicted (Table 3.10). A likely reason for the large forecast discrepancy may not entirely stem from issues with the WRF model, but may also be due to topographic and vegetative shading effects present at the study plot. Because the study plot was located on the east side of the Bridger Bowl ridgeline, the site was shaded by terrain from the southwest and west, which blocked direct sunlight, especially during the darkest winter months. Additionally, shading from nearby canopy cover also blocked direct sunlight at certain times throughout most of the winter.
The WRF model does not take into account site-specific shading and provides an estimation of ISWR more valid for an open, flat field, leading to an over-prediction of incident radiation for locations where shading is prevalent. Site-specific vegetative effects also likely impacted observed values of ILWR, leading to consistent biases in the forecasts. The WRF model does not take into account most sources of terrestrial longwave radiation, which were prevalent at the study plot with the presence of tall coniferous trees nearby. The vegetation led to higher values of ILWR, especially during clear nights, than would be expected at a completely exposed site. This discrepancy is reflected in the results with the negative (cold) bias of the WRF ILWR values (Table 3.12). The WRF model grid point location likely played a role in the heavily over-forecasted precipitation throughout most of the first season. Prior to the study, a grid cell in the WRF model domain was chosen that was nearest to, and most representative of, the study plot. The chosen grid cell was centered on the windward (west) side of the Bridger Mountains, while the study plot and respective weather stations were located on the leeward (east) side of the range (see Figure 2.9). The model then likely overestimated orographically enhanced precipitation on the windward side of the range, especially during periods of strong, moist westerly flow, where estimated orographic lift and precipitation would likely be high. Forecast errors may have been slightly exaggerated, especially when compared to precipitation measurements from the automatic station at the Alpine study plot, which used a heated tipping-bucket rain gauge. Though this data was corrected for wind-induced under-catch, the measurements likely still underrepresented the true precipitation amount that fell at the site.
In fact, the daily manual measurements at the study plot and the measurements from the nearby Brackett Creek SNOTEL station both had higher accumulated precipitation amounts than the automatic station through the entire winter (Table 3.4 and Fig. 3.4). Even more important than the total accumulated precipitation amount is precipitation timing, which was analyzed using the 24-hour precipitation values (24HNW). Results in Tables 3.4 and 3.5 show decreasing accuracy of forecasted 24HNW compared to total accumulated precipitation, with accuracy decreasing from about 65% down to 50%, and decreasing even below 50% when evaluating only days with observed precipitation. Digging deeper into the analysis, the WRF model did forecast individual precipitation events in close proximity to their actual occurrence, but tended to over-forecast their amounts. For example, the WRF forecasted precipitation on 91% of the days when precipitation was observed, while forecasting precipitation on 43% of days when no precipitation was observed. This shows the tendency of the WRF model to forecast precipitation during periods where conditions may have been marginally conducive for precipitation, but not enough to actually produce any. Other forecasted meteorological variables that the WRF model struggled with during certain situations included snow surface temperature and OSWR. Snow surface temperature, which was derived directly from the skin temperature parameter in WRF, had a warm bias, especially during periods of cold observed snow surface temperatures. Issues may have arisen from the fact that the WRF model determines skin temperature based on an integrated land-surface model (LSM). The WRF LSM includes a simulated multi-layer snow model that uses a simple parameterization of fractional snow cover, which may be vastly different from the snow cover simulated in SNOWPACK (Skamarock et al., 2008).
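The precipitation-timing figures quoted above (forecasts on 91% of observed-precipitation days and 43% of dry days) correspond to the hit rate and false-alarm rate of a daily occurrence contingency table. The sketch below illustrates the calculation; the function name and the 0.2 mm occurrence threshold are assumptions for illustration, not values taken from this study:

```python
# Sketch of a day-by-day precipitation occurrence verification: daily
# forecast and observed totals are cross-tabulated into hits, misses,
# false alarms, and correct nulls, then summarized as a hit rate and a
# false-alarm rate (cf. the 91% / 43% figures discussed above).

def occurrence_rates(forecast_mm, observed_mm, threshold=0.2):
    """Return (hit_rate, false_alarm_rate) for daily precip occurrence.

    threshold -- minimum daily total (mm) counted as a precipitation day;
                 the 0.2 mm default is an illustrative assumption.
    """
    hits = misses = false_alarms = correct_nulls = 0
    for f, o in zip(forecast_mm, observed_mm):
        fc, ob = f >= threshold, o >= threshold
        if ob and fc:
            hits += 1
        elif ob:
            misses += 1
        elif fc:
            false_alarms += 1
        else:
            correct_nulls += 1
    # fraction of observed-precip days that were forecasted,
    # and fraction of observed-dry days with forecasted precip
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_nulls)
    return hit_rate, false_alarm_rate
```

Note that the false-alarm rate here is computed against observed-dry days, matching the phrasing "forecasting precipitation on 43% of days when no precipitation was observed."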
To deal with this, SNOWPACK allows the user to use a measured snow surface temperature or to allow SNOWPACK to calculate the value internally, as discussed in section 2.4. To determine which model had the most accurate snow surface temperatures, the SNOWPACK modeled temperatures were verified against the observed values, as seen in Table 4.1. SNOWPACK provided an overall less accurate prediction when compared to the WRF model, with a higher MAE and lower RF. Additionally, the SNOWPACK snow surface temperatures contained a cold bias, though errors were less than those from the WRF model when observed temperatures were below -20 °C, with a MAE 3.5 °C lower than that of the WRF. The overall poor performance of the SNOWPACK-derived snow surface temperatures was partly due to glitches in the calculations at certain times, which would lead to large, unrealistic fluctuations in the modeled snow surface temperature for one or two time steps. This is seen in Figure 4.1, where very cold modeled snow surface temperatures can be seen deviating far from the observed values. This is a known issue in the model, and the SLF has been working to fix these bugs for future releases.
Figure 4.1. Scatter plot of observed vs. SNOWPACK modeled snow surface temperatures for the 2014/15 winter. Large negative deviations can be seen in the modeled values, indicative of the erroneous fluctuations.
In order to evaluate the accuracy of the SNOWPACK snow surface temperatures while ignoring the erroneous fluctuations, the data was manually filtered so that all obvious erroneous fluctuations were removed. This procedure provided improved temperature predictions over the unfiltered results and outperformed WRF during the coldest periods, with improved MAE and RF values (Table 4.1). Overall, SNOWPACK still trailed behind the WRF model even when the errors were removed. For this reason, I used the WRF forecasted snow surface temperatures for input into SNOWPACK.
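The manual filtering of the erroneous one-to-two-time-step fluctuations described above could in principle be automated with a simple spike filter. The sketch below is illustrative only (the 10 °C jump threshold and function name are assumptions), not the procedure actually used in this study:

```python
# Illustrative spike filter for the glitchy snow-surface temperature series:
# a time step that deviates strongly from BOTH of its neighbours (which agree
# with each other) is treated as an erroneous spike and replaced by linear
# interpolation. The 10 degC jump threshold is an assumed value.

def despike(series, max_jump=10.0):
    """Return a copy with isolated one-step spikes linearly interpolated."""
    out = list(series)
    for i in range(1, len(out) - 1):
        prev, cur, nxt = out[i - 1], out[i], out[i + 1]
        if abs(cur - prev) > max_jump and abs(cur - nxt) > max_jump:
            # replace the spike with the mean of its neighbours
            out[i] = 0.5 * (prev + nxt)
    return out
```

For example, an isolated -60 °C glitch between readings near -6 and -7 °C would be replaced by their mean, while genuine gradual cooling trends pass through unchanged.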
Table 4.1. Forecasted snow surface temperature verification results for the WRF, SNOWPACK, and SNOWPACK-filtered models.
                     vs. All Observed Values          vs. Observed Values < -20 °C
                     B        MAE     RF (±1.5 °C)    B        MAE     RF (±1.5 °C)
WRF                  +2.0 °C  2.5 °C  49.9%           +9.7 °C  9.7 °C  1.7%
SNOWPACK             -1.9 °C  3.3 °C  47.0%           -1.7 °C  6.2 °C  18.8%
SNOWPACK Filtered    -1.4 °C  2.8 °C  48.2%           -1.7 °C  6.2 °C  18.8%
The WRF model consistently under-forecasted OSWR, though biases were much lower compared to the ISWR forecasts. These biases were likely due to the snow cover parameterization in the WRF LSM, where imprecise estimations of snow age, density, and temperature would have led to errors in albedo estimates. In summary, significant meteorological forecast errors were directly related to spatial resolution issues within the WRF model, which is a common problem with NWP models in complex terrain. Additionally, the WRF model still contains imprecise parameterizations of the microphysical processes needed to effectively simulate and forecast the complex meteorological phenomena present in mountainous terrain, adding a certain degree of uncertainty and error to the model’s predictions. This in turn can lead to compounding errors in a SNOWPACK simulation, where small errors in the meteorological forcing can lead to significant errors in the snow cover simulation, as discussed in the following sections.
4.2. 2014/15 SNOWPACK Performance
Both the SNOWPACK-Obs and SNOWPACK-WRF models generally simulated the observed snowpack during the 2014/15 season, though some significant discrepancies were observed in the SNOWPACK-WRF model, which were driven mostly by errors in the forecasted weather data.
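The statistics in Table 4.1 can be computed directly from paired forecast/observation series. The sketch below assumes RF is the fraction of forecasts within ±1.5 °C of the observation, an interpretation based on the table header; the function name is hypothetical:

```python
# Sketch of the verification statistics used in Table 4.1: bias (B),
# mean absolute error (MAE), and RF, interpreted here as the fraction of
# forecasts falling within a fixed tolerance of the observation.

def verify(forecast, observed, tolerance=1.5):
    """Return (bias, mae, rf) for paired forecast/observed series."""
    n = len(forecast)
    errors = [f - o for f, o in zip(forecast, observed)]
    bias = sum(errors) / n                            # B: mean signed error
    mae = sum(abs(e) for e in errors) / n             # MAE: mean magnitude
    rf = sum(abs(e) <= tolerance for e in errors) / n  # RF: share within tol
    return bias, mae, rf
```

Applying such a function to the full series, and again to the subset of hours with observations below -20 °C, would reproduce the two halves of Table 4.1.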
For example, the SNOWPACK-WRF model tended to transition new snow to decomposing new snow and rounds much more quickly than in the SNOWPACK-Obs model and in the weekly snow profiles (Fig. 4.2). The higher wind speeds in the WRF model acted to break down the new snow much faster than was actually observed. Results in Table 3.17 numerically confirm this visual observation, as the accuracy of new precipitation grain types in the SNOWPACK-Obs model was 84%, while the SNOWPACK-WRF model only had an accuracy between 40% and 63%, with rounded grains being the second most common grain type simulated when new or decomposing new snow was observed.
Figure 4.2. Snow grain simulations for the SNOWPACK-Obs (left) and SNOWPACK-WRF (right) models during the 2014/15 winter at the Alpine study plot. The figure highlights the rapid transition of new snow (light green) to decomposing new snow (dark green) and rounds (pink) by the SNOWPACK-WRF model, while the transition was much more gradual in the SNOWPACK-Obs model, which better reflected field observations.
The rapid metamorphism of new snow to rounds also led to overall denser snow in the SNOWPACK-WRF model, with snowpack density over-predicted by an average of +39.2 kg/m3 (Table 3.13) and new snow density by an average of +31.0 kg/m3 (Table 3.23). The denser new snow in the SNOWPACK-WRF model aligns with the fact that total snow accumulation was much closer to observed totals even though total precipitation was well over-forecasted, which would have been due to the increased new snow density. It is interesting to note that differences between measured and SNOWPACK-WRF modeled snow depth (Table 3.18) and total snowfall (Table 3.22) were far lower
It seems that the over-forecasted wind speeds helped to somewhat negate the over-forecasted precipitation, simulating higher density snowfall, which led to lowered snowfall amounts and snow depths throughout the season. In fact, after bias correcting the forecasted wind speeds, which decreased the average wind speed from 4.8 m/s to 1.3 m/s, modeled snow depths and snowfall increased greatly, with the model over-prediction bias increasing from +15 cm to +51 cm for snow depth, +0.63 cm to +2.52 cm for 24-hr snowfall, and +106 cm to +385 cm for total accumulated snowfall. Neither model predicted any surface hoar formation during the 2014/15 season. This generally reflected observations as surface hoar was only observed three times at the study plot and was small and quickly decomposing at the time of observation. Vegetation coverage at the site and a limited number of clear cold nights prevented any persistent surface hoar from forming and being buried during the season at the study plot. A notable discrepancy in the SNOWPACK-WRF simulation occurred during the beginning of the spring snowpack transition in mid-March. Between March 14th and 16th, temperatures stayed above freezing for 72 hours and melt-water began to percolate down through the snowpack, which was observed in a snow pit dug on the 16th (Fig. 4.3).   141 Figure 4.3. Observed snow profile on March 16th at the Alpine study plot, showing the presence of melt-water beginning to percolate through the upper part of the snowpack. The SNOWPACK-WRF model simulated the timing of this event quite well, with 72-hours of above freezing temperatures and melt-water production forecasted during the same time. The amount of simulated melt-water and rate of percolation, though, was over estimated. The model simulated melt-water percolating 180 cm down through the entire snowpack in less than 36 hours, with an associated 45 cm decrease in HS at the same time (Fig. 4.4). 
The SNOWPACK-Obs model provided a much more realistic simulation of this event, with melt-water only percolating 50 cm down through the snowpack and a 10-15 cm decrease in snow depth by the 16th (Fig. 4.4), which was much closer to what was actually observed in the snow pit.

Figure 4.4. Snow grain simulation for the SNOWPACK-Obs (left) and SNOWPACK-WRF (right) models during the mid-March melting event. The red colored snow grain type represents melt-forms, which indicates the presence of melt-water in the snowpack. The SNOWPACK-WRF model shows melt-water unrealistically percolating through the entire snowpack in 36 hours, while the SNOWPACK-Obs model provides a much more realistic simulation of the event.

The cause of the rapid melting in the SNOWPACK-WRF model was mostly the over-forecasted wind speeds, where increased turbulent fluxes added more heat to the snowpack, increasing the melting rate. Running the SNOWPACK-WRF model with bias corrected winds simulated the event much more realistically, with melt-water only percolating 50 cm down through the snowpack over 36 hours and snow depth only decreasing 10 cm over the same period, showing the impact of over-forecasted wind speeds during this event. The quantitative analysis of snowpack parameters in sections 3.1.2.1 and 3.1.2.2 highlights the strengths and weaknesses of the SNOWPACK-WRF model compared to field data as well as to the SNOWPACK model forced with locally observed weather data. Evaluating only parameters associated with the internal structure of the snowpack, which were measured manually via observed snow profiles (e.g.
snowpack temperature, density, grain size, hand hardness, and grain type), the SNOWPACK-WRF model showed only small differences in performance when compared to the SNOWPACK-Obs model (Table 3.13), though both models provided only satisfactory results at best for most parameters. Results were also comparable to those from previous studies. For example, Lundy et al. (2001) ran a snowpack simulation using observed weather data and evaluated the performance of modeled snowpack temperature, density, grain size, and grain type. Using the same statistical measures (bias (B), root mean square error (RMSE), and Pearson's correlation coefficient (r) for numerical data and Cramer's Phi for categorical data), results from the SNOWPACK-Obs model (also forced with observed weather data) were fairly similar to the Lundy study, with slightly decreased performance for temperature and grain size and increased performance for density and grain type. The SNOWPACK-WRF model over-predicted temperature and density, but performed better than the Lundy study at predicting grain type (Table 4.2). Using the agreement score method for grain type comparison (section 2.5.2.1), results for both models were nearly identical to results from a study in the Swiss Alps presented in Lehning et al. (2001). The grain type agreement score for both models during the 2014/15 season was 0.71, slightly higher than the 0.70 agreement score in Lehning et al. (2001). This demonstrated the ability of the SNOWPACK-WRF model to match the performance of models that used locally observed weather data. It is interesting to note the seeming lack of improvement in model performance from the Lundy and Lehning studies in 2001 to this study nearly 15 years later, where locally observed weather data was used. Though some parameters showed improvement (e.g. density and grain type), overall, the model did not provide the amount of improvement one would expect after 15 years of upgrades and advancements. Table 4.2.
Comparison of statistical measures of model error and agreement between results from the Lundy et al. (2001) study and the two SNOWPACK models during the 2014/15 season. Increased performance is indicated by lower values of bias (B) and root mean square error (RMSE) and higher values of the Pearson product-moment correlation coefficient (r) and Cramer's Phi. For a complete explanation of the statistical measures used below, see Lundy et al. (2001).

                                                 Lundy et al. (2001)  SNOWPACK-WRF  SNOWPACK-Obs
Snowpack Temperature  B (°C)                          +0.1               +0.7          +0.5
                      RMSE (°C)                        0.97               1.70          1.20
                      r                                0.90               0.88          0.94
Snowpack Density      B (kg/m3)                      -48.15              +37.9         +10.4
                      RMSE (kg/m3)                    69.15              71.90         57.30
                      r                                0.85               0.84          0.87
Grain Size            B (mm)                          -0.08              -0.33         -0.32
                      RMSE (mm)                        0.42               0.65          0.66
                      r                                0.30               0.42          0.40
Grain Type            Cramer's Phi (Primary)           0.41               0.50          0.51
                      Cramer's Phi (Secondary)         0.34               0.42          0.51

The SNOWPACK-WRF model seemed to be impacted most by the consistently over-forecasted precipitation throughout the year. This was reflected in the accuracy of snow depth and SWE, where differences in accuracies between the two SNOWPACK simulations were much larger compared to other parameters (Table 3.24). Obviously, a major reason for this discrepancy is the fact that SNOWPACK-Obs was forced with measured snow depth. Though not used exclusively by SNOWPACK to determine snow depth, calculated snow depth values tend to match measured values very closely when snow depth is provided as input, hence the 98% snow depth accuracy of the SNOWPACK-Obs model. Nevertheless, if measured snow depth is not provided, the accuracy of the modeled snow depth and SWE in SNOWPACK is strongly dependent on the accuracy of the precipitation input. Schmucki et al. (2014) evaluated the accuracy of snow depth and SWE at several locations in the Swiss Alps using SNOWPACK forced with measured precipitation without using measured snow depth.
They evaluated results using several combinations of raw and calibrated precipitation values and found that the accuracy of the modeled snow depth increased from 65% with raw precipitation input to 96% with calibrated input. Additionally, the modeled SWE accuracy increased from 56% to 86% when moving from raw to calibrated precipitation. The SNOWPACK-WRF model, which also used only raw precipitation input (though forecasted), showed slightly better accuracy for snow depth (69%) and SWE (66%) compared to the Schmucki et al. (2014) results using uncorrected precipitation, but fared much worse compared to the results using calibrated precipitation input. This result is promising as it showed that a SNOWPACK simulation using raw forecasted precipitation input was able to provide an overall more accurate prediction of snow depth and SWE than a simulation using raw measured precipitation, though this comparison was only for one season in different climatic regions. Additionally, this showed how properly bias correcting precipitation data could drastically improve results, which is clearly important when running SNOWPACK without measured snow depth.

4.2.1 Bias Corrections

A common theme in this discussion so far has centered on how poorly forecasted weather variables, specifically winds and precipitation, can impact the results of a SNOWPACK simulation, and how bias correcting these parameters may or may not improve the simulation. To demonstrate SNOWPACK's sensitivity to these two parameters, SNOWPACK-WRF was run using four different setups: uncorrected winds and precipitation, bias-corrected precipitation, bias-corrected winds, and bias-corrected winds and precipitation. Modeled snow depth from the four simulations and measured snow depth from the MSU weather station were compared (Fig. 4.5 and Table 4.3).

Figure 4.5.
Snow depth comparison of the four different SNOWPACK-WRF simulations and the observed values for the 2014/15 season, showing the results of bias correcting either the precipitation or winds, or both.

Table 4.3. Statistical comparison of snow depth for the four different SNOWPACK-WRF simulations during the 2014/15 season.

Snow Depth                        Bias (cm)  MAE (cm)  Accuracy
Uncorrected                         +15.1      31.8      69%
Bias Corrected Precip               -32.9      33.5      64%
Bias Corrected Wind                 +50.8      50.8      64%
Bias Corrected Wind and Precip       -3.5      14.9      80%

Figure 4.5 and Table 4.3 show that bias-correcting the winds, where lower (more accurate) wind speeds were combined with over-forecasted precipitation, led to much higher than observed snow depths, while bias-correcting the precipitation, where lower (more accurate) precipitation amounts were combined with over-forecasted wind speeds, led to lower than observed snow depths. As expected, bias-correcting both parameters led to the most realistic and accurate snow depth simulation, approaching the level of accuracy presented by Schmucki et al. (2014). It is also interesting to note the impact of higher wind speeds during the mid-March melting event, as discussed in the previous section. The two simulations where winds were left uncorrected (red and blue lines in Fig. 4.5) show a sharp drop in snow depth during the event, while the two simulations where winds were bias-corrected (orange and green lines in Fig. 4.5) show only small drops in snow depth at this time, which more closely matched the observed snow depth.
This further demonstrates SNOWPACK's sensitivity to wind speed input, especially during periods of warmer weather, when turbulent fluxes from strong winds can rapidly add heat to the snowpack and increase melting. Overall results compared across the four SNOWPACK-WRF simulations for the 2014/15 season at the Alpine study plot are shown in Table 4.4. Results show little difference in overall accuracy, with only the bias-corrected wind simulation having slightly lower accuracy than the other three, which was due to larger snow depth and SWE errors from a much deeper than observed snowpack. With the exception of 50 cm temperature, variables associated with internal snowpack structure (e.g. density, temperature, grain size/shape, and hand hardness) showed little difference between the four simulations, while variables affected more by precipitation and/or wind (e.g. HS, SWE, and total snowfall) displayed the greatest difference. It is interesting to note the increased accuracy of SWE and 50 cm temperature for the precipitation bias-corrected simulation, even though the snow depth accuracy was low. As mentioned earlier, the lower snow depth values for this simulation were due to a combination of lower precipitation values and higher wind speeds, which compacted the snow. Regardless, the corrected precipitation led to a more accurate simulation of mass added to the snowpack, even though snow depths were lower, leading to more accurate SWE values. Higher SWE accuracy was also seen in the simulation where both precipitation and winds were bias-corrected, showing how the accuracy of SWE increased when bias-corrected precipitation values were used. It can be surmised that modeled SWE accuracy is tied closely to the accuracy of the precipitation input, while snow depth accuracy is tied closely to the accuracy of both the precipitation and wind inputs.
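For reference, the error and agreement measures used throughout this comparison (bias, MAE, RMSE, Pearson's r, and Cramer's Phi) follow their standard definitions. A minimal Python sketch, using hypothetical paired values rather than the thesis data, might look like:

```python
import math

def bias(model, obs):
    # Mean error, model minus observed.
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def mae(model, obs):
    # Mean absolute error.
    return sum(abs(m - o) for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    # Root mean square error.
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def pearson_r(model, obs):
    # Pearson product-moment correlation coefficient.
    n = len(obs)
    mx, my = sum(model) / n, sum(obs) / n
    cov = sum((m - mx) * (o - my) for m, o in zip(model, obs))
    sx = math.sqrt(sum((m - mx) ** 2 for m in model))
    sy = math.sqrt(sum((o - my) ** 2 for o in obs))
    return cov / (sx * sy)

def cramers_phi(table):
    # Cramer's Phi from a contingency table of counts
    # (rows = modeled grain type, columns = observed grain type).
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum((table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i in range(len(table)) for j in range(len(col_tot)))
    k = min(len(table), len(col_tot))
    return math.sqrt(chi2 / (n * (k - 1)))

# Hypothetical snowpack temperature pairs (°C), not actual study data:
model = [-4.0, -3.0, -1.5, -0.5]
obs = [-4.5, -3.6, -2.1, -0.8]
print(round(bias(model, obs), 2))  # a warm bias of +0.5 °C
```

These are the same quantities reported in Tables 4.2 and 4.3; only the input data here are invented for illustration.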
Furthermore, 50 cm temperatures were more accurate in the precipitation-corrected simulation, as the shallower snow depths led to an overall colder snowpack, counteracting the otherwise warm snowpack temperature bias seen in the SNOWPACK-WRF model.

Table 4.4. Accuracy comparison of all parameters for each of the four SNOWPACK-WRF simulations for the 2014/15 season at the Alpine study plot.

                               Uncorrected  Bias Corrected  Bias Corrected  Bias Corrected
                                               Precip           Wind        Wind and Precip
Density                            83%           85%             83%              85%
Temperature                        76%           76%             71%              69%
Grain Size                         65%           68%             57%              62%
Hand Hardness                      52%           38%             56%              47%
Grain Type (Agreement Method)      71%           70%             68%              66%
HS                                 69%           64%             64%              80%
SWE                                66%           88%             56%              87%
50 cm Temperature                  79%           99%             66%              72%
24HN                               58%           48%             50%              50%
Total Snowfall                     84%           77%             60%              91%
New Snow Density                   78%           78%             89%              87%
Overall                            71%           72%             65%              72%

In summary, results from the 2014/15 season demonstrated SNOWPACK's sensitivity to precipitation and wind inputs for simulating parameters like snow depth, SWE, and total snowfall. At the same time, results showed a decreased response to bias corrections and input type for parameters associated with the internal snowpack structure. The SNOWPACK-WRF model did provide a somewhat reliable prediction of the snowpack, comparable to that of the SNOWPACK-Obs model, but struggled in several areas, decreasing the usefulness of the model for avalanche forecasting.

4.3. 2015/16 WRF Model Performance

WRF performance varied across each site during the 2015/16 season but seemed to improve compared to the prior season at Bridger Bowl. WRF accumulated precipitation, though still over-forecasted at all locations, had the most notable performance increase, with a higher accuracy at four out of five sites compared to the previous season at Bridger Bowl (65%). In fact, two locations had accuracies above 90%, while the accuracy at Bridger Bowl was around 85%, 20% higher than the previous season.
Conversely, 24-hour precipitation forecasts did not improve over the first season, with a positive bias and accuracies only around 50%. Precipitation was typically forecasted on days when precipitation was observed, but also often forecasted when none was observed. Additionally, the WRF struggled during significant precipitation events. To show this, the forecast and occurrence of significant storms (24HNW > 20 mm) was analyzed for all five locations during the 2015/16 season. Throughout the winter, the WRF model had a 66% false alarm rate for significant storms, with an average bias of +10.4 mm and a precipitation accuracy of 52% during those days. On days when significant storms were observed, the WRF performed slightly better, with a 57% probability of detection. The WRF did tend to under-forecast these larger events, with an average bias of -8.1 mm and a precipitation accuracy of 63% during days with observed significant storms. Overall, the WRF tended to over-forecast the frequency of precipitation events, especially large ones, but was able to predict the occurrence of large events when they did occur, though usually underestimating the total amount of precipitation. These results highlight the difficulty in accurately forecasting significant events, where complex local effects can enhance certain synoptic scale systems, leading to more precipitation than would normally be expected. These kinds of microscale effects are difficult to simulate, even with the highest resolution NWP models, making it difficult to forecast the full magnitude of certain wintertime storms. This shows continual issues with the prediction of individual storms/precipitation events, with seasonal accumulated precipitation amounts more accurately forecasted than individual events.
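The false alarm rate and probability of detection quoted above follow the standard forecast-verification definitions built from a contingency table of forecast versus observed storm days. A short sketch, with invented daily totals rather than the actual 2015/16 data:

```python
def storm_verification(forecast_mm, observed_mm, threshold=20.0):
    # Contingency counts for "significant storm" days (24-hr precip > threshold).
    hits = misses = false_alarms = 0
    for f, o in zip(forecast_mm, observed_mm):
        fc, ob = f > threshold, o > threshold
        if fc and ob:
            hits += 1
        elif fc and not ob:
            false_alarms += 1
        elif ob and not fc:
            misses += 1
    # POD: fraction of observed storms that were forecasted.
    # FAR: fraction of forecasted storms that did not occur.
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Hypothetical week of 24-hr totals (mm):
fcst = [25.0, 30.0, 22.0, 5.0, 28.0, 2.0, 21.0]
obs  = [24.0,  3.0, 26.0, 22.0, 1.0, 2.0, 30.0]
pod, far = storm_verification(fcst, obs)
print(round(pod, 2), round(far, 2))  # -> 0.75 0.4
```

With the same definitions, the 66% FAR and 57% POD reported above mean roughly two of every three forecasted significant storms did not verify, while just over half of the observed ones were caught.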
Forecasted temperatures tended to be less accurate than the previous season, with the percentage of forecasts within 1.5 °C (RF %) decreasing from 44% during the 2014/15 season to 32% during the 2015/16 season, after bias correcting for elevation. Overall, WRF temperatures contained a cold bias, which was further exacerbated when applying a bias correction for elevation. For example, after applying the bias correction during the 2015/16 season, the average bias for all five locations decreased -1.1 °C (-1.1 °C to -2.2 °C), the MAE increased 0.5 °C (2.33 °C to 2.83 °C), and the RF decreased 13% (46% to 32%), which was more than double the decrease seen during the previous season (see Table 3.2). Grid point location played a significant role in these errors, as the WRF model grid point for each site, with the exception of Lolo Pass, was lower than the actual location, ranging from 187 m lower at Bridger Bowl to 473 m lower at Noisy Basin. The elevation corrections were obviously too large, leading to colder than observed temperatures, especially for Noisy Basin and Stuart Mountain, where the forecasted temperatures were on average between 2.5 °C and 3.5 °C too cold. Wind speeds were still over-forecasted at each site, ranging between 30% and 90% higher than observed. More exposed locations (e.g. Big Mountain Summit) had the lowest errors while more sheltered locations (e.g. Lolo Pass) had the highest errors. The lack of appropriate horizontal resolution in complex terrain limits the amount of friction taken into account in the model, so WRF wind speeds end up reflecting more of the free-air wind velocity at the grid point elevation rather than the wind velocity at a more sheltered location. Differences in the WRF model set-up likely played a role in the model performance variation between the two seasons. The amount of grid nesting decreased in the second season, from three to two nested WRF grids (see figures 2.8 and 2.10).
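The elevation adjustment applied to the forecasted temperatures above is, in essence, a lapse-rate correction. The exact rate used is not restated in this section, but a sketch using the standard-atmosphere value of 6.5 °C/km (and hypothetical station elevations; only the 473 m difference comes from the text) illustrates the mechanism:

```python
def elevation_correct_temp(t_grid_c, grid_elev_m, site_elev_m, lapse_c_per_km=6.5):
    # Adjust a WRF grid-point temperature to the station elevation with a
    # fixed lapse rate. 6.5 °C/km is the standard-atmosphere value; the
    # correction actually used in this study may differ.
    dz_km = (site_elev_m - grid_elev_m) / 1000.0
    return t_grid_c - lapse_c_per_km * dz_km

# Noisy Basin: grid point 473 m below the SNOTEL site (elevations hypothetical),
# so the correction cools the forecast by about 3.1 °C, which is how an
# already cold-biased forecast ends up even colder after correction.
print(round(elevation_correct_temp(-5.0, grid_elev_m=1527, site_elev_m=2000), 2))
```

Because every grid point except Lolo Pass sat below its station, this style of correction always cools the forecast, compounding the underlying cold bias described above.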
Reducing the nesting in turn reduced the number of lateral boundaries between coarser and finer grids. Issues can arise from sudden jumps in grid resolution at the lateral grid boundaries, including unrealistic interpolation of meteorological variables from coarser to finer grids that can lead to numerical instabilities, and large inconsistencies in topography at the boundaries (Chow et al., 2012). These issues can be compounded as the number of lateral boundaries increases with the number of nested domains, especially over complex terrain. It is generally recommended that grid domain boundaries lie away from important topographic features that need to be resolved in order to avoid the aforementioned issues. This may have led to issues during the first season, as the 2 km domain used for Bridger Bowl was small, with the lateral boundaries only around 50 km away from the Bridger Mountains. Precipitation coverage was often exaggerated over the Bridgers, leading to a 69% over-prediction of precipitation at Bridger Bowl throughout the winter. Conversely, the inner WRF nest during the 2015/16 season was much larger and the domain boundaries were further from the five sites used during the winter. Additionally, there were fewer lateral boundaries since only two nested domains were used, limiting the issues associated with numerous lateral domain boundaries. This likely improved precipitation forecasts during the second season, even though the resolution of the inner WRF grid was slightly coarser than the one used during the 2014/15 winter. Grid point location is another important factor that can determine the accuracy of a high resolution NWP model. The closest grid point to Bridger Bowl during the 2014/15 season was located on the opposite side of the Bridger Range (see figure 2.9), which was on the windward side of the range.
This likely led to unrealistic orographic enhancement in the model physics, accentuating the already over-forecasted precipitation values. During the 2015/16 season, the grid points for each location were on the same side of the mountain range as the observed location, leading to overall more accurate precipitation forecasts. Some discrepancies were observed, for example at Stuart Mountain, where precipitation was heavily over-forecasted through the winter even though the model grid point was located only 1.1 km SW and 298 m lower than the actual location (Fig. 4.6). Conversely, accumulated precipitation was much more accurate at Noisy Basin (Table 3.26) even though the grid point was located 2.3 km W and 473 m lower than the actual location (Fig. 4.7). It should be noted that, as mentioned earlier, forecasted temperatures were adjusted to correct for these elevation differences.

Figure 4.6. Map, as seen in Google Earth looking southwest, showing the location of the Stuart Mountain WRF grid point compared to the actual location of the SNOTEL station.

Figure 4.7. Map, as seen in Google Earth looking northeast, showing the location of the Noisy Basin WRF grid point compared to the actual location of the SNOTEL station.

This underscores site-specific resolution issues with the WRF model, and NWP models in general, in predicting local scale effects, where complex, and seemingly chaotic, microphysical processes drive weather over mountainous terrain. In general, as the results from this study show, increased model performance can be achieved if larger nested high-resolution model grids, centered over the area of interest, are used and grid points most representative of the desired location are chosen. Still, it is important to understand model biases and to know how to correct for them prior to use in an operational setting.

4.4.
2015/16 SNOWPACK Performance

Snow depth and SWE predictions throughout the 2015/16 season were generally more accurate and realistic than the previous season, though winds were bias corrected to account for the systematic over-prediction at the sheltered sites. The more realistic estimation of accumulated precipitation at most sites led directly to increased SWE accuracy, as three out of four locations had accuracies higher than the previous season at Bridger Bowl. Additionally, the combination of bias corrected winds and more accurate precipitation forecasts led to increased snow depth accuracies at four out of five sites. Stuart Mountain was the main outlier, as over-forecasted precipitation and colder forecasted temperatures led to higher SWE and snow depth values throughout the winter, decreasing the overall accuracy. Though the exact reasons for these forecasted meteorological errors are unclear, it is likely localized effects were not sufficiently resolved in the WRF model throughout the season. It is interesting to note the discrepancy between modeled and observed snow depth at Lolo Pass. Accumulated precipitation had an accuracy of 96%, which led to an 86% accuracy for SWE, matching observed values quite closely throughout the winter. Conversely, modeled snow depth had an accuracy of only 68% and an overall under-prediction bias, which was most prevalent during the middle of the season (Fig. 3.22). Though the reason for this discrepancy is not known for certain, shading at the site may have played a role. The Lolo Pass SNOTEL station is located well below tree line in a small opening surrounded by thick trees, leading to very low wind speeds and high amounts of vegetative shading, decreasing the amount of incident solar radiation reaching the site. This likely led to decreased compaction and melting of the snowpack compared to what would have been expected in a more exposed location.
Forecasted ISWR values were likely much higher than what the site actually received, which would have increased melting during sunny periods, leading to lower modeled snow depths. Furthermore, the observed snow depth begins to diverge from the modeled snow depth around the third week in December, with an increase of around 70 cm over a 7-day period, while the modeled snow depth increased only around 30 cm during the same period. Forecasted and observed precipitation during this time was nearly identical, around 84 mm, with 90 cm of new snow predicted by the SNOWPACK-WRF model. It appears that snowpack settlement was overestimated in the model and/or there were issues with the snow depth sensor at the SNOTEL station. Nevertheless, these results show how the SNOWPACK-WRF model can sometimes struggle to accurately simulate snow cover in sheltered locations, even after bias corrections are made. Additional bias corrections, like decreased winds, decreased ISWR, and increased ILWR, may be necessary when running a coupled SNOWPACK-WRF model in sheltered areas to account for these issues.

4.5. 2015/16 Case Studies

The SNOWPACK model was able to simulate the general snowpack structure and most of the major weak layers associated with the avalanche activity for the five case studies presented in section 3.3. Most of the snowpack instabilities in the case studies were associated with facets and depth hoar, either near the bottom of the snowpack or near the surface. SNOWPACK was able to simulate the development and general location of these weak layers relatively accurately due to its sophisticated parameterization of kinetic growth metamorphism (Lehning et al., 2002a). Additionally, the WRF model accurately forecasted extended periods of fair weather across the state, leading to the necessary conditions for facet and depth hoar development.
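The faceting conditions described above are driven by strong temperature gradients within the snowpack. SNOWPACK's kinetic growth parameterization (Lehning et al., 2002a) is far more sophisticated, but the basic idea can be illustrated with the commonly cited threshold of roughly 10 °C/m, above which kinetic (faceting) metamorphism dominates (all profile values below are hypothetical):

```python
def kinetic_growth_layers(depths_m, temps_c, threshold_c_per_m=10.0):
    # Flag layer intervals where the temperature gradient exceeds the
    # ~10 °C/m threshold commonly associated with faceting. A deliberately
    # simplified illustration, not SNOWPACK's actual scheme.
    flags = []
    for i in range(len(depths_m) - 1):
        grad = abs(temps_c[i + 1] - temps_c[i]) / (depths_m[i + 1] - depths_m[i])
        flags.append(grad > threshold_c_per_m)
    return flags

# Hypothetical shallow snowpack, 0 m (ground) to 0.5 m (surface), on a clear
# cold night: only the near-surface interval exceeds the faceting threshold.
depths = [0.0, 0.2, 0.35, 0.5]
temps = [-0.5, -2.0, -3.0, -9.0]
print(kinetic_growth_layers(depths, temps))  # -> [False, False, True]
```

Extended fair weather, as forecasted by WRF, sustains exactly these near-surface gradients, which is why the model reproduced the facet and depth hoar development reasonably well.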
There were differences in grain size between modeled and observed weak layers, with SNOWPACK generally underestimating the size, though still within the typical range of the respective grain form observed in the field (e.g. depth hoar sizes of 1.5-2.0 mm in SNOWPACK and 2.0-3.0 mm in field observations). Additionally, layer hand hardness was usually too soft compared to field observations, which may have been due to the density, grain size, and hand hardness parameterizations. The first two case studies evaluated two events on Saddle Peak in the Bridger Mountains, which used a combination of forecasted (precipitation and radiation) and observed (temperature, RH, wind) weather data. The WRF model grid point used for the Saddle Peak simulation was the same grid point used for the Bridger Bowl simulation, which meant it was over 500 m lower than the simulated location on Saddle Peak. Precipitation was likely under-forecasted for Saddle Peak due to this elevation difference, though no gauges were present to confirm this. Nevertheless, the modeled snowpack structure was similar to what was observed in manual snow profiles and avalanche observations from the same location (Fig. 3.26 and Appendix C). Additionally, the modeled snow depth was within the range of values that were measured periodically throughout the season, though typically on the shallower end of the measurements (see Saddle Peak snow profiles in Appendix C). Since snow surface temperatures were not provided as input (WRF modeled snow surface temperatures would not have been representative of a location 500 m higher), this parameter was calculated internally using SNOWPACK's surface energy balance. As discussed in section 4.1, snow surface temperatures derived in SNOWPACK tend to have a cold bias, which can lead to artificially high temperature gradients between the snow surface and the air directly above it, especially during cold, clear nights.
This led to the formation of several surface hoar layers in the Saddle Peak simulation throughout the winter that were not observed in the field (Fig. 4.8).

Figure 4.8. Simulated snow grain type for Saddle Peak during the 2015/16 season. The red arrows point to 3 surface hoar layers (pink lines) that were modeled but not observed.

These surface hoar layers were present in the simulation for the two Saddle Peak case studies (Fig. 3.29 and 3.33), though they were not identified by SNOWPACK as the weakest point in the snowpack at the time of avalanche release. At other times, though, stability indices in SNOWPACK tended to focus on these layers and predicted low stability (high avalanche danger) when the danger was actually only moderate or low, with the surface hoar layer identified as the critical weak layer. None of the other simulations, which used WRF modeled snow surface temperatures, predicted any surface hoar layers during the season, even though some layers were observed sporadically during the year throughout the same regions as the modeled locations (e.g. near Big Mountain Summit and Noisy Basin in NW Montana). The reason for this discrepancy is most likely tied to the snow surface temperature input: SNOWPACK derived snow surface temperatures were typically too cold, sometimes leading to erroneous surface hoar formation, while WRF modeled snow surface temperatures were too warm, preventing any surface hoar from forming even when actual conditions were ideal for formation. Results from Horton and Jamieson (2016) showed similar trends, where simulations using SNOWPACK derived snow surface temperatures over-predicted surface hoar formation by 40%. To evaluate whether snow surface temperature was the only parameter impacting surface hoar development, simulations at each location were compared using WRF and SNOWPACK modeled snow surface temperatures, with all other input data remaining unchanged.
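This sensitivity to snow surface temperature makes physical sense: surface hoar grows when the snow surface is colder than the frost point of the overlying air under light winds. A deliberately simplified check (not SNOWPACK's actual formulation; the frost point uses a Magnus-type approximation over ice, and the wind threshold is an illustrative assumption) shows how a warm surface-temperature bias suppresses formation:

```python
import math

def frost_point_c(air_temp_c, rh_pct):
    # Frost point via a Magnus-type approximation over ice
    # (illustrative coefficients; not SNOWPACK's internal formulation).
    a, b = 21.87, 265.5
    gamma = math.log(rh_pct / 100.0) + a * air_temp_c / (b + air_temp_c)
    return b * gamma / (a - gamma)

def surface_hoar_possible(surface_temp_c, air_temp_c, rh_pct, wind_ms):
    # Simplified check: vapor can deposit on the snow surface when it is below
    # freezing, colder than the frost point of the air above, and winds are light.
    return (surface_temp_c < 0.0
            and surface_temp_c < frost_point_c(air_temp_c, rh_pct)
            and wind_ms < 3.0)

# A cold clear night (-5 °C air, 85% RH, calm): a cold-biased surface (-12 °C)
# permits surface hoar, while a warm-biased surface (-5.5 °C) does not.
print(surface_hoar_possible(-12.0, -5.0, 85.0, 1.0))  # -> True
print(surface_hoar_possible(-5.5, -5.0, 85.0, 1.0))   # -> False
```

Under this kind of condition, a surface temperature bias of only a few degrees flips the outcome entirely, mirroring the SNOWPACK-derived versus WRF-derived behavior discussed here.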
The SNOWPACK version produced numerous surface hoar layers for most locations while the WRF version failed to produce any. This highlights the dependence on accurate snow surface temperatures for realistic surface hoar modeling using a coupled weather and snow cover model. Both the Big Mountain and Noisy Basin SNOWPACK-WRF simulations predicted the December 9th rain-on-snow event in NW Montana, with a trace of rain modeled at Big Mountain and 8.2 mm modeled at Noisy Basin between the 8th and 9th. A subsequent 10 cm thick rain crust was observed throughout the region, forming after temperatures dropped below freezing following the rain event. The simulation at Big Mountain showed this crust forming a few days later on the 12th (Fig. 3.4), while the crust didn't form until nearly 3 weeks later at Noisy Basin. Further interrogation revealed forecasted snow surface temperatures at Noisy Basin remained at 0 °C for several hours after the rain event, with subsequent new snowfall insulating the wet snow and preventing refreezing of the wet snow layer. It wasn't until a prolonged cold outbreak a few weeks later, when consistent sub-freezing temperatures were able to penetrate deep enough into the modeled snowpack, that the wet snow layer finally refroze, forming the rain crust. Forecasted temperatures were colder following the rain event at Big Mountain, allowing sub-freezing temperatures to penetrate more quickly down to the wet snow layer, forming the rain crust just a few days afterwards. This shows how minute errors in forecasted weather data, in this case slightly warmer temperatures, can lead to significant errors in a SNOWPACK simulation. The case studies also revealed SNOWPACK's inability to simulate certain phenomena that can dramatically impact avalanche danger. For example, a freezing rain crust was observed in the Noisy Basin area just prior to the large snowstorm on January 14th.
Avalanches were observed failing on the new snow/crust interface, stepping down to facets and depth hoar below. The freezing rain crust was not simulated in the Noisy Basin simulation because SNOWPACK does not have the ability to predict freezing rain, since this type of precipitation is dependent on mid-level atmospheric temperatures and forms when surface temperatures are below freezing. SNOWPACK uses a static empirical rain/snow threshold of 1.2 °C, so any precipitation forecasted when temperatures are below this threshold is simulated as snow. The Noisy Basin WRF forecast did predict precipitation on the 12th, but since the forecasted air temperatures were below 1.2 °C during the time of precipitation, snow was modeled instead. Additionally, SNOWPACK cannot discriminate between precipitation types other than rain and snow, so important grain forms like graupel, ice pellets, and rime cannot be simulated in the model. If overlaid by a significant load, these forms have the potential to act as a weak layer, so their exclusion from a SNOWPACK simulation can lead to significant underestimation of the avalanche danger. Adding precipitation type to SNOWPACK input would benefit the accuracy of the model and the prediction of avalanche danger. This type of data would likely only be available when using NWP model input, since remote weather stations typically do not include a precipitation identification sensor. Precipitation type from an NWP model is usually available as an output parameter and could easily be included in SNOWPACK's meteorological input file. Another limitation of the SNOWPACK model is the difficulty in simulating wind loading, which strongly influences the evolution of avalanche danger. Proper simulation of wind loading includes knowledge of small-scale terrain features (e.g. ridge crests, gullies, convex rollovers), wind speed and direction, and the amount of snow available for transport on windward slopes (McClung and Schaerer, 2006).
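Returning briefly to the precipitation-phase limitation above: a static 1.2 °C rain/snow threshold amounts to a one-line decision on air temperature alone, which is exactly why freezing rain (warm aloft, sub-freezing at the surface) cannot be represented. A sketch (function and field names are illustrative, not SNOWPACK's actual API):

```python
def partition_precip(precip_mm, air_temp_c, threshold_c=1.2):
    # Static rain/snow split on surface air temperature, as described in the
    # text: everything below the threshold falls as snow, everything at or
    # above it as rain. Freezing rain, graupel, ice pellets, and rime have
    # no representation in this scheme.
    if air_temp_c < threshold_c:
        return {"snow_mm_we": precip_mm, "rain_mm": 0.0}
    return {"snow_mm_we": 0.0, "rain_mm": precip_mm}

# The Noisy Basin freezing-rain day (hypothetical values): sub-threshold air
# temperature, so the model deposits snow even though freezing rain fell.
print(partition_precip(8.0, -1.0))  # -> {'snow_mm_we': 8.0, 'rain_mm': 0.0}
```

A forecasted precipitation-type field from the NWP model, as suggested above, would allow this decision to be overridden when the model diagnoses freezing rain or other phases.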
Wind loading is typically a small-scale feature with high spatial variability, making simulation even more difficult. SNOWPACK usually simulates snow cover for a single flat slope but does provide an option to calculate snowpack conditions on multiple slopes/angles from a single meteorological input file, in order to attempt to simulate snow redistribution and wind loading. The Saddle Peak simulation was set up to include multiple slopes in order to simulate wind loading on the east-facing (leeward) side, where snow and avalanche observations were recorded. Results showed few, if any, signs of snow redistribution in the simulated snowpack, though wind speed and direction from the Schlasman's weather station were used, which is located about 100 m below the ridge line (Fig. 3.35). These winds were likely slower, and at times from a different direction, than ridge-top winds, which would have been more representative of the conditions near the upper starting zones on Saddle Peak. The avalanches analyzed in the two Saddle Peak case studies were impacted by additional wind loading of new snow on leeward slopes, which was not modeled in the simulation, as the upper layers of the snowpack during these events were all modeled with a "Fist" hand hardness. Nevertheless, the underlying snowpack structure and weak layers leading to the avalanche release were all simulated relatively accurately. In order to simulate snow cover conditions for starting zones located on leeward slopes that typically experience significant wind loading, it is recommended to run a coupled NWP-SNOWPACK model using multiple slopes without bias correcting forecasted winds. To simulate conditions further down from a ridgeline or crest, bias-correcting winds will most likely be necessary, or snow may be unrealistically eroded by the stronger winds.
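The multiple-slope option mentioned above is selected in SNOWPACK's configuration file. A minimal fragment might look like the following, modeled on the [SNOWPACK] section shown in Appendix A, where NUMBER_SLOPES = 1 selects the single flat field; the value 5 here (flat field plus four virtual aspect slopes) is an assumed example of a multi-slope setup, not the exact configuration used for Saddle Peak.

```ini
[SNOWPACK]
; 1 = flat field only; larger values add virtual slopes at fixed aspects,
; which SNOWPACK uses to approximate redistribution and wind loading
; (assumed example value, see Appendix A for the actual set-ups used)
NUMBER_SLOPES = 5
```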
During large events (e.g. the mid-January storm at Noisy Basin), the stability rating was usually representative of the actual avalanche danger at the time, but tended to diverge during other times (e.g. the mid-January avalanche on Saddle Peak). Additionally, during fairly benign and low danger periods, the stability would be rated as poor (high danger) due to the presence of non-existent weak layers. Using an improved stability classification method (Bellaire and Jamieson, 2013b) did provide more realistic danger ratings, though it could only be applied to dry avalanches. The modeled snowpack stability can provide a ballpark estimate of the avalanche danger, but small differences in modeled layers and errors in meteorological input can drastically impact the stability rating. Forecasters should not rely solely on the SNOWPACK stability rating when estimating the real-world avalanche danger, but should instead focus on the snowpack stratigraphy to identify the presence of potential weak layers, significant loading, and/or the percolation of melt-water through the snowpack. Results from this study show how errors in meteorological input can lead to compounding errors in a snow cover simulation and how bias correcting known systematic forecast errors is necessary to provide a more realistic simulation. The tendency of the WRF model to over-predict winds can impact snow depth, new snow metamorphism, and the heat added to the snowpack through turbulent fluxes. NWP model grid point location can determine precipitation accuracy, which is one of the most important parameters in a snow cover model and directly impacts snow depth, SWE, and the accuracy of the simulated snowpack stratigraphy through the winter.
Though some key features in the snowpack were either missing or incorrectly modeled, simulations during both seasons provided an overall realistic prediction of the observed snow cover and stratigraphy, simulating the snowpack structure observed around several prominent avalanche cycles across western Montana.

4.6 Usefulness and Application of a Coupled SNOWPACK-WRF Model in an Operational Avalanche Forecast Setting

So far in this study, results from the coupled SNOWPACK-WRF model were evaluated and compared against snow and avalanche observations from the field to determine the overall accuracy and performance of the model. A major goal of this thesis was to determine, based on the validation results, whether or not a coupled SNOWPACK-WRF forecast system would be useful and beneficial to avalanche forecasters in an operational setting. Avalanche forecasters rely on manually observed snow pit data and stability test results from the field to determine the internal snowpack structure and current/future snowpack stability to produce an avalanche forecast. A main objective in developing snow cover models for avalanche forecasting was to reduce the need for manual snow profiles from the field, especially in remote and dangerous locations where this data is difficult to gather. A snow cover model would be considered useful and beneficial to forecasters if it could provide a more reliable and accurate prediction of the snowpack structure and stability than a forecaster could produce by only utilizing past, present, and forecasted weather data. Additionally, the reliability of the continuous temporal output provided by the snow cover model should be more advantageous than data gathered from only sporadic manual snowpack observations from a remote location. If the model cannot meet these conditions, then the system would not be very beneficial as a forecast tool and not worth the time to set up and implement operationally.
Throughout both seasons, the coupled SNOWPACK-WRF model did a good job simulating the 'synoptic scale' features in the snowpack (i.e., at the mountain range scale) as they occurred, including the formation of facets and depth hoar during prolonged periods of cold and dry weather, significant snow accumulation periods, and major melting events with the percolation of melt-water through the snowpack. At the same time, the model struggled with some of the finer details of the snowpack, including thin weak layers like surface hoar, near-surface facets, and some melt-freeze crusts. For example, as mentioned in section 4.5, several layers of surface hoar were incorrectly modeled in the Saddle Peak simulation, and some surface hoar layers that were observed intermittently throughout the region during the 2015/16 winter were missing from the Noisy Basin and Big Mountain simulations. Additionally, a prominent freezing rain crust, which was subsequently buried by over a meter of snow during a high avalanche danger period, was missing in the Noisy Basin simulation. These fine-scale features in the snowpack are extremely sensitive to meteorological forcing, where small errors in the forcing, like air temperature and radiation, can determine whether or not a critical weak layer develops. These potential errors, combined with the somewhat imprecise parameterizations of the microphysical processes in the SNOWPACK model, make it quite difficult for a coupled NWP-SNOWPACK model to accurately predict thin critical weak layers. This degrades the usefulness of a coupled model for avalanche prediction, as the presence or absence of thin critical weak layers typically plays a major role in the stability and avalanche danger of a snowpack.
One could argue that any experienced forecaster with access to the past, present, and forecasted weather conditions could, in theory, determine the general snowpack stratigraphy and stability of a region without any observed snowpack data. For example, a forecaster could deduce that a prolonged early season cold and dry period would produce facets and depth hoar in a shallow snowpack and that, if buried by a significant load of new snow, these layers would drastically increase the avalanche danger for that area. At the same time, most forecasters would struggle to accurately predict the formation of most thin critical weak layers like surface hoar, near-surface facets, and melt-freeze crusts, and their impact on the snowpack's stability, highlighting the importance of field observations in avalanche forecasting. Though the SNOWPACK model tended to struggle with simulating the finer details observed in the snowpack, it still simulated some observed thin weak layers: for example, mid to late season melt-freeze crusts in the upper snowpack at Bridger Bowl during both seasons, a thick rain crust in the Big Mountain and Noisy Basin simulations, and mid-January near-surface facets at most locations during the 2015/16 season. Additionally, several studies have highlighted the ability of a coupled NWP-SNOWPACK model to simulate critical weak layers with some degree of reliability (Bellaire and Jamieson, 2013; Horton and Jamieson, 2016), though still not at a level where the model could be routinely and confidently used in an operational setting. Nevertheless, predicting and tracking these critical weak layers would be quite difficult and time consuming for a human forecaster in the absence of field data, using only observed or forecasted weather data.
In this sense, a coupled NWP-SNOWPACK model would be beneficial, as it provides a unique and intuitive way to visually assess and track the seasonal evolution of the snowpack structure with high precision, though with lower than desired accuracy; doing this manually would be very difficult and time consuming for a human forecaster. Where a coupled NWP-SNOWPACK model would prove most useful is in data sparse areas, providing forecasters a quick 'first look' or 'first guess' at the snowpack conditions. For example, if a search and rescue mission is planned into a remote area and the rescue team needs to know the current and forecasted avalanche danger, a forecaster could quickly pull up the current snow cover simulation to ascertain the general snowpack structure in the search area. Then, based on the weather forecast, the forecaster could provide the search and rescue crew a general synopsis of the expected conditions and the level of avalanche danger the rescuers could expect to encounter. In this type of situation, using a coupled NWP-SNOWPACK model would save the forecaster valuable time and could significantly reduce the amount of uncertainty and guesswork when producing a forecast for a time-sensitive mission in a remote and data sparse area. A coupled SNOWPACK-WRF forecast system could be implemented operationally at an avalanche forecast center in several different ways. First, gridded forecast weather data could be produced from a WRF model domain over a specific mountain range or forecast region. This data, after being bias corrected as much as possible, could be combined, or used in conjunction with, available observed weather data from remote weather stations located in the same domain to produce numerous meteorological input points for SNOWPACK. Then, SNOWPACK simulations could be run for each point in the domain.
These simulations could be made available to forecasters and updated as often as new meteorological data becomes available, either on an hourly basis for points using only in-situ weather data, or every 6 to 24 hours if using WRF data. Additionally, a program like the Alpine3D model (Lehning et al., 2006) could be used, which can automatically distribute point or gridded meteorological data spatially across a DEM for a predefined domain. The meteorological data is downscaled and extrapolated across all areas of the domain while taking into account effects like topographic shading, reflections from the surrounding terrain, and snow redistribution processes. Point SNOWPACK simulations could then be set up for pre-selected locations within the domain (e.g. starting zones above large avalanche paths), providing forecasters with numerous spatially distributed snow cover simulations across a particular mountain range or area of interest. In conclusion, a coupled NWP-SNOWPACK model system could be of some value to an operational avalanche forecasting program. Results highlighted the ability of the system to simulate the general snowpack structure as observed in the field as well as the formation of some finer scale features like thin weak layers. Though results show this system is still not accurate enough to be used confidently and routinely in an operational setting and to replace the need for observed snow profiles, it can provide some estimation of the snowpack structure in areas where data is not available.

5. CONCLUSIONS

Predicting avalanche danger in remote and data sparse areas is difficult, if not impossible, due to a lack of observed weather and snowpack data, increasing the uncertainty and challenge of avalanche forecasting in these locations.
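The per-point operational workflow described above (extract a WRF grid point, bias correct it, assemble a meteorological input, run SNOWPACK for each point) could be sketched roughly as follows. Every name here is a hypothetical placeholder rather than a real WRF or SNOWPACK API; an actual implementation would write SMET-format input files and invoke the SNOWPACK binary with a configuration like the one in Appendix A.

```python
# Hypothetical sketch of the per-point forecast loop described in the text.
# None of these helpers are real WRF/SNOWPACK interfaces; they stand in for
# the steps a forecast center would implement.
from dataclasses import dataclass

@dataclass
class PointForecast:
    station_id: str
    air_temp_c: list      # hourly series extracted from a WRF grid point
    wind_speed_ms: list
    precip_mm: list

def bias_correct(fc: PointForecast, wind_factor: float = 0.7) -> PointForecast:
    """Apply a simple multiplicative correction to over-forecasted winds.
    The 0.7 factor is an illustrative assumption, not a value from the thesis."""
    fc.wind_speed_ms = [w * wind_factor for w in fc.wind_speed_ms]
    return fc

def run_domain(points: list) -> list:
    """Bias correct each grid point; the result is what would be written to
    SMET files and fed to one SNOWPACK simulation per point."""
    return [bias_correct(p) for p in points]

demo = PointForecast("SaddlePeak", [-5.0, -4.0], [10.0, 12.0], [1.2, 0.0])
corrected = run_domain([demo])[0]
print(corrected.wind_speed_ms)
```

The design choice worth noting is that the bias correction happens per parameter and per point before SNOWPACK ever sees the data, matching the workflow the text describes.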
With the recent advent of snow cover models like SNOWPACK, detailed and continuous snow cover information can be simulated for any location in the world that has the required meteorological data, reducing the reliance on manual snow cover observations for avalanche forecasting. Coupling these snow cover models with high-resolution numerical weather models can bypass the need for expensive, and sometimes faulty, remote weather stations, providing continuous spatial and temporal coverage at a fraction of the cost. Numerous studies have evaluated the applicability of using SNOWPACK in an operational setting for avalanche forecasting (Lehning et al., 1999; Bartelt and Lehning, 2002; Hirashima et al., 2010), with several focusing on using a coupled weather and SNOWPACK model (Bellaire et al., 2011; Bellaire and Jamieson, 2013a,b; Bellaire et al., 2014; Horton and Jamieson, 2016; Quéno et al., 2016). To date, no published research in the U.S. has used SNOWPACK with a focus on avalanche prediction, and none has coupled a U.S.-developed numerical weather model with SNOWPACK. In this thesis, I coupled the Weather Research and Forecasting (WRF) model with the SNOWPACK model in order to predict snowpack structure and avalanche danger in the intermountain snow climate of western Montana. The study was broken up into two seasons, where the first season focused on determining the quantitative accuracy of a coupled SNOWPACK-WRF model for a specific location over the course of an entire winter season. Additionally, the SNOWPACK-WRF results were compared against results from a SNOWPACK simulation run using locally observed weather data, with the objective of determining how much, if any, accuracy was lost by running SNOWPACK with WRF model data.
The second season focused on qualitatively assessing the ability of the SNOWPACK-WRF model to predict snowpack stratigraphy and avalanche potential at a regional scale, analyzing the model's performance during significant events and case studies at several locations across western Montana. Fundamentally, my research question focused on assessing the ability of a coupled NWP-SNOWPACK system to forecast avalanches in remote settings and whether or not the system would be a useful tool for operational avalanche forecasting in the future.

5.1 Key Findings

The performance of the coupled SNOWPACK-WRF model depended heavily on the accuracy of the forecasted weather data. During the first season, precipitation, winds, and solar radiation were all over-forecasted, affecting the results of several snowpack parameters including snow depth, SWE, total snowfall, and new snow density. Over-forecasted precipitation resulted in higher than observed snow depths and much higher than observed SWE. Over-forecasted winds resulted in higher new snow densities and tended to increase the rate of new snow decomposition. Additionally, the higher wind speeds acted to balance out the higher precipitation sums, leading to more realistic snow depth values and total snowfall amounts, though still generally over-predicting those parameters. The over-forecasted wind speeds also unrealistically sped up the spring snowpack transition, as the higher winds increased the turbulent fluxes and heat added to the snowpack, especially during extended warm periods. Even with these problems, the overall accuracy of the SNOWPACK-WRF model was 71%, only 8% less than the model driven with locally observed weather data. The parameters with the biggest difference between the two models were snow depth and SWE, which were connected to the over-forecasted precipitation and/or wind speeds.
Applying bias corrections to either, or both, of these parameters showed that SWE accuracy is heavily dependent on the precipitation input, while snow depth is dependent on both the precipitation and wind speed inputs. Regardless, there was little difference in performance between the two models for parameters associated with the internal snowpack structure, like density, temperature, grain size and type, and hand hardness, though both models under-predicted grain size and hand hardness. Surprisingly, both simulations had a grain type agreement score of 71%, showing how SNOWPACK provided a somewhat reliable simulation of the seasonal snowpack evolution and structure observed in the field, regardless of the type of input used. Though snow depth and SWE had larger errors, these parameters are of less importance to avalanche forecasters, who are more interested in the underlying snowpack structure and stratigraphy. These results are encouraging because they suggest we can run SNOWPACK with relatively inexpensive model output instead of expensive weather station observations and get similar results. During the second season, the WRF precipitation forecast accuracy increased due to improvements in model domain placement and grid point selection, though the frequency of precipitation events was still over-forecasted and the amount of precipitation during significant storms was typically under-forecasted. Additionally, wind speeds were still systematically over-forecasted, so bias corrections were applied before the winds were used as input to SNOWPACK, increasing the realism of the simulations. Results showed improved snow depth and SWE accuracy at most locations compared to the accuracy during the 2014/15 season at Bridger Bowl.
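Deriving such a bias correction is straightforward in principle: compare accumulated forecast and observed values over a calibration period and scale future forecasts by their ratio. A minimal sketch follows; the function names are mine, and the thesis does not specify its exact correction procedure, so this is only one simple way such a correction could be computed.

```python
def bias_factor(observed: list, forecast: list) -> float:
    """Multiplicative bias factor: ratio of observed to forecast totals over a
    calibration period (e.g. seasonal precipitation or wind-speed sums)."""
    fc_total = sum(forecast)
    if fc_total == 0:
        return 1.0  # nothing to correct against
    return sum(observed) / fc_total

def correct(forecast: list, factor: float) -> list:
    """Scale a forecast series by a previously derived bias factor."""
    return [f * factor for f in forecast]

# Example: WRF over-forecast precipitation, 80 mm observed vs 100 mm forecast.
factor = bias_factor([30.0, 50.0], [40.0, 60.0])
print(factor)                    # 0.8
print(correct([10.0], factor))   # future 10 mm forecast scaled down
```

A multiplicative factor preserves zeros (no precipitation stays no precipitation), which is why it is a common choice for precipitation and wind speed; additive offsets are more natural for temperature.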
The general snowpack structure and seasonal evolution were simulated relatively accurately at each location, with the development of early season weak layers, prediction of a prominent rain-on-snow event, the presence of various rain and melt-freeze crusts, accurate prediction of the timing and magnitude of significant mid-season storms, and the transition to a spring-time snowpack with melt-water percolating through the snowpack. The snowpack structure and stratigraphy simulated during several significant events and avalanche cycles generally reflected the stratigraphy observed in the field at the time, though some features were either not simulated or incorrectly modeled. The simulated stability ratings during these events were less useful, though, and did not provide a consistent and reliable depiction of the observed avalanche danger. Results from the second season highlighted the feasibility of extrapolating point model output to predict avalanche danger at a regional scale. For example, during the April 2016 avalanche case study in the Bridger Mountains, model output from a SNOWPACK-WRF simulation 10 km away was used to identify the potential for increased avalanche danger in the area due to the simulation of melt-water percolating through the snowpack. Additionally, results from the Saddle Peak simulation underscore the usefulness of running a simulation using already available meteorological data from a remote weather station and supplementing the remaining required data with NWP model output. Overall, quantitative verification results showed little model improvement compared to studies nearly 15 years prior, much less than would be expected given the numerous upgrades and advancements in the model physics since that time. The coupled model was able to simulate the 'synoptic scale' features observed in the snowpack and during significant avalanche cycles, but struggled to adequately simulate some finer scale features like thin weak layers.
Though the coupled model still lacks the accuracy and precision to be confidently and routinely used in an operational setting, the system could still be useful to an avalanche forecasting program, providing a relatively inexpensive way to estimate and visualize the snowpack structure in data sparse areas.

5.2 Limitations and Future Work

The results of this thesis are limited temporally and spatially, with the study taking place over the course of only two winters for a few locations in a region with somewhat similar climatic characteristics. Field data was subject to human interpretation, including grain type and hand hardness, increasing the possibility of measurement errors. Additionally, the study was run using output data from only one NWP model. Therefore, results from this thesis can only be applied to areas with climatic conditions similar to western Montana using a coupled SNOWPACK-WRF model. Future work could expand the spatial and temporal coverage of the study to include numerous locations in different climatic regions over the span of many years. Additionally, a study evaluating the performance of coupled NWP-SNOWPACK models using different NWPs, including low resolution global models (e.g. the Global Forecast System (GFS) model or the European global model), high resolution mesoscale models (e.g. the High Resolution Rapid Refresh (HRRR)), and ensemble models (e.g. the North American Ensemble Forecast System (NAEFS)), would be beneficial. Results from this thesis highlight the importance of properly bias-correcting forecasted meteorological data prior to its use as input to SNOWPACK. Bias correcting is typically accomplished by adjusting forecasted data based on observed data at the same location, as was done in this study for winds and precipitation. In reality, a coupled SNOWPACK-NWP model would most likely be used operationally for locations without observed weather data, making this method not viable.
Instead, WRF output could be bias corrected by automatically downscaling raw output based on site characteristics like elevation, surrounding topography, and exposure. To accomplish this accurately, a multi-year study of NWP model biases in mountainous terrain is needed in order to understand how various site characteristics like elevation, azimuth, sky view, and snow climate, as well as model characteristics such as resolution, domain structure, and grid point location, impact the forecast accuracy of different meteorological parameters. Then, based on the site characteristics of a desired location and the results from the study, predefined bias corrections could be applied to raw output data from a model grid point prior to its use as input to SNOWPACK, maximizing the accuracy and realism of the simulation. In conclusion, this thesis highlighted the feasibility of using a coupled weather and snow cover model to provide some estimation of the snowpack structure and avalanche danger at local and regional scales. Ideally, SNOWPACK should be driven with observed data from a weather station located in the immediate area of interest to achieve the most realistic and accurate results. When all, or even some, of the required meteorological data is not available, NWP model data can be used to drive SNOWPACK, substituting for any missing data. Implementation of these models in the U.S. is feasible, as gridded high-resolution NWP model output could be automatically downloaded and run through SNOWPACK, producing seasonal snow cover simulations for hundreds of pre-defined locations across the western U.S. NWP output could also be supplemented with available data from pre-existing weather stations in the same area, improving the quality of the simulations. In the future, just as was the case for weather forecasting, avalanche forecasting will greatly benefit from, and rely more upon, numerical models to provide guidance to forecasters.
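One common building block of the automatic, site-characteristic-based downscaling discussed above is a lapse-rate adjustment of grid-point air temperature to the target site's elevation. The sketch below uses the standard-atmosphere lapse rate of 6.5 °C per km as an assumed default; Minder et al. (2010), cited in this thesis, show that observed lapse rates over complex terrain often differ from this value, so in practice the rate itself would come from the kind of multi-year bias study proposed here.

```python
# Assumed standard-atmosphere lapse rate; real terrain-specific values would
# be derived from observations (cf. Minder et al., 2010).
DEFAULT_LAPSE_RATE_C_PER_M = 0.0065

def downscale_temperature(temp_grid_c: float, grid_elev_m: float,
                          site_elev_m: float,
                          lapse_rate: float = DEFAULT_LAPSE_RATE_C_PER_M) -> float:
    """Adjust a model grid-point temperature to a site elevation by applying
    a constant lapse rate (air cools with height)."""
    return temp_grid_c - lapse_rate * (site_elev_m - grid_elev_m)

# A grid cell at 1800 m reads -2.0 degC; the starting zone sits at 2400 m.
print(downscale_temperature(-2.0, 1800.0, 2400.0))  # roughly -5.9 degC
```

Analogous predefined adjustments for wind (exposure) and radiation (sky view, shading) would complete the per-site correction described in the text.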
Implications of the results from this thesis may vary based upon the interests and needs of the practitioner, but the thesis provides an overall look at the advantages and limitations of a coupled weather and snow cover model.

REFERENCES CITED

Armstrong, R.L. and Brun, E., 2008. Snow and Climate: Physical Processes, Surface Energy Exchange and Modeling. Cambridge University Press, p. 256.

Bellaire, S., Jamieson, B., and Fierz, C., 2011. Forcing the snow-cover model SNOWPACK with forecasted weather data. The Cryosphere, 5 (4), pp. 1115–1125.

Bellaire, S. and Jamieson, B., 2013a. Forecasting the formation of critical snow layers using a coupled snow cover and weather model. Cold Regions Science and Technology, 94, pp. 37–44.

Bellaire, S. and Jamieson, B., 2013b. On estimating avalanche danger from simulated snow profiles. Proceedings of the 2013 International Snow Science Workshop, Grenoble, France.

Bellaire, S., Katurji, M., Schulmann, T., and Hobman, A., 2014. Towards a high resolution operational forecasting tool for the Southern Alps – New Zealand. Proceedings of the 2014 International Snow Science Workshop, Banff, Canada, pp. 388-393.

Birkeland, K.W. and Mock, C.J., 1996. Atmospheric circulation patterns associated with heavy snowfall events, Bridger Bowl, Montana, U.S.A. Mountain Research and Development, 16, pp. 281–286.

Brun, E., Martin, E., Simon, V., Gendre, C., and Coléou, C., 1989. An energy and mass model of snow cover suitable for operational avalanche forecasting. Journal of Glaciology, 35(121), pp. 333-342.

Brun, E., David, P., Sudul, M., and Brunot, G., 1992. A numerical model to simulate snowcover stratigraphy for operational avalanche forecasting. Journal of Glaciology, 38(128), pp. 13-22.

CAIC, 2016. Colorado Avalanche Information Center: Statistics and Reporting. Retrieved May, 2016. URL: http://avalanche.state.co.us/accidents/statistics-and-reporting/

Chow, F., De Wekker, S., and Snyder, B., 2012.
Mountain Weather Research and Forecasting: Recent Progress and Current Challenges. Springer, p. 749.

Colbeck, S.C., Akitaya, E., Armstrong, R., Gubler, H., Lafeuille, J., Lied, K., McClung, D., and Morris, E., 1990. The international classification of seasonal snow on the ground. International Commission on Snow and Ice (ICSI), International Association of Scientific Hydrology, Wallingford, Oxon, U.K., p. 23.

Climate Prediction Center, 2015. El Niño/Southern Oscillation (ENSO) Diagnostic Discussion. Retrieved August, 2016. URL: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_disc_oct2015/ensodisc.pdf

Dostie, C., 2010. Backcountry Skiing by the Numbers. Retrieved May, 2016. URL: https://www.wildsnow.com/3694/backcountry-skiing-statistics/

Durand, Y., Brun, E., Mérindol, L., Guyomarc'h, G., Lesaffre, B., and Martin, E., 1993. A meteorological estimation of relevant parameters for snow models. Annals of Glaciology, 18, pp. 65-71.

Durand, Y., Giraud, G., Brun, E., Mérindol, L., and Martin, E., 1999. A computer-based system simulating snowpack structures as a tool for regional avalanche forecasting. Journal of Glaciology, 45, pp. 469–484.

Gill, D. and Pyle, M., 2012. Nesting in WRF. Retrieved May, 2016. URL: http://www2.mmm.ucar.edu/wrf/users/tutorial/201201/WRFNesting.ppt.pdf

Greene, E., Atkins, D., Birkeland, K., Elder, K., Landry, C., Lazar, B., McCammon, I., Moore, M., Sharaf, D., Sterbenz, C., Tremper, B., and Williams, K., 2010. Snow, Weather, and Avalanches: Observational Guidelines for Avalanche Programs in the United States. American Avalanche Association, Pagosa Springs, CO, p. 152.

Goodison, B. E., Louie, P. Y. T., and Yang, D., 1998. WMO solid precipitation measurement intercomparison. Instruments and Observing Methods Rep. 67 (WMO/TD 872), World Meteorological Organization, Geneva, Switzerland, p. 212.

Hamon, W.R., 1973. Computing actual precipitation in mountainous areas. Proc.
WMO/IAHS Symp. on the Distribution of Precipitation in Mountainous Areas, WMO, No. 326, pp. 159–173.

Hendrikx, J., Hreinsson, E.Ö., Clark, M.P., and Mullan, A.B., 2012. The potential impact of climate change on seasonal snow in New Zealand: part I—an analysis using 12 GCMs. Theoretical and Applied Climatology, 110(4), pp. 607–618.

Hirashima, H., Kamiishi, I., Yamaguchi, S., Sato, A., and Lehning, M., 2010. Application of a numerical snowpack model to estimate full depth avalanche danger. Proceedings of the 2010 International Snow Science Workshop, Grenoble, France, pp. 47-52.

Horton, S. and Jamieson, B., 2016. Modeling surface hoar layers across western Canada with a coupled weather and snow cover model. Cold Regions Science and Technology, 128, pp. 22-31.

Hyndman, R. and Athanasopoulos, G., 2013. Forecasting: Principles and Practice. Online at http://otexts.org/fpp/.

Jamieson, J.B. and Johnston, C.D., 1998. Refinements to the stability index for skier-triggered dry-slab avalanches. Annals of Glaciology, 26, pp. 296–302.

LaChapelle, E. R., 1980. The fundamental processes in conventional avalanche forecasting. Journal of Glaciology, 26(94), pp. 75–84.

Lehning, M., Bartelt, P., Brown, B., Russi, T., Stoeckli, U., and Zimmerli, M., 1999. SNOWPACK model calculations for avalanche warning based upon a new network of weather and snow stations. Cold Regions Science and Technology, 30, pp. 145–157.

Lehning, M., Fierz, C., and Lundy, C., 2001. An objective snow profile comparison method and its application to SNOWPACK. Cold Regions Science and Technology, 33, pp. 253-261.

Lehning, M. and Bartelt, P., 2002. A physical SNOWPACK model for the Swiss avalanche warning: Part I. Numerical model. Cold Regions Science and Technology, 35, pp. 123-145.

Lehning, M., Bartelt, P., Brown, R. L., Fierz, C., and Satyawali, P. K., 2002a. A physical SNOWPACK model for the Swiss avalanche warning: Part II. Snow microstructure. Cold Regions Science and Technology, 35, pp. 147–167.
Lehning, M., Bartelt, P., Brown, R. L., and Fierz, C., 2002b. A physical SNOWPACK model for the Swiss avalanche warning: Part III. Meteorological forcing, thin layer formation and evaluation. Cold Regions Science and Technology, 35, pp. 169–184.

Lehning, M., Volksch, I., Gustafsson, D., Nguyen, T. A., Stahli, M., and Zappa, M., 2006. ALPINE3D: A detailed model of mountain surface processes and its application to snow hydrology. Hydrological Processes, 20, pp. 2111–2128.

Lundy, C., Brown, R.L., Adams, E.E., Birkeland, K.W., and Lehning, M., 2001. A statistical validation of the SNOWPACK model in a Montana climate. Cold Regions Science and Technology, 33, pp. 237–247.

Mass, C. F., Ovens, D., Westrick, K., and Colle, B. A., 2002. Does increasing horizontal resolution produce more skillful forecasts? Bulletin of the American Meteorological Society, 83, pp. 407–430.

McClung, D. M., 2002. The elements of applied forecasting - part II: The physical issues and the rules of applied avalanche forecasting. Natural Hazards, 26(2), pp. 131–146.

McClung, D. and Schaerer, P., 2006. The Avalanche Handbook, third ed. The Mountaineers Books, Seattle, WA, USA, p. 342.

Minder, J. R., Mote, P.W., and Lundquist, J.D., 2010. Surface temperature lapse rates over complex terrain: Lessons from the Cascade Mountains. Journal of Geophysical Research, 115, D14122.

Mingo, L. and McClung, D., 1998. Crocus test results for snowpack modeling in two snow climates with respect to avalanche forecasting. Annals of Glaciology, 26, pp. 347–356.

Mock, C.J. and Birkeland, K.W., 2000. Snow avalanche climatology of the western United States mountain ranges. Bulletin of the American Meteorological Society, 81(10), pp. 2367-2392.

NWS, n.d. NDFD Verification - Relative Frequency. Retrieved June 2016. URL: http://www.nws.noaa.gov/ndfd/verification/help/relfreq.htm

Quéno, L., Vionnet, V., Dombrowski-Etchevers, I., Lafaysse, M., Dumont, M., and Karbou, F., 2016.
Snowpack modelling in the Pyrenees driven by kilometric-resolution meteorological forecasts. The Cryosphere, 10, pp. 1571–1589.

Saly, D., Hendrikx, J., Johnson, J., and Richmond, D., 2016. Using time-lapse photography to monitor avalanche terrain. Proceedings of the 2016 International Snow Science Workshop, Breckenridge, CO.

Schirmer, M., Schweizer, J., and Lehning, M., 2010. Statistical evaluation of local to regional snowpack stability using simulated snow-cover data. Cold Regions Science and Technology, 64(2), pp. 110–118.

Schirmer, M. and Jamieson, B., 2015. Verification of analysed and forecasted winter precipitation in complex terrain. The Cryosphere, 9(2), pp. 587–601.

Schmucki, E., Marty, C., Fierz, C., and Lehning, M., 2014. Evaluation of modeled snow depth and snow water equivalent at three contrasting sites in Switzerland using SNOWPACK simulations driven by different meteorological data input. Cold Regions Science and Technology, 99, pp. 27–37.

Schweizer, J., Jamieson, J.B., and Schneebeli, M., 2003. Snow avalanche formation. Reviews of Geophysics, 41, pp. 1016–1041.

Schweizer, J., Bellaire, S., Fierz, C., Lehning, M., and Pielmeier, C., 2006. Evaluating and improving the stability predictions of the snow cover model SNOWPACK. Cold Regions Science and Technology, 46(1), pp. 52–59.

Skamarock, W.C., Klemp, J.B., Dudhia, J., Gill, D.O., Barker, D.M., Duda, M.G., Huang, X.Y., Wang, W., and Powers, J.G., 2008. A Description of the Advanced Research WRF Version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 p.

USDA Soil Conservation Service, 1959. Snow Survey Sampling Guide. Agriculture Handbook No. 169.

Vionnet, V., Brun, E., Morin, S., Boone, A., Faroux, S., Le Moigne, P., Martin, E., and Willemet, J.M., 2012. The detailed snowpack scheme Crocus and its implementation in SURFEX v7.2. Geoscientific Model Development, 5, pp. 773–791.

Willmott, C.J., 1981. On the validation of models. Physical Geography, 2(2), pp. 184–194.

Willmott, C.J.
and Matsuura, K., 2005. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research, 30, pp. 79–82.

APPENDICES

APPENDIX A

SNOWPACK INITIALIZATION SET-UPS

Initialization Set-Up for 2014/15 SNOWPACK-Obs Simulation

[GENERAL]
BUFF_CHUNK_SIZE = 370
BUFF_BEFORE = 1.5

[INPUT]
COORDSYS = CH1903
TIME_ZONE = -6
METEO = SMET
METEOPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/input
STATION1 = Obs_1415
SNOWPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/input
SNOW = SMET
SNOWFILE1 = Obs_1415

[OUTPUT]
COORDSYS = CH1903
TIME_ZONE = 0
METEOPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/output
EXPERIMENT = Obs_1415
TS_WRITE = TRUE
TS_START = 0.0
TS_DAYS_BETWEEN = 0.041666
PROF_WRITE = true
PROFILE_FORMAT = PRO
AGGREG_PRF = false
PROF_START = 0.0
PROF_DAYS_BETWEEN = 0.041666
HARDNESS_IN_NEWTON = false
SNOW = SMET
OUT_CANOPY = false
OUT_HAZ = true
OUT_HEAT = false
OUT_LOAD = false
OUT_LW = true
OUT_MASS = true
OUT_METEO = true
OUT_STAB = true
OUT_SW = true
OUT_T = true

[SNOWPACK]
CALCULATION_STEP_LENGTH = 60.0
ROUGHNESS_LENGTH = 0.002
HEIGHT_OF_METEO_VALUES = 2
HEIGHT_OF_WIND_VALUE = 5.0
NUMBER_SLOPES = 1
MEAS_TSS = TRUE
ENFORCE_MEASURED_SNOW_HEIGHTS = true
SW_MODE = BOTH
ATMOSPHERIC_STABILITY = MONIN_OBUKHOV
CHANGE_BC = TRUE
THRESH_CHANGE_BC = -1.0
SNP_SOIL = false
SOIL_FLUX = false
GEO_HEAT = 0.06
CANOPY = false

[SNOWPACKADVANCED]
ALLOW_ADAPTIVE_TIMESTEPPING = TRUE
ADVECTIVE_HEAT = 0.0
ALBEDO_AVERAGE_SCHMUCKI = ALL_DATA
ALBEDO_FIXEDVALUE = -999.
ALBEDO_PARAMETERIZATION = LEHNING_2
ALPINE3D = false
COMBINE_ELEMENTS = true
DETECT_GRASS = true
FIXED_POSITIONS = 0.5
FORCE_RH_WATER = true
FORCE_SW_MODE = true
HARDNESS_PARAMETERIZATION = MONTI
HEAT_BEGIN = 0.0
HEAT_END = 0.0
HEIGHT_NEW_ELEM = 0.005
HN_DENSITY = PARAMETERIZED
HN_DENSITY_MODEL = ZWART
HN_DENSITY_PARAMETERIZATION = LEHNING_NEW
HOAR_DENSITY_BURIED = 125.
HOAR_DENSITY_SURF = 100.
HOAR_MIN_SIZE_BURIED = 2.
HOAR_MIN_SIZE_SURF = 0.5
HOAR_THRESH_RH = 0.97
HOAR_THRESH_VW = 3.5
JAM = true
MASS_BALANCE = false
MEAS_INCOMING_LONGWAVE = TRUE
METAMORPHISM_MODEL = DEFAULT
MINIMUM_L_ELEMENT = 0.0025
MIN_DEPTH_SUBSURF = 0.07
NEW_SNOW_GRAIN_RAD = 0.3
NUMBER_FIXED_RATES = 0
PERP_TO_SLOPE = false
PLASTIC = false
PREVAILING_WIND_DIR = 0.
RESEARCH = false
SALTATION_MODEL = SORENSEN
SNOW_ALBEDO = PARAMETERIZED
SNOW_EROSION = true
SNOW_REDISTRIBUTION = false
STRENGTH_MODEL = DEFAULT
SW_ABSORPTION_SCHEME = MULTI_BAND
THRESH_DTEMP_AIR_SNOW = 3.0
THRESH_RAIN = 1.2
THRESH_RAIN_RANGE = 0.0
THRESH_RH = 0.5
T_CRAZY_MAX = 340.
T_CRAZY_MIN = 210.
VARIANT = DEFAULT
VISCOSITY_MODEL = DEFAULT
WATERTRANSPORTMODEL_SNOW = BUCKET
WATERTRANSPORTMODEL_SOIL = BUCKET
WATER_LAYER = false
WIND_SCALING_FACTOR = 1.0

[INTERPOLATIONS1D]
WINDOW_SIZE = +439200

Initialization Set-Up for all SNOWPACK-WRF Simulations (both seasons)

[GENERAL]
BUFF_CHUNK_SIZE = 370
BUFF_BEFORE = 1.5

[INPUT]
COORDSYS = CH1903
TIME_ZONE = -6
METEO = SMET
METEOPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/input
STATION1 = WRF_1516
SNOWPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/input
SNOW = SMET
SNOWFILE1 = WRF_1516

[OUTPUT]
COORDSYS = CH1903
TIME_ZONE = 0
METEOPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/output
EXPERIMENT = WRF_1516
TS_WRITE = TRUE
TS_START = 0.0
TS_DAYS_BETWEEN = 0.041666
PROF_WRITE = true
PROFILE_FORMAT = PRO
AGGREG_PRF = false
PROF_START = 0.0
PROF_DAYS_BETWEEN = 0.041666
HARDNESS_IN_NEWTON = false
SNOW = SMET
OUT_CANOPY = false
OUT_HAZ = true
OUT_HEAT = false
OUT_LOAD = false
OUT_LW = true
OUT_MASS = true
OUT_METEO = true
OUT_STAB = true
OUT_SW = true
OUT_T = true

[SNOWPACK]
CALCULATION_STEP_LENGTH = 60.0
ROUGHNESS_LENGTH = 0.002
HEIGHT_OF_METEO_VALUES = 2
HEIGHT_OF_WIND_VALUE = 10
NUMBER_SLOPES = 1
MEAS_TSS = TRUE
ENFORCE_MEASURED_SNOW_HEIGHTS = false
SW_MODE = BOTH
ATMOSPHERIC_STABILITY = MONIN_OBUKHOV
CHANGE_BC = TRUE
THRESH_CHANGE_BC = -1.0
SNP_SOIL = false
SOIL_FLUX = false
GEO_HEAT = 0.06
CANOPY = false

[SNOWPACKADVANCED]
ALLOW_ADAPTIVE_TIMESTEPPING = TRUE
ADVECTIVE_HEAT = 0.0
ALBEDO_AVERAGE_SCHMUCKI = ALL_DATA
ALBEDO_FIXEDVALUE = -999.
ALBEDO_PARAMETERIZATION = LEHNING_2
ALPINE3D = false
COMBINE_ELEMENTS = true
DETECT_GRASS = true
FORCE_RH_WATER = true
FORCE_SW_MODE = true
HARDNESS_PARAMETERIZATION = MONTI
HEAT_BEGIN = 0.0
HEAT_END = 0.0
HEIGHT_NEW_ELEM = 0.005
HN_DENSITY = PARAMETERIZED
HN_DENSITY_MODEL = ZWART
HN_DENSITY_PARAMETERIZATION = LEHNING_NEW
HOAR_DENSITY_BURIED = 125
HOAR_DENSITY_SURF = 100
HOAR_MIN_SIZE_BURIED = 2.0
HOAR_MIN_SIZE_SURF = 0.5
HOAR_THRESH_RH = 0.97
HOAR_THRESH_VW = 3.5
JAM = true
MASS_BALANCE = false
MEAS_INCOMING_LONGWAVE = true
METAMORPHISM_MODEL = DEFAULT
MINIMUM_L_ELEMENT = 0.0025
MIN_DEPTH_SUBSURF = 0.07
NEW_SNOW_GRAIN_RAD = 0.3
NUMBER_FIXED_RATES = 0
PERP_TO_SLOPE = false
PLASTIC = false
PREVAILING_WIND_DIR = 0.
RESEARCH = true
SALTATION_MODEL = SORENSEN
SNOW_ALBEDO = PARAMETERIZED
SNOW_EROSION = true
SNOW_REDISTRIBUTION = false
STRENGTH_MODEL = DEFAULT
SW_ABSORPTION_SCHEME = MULTI_BAND
THRESH_DTEMP_AIR_SNOW = 3.0
THRESH_RAIN = 1.2
THRESH_RAIN_RANGE = 0.0
THRESH_RH = 0.5
T_CRAZY_MAX = 340.
T_CRAZY_MIN = 210.
VARIANT = DEFAULT
VISCOSITY_MODEL = DEFAULT
WATERTRANSPORTMODEL_SNOW = BUCKET
WATERTRANSPORTMODEL_SOIL = BUCKET
WATER_LAYER = false
WIND_SCALING_FACTOR = 1.0

[INTERPOLATIONS1D]
WINDOW_SIZE = +439200

Initialization Set-Up for the 2015/16 Saddle Peak SNOWPACK Simulation

[GENERAL]
BUFF_CHUNK_SIZE = 370
BUFF_BEFORE = 1.5

[INPUT]
COORDSYS = CH1903
TIME_ZONE = -6
METEO = SMET
METEOPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/input
STATION1 = FBF
SNOWPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/input
SNOW = SMET
SNOWFILE1 = FBF

[OUTPUT]
COORDSYS = CH1903
TIME_ZONE = -6
METEOPATH = /Users/kylevanpeursem/Documents/School/Snowpack/simulations/output
EXPERIMENT = FBF
TS_WRITE = TRUE
TS_START = 0.0
TS_DAYS_BETWEEN = 0.041666
PROF_WRITE = true
PROFILE_FORMAT = PRO
AGGREG_PRF = false
PROF_START = 0.0
PROF_DAYS_BETWEEN = 0.041666
HARDNESS_IN_NEWTON = false
SNOW = SMET
OUT_CANOPY = false
OUT_HAZ = true
OUT_HEAT = false
OUT_LOAD = false
OUT_LW = true
OUT_MASS = true
OUT_METEO = true
OUT_STAB = true
OUT_SW = true
OUT_T = true

[SNOWPACK]
CALCULATION_STEP_LENGTH = 60.0
ROUGHNESS_LENGTH = 0.002
HEIGHT_OF_METEO_VALUES = 2
HEIGHT_OF_WIND_VALUE = 5
MEAS_TSS = false
ENFORCE_MEASURED_SNOW_HEIGHTS = false
SW_MODE = BOTH
ATMOSPHERIC_STABILITY = MONIN_OBUKHOV
CHANGE_BC = false
THRESH_CHANGE_BC = -1.0
SNP_SOIL = false
SOIL_FLUX = false
GEO_HEAT = 0.06
CANOPY = false
NUMBER_SLOPES = 3
SNOW_REDISTRIBUTION = TRUE
INCOMING_LONGWAVE = TRUE

[SNOWPACKADVANCED]
ALLOW_ADAPTIVE_TIMESTEPPING = TRUE
ADVECTIVE_HEAT = 0.0
ALBEDO_AVERAGE_SCHMUCKI = ALL_DATA
ALBEDO_FIXEDVALUE = -999.
ALBEDO_PARAMETERIZATION = LEHNING_2
ALPINE3D = false
COMBINE_ELEMENTS = true
DETECT_GRASS = true
FORCE_RH_WATER = true
FORCE_SW_MODE = true
HARDNESS_PARAMETERIZATION = MONTI
HEAT_BEGIN = 0.0
HEAT_END = 0.0
HEIGHT_NEW_ELEM = 0.005
HN_DENSITY = PARAMETERIZED
HN_DENSITY_MODEL = ZWART
HN_DENSITY_PARAMETERIZATION = LEHNING_NEW
HOAR_DENSITY_BURIED = 125
HOAR_DENSITY_SURF = 100
HOAR_MIN_SIZE_BURIED = 2.0
HOAR_MIN_SIZE_SURF = 0.5
HOAR_THRESH_RH = 0.97
HOAR_THRESH_VW = 3.5
JAM = true
MASS_BALANCE = false
MEAS_INCOMING_LONGWAVE = true
METAMORPHISM_MODEL = DEFAULT
MINIMUM_L_ELEMENT = 0.0025
MIN_DEPTH_SUBSURF = 0.07
NEW_SNOW_GRAIN_RAD = 0.3
NUMBER_FIXED_RATES = 0
PERP_TO_SLOPE = false
PLASTIC = false
PREVAILING_WIND_DIR = 0.
RESEARCH = true
SALTATION_MODEL = SORENSEN
SNOW_ALBEDO = PARAMETERIZED
SNOW_EROSION = true
SNOW_REDISTRIBUTION = true
STRENGTH_MODEL = DEFAULT
SW_ABSORPTION_SCHEME = MULTI_BAND
THRESH_DTEMP_AIR_SNOW = 3.0
THRESH_RAIN = 1.2
THRESH_RAIN_RANGE = 0.0
THRESH_RH = 0.5
T_CRAZY_MAX = 340.
T_CRAZY_MIN = 210.
VARIANT = DEFAULT
VISCOSITY_MODEL = DEFAULT
WATERTRANSPORTMODEL_SNOW = BUCKET
WATERTRANSPORTMODEL_SOIL = BUCKET
WATER_LAYER = false
WIND_SCALING_FACTOR = 1.0
NUMBER_SLOPES = 3

[INTERPOLATIONS1D]
WINDOW_SIZE = 439200

APPENDIX B

SNOWPIT PROFILES FROM THE 2014/15 SEASON

Figure B 1. Snowpit profile from November 30, 2014 at the Alpine study plot.
Figure B 2. Snowpit profile from December 8, 2014 at the Alpine study plot.
Figure B 3. Snowpit profile from December 19, 2014 at the Alpine study plot.
Figure B 4. Snowpit profile from December 29, 2014 at the Alpine study plot.
Figure B 5. Snowpit profile from January 5, 2015 at the Alpine study plot.
Figure B 6. Snowpit profile from January 12, 2015 at the Alpine study plot.
Figure B 7. Snowpit profile from January 19, 2015 at the Alpine study plot.
Figure B 8. Snowpit profile from January 26, 2015 at the Alpine study plot.
Figure B 9.
Snowpit profile from February 2, 2015 at the Alpine study plot.
Figure B 10. Snowpit profile from February 9, 2015 at the Alpine study plot.
Figure B 11. Snowpit profile from February 16, 2015 at the Alpine study plot.
Figure B 12. Snowpit profile from February 22, 2015 at the Alpine study plot.
Figure B 13. Snowpit profile from March 2, 2015 at the Alpine study plot.
Figure B 14. Snowpit profile from March 9, 2015 at the Alpine study plot.
Figure B 15. Snowpit profile from March 16, 2015 at the Alpine study plot.
Figure B 16. Snowpit profile from March 23, 2015 at the Alpine study plot.
Figure B 17. Snowpit profile from March 30, 2015 at the Alpine study plot.
Figure B 18. Snowpit profile from April 3, 2015 at the Alpine study plot.

APPENDIX C

SNOWPIT PROFILES FROM THE 2015/16 SEASON

Figure C 1. Snowpit profile from November 10, 2015 at the Ramp study plot.
Figure C 2. Snowpit profile from November 25, 2015 at the Ramp study plot.
Figure C 3. Snowpit profile from December 15, 2015 at the Ramp study plot.
Figure C 4. Snowpit profile from December 23, 2015 at the Ramp study plot.
Figure C 5. Snowpit profile from January 4, 2016 at the Ramp study plot.
Figure C 6. Snowpit profile from January 22, 2016 at the Ramp study plot.
Figure C 7. Snowpit profile from February 5, 2016 at the Ramp study plot.
Figure C 8. Snowpit profile from February 12, 2016 at the Ramp study plot.
Figure C 9. Snowpit profile from February 26, 2016 at the Ramp study plot.
Figure C 10. Snowpit profile from March 7, 2016 on Saddle Peak.
Figure C 11. Snowpit profile from March 18, 2016 on Saddle Peak.
Figure C 12. Snowpit profile from March 18, 2016 at the Ramp study plot.
Figure C 13. Snowpit profile from March 30, 2016 on Saddle Peak.
Figure C 14. Snowpit profile from March 30, 2016 at the Ramp study plot.
Figure C 15.
Snowpit profile from April 8, 2016 at the Ramp study plot.
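The initialization set-ups in Appendix A are plain INI text, so their key-value pairs can be inspected programmatically rather than by eye. The following is a minimal sketch, not part of the thesis workflow, using Python's standard configparser on an abbreviated, illustrative excerpt of the [SNOWPACK] section (the excerpt string stands in for reading one of the actual files from disk):

```python
# Minimal sketch: parse a SNOWPACK-style INI set-up and read a few [SNOWPACK]
# keys. EXCERPT is an abbreviated stand-in for one of the Appendix A files.
import configparser
import io

EXCERPT = """
[SNOWPACK]
CALCULATION_STEP_LENGTH = 60.0
ROUGHNESS_LENGTH = 0.002
HEIGHT_OF_METEO_VALUES = 2
CANOPY = false
"""

config = configparser.ConfigParser()
config.optionxform = str  # keep the upper-case key names instead of lower-casing
config.read_file(io.StringIO(EXCERPT))

# Typed accessors convert the string values from the file.
step_minutes = config.getfloat("SNOWPACK", "CALCULATION_STEP_LENGTH")
canopy = config.getboolean("SNOWPACK", "CANOPY")
print(step_minutes, canopy)  # prints: 60.0 False
```

A check like this makes it easy to confirm that settings which should match across simulations (e.g. ROUGHNESS_LENGTH) actually do, before comparing model output.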