Verifying Cloud Forecasts with Satellite Brightness Temperatures

Sarah Griffin, CIMSS/UW-Madison
Jason Otkin, CIMSS/UW-Madison
Oral
One of the most common ways to verify forecasts of cloud cover is by comparing simulated and observed satellite brightness temperatures (BTs). While various techniques can be used to assess forecast accuracy, this presentation employs object-based analysis. By creating and utilizing objects based on satellite brightness temperatures as a proxy for clouds, different assessments of cloud forecasts can be made. For example, forecast accuracy can be assessed using the Object-based Threat Score (OTS). The OTS uses object area and an interest score in its calculation, with the interest score defined using the Method for Object-Based Diagnostic Evaluation (MODE) tool. These interest scores can be further broken down into components describing object shape and the distance between paired objects. Analysis of three different weather phenomena from one week in December 2021 (a tornado, heavy snow, and a derecho) indicated that High-Resolution Rapid Refresh (HRRR) forecasts were most accurate for the snow event, as shown by its higher interest scores. The derecho forecasts were the next most accurate, followed by the tornado forecasts.
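The OTS calculation described above can be sketched roughly as follows. This is a minimal illustration, assuming a formulation in which each matched object pair's MODE interest score is weighted by the pair's combined object area and normalized by the total forecast plus observed object area; the function and variable names are hypothetical, not part of the MODE tool itself.

```python
# Hypothetical sketch of an Object-based Threat Score (OTS) calculation.
# Each matched pair contributes its interest score weighted by the combined
# area of the forecast and observed objects; the sum is normalized by the
# total area of all forecast and observed objects.

def object_threat_score(pairs, total_fcst_area, total_obs_area):
    """pairs: list of (interest, forecast_object_area, observed_object_area)
    for matched object pairs; areas in any consistent unit (e.g. grid cells)."""
    weighted = sum(interest * (a_f + a_o) for interest, a_f, a_o in pairs)
    return weighted / (total_fcst_area + total_obs_area)

# Illustrative example: two matched cloud objects identified from BT fields.
pairs = [(0.9, 120.0, 100.0),   # well-matched pair, high interest
         (0.6, 40.0, 55.0)]     # weaker match, lower interest
ots = object_threat_score(pairs, total_fcst_area=160.0, total_obs_area=155.0)
```

Because the interest scores lie between 0 and 1, the OTS is also bounded by 0 and 1, with higher values indicating better-matched (more accurate) forecast objects.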
This presentation will also examine how different model physics impact simulated BTs. The Thompson microphysics scheme was found to produce the most accurate upper-level simulated clouds, while the National Severe Storms Laboratory (NSSL) scheme produced too few. Changing the planetary boundary layer scheme from MYNN to Shin-Hong or Eddy-Diffusivity Mass-Flux resulted in slightly lower cloud accuracy. The RUC land surface model produces too many clouds compared to Noah, an effect that is further enhanced when using the MYNN surface layer scheme instead of GFS.