The Influence of Feature Aggregation for Explainable AI for High Dimensional Geoscience Applications
Evan Krell
Conrad Blucher Institute for Surveying and Science, Texas A&M University - Corpus Christi; NSF AI Institute for Research on Trustworthy AI in Weather, Climate and Coastal Oceanography
Oral
High-dimensional gridded spatial data is increasingly used to develop complex machine learning models of atmospheric phenomena such as fog and clouds. For example, FogNet is a 3D convolutional neural network whose input is composed of gridded numerical weather predictions and satellite imagery. With complex learning techniques and gridded inputs, models can be trained to extract spatial patterns that represent highly nonlinear functions. However, it is challenging to understand how a trained model behaves. For critical applications, it is important for experts to verify that the model has learned physically realistic strategies. Insight into how the model works can increase users' trust in the model and better inform decision-makers. Explainable artificial intelligence (XAI) is a class of techniques for exposing how models operate. XAI can be used to highlight which cells of a gridded input were most influential for a given model output. A major challenge when using XAI to explain gridded spatial data is that XAI techniques are often very sensitive to correlations among the input features, and spatial data typically exhibits a high degree of spatial autocorrelation. A common solution for tabular data is to group the correlated features, but it is challenging to select optimal clusters when grouping spatial data. Using FogNet as a case study, we experiment with multiple approaches to feature aggregation for geospatial data. We demonstrate that choices in how the grid cells are formed into groups can greatly influence the model explanations. We also show that the discrepancies among explanations can be reasonably explained based on the nature of the input features, and that these differences can be used to gain insight into the scale of the spatial features learned. While a single grouping scheme may produce misleading XAI results, we show how a hierarchy of groups can aid model interpretation.
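To illustrate the general idea of grouping grid cells before attribution (not FogNet's actual pipeline), the sketch below aggregates a gridded input into square patches and computes a grouped permutation importance: all cells in a patch are permuted together, so the score reflects the patch as a unit rather than individual, spatially autocorrelated cells. The model, grid size, and patch size are illustrative assumptions only.

```python
import numpy as np

def make_patch_groups(height, width, patch):
    """Assign each grid cell to a square patch (group) of size patch x patch."""
    rows = np.arange(height) // patch
    cols = np.arange(width) // patch
    n_col_groups = int(np.ceil(width / patch))
    return rows[:, None] * n_col_groups + cols[None, :]  # (height, width) group ids

def grouped_permutation_importance(model_fn, X, y, groups, n_repeats=5, seed=0):
    """Importance of a patch = drop in skill (negative MSE) when that patch's
    cells are permuted across samples jointly, leaving other cells intact."""
    rng = np.random.default_rng(seed)
    base_score = -np.mean((model_fn(X) - y) ** 2)
    importances = {}
    for g in np.unique(groups):
        mask = groups == g                       # boolean mask of cells in this patch
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            Xp[:, mask] = X[perm][:, mask]       # permute the whole group together
            drops.append(base_score - (-np.mean((model_fn(Xp) - y) ** 2)))
        importances[int(g)] = float(np.mean(drops))
    return importances

# Toy example: a "model" that only uses the upper-left quadrant of a 16x16 grid.
H, W, PATCH = 16, 16, 4
X = np.random.default_rng(1).normal(size=(200, H, W))
y = X[:, :8, :8].mean(axis=(1, 2))
model_fn = lambda x: x[:, :8, :8].mean(axis=(1, 2))

groups = make_patch_groups(H, W, PATCH)
print(grouped_permutation_importance(model_fn, X, y, groups))
```

Re-running the same procedure with several patch sizes (e.g., 2, 4, and 8) yields a coarse-to-fine hierarchy of group attributions, which is one simple way to probe the spatial scale at which the grouping choice starts to change the explanation.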