AIA is All You Need: SDO MEGS A&B virtualization via Convolutional Deep Learning
Daniel Gass - University of Central Lancashire
Will Fawcett - University of Cambridge
Manuel Indaco - Auburn University
Richard Galvez - DataTalk AI
Andres Muñoz-Jaramillo - Southwest Research Institute Boulder
Paul Wright - Dublin Institute for Advanced Studies
Oral
(Student Speaker)
Spacecraft and their on-board instruments are vulnerable to damage caused by the harsh operating conditions of space, where energetic particles and solar radiation are constant. An example of this is the Solar Dynamics Observatory's (SDO) Extreme-ultraviolet Variability Experiment (EVE), which has had its Multiple EUV Grating Spectrograph (MEGS) A channel disabled entirely and its B channel operating at severely limited cadence due to detector degradation. This has resulted in a decrease in the quantity and quality of detailed EUV irradiance data available to space weather researchers working to predict the impact of extreme solar events, such as geomagnetic storms and thermospheric drag. Valid alternatives to this damaged instrument are therefore needed, and can be provided using machine learning (ML) techniques.
We present a near-real-time ensemble deep learning architecture for the synthesis of the Solar Dynamics Observatory (SDO) Extreme Ultraviolet Variability Experiment (EVE) Multiple EUV Grating Spectrograph (MEGS) A and B spectral irradiance data. Our architecture consists of a linear component, which learns the overall irradiance trends, and a convolutional component, which captures non-linearities. Nine channels of the Atmospheric Imaging Assembly (AIA) are used as input, as well as three magnetic field components available from the Helioseismic and Magnetic Imager (HMI). Surprisingly, our results demonstrate that the information provided by the AIA images is sufficient to predict solar irradiance to within a small percentage error, with the information added by HMI contributing mainly background noise. Our model, an advancement of previous studies by Galvez et al. (2019) and Wright et al. (2018), can now generate near-real-time EUV irradiance intensities, enabling analysis of EUV irradiance in the affected spectral range beyond 2014.
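To illustrate the described linear-plus-convolutional ensemble, the following is a minimal PyTorch sketch (not the authors' implementation): a linear head operating on spatially averaged AIA channel intensities captures overall irradiance trends, while a small CNN captures non-linear spatial structure, and their predictions are summed. The channel count, image resolution, and number of output spectral bins (n_bins) are illustrative placeholders, not values from this work.

```python
import torch
import torch.nn as nn


class LinearIrradianceHead(nn.Module):
    """Linear model on spatially averaged channel intensities (overall trend)."""
    def __init__(self, n_channels: int, n_bins: int):
        super().__init__()
        self.fc = nn.Linear(n_channels, n_bins)

    def forward(self, x):            # x: (batch, channels, H, W)
        means = x.mean(dim=(2, 3))   # per-channel mean intensity
        return self.fc(means)


class ConvIrradianceHead(nn.Module):
    """Small CNN that captures non-linear spatial structure."""
    def __init__(self, n_channels: int, n_bins: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, n_bins)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class EnsembleVirtualEVE(nn.Module):
    """Ensemble: sum of linear and convolutional spectral predictions."""
    def __init__(self, n_channels: int = 9, n_bins: int = 1024):
        super().__init__()
        self.linear = LinearIrradianceHead(n_channels, n_bins)
        self.conv = ConvIrradianceHead(n_channels, n_bins)

    def forward(self, x):
        return self.linear(x) + self.conv(x)


# Example: a batch of 4 synthetic "AIA" cubes, 9 channels at 256x256 pixels.
model = EnsembleVirtualEVE(n_channels=9, n_bins=1024)
spectra = model(torch.randn(4, 9, 256, 256))
print(spectra.shape)  # torch.Size([4, 1024])
```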
This work has been enabled by FDL-X (fdlxhelio.org), a derivative of Frontier Development Lab (FDL.ai), a public/private partnership between NASA, Trillium Technologies, and commercial AI partners Google Cloud and NVIDIA.