Dalek — a deep-learning emulator for TARDIS. (arXiv:2007.01868v1 [astro-ph.IM])

<a href="http://arxiv.org/find/astro-ph/1/au:+Kerzendorf_W/0/1/0/all/0/1">Wolfgang E. Kerzendorf</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Vogl_C/0/1/0/all/0/1">Christian Vogl</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Buchner_J/0/1/0/all/0/1">Johannes Buchner</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Contardo_G/0/1/0/all/0/1">Gabriella Contardo</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Williamson_M/0/1/0/all/0/1">Marc Williamson</a>, <a href="http://arxiv.org/find/astro-ph/1/au:+Smagt_P/0/1/0/all/0/1">Patrick van der Smagt</a>

Supernova spectral time series contain a wealth of information about the progenitor and explosion process of these energetic events. Modeling these data requires exploring very high-dimensional posterior probabilities with expensive radiative transfer codes. Even modest parametrizations of supernovae contain more than ten parameters, and a detailed exploration demands at least several million function evaluations. Physically realistic models require at least tens of CPU minutes per evaluation, putting a detailed reconstruction of the explosion out of reach of traditional methodology. The advent of widely available libraries for training neural networks, combined with their ability to approximate almost arbitrary functions with high precision, allows for a new approach to this problem. Instead of evaluating the radiative transfer model itself, one can build a neural-network proxy that is trained on the simulations but evaluates orders of magnitude faster. Such a framework is called an emulator or surrogate model. In this work, we present an emulator for the TARDIS supernova radiative transfer code applied to Type Ia supernova spectra. We show that we can train an emulator for this problem given a modest training set of a hundred thousand spectra (easily calculable on modern supercomputers). The results show accuracy at the percent level (with residuals dominated by the Monte Carlo nature of TARDIS, not the emulator) and a speedup of several orders of magnitude. This method has a much broader set of applications and is not limited to the presented problem.

