Gravitational wave populations and cosmology with neural posterior estimation. (arXiv:2311.12093v1 [gr-qc])
<a href="http://arxiv.org/find/gr-qc/1/au:+Leyde_K/0/1/0/all/0/1">Konstantin Leyde</a>, <a href="http://arxiv.org/find/gr-qc/1/au:+Green_S/0/1/0/all/0/1">Stephen R. Green</a>, <a href="http://arxiv.org/find/gr-qc/1/au:+Toubiana_A/0/1/0/all/0/1">Alexandre Toubiana</a>, <a href="http://arxiv.org/find/gr-qc/1/au:+Gair_J/0/1/0/all/0/1">Jonathan Gair</a>

We apply neural posterior estimation for fast and accurate hierarchical
Bayesian inference of gravitational wave populations. We use a normalizing flow
to directly estimate the population hyper-parameters from a collection of
individual source observations. This approach provides complete freedom in
event representation and automatic inclusion of selection effects, and (in
contrast to likelihood estimation) it removes the need for stochastic samplers
to obtain posterior samples. Since the number of events may be unknown when the
network is trained, we split the data into sub-population analyses that we later
recombine; this allows for fast sequential analyses as additional events are
observed. We demonstrate our method on a toy problem of dark siren cosmology,
and show that inference takes just a few minutes and scales to $\sim 600$
events before performance degrades. We argue that neural posterior estimation
therefore represents a promising avenue for population inference with large
numbers of events.
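As an illustration of the recombination step described above, the following is a minimal sketch (not the authors' code) of how per-batch posteriors produced by a conditional normalizing flow could be combined into a joint posterior over a single population hyper-parameter. The hyper-parameter grid, the choice of the Hubble constant as the parameter, the flat prior, and the Gaussian stand-ins for the flow densities are all illustrative assumptions; the combination rule assumes the same prior was used for every batch, so the joint posterior is proportional to the product of the batch posteriors divided by the prior raised to the power K-1.

import numpy as np

# Grid over the hyper-parameter (placeholder range, e.g. H0 in km/s/Mpc).
lam = np.linspace(40.0, 100.0, 2001)

def log_prior(lam):
    # Flat prior on the grid (normalization handled numerically below).
    return np.zeros_like(lam)

# Stand-ins for the flow's log-densities evaluated on the grid; in practice
# these would come from the trained flow's log-prob conditioned on each batch.
rng = np.random.default_rng(0)
K = 4  # number of sub-population batches
batch_log_posts = [
    -0.5 * ((lam - rng.normal(70.0, 2.0)) / rng.uniform(3.0, 6.0)) ** 2
    for _ in range(K)
]

# Combine in log space: sum of batch log-posteriors minus (K-1) log-priors.
log_joint = sum(batch_log_posts) - (K - 1) * log_prior(lam)
log_joint -= log_joint.max()          # numerical stability
joint = np.exp(log_joint)
joint /= np.trapz(joint, lam)         # normalize on the grid

print("joint posterior mean:", np.trapz(lam * joint, lam))

Because the combination is a pointwise product in log space, additional batches of events can be folded in sequentially as they are observed, which is the property the abstract highlights.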

